node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, 13-124k chars) | neighbors (list, 0-3.32k items) | mask (string, 4 classes)
|---|---|---|---|---|
43,278 | 31 | Title: Selective Query Processing: a Risk-Sensitive Selection of System Configurations
Abstract: In information retrieval systems, search parameters are optimized to ensure high effectiveness based on a set of past searches and these optimized parameters are then used as the system configuration for all subsequent queries. A better approach, however, would be to adapt the parameters to fit the query at hand. Selective query expansion is one such approach, in which the system decides automatically whether or not to expand the query, resulting in two possible system configurations. This approach was extended recently to include many other parameters, leading to many possible system configurations where the system automatically selects the best configuration on a per-query basis. To determine the ideal configurations to use on a per-query basis in real-world systems, we developed a method in which a restricted number of possible configurations is pre-selected and then used in a meta-search engine that decides the best search configuration on a per-query basis. We define a risk-sensitive approach for configuration pre-selection that considers the risk-reward trade-off between the number of configurations kept and system effectiveness. For final configuration selection, the decision is based on query feature similarities. We find that a relatively small number of configurations (20) selected by our risk-sensitive model is sufficient to increase effectiveness by about 15% (P@10, nDCG@10) when compared to traditional grid search using a single configuration and by about 20% when compared to learning to rank documents. Our risk-sensitive approach works for both diversity- and ad hoc-oriented searches. Moreover, the similarity-based selection method outperforms the more sophisticated approaches. Thus, we demonstrate the feasibility of developing per-query information retrieval systems, which will guide future research in this direction. | [] | Test |
43,279 | 16 | Title: DiffMimic: Efficient Motion Mimicking with Differentiable Physics
Abstract: Motion mimicking is a foundational task in physics-based character animation. However, most existing motion mimicking methods are built upon reinforcement learning (RL) and suffer from heavy reward engineering, high variance, and slow convergence with hard exploration. Specifically, they usually take tens of hours or even days of training to mimic a simple motion sequence, resulting in poor scalability. In this work, we leverage differentiable physics simulators (DPS) and propose an efficient motion mimicking method dubbed DiffMimic. Our key insight is that DPS casts a complex policy learning task as a much simpler state matching problem. In particular, DPS learns a stable policy by analytical gradients with ground-truth physical priors, hence leading to significantly faster and more stable convergence than RL-based methods. Moreover, to escape from local optima, we utilize a Demonstration Replay mechanism to enable stable gradient backpropagation in a long horizon. Extensive experiments on standard benchmarks show that DiffMimic has better sample efficiency and time efficiency than existing methods (e.g., DeepMimic). Notably, DiffMimic allows a physically simulated character to learn Backflip after 10 minutes of training and be able to cycle it after 3 hours of training, while the existing approach may require about a day of training to cycle Backflip. More importantly, we hope DiffMimic can benefit more differentiable animation systems with techniques like differentiable clothes simulation in future research. | [
25667
] | Test |
43,280 | 24 | Title: Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time
Abstract: Despite recent theoretical progress on the non-convex optimization of two-layer neural networks, it is still an open question whether gradient descent on neural networks without unnatural modifications can achieve better sample complexity than kernel methods. This paper provides a clean mean-field analysis of projected gradient flow on polynomial-width two-layer neural networks. Different from prior works, our analysis does not require unnatural modifications of the optimization algorithm. We prove that with sample size $n = O(d^{3.1})$ where $d$ is the dimension of the inputs, the network converges in polynomially many iterations to a non-trivial error that is not achievable by kernel methods using $n \ll d^4$ samples, hence demonstrating a clear separation between unmodified gradient descent and NTK. | [] | Validation |
43,281 | 24 | Title: DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
Abstract: As machine learning models, specifically neural networks, are becoming increasingly popular, there are concerns regarding their trustworthiness, especially in safety-critical applications, e.g., actions of an autonomous vehicle must be safe. There are approaches that can train neural networks where such domain requirements are enforced as constraints, but they either cannot guarantee that the constraint will be satisfied by all possible predictions (even on unseen data) or they are limited in the type of constraints that can be enforced. In this paper, we present an approach to train neural networks which can enforce a wide variety of constraints and guarantee that the constraint is satisfied by all possible predictions. The approach builds on earlier work where learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks. | [] | Train |
43,282 | 24 | Title: Large-Scale Traffic Data Imputation with Spatiotemporal Semantic Understanding
Abstract: Large-scale data missing is a challenging problem in Intelligent Transportation Systems (ITS). Many studies have been carried out to impute large-scale traffic data by considering their spatiotemporal correlations at a network level. In existing traffic data imputations, however, rich semantic information of a road network has been largely ignored when capturing network-wide spatiotemporal correlations. This study proposes a Graph Transformer for Traffic Data Imputation (GT-TDI) model to impute large-scale traffic data with spatiotemporal semantic understanding of a road network. Specifically, the proposed model introduces semantic descriptions consisting of network-wide spatial and temporal information of traffic data to help the GT-TDI model capture spatiotemporal correlations at a network level. The proposed model takes incomplete data, the social connectivity of sensors, and semantic descriptions as input to perform imputation tasks with the help of Graph Neural Networks (GNN) and Transformer. On the PeMS freeway dataset, extensive experiments are conducted to compare the proposed GT-TDI model with conventional methods, tensor factorization methods, and deep learning-based methods. The results show that the proposed GT-TDI outperforms existing methods in complex missing patterns and diverse missing rates. The code of the GT-TDI model will be available at https://github.com/KP-Zhang/GT-TDI. | [] | Test |
43,283 | 34 | Title: Online Time-Windows TSP with Predictions
Abstract: In the Time-Windows TSP (TW-TSP) we are given requests at different locations on a network; each request is endowed with a reward and an interval of time; the goal is to find a tour that visits as much reward as possible during the corresponding time window. For the online version of this problem, where each request is revealed at the start of its time window, no finite competitive ratio can be obtained. We consider a version of the problem where the algorithm is presented with predictions of where and when the online requests will appear, without any knowledge of the quality of this side information. Vehicle routing problems such as the TW-TSP can be very sensitive to errors or changes in the input due to the hard time-window constraints, and it is unclear whether imperfect predictions can be used to obtain a finite competitive ratio. We show that good performance can be achieved by explicitly building slack into the solution. Our main result is an online algorithm that achieves a competitive ratio logarithmic in the diameter of the underlying network, matching the performance of the best offline algorithm to within factors that depend on the quality of the provided predictions. The competitive ratio degrades smoothly as a function of the quality and we show that this dependence is tight within constant factors. | [
3130,
44795
] | Train |
43,284 | 16 | Title: RoMa: Revisiting Robust Losses for Dense Feature Matching
Abstract: Dense feature matching is an important computer vision task that involves estimating all correspondences between two images of a 3D scene. In this paper, we revisit robust losses for matching from a Markov chain perspective, yielding theoretical insights and large gains in performance. We begin by constructing a unifying formulation of matching as a Markov chain, based on which we identify two key stages which we argue should be decoupled for matching. The first is the coarse stage, where the estimated result needs to be globally consistent. The second is the refinement stage, where the model needs precise localization capabilities. Inspired by the insight that these stages concern distinct issues, we propose a coarse matcher following the regression-by-classification paradigm that provides excellent globally consistent, albeit not exactly localized, matches. This is followed by a local feature refinement stage using well-motivated robust regression losses, yielding extremely precise matches. Our proposed approach, which we call RoMa, achieves significant improvements compared to the state-of-the-art. Code is available at https://github.com/Parskatt/RoMa | [
35761,
42522,
4643,
33222
] | Train |
43,285 | 16 | Title: MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model
Abstract: Face-to-face communication is a common scenario including roles of speakers and listeners. Most existing research methods focus on producing speaker videos, while the generation of listener heads remains largely overlooked. Responsive listening head generation is an important task that aims to model face-to-face communication scenarios by generating a listener head video given a speaker video and a listener head image. An ideal generated responsive listening video should respond to the speaker by expressing an attitude or viewpoint while maintaining diversity in interaction patterns and accuracy in listener identity information. To achieve this goal, we propose the \textbf{M}ulti-\textbf{F}aceted \textbf{R}esponsive Listening Head Generation Network (MFR-Net). Specifically, MFR-Net employs the probabilistic denoising diffusion model to predict diverse head pose and expression features. In order to perform multi-faceted responses to the speaker video while maintaining accurate listener identity preservation, we design the Feature Aggregation Module to boost listener identity features and fuse them with other speaker-related features. Finally, a renderer finetuned with identity consistency loss produces the final listening head videos. Our extensive experiments demonstrate that MFR-Net achieves multi-faceted responses not only in diversity and speaker identity information but also in attitude and viewpoint expression. | [
8421,
5445
] | Train |
43,286 | 16 | Title: Networks are Slacking Off: Understanding Generalization Problem in Image Deraining
Abstract: Deep deraining networks, while successful in laboratory benchmarks, consistently encounter substantial generalization issues when deployed in real-world applications. A prevailing perspective in deep learning encourages the use of highly complex training data, with the expectation that richer image content knowledge will facilitate overcoming the generalization problem. However, through comprehensive and systematic experimentation, we discovered that this strategy does not enhance the generalization capability of these networks. On the contrary, it exacerbates the tendency of networks to overfit to specific degradations. Our experiments reveal that better generalization in a deraining network can be achieved by simplifying the complexity of the training data. This is because the networks are slacking off during training, that is, learning the least complex elements in the image content and degradation to minimize training loss. When the complexity of the background image is less than that of the rain streaks, the network will prioritize the reconstruction of the background, thereby avoiding overfitting to the rain patterns and resulting in improved generalization performance. Our research not only offers a valuable perspective and methodology for better understanding the generalization problem in low-level vision tasks, but also displays promising practical potential. | [
23893,
21886
] | Validation |
43,287 | 3 | Title: Reconciling Predictive and Statistical Parity: A Causal Approach
Abstract: Since the rise of fair machine learning as a critical field of inquiry, many different notions on how to quantify and measure discrimination have been proposed in the literature. Some of these notions, however, were shown to be mutually incompatible. Such findings make it appear that numerous different kinds of fairness exist, thereby making a consensus on the appropriate measure of fairness harder to reach, hindering the applications of these tools in practice. In this paper, we investigate one of these key impossibility results that relates the notions of statistical and predictive parity. Specifically, we derive a new causal decomposition formula for the fairness measures associated with predictive parity, and obtain a novel insight into how this criterion is related to statistical parity through the legal doctrines of disparate treatment, disparate impact, and the notion of business necessity. Our results show that through a more careful causal analysis, the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions through the concept of business necessity. Finally, we demonstrate the importance of our findings on a real-world example. | [] | Train |
43,288 | 16 | Title: Deep Image Harmonization with Globally Guided Feature Transformation and Relation Distillation
Abstract: Given a composite image, image harmonization aims to adjust the foreground illumination to be consistent with background. Previous methods have explored transforming foreground features to achieve competitive performance. In this work, we show that using global information to guide foreground feature transformation could achieve significant improvement. Besides, we propose to transfer the foreground-background relation from real images to composite images, which can provide intermediate supervision for the transformed encoder features. Additionally, considering the drawbacks of existing harmonization datasets, we also contribute a ccHarmony dataset which simulates the natural illumination variation. Extensive experiments on iHarmony4 and our contributed dataset demonstrate the superiority of our method. Our ccHarmony dataset is released at https://github.com/bcmi/Image-Harmonization-Dataset-ccHarmony. | [] | Validation |
43,289 | 30 | Title: Inline Citation Classification using Peripheral Context and Time-evolving Augmentation
Abstract: Citation plays a pivotal role in determining the associations among research articles. It portrays essential information in indicative, supportive, or contrastive studies. The task of inline citation classification aids in extrapolating these relationships; however, existing studies are still immature and demand further scrutiny. Current datasets and methods used for inline citation classification only use citation-marked sentences, constraining the model to turn a blind eye to domain knowledge and neighboring contextual sentences. In this paper, we propose a new dataset, named 3Cext, which, along with the cited sentences, provides discourse information using the vicinal sentences to analyze the contrasting and entailing relationships as well as domain information. We propose PeriCite, a Transformer-based deep neural network that fuses peripheral sentences and domain knowledge. Our model achieves the state-of-the-art on the 3Cext dataset by +0.09 F1 against the best baseline. We conduct extensive ablations to analyze the efficacy of the proposed dataset and model fusion methods. | [] | Test |
43,290 | 10 | Title: Python Tool for Visualizing Variability of Pareto Fronts over Multiple Runs
Abstract: Hyperparameter optimization is crucial to achieving high performance in deep learning. On top of the performance, other criteria such as inference time or memory requirement often need to be optimized due to some practical reasons. This motivates research on multi-objective optimization (MOO). However, Pareto fronts of MOO methods are often shown without considering the variability caused by random seeds and this makes the performance stability evaluation difficult. Although there is a concept named empirical attainment surface to enable the visualization with uncertainty over multiple runs, there is no major Python package for empirical attainment surface. We, therefore, develop a Python package for this purpose and describe the usage. The package is available at https://github.com/nabenabe0928/empirical-attainment-func. | [
22236
] | Test |
43,291 | 23 | Title: Code Reviewer Recommendation for Architecture Violations: An Exploratory Study
Abstract: Code review is a common practice in software development and is often conducted before code changes are merged into the code repository. A number of approaches for automatically recommending appropriate reviewers have been proposed to match such code changes to pertinent reviewers. However, such approaches are generic, i.e., they do not focus on specific types of issues during code reviews. In this paper, we propose an approach that focuses on architecture violations, one of the most critical types of issues identified during code review. Specifically, we aim at automating the recommendation of code reviewers, who are potentially qualified to review architecture violations, based on reviews of code changes. To this end, we selected three common similarity detection methods to measure the file path similarity of code commits and the semantic similarity of review comments. We conducted a series of experiments on finding the appropriate reviewers through evaluating and comparing these similarity detection methods in separate and combined ways with the baseline reviewer recommendation approach, RevFinder. The results show that the common similarity detection methods can produce acceptable performance scores and achieve better performance than RevFinder. The sampling techniques used in recommending code reviewers can impact the performance of reviewer recommendation approaches. We also discuss the potential implications of our findings for both researchers and practitioners. | [] | Train |
43,292 | 4 | Title: Fides: A Generative Framework for Result Validation of Outsourced Machine Learning Workloads via TEE
Abstract: The growing popularity of Machine Learning (ML) has led to its deployment in various sensitive domains, which has resulted in significant research focused on ML security and privacy. However, in some applications, such as autonomous driving, integrity verification of the outsourced ML workload is more critical, a facet that has not received much attention. Existing solutions, such as multi-party computation and proof-based systems, impose significant computation overhead, which makes them unfit for real-time applications. We propose Fides, a novel framework for real-time validation of outsourced ML workloads. Fides features a novel and efficient distillation technique, Greedy Distillation Transfer Learning, that dynamically distills and fine-tunes a space and compute-efficient verification model for verifying the corresponding service model while running inside a trusted execution environment. Fides features a client-side attack detection model that uses statistical analysis and divergence measurements to identify, with a high likelihood, if the service model is under attack. Fides also offers a re-classification functionality that predicts the original class whenever an attack is identified. We devised a generative adversarial network framework for training the attack detection and re-classification models. The extensive evaluation shows that Fides achieves an accuracy of up to 98% for attack detection and 94% for re-classification. | [] | Validation |
43,293 | 24 | Title: Smoothly Giving up: Robustness for Simple Models
Abstract: There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning). Examples of algorithms to train such models include logistic regression and boosting. However, one challenge facing these algorithms is that they provably suffer from label noise; this has been attributed to the joint interaction between oft-used convex loss functions and simpler hypothesis classes, resulting in too much emphasis being placed on outliers. In this work, we use the margin-based $\alpha$-loss, which continuously tunes between canonical convex and quasi-convex losses, to robustly train simple models. We show that the $\alpha$ hyperparameter smoothly introduces non-convexity and offers the benefit of "giving up" on noisy training examples. We also provide results on the Long-Servedio dataset for boosting and a COVID-19 survey dataset for logistic regression, highlighting the efficacy of our approach across multiple relevant domains. | [] | Validation |
43,294 | 27 | Title: In Situ Soil Property Estimation for Autonomous Earthmoving Using Physics-Infused Neural Networks
Abstract: A novel, learning-based method for in situ estimation of soil properties using a physics-infused neural network (PINN) is presented. The network is trained to produce estimates of soil cohesion, angle of internal friction, soil-tool friction, soil failure angle, and residual depth of cut which are then passed through an earthmoving model based on the fundamental equation of earthmoving (FEE) to produce an estimated force. The network ingests a short history of kinematic observations along with past control commands and predicts interaction forces accurately with average error of less than 2kN, 13% of the measured force. To validate the approach, an earthmoving simulation of a bladed vehicle is developed using Vortex Studio, enabling comparison of the estimated parameters to pseudo-ground-truth values which is challenging in real-world experiments. The proposed approach is shown to enable accurate estimation of interaction forces and produces meaningful parameter estimates even when the model and the environmental physics deviate substantially. | [] | Validation |
43,295 | 5 | Title: Early Detection for Multiversion Concurrency Control Conflicts in Hyperledger Fabric
Abstract: Hyperledger Fabric is a popular permissioned blockchain system that features a highly modular and extensible system for deploying permissioned blockchains which are expected to have a major effect on a wide range of sectors. Unlike traditional blockchain systems such as Bitcoin and Ethereum, Hyperledger Fabric uses the EOV model for transaction processing: the submitted transactions are executed by the endorsing peer, ordered and batched by the ordering services, and validated by the validating peers. Due to this EOV workflow, a well-documented issue that arises is the multi-version concurrency control conflict. This happens when two transactions try to write and read the same key in the ledger at the same time. Existing solutions to address this problem include eliminating blocks in favor of streaming transactions, repairing conflicts during the ordering phase, and automatically merging the conflicting transactions using CRDT (Conflict-free Replicated Data Type) techniques. In this paper, we propose a novel solution called Early Detection for MVCC Conflicts. Our solution detects the conflicting transactions at an early stage of the transaction execution instead of processing them until the validation phase only to be aborted. The advantage of our solution is that it detects conflicts as soon as possible to minimize the overhead of conflicting transactions on the network, resulting in a reduction of the end-to-end transaction latency and an increase in the system's effective throughput. We have successfully implemented our solution in Hyperledger Fabric. We propose three different implementations which realize early detection. Our results show that our solutions all perform better than the baseline Fabric, with our best solution, SyncMap, improving the goodput by up to 23% and reducing the latency by up to 80%. | [] | Test |
43,296 | 16 | Title: Learning Neural Volumetric Representations of Dynamic Humans in Minutes
Abstract: This paper addresses the challenge of efficiently reconstructing volumetric videos of dynamic humans from sparse multi-view videos. Some recent works represent a dynamic human as a canonical neural radiance field (NeRF) and a motion field, which are learned from input videos through differentiable rendering. But the per-scene optimization generally requires hours. Other generalizable NeRF models leverage learned prior from datasets to reduce the optimization time by only finetuning on new scenes at the cost of visual fidelity. In this paper, we propose a novel method for learning neural volumetric representations of dynamic humans in minutes with competitive visual quality. Specifically, we define a novel part-based voxelized human representation to better distribute the representational power of the network to different human parts. Furthermore, we propose a novel 2D motion parameterization scheme to increase the convergence rate of deformation field learning. Experiments demonstrate that our model can be learned 100 times faster than previous per-scene optimization methods while being competitive in the rendering quality. Training our model on a 512 × 512 video with 100 frames typically takes about 5 minutes on a single RTX 3090 GPU. The code is available on our project page: https://zju3dv.github.io/instant_nvr. | [
43673,
30835,
44501
] | Validation |
43,297 | 16 | Title: DETR Doesn't Need Multi-Scale or Locality Design
Abstract: This paper presents an improved DETR detector that maintains a "plain" nature: using a single-scale feature map and global cross-attention calculations without specific locality constraints, in contrast to previous leading DETR-based detectors that reintroduce architectural inductive biases of multi-scale and locality into the decoder. We show that two simple technologies are surprisingly effective within a plain design to compensate for the lack of multi-scale feature maps and locality constraints. The first is a box-to-pixel relative position bias (BoxRPB) term added to the cross-attention formulation, which well guides each query to attend to the corresponding object region while also providing encoding flexibility. The second is masked image modeling (MIM)-based backbone pre-training which helps learn representation with fine-grained localization ability and proves crucial for remedying dependencies on the multi-scale feature maps. By incorporating these technologies and recent advancements in training and problem formation, the improved "plain" DETR showed exceptional improvements over the original DETR detector. By leveraging the Object365 dataset for pre-training, it achieved 63.9 mAP accuracy using a Swin-L backbone, which is highly competitive with state-of-the-art detectors which all heavily rely on multi-scale feature maps and region-based feature extraction. Code is available at https://github.com/impiga/Plain-DETR. | [
37844
] | Train |
43,298 | 30 | Title: Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning
Abstract: Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic reasoning to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement stage, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on four logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, and LogicalDeduction. Our results show significant improvement compared to LLMs alone, with an average performance boost of 62.6% over standard prompting and 23.5% over chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM. | [
19936,
24099,
18564,
27621,
33220,
4071,
37411,
33455,
22578,
10293,
10165,
5078,
27669,
45909,
17789
] | Test |
43,299 | 6 | Title: Image-to-Text Translation for Interactive Image Recognition: A Comparative User Study with Non-Expert Users
Abstract: Interactive machine learning (IML) allows users to build their custom machine learning models without expert knowledge. While most existing IML systems are designed with classification algorithms, they sometimes oversimplify the capabilities of machine learning algorithms and restrict the user's task definition. On the other hand, as recent large-scale language models have shown, natural language representation has the potential to enable more flexible and generic task descriptions. Models that take images as input and output text have the potential to represent a variety of tasks by providing appropriate text labels for training. However, the effect of introducing text labels to IML system design has never been investigated. In this work, we aim to investigate the difference between image-to-text translation and image classification for IML systems. Using our prototype systems, we conducted a comparative user study with non-expert users, where participants solved various tasks. Our results demonstrate the underlying difficulty for users in properly defining image recognition tasks while highlighting the potential and challenges of interactive image-to-text translation systems. | [] | Train |
43,300 | 16 | Title: Fusion-S2iGan: An Efficient and Effective Single-Stage Framework for Speech-to-Image Generation
Abstract: The goal of a speech-to-image transform is to produce a photo-realistic picture directly from a speech signal. Recently, various studies have focused on this task and have achieved promising performance. However, current speech-to-image approaches are based on a stacked modular framework that suffers from three vital issues: 1) Training separate networks is time-consuming as well as inefficient and the convergence of the final generative model strongly depends on the previous generators; 2) The quality of precursor images is ignored by this architecture; 3) Multiple discriminator networks are required to be trained. To this end, we propose an efficient and effective single-stage framework called Fusion-S2iGan to yield perceptually plausible and semantically consistent image samples on the basis of given spoken descriptions. Fusion-S2iGan introduces a visual+speech fusion module (VSFM), constructed with a pixel-attention module (PAM), a speech-modulation module (SMM) and a weighted-fusion module (WFM), to inject the speech embedding from a speech encoder into the generator while improving the quality of synthesized pictures. Fusion-S2iGan spreads the bimodal information over all layers of the generator network to reinforce the visual feature maps at various hierarchical levels in the architecture. We conduct a series of experiments on four benchmark data sets, i.e., CUB birds, Oxford-102, Flickr8k and Places-subset. The experimental results demonstrate the superiority of the presented Fusion-S2iGan compared to the state-of-the-art models with a multi-stage architecture and a performance level that is close to traditional text-to-image approaches. | [] | Validation |
43,301 | 16 | Title: Improving Presentation Attack Detection for ID Cards on Remote Verification Systems
Abstract: In this paper, an updated two-stage, end-to-end Presentation Attack Detection method for remote biometric verification systems of ID cards, based on MobileNetV2, is presented. Several presentation attack species such as printed, display, composite (based on cropped and spliced areas), plastic (PVC), and synthetic ID card images using different capture sources are used. This proposal was developed using a database consisting of 190,000 real-case Chilean ID card images with the support of a third-party company. Also, a new framework called PyPAD, used to estimate multi-class metrics compliant with the ISO/IEC 30107-3 standard, was developed and will be made available for research purposes. Our method is trained on two convolutional neural networks separately, reaching BPCER\textsubscript{100} scores on ID card attacks of 1.69\% and 2.36\%, respectively. The two-stage method using both models together can reach a BPCER\textsubscript{100} score of 0.92\%. | [] | Train |
43,302 | 24 | Title: Similarity search in the blink of an eye with compressed indices
Abstract: Nowadays, data is represented by vectors. Retrieving those vectors, among millions and billions, that are similar to a given query is a ubiquitous problem, known as similarity search, of relevance for a wide range of applications. Graph-based indices are currently the best performing techniques for billion-scale similarity search. However, their random-access memory pattern presents challenges to realize their full potential. In this work, we present new techniques and systems for creating faster and smaller graph-based indices. To this end, we introduce a novel vector compression method, Locally-adaptive Vector Quantization (LVQ), that uses per-vector scaling and scalar quantization to improve search performance with fast similarity computations and a reduced effective bandwidth, while decreasing memory footprint and barely impacting accuracy. LVQ, when combined with a new high-performance computing system for graph-based similarity search, establishes the new state of the art in terms of performance and memory footprint. For billions of vectors, LVQ outcompetes the second-best alternatives: (1) in the low-memory regime, by up to 20.7x in throughput with up to a 3x memory footprint reduction, and (2) in the high-throughput regime by 5.8x with 1.4x less memory. | [] | Test |
43,303 | 16 | Title: Pre-Pruning and Gradient-Dropping Improve Differentially Private Image Classification
Abstract: Scalability is a significant challenge when it comes to applying differential privacy to training deep neural networks. The commonly used DP-SGD algorithm struggles to maintain a high level of privacy protection while achieving high accuracy on even moderately sized models. To tackle this challenge, we take advantage of the fact that neural networks are overparameterized, which allows us to improve neural network training with differential privacy. Specifically, we introduce a new training paradigm that uses \textit{pre-pruning} and \textit{gradient-dropping} to reduce the parameter space and improve scalability. The process starts with pre-pruning the parameters of the original network to obtain a smaller model that is then trained with DP-SGD. During training, less important gradients are dropped, and only selected gradients are updated. Our training paradigm introduces a tension between the rates of pre-pruning and gradient-dropping, privacy loss, and classification accuracy. Too much pre-pruning and gradient-dropping reduces the model's capacity and worsens accuracy, while training a smaller model requires less privacy budget for achieving good accuracy. We evaluate the interplay between these factors and demonstrate the effectiveness of our training paradigm for both training from scratch and fine-tuning pre-trained networks on several benchmark image classification datasets. The tools can also be readily incorporated into existing training paradigms. | [] | Test |
43,304 | 30 | Title: Data Augmentation for Low-Resource Keyphrase Generation
Abstract: Keyphrase generation is the task of summarizing the contents of any given article into a few salient phrases (or keyphrases). Existing works for the task mostly rely on large-scale annotated datasets, which are not easy to acquire. Very few works address the problem of keyphrase generation in low-resource settings, but they still rely on a lot of additional unlabeled data for pretraining and on automatic methods for pseudo-annotations. In this paper, we present data augmentation strategies specifically to address keyphrase generation in purely resource-constrained domains. We design techniques that use the full text of the articles to improve both present and absent keyphrase generation. We test our approach comprehensively on three datasets and show that the data augmentation strategies consistently improve the state-of-the-art performance. We release our source code at https://github.com/kgarg8/kpgen-lowres-data-aug. | [] | Train |
43,305 | 6 | Title: Quantifying Retrospective Human Responsibility in Intelligent Systems
Abstract: Intelligent systems have become a major part of our lives. Human responsibility for outcomes becomes unclear in the interaction with these systems, as parts of information acquisition, decision-making, and action implementation may be carried out jointly by humans and systems. Determining human causal responsibility with intelligent systems is particularly important in events that end with adverse outcomes. We developed three measures of retrospective human causal responsibility when using intelligent systems. The first measure concerns repetitive human interactions with a system. Using information theory, it quantifies the average human's unique contribution to the outcomes of past events. The second and third measures concern human causal responsibility in a single past interaction with an intelligent system. They quantify, respectively, the unique human contribution in forming the information used for decision-making and the reasonability of the actions that the human carried out. The results show that human retrospective responsibility depends on the combined effects of system design and its reliability, the human's role and authority, and probabilistic factors related to the system and the environment. The new responsibility measures can serve to investigate and analyze past events involving intelligent systems. They may aid the judgment of human responsibility and ethical and legal discussions, providing a novel quantitative perspective. | [] | Validation |
43,306 | 11 | Title: Agent-Based Modeling and its Tradeoffs: An Introduction & Examples
Abstract: Agent-based modeling is a computational dynamic modeling technique that may be less familiar to some readers. Agent-based modeling seeks to understand the behaviour of complex systems by situating agents in an environment and studying the emergent outcomes of agent-agent and agent-environment interactions. In comparison with compartmental models, agent-based models offer simpler, more scalable and flexible representation of heterogeneity, the ability to capture dynamic and static network and spatial context, and the ability to consider history of individuals within the model. In contrast, compartmental models offer faster development time with less programming required, lower computational requirements that do not scale with population, and the option for concise mathematical formulation with ordinary, delay or stochastic differential equations supporting derivation of properties of the system behaviour. In this chapter, basic characteristics of agent-based models are introduced, advantages and disadvantages of agent-based models, as compared with compartmental models, are discussed, and two example agent-based infectious disease models are reviewed. | [] | Test |
43,307 | 28 | Title: Directed Message Passing Based on Attention for Prediction of Molecular Properties
Abstract: Molecular representation learning (MRL) has long been crucial in the fields of drug discovery and materials science, and it has made significant progress due to the development of natural language processing (NLP) and graph neural networks (GNNs). NLP treats the molecules as one-dimensional sequential tokens while GNNs treat them as two-dimensional topology graphs. Based on different message passing algorithms, GNNs show varying performance in detecting chemical environments and predicting molecular properties. Herein, we propose Directed Graph Attention Networks (D-GATs): expressive GNNs with directed bonds. The key to the success of our strategy is to treat the molecular graph as a directed graph and update the bond states and atom states by a scaled dot-product attention mechanism. This allows the model to better capture the sub-structure of the molecular graph, i.e., functional groups. Compared to other GNNs or Message Passing Neural Networks (MPNNs), D-GATs outperform the state-of-the-art on 13 out of 15 important molecular property prediction benchmarks. | [] | Train |
43,308 | 24 | Title: Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms
Abstract: We study the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-world guarantee). A line of recent works shows that when configured and analyzed properly, the Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the adversarial setting, can in fact optimally adapt to the stochastic setting as well. Such results, however, critically rely on an assumption that there exists one unique optimal arm. Recently, Ito (2021) took the first step to remove such an undesirable uniqueness assumption for one particular FTRL algorithm with the $\frac{1}{2}$-Tsallis entropy regularizer. In this work, we significantly improve and generalize this result, showing that uniqueness is unnecessary for FTRL with a broad family of regularizers and a new learning rate schedule. For some regularizers, our regret bounds also improve upon prior results even when uniqueness holds. We further provide an application of our results to the decoupled exploration and exploitation problem, demonstrating that our techniques are broadly applicable. | [
30091,
2732
] | Validation |
43,309 | 27 | Title: ARMBench: An Object-centric Benchmark Dataset for Robotic Manipulation
Abstract: This paper introduces the Amazon Robotic Manipulation Benchmark (ARMBench), a large-scale, object-centric benchmark dataset for robotic manipulation in the context of a warehouse. Automation of operations in modern warehouses requires a robotic manipulator to deal with a wide variety of objects, unstructured storage, and dynamically changing inventory. Such settings pose challenges in perceiving the identity, physical characteristics, and state of objects during manipulation. Existing datasets for robotic manipulation consider a limited set of objects or utilize 3D models to generate synthetic scenes with limitations in capturing the variety of object properties, clutter, and interactions. We present a large-scale dataset collected in an Amazon warehouse using a robotic manipulator performing object singulation from containers with heterogeneous contents. ARMBench contains images, videos, and metadata that corresponds to 235K+ pick-and-place activities on 190K+ unique objects. The data is captured at different stages of manipulation, i.e., pre-pick, during transfer, and after placement. Benchmark tasks are proposed by virtue of high-quality annotations, and baseline performance evaluations are presented on three visual perception challenges, namely 1) object segmentation in clutter, 2) object identification, and 3) defect detection. ARMBench can be accessed at http://armbench.com | [
23968
] | Train |
43,310 | 27 | Title: Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers
Abstract: This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. The reason for comparing the SLAM-based approaches leveraging fiducial markers is because previous work has shown their superior performance over feature-only methods, with less computational burden compared to methods that use both feature and marker detection without compromising the localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera containing multiple fiducial markers in the environment. The performance metrics include absolute trajectory error and runtime for the optimization process per frame. In particular, for the last two modes (SLAM and localization with a prior map), we evaluate their performances by perturbing the quality of prior map to study the extent to which each mode is tolerant to such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains performance, while localization mode degrades in both aspects. | [] | Train |
43,311 | 28 | Title: Semantic Communication of Learnable Concepts
Abstract: We consider the problem of communicating a sequence of concepts, i.e., unknown and potentially stochastic maps, which can be observed only through examples, i.e., the mapping rules are unknown. The transmitter applies a learning algorithm to the available examples, and extracts knowledge from the data by optimizing a probability distribution over a set of models, i.e., known functions, which can better describe the observed data, and so potentially the underlying concepts. The transmitter then needs to communicate the learned models to a remote receiver through a rate-limited channel, to allow the receiver to decode the models that can describe the underlying sampled concepts as accurately as possible in their semantic space. After motivating our analysis, we propose the formal problem of communicating concepts, and provide its rate-distortion characterization, pointing out its connection with the concepts of empirical and strong coordination in a network. We also provide a bound for the distortion-rate function. | [] | Test |
43,312 | 23 | Title: A Comparative Study of Text Embedding Models for Semantic Text Similarity in Bug Reports
Abstract: Bug reports are an essential aspect of software development, and it is crucial to identify and resolve them quickly to ensure the consistent functioning of software systems. Retrieving similar bug reports from an existing database can help reduce the time and effort required to resolve bugs. In this paper, we compared the effectiveness of semantic textual similarity methods for retrieving similar bug reports based on a similarity score. We explored several embedding models such as TF-IDF (Baseline), FastText, Gensim, BERT, and ADA. We used the Software Defects Data containing bug reports for various software projects to evaluate the performance of these models. Our experimental results showed that BERT generally outperformed the rest of the models regarding recall, followed by ADA, Gensim, FastText, and TFIDF. Our study provides insights into the effectiveness of different embedding methods for retrieving similar bug reports and highlights the impact of selecting the appropriate one for this task. Our code is available on GitHub. | [
37293
] | Validation |
43,313 | 8 | Title: Task-Oriented Delay-Aware Multi-Tier Computing in Cell-Free Massive MIMO Systems
Abstract: Multi-tier computing can enhance task computation through multi-tier computing nodes. In this paper, we propose a cell-free massive multiple-input multiple-output (MIMO) aided computing system by deploying multi-tier computing nodes to improve the computation performance. At first, we investigate the computational latency and the total energy consumption for task computation, regarded as total cost. Then, we formulate a total cost minimization problem to design the bandwidth allocation and task allocation, while considering realistic heterogeneous delay requirements of the computational tasks. Due to the binary task allocation variable, the formulated optimization problem is non-convex. Therefore, we solve the bandwidth allocation and task allocation problem by decoupling the original optimization problem into bandwidth allocation and task allocation subproblems. As the bandwidth allocation problem is a convex optimization problem, we first determine the bandwidth allocation for a given task allocation strategy, followed by conceiving the traditional convex optimization strategy to obtain the bandwidth allocation solution. Based on the asymptotic property of the received signal-to-interference-plus-noise ratio (SINR) under the cell-free massive MIMO setting and the bandwidth allocation solution, we formulate a dual problem to solve the task allocation subproblem by relaxing the binary constraint with Lagrange partial relaxation for heterogeneous task delay requirements. At last, simulation results are provided to demonstrate that our proposed task offloading scheme performs better than the benchmark schemes, where the minimum-cost optimal offloading strategy for heterogeneous delay requirements of the computational tasks may be controlled by the asymptotic property of the received SINR in our proposed cell-free massive MIMO-aided multi-tier computing systems. | [] | Train |
43,314 | 2 | Title: Termination in Concurrency, Revisited
Abstract: Termination is a central property in sequential programming models: a term is terminating if all its reduction sequences are finite. Termination is also important in concurrency in general, and for message-passing programs in particular. A variety of type systems that enforce termination by typing have been developed. In this paper, we rigorously compare several type systems for $\pi$-calculus processes from the unifying perspective of termination. Adopting session types as reference framework, we consider two different type systems: one follows Deng and Sangiorgi's weight-based approach; the other is Caires and Pfenning's Curry-Howard correspondence between linear logic and session types. Our technical results precisely connect these very different type systems, and shed light on the classes of client/server interactions they admit as correct. | [] | Validation |
43,315 | 24 | Title: An Interpretable Constructive Algorithm for Incremental Random Weight Neural Networks and Its Application
Abstract: Incremental random weight neural networks (IRWNNs) have gained attention in view of their easy implementation and fast learning. However, a significant drawback of IRWNNs is that the relationship between the hidden parameters (nodes) and the residual error (model performance) is difficult to interpret. To address the above issue, this article proposes an interpretable constructive algorithm (ICA) with a geometric information constraint. First, based on the geometric relationship between the hidden parameters and the residual error, an interpretable geometric information constraint is proposed to randomly assign the hidden parameters. Meanwhile, a node pool strategy is employed to obtain hidden parameters that are more conducive to convergence from hidden parameters satisfying the proposed constraint. Furthermore, the universal approximation property of the ICA is proved. Finally, a lightweight version of ICA is presented for large-scale data modeling tasks. Experimental results on six benchmark datasets and a numerical simulation dataset demonstrate that the ICA outperforms other constructive algorithms in terms of modeling speed, model accuracy, and model network structure. Besides, two practical industrial application cases are used to validate the effectiveness of ICA in practical applications. | [] | Train |
43,316 | 24 | Title: An Option-Dependent Analysis of Regret Minimization Algorithms in Finite-Horizon Semi-Markov Decision Processes
Abstract: A large variety of real-world Reinforcement Learning (RL) tasks is characterized by a complex and heterogeneous structure that makes end-to-end (or flat) approaches hardly applicable or even infeasible. Hierarchical Reinforcement Learning (HRL) provides general solutions to address these problems thanks to a convenient multi-level decomposition of the tasks, making their solution accessible. Although often used in practice, few works provide theoretical guarantees to justify this outcome effectively. Thus, it is not yet clear when to prefer such approaches compared to standard flat ones. In this work, we provide an option-dependent upper bound to the regret suffered by regret minimization algorithms in finite-horizon problems. We illustrate that the performance improvement derives from the planning horizon reduction induced by the temporal abstraction enforced by the hierarchical structure. Then, focusing on a sub-setting of HRL approaches, the options framework, we highlight how the average duration of the available options affects the planning horizon and, consequently, the regret itself. Finally, we relax the assumption of having pre-trained options to show how in particular situations, learning hierarchically from scratch could be preferable to using a standard approach. | [] | Test |
43,317 | 38 | Title: Soft-Search: Two Datasets to Study the Identification and Production of Research Software
Abstract: Software is an important tool for scholarly work, but software produced for research is in many cases not easily identifiable or discoverable. A potential first step in linking research and software is software identification. In this paper we present two datasets to study the identification and production of research software. The first dataset contains almost 1000 human labeled annotations of software production from National Science Foundation (NSF) awarded research projects. We use this dataset to train models that predict software production. Our second dataset is created by applying the trained predictive models across the abstracts and project outcomes reports for all NSF funded projects between the years of 2010 and 2023. The result is an inferred dataset of software production for over 150,000 NSF awards. We release the Soft-Search dataset to aid in identifying and understanding research software production: https://github.com/si2-urssi/eager | [] | Validation |
43,318 | 16 | Title: DuEqNet: Dual-Equivariance Network in Outdoor 3D Object Detection for Autonomous Driving
Abstract: Outdoor 3D object detection has played an essential role in the environment perception of autonomous driving. In complicated traffic situations, precise object recognition provides indispensable information for prediction and planning in the dynamic system, improving self-driving safety and reliability. However, with the vehicle's veering, the constant rotation of the surrounding scenario poses a challenge for the perception systems. Yet most existing methods have not focused on alleviating the detection accuracy impairment brought by the vehicle's rotation, especially in outdoor 3D detection. In this paper, we propose DuEqNet, which first introduces the concept of equivariance into 3D object detection networks by leveraging a hierarchical embedded framework. The dual-equivariance of our model can extract the equivariant features at both local and global levels, respectively. For the local feature, we utilize the graph-based strategy to guarantee the equivariance of the feature in point cloud pillars. In terms of the global feature, the group equivariant convolution layers are adopted to aggregate the local feature to achieve the global equivariance. In the experiment part, we evaluate our approach with different baselines in 3D object detection tasks and obtain state-of-the-art performance. According to the results, our model presents higher accuracy on orientation and better prediction efficiency. Moreover, our dual-equivariance strategy exhibits satisfactory plug-and-play ability on various popular object detection frameworks to improve their performance. | [
1268
] | Validation |
43,319 | 24 | Title: On the Soundness of XAI in Prognostics and Health Management (PHM)
Abstract: The aim of predictive maintenance, within the field of prognostics and health management (PHM), is to identify and anticipate potential issues in the equipment before these become serious. The main challenge to be addressed is to assess the amount of time a piece of equipment will function effectively before it fails, which is known as remaining useful life (RUL). Deep learning (DL) models, such as Deep Convolutional Neural Networks (DCNN) and Long Short-Term Memory (LSTM) networks, have been widely adopted to address the task, with great success. However, it is well known that these kinds of black box models are opaque decision systems, and it may be hard to explain their outputs to stakeholders (experts in the industrial equipment). Due to the large number of parameters that determine the behavior of these complex models, understanding the reasoning behind the predictions is challenging. This paper presents a critical and comparative review of a number of explainable AI (XAI) methods applied to time series regression models for predictive maintenance. The aim is to explore XAI methods within time series regression, which have been less studied than those for time series classification. This study addresses three distinct RUL problems using three different datasets, each with its own unique context: gearbox, fast-charging batteries, and turbofan engine. Five XAI methods were reviewed and compared based on a set of nine metrics that quantify desirable properties for any XAI method, one of which is newly introduced in this study. The results show that Grad-CAM is the most robust method, and that the best layer is not the bottom one, as is commonly seen within the context of image processing. | [] | Train |
43,320 | 30 | Title: From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Abstract: Language models (LMs) are pretrained on diverse data sources—news, discussion forums, books, online encyclopedias. A significant portion of this data includes facts and opinions which, on one hand, celebrate democracy and diversity of ideas, and on the other hand are inherently socially biased. Our work develops new methods to (1) measure media biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pretraining data on the fairness of high-stakes social-oriented tasks. Our findings reveal that pretrained LMs do have political leanings which reinforce the polarization present in pretraining corpora, propagating social biases into hate speech predictions and media biases into misinformation detectors. We discuss the implications of our findings for NLP research and propose future directions to mitigate unfairness. | [
33220,
13700,
2608,
12500,
2549,
32567,
23421
] | Test |
43,321 | 23 | Title: A UML Profile for Bitcoin Blockchain
Abstract: Blockchain has received attention for its potential use in business. Bitcoin is powered by blockchain, and interest in it has surged in the past few years. It has many uses that need to be modeled. Modeling is used in many walks of life to share ideas, reduce complexity, achieve close alignment of one person's viewpoint with another's, and provide abstractions of a system at some level of precision and detail. Software modeling is used in Model Driven Engineering (MDE), and Domain Specific Languages (DSLs) ease model development and provide intuitive syntax for domain experts. The present study designed and evaluated a meta-model for the bitcoin application domain to facilitate application development and help in truly understanding bitcoin. The proposed meta-model, including stereotypes, tagged values, enumerations and a set of constraints defined by the Object Constraint Language (OCL), was defined as a Unified Modeling Language (UML) profile and was implemented in the Sparx Enterprise Architect (Sparx EA) modeling tool. A case study developed with our meta-model is also presented. | [] | Validation |
43,322 | 31 | Title: Federated Privacy-preserving Collaborative Filtering for On-Device Next App Prediction
Abstract: In this study, we propose a novel SeqMF model to solve the problem of predicting the next app launch during mobile device usage. Although this problem can be represented as a classical collaborative filtering problem, it requires proper modification since the data are sequential, the user feedback is distributed among devices, and the transmission of users' data for aggregating common patterns must be protected against leakage. Following these requirements, we modify the structure of the classical matrix factorization model and update the training procedure to sequential learning. Since the data about user experience are distributed among devices, a federated learning setup is used to train the proposed sequential matrix factorization model. Another ingredient of the proposed approach is a new privacy mechanism that protects the data sent from users to the remote server. To demonstrate the efficiency of the proposed model, we use publicly available mobile user behavior data. We compare our model with sequential rules and models based on the frequency of app launches. The comparison is conducted in static and dynamic environments. The static environment evaluates how our model processes sequential data compared to competitors; therefore, the standard train-validation-test evaluation procedure is used. The dynamic environment emulates the real-world scenario, where users generate new data by running apps on devices, and evaluates our model in this case. Our experiments show that the proposed model provides comparable quality to other methods in the static environment. More importantly, our method achieves a better privacy-utility trade-off than competitors in the dynamic environment, which provides a more accurate simulation of real-world usage. | [] | Train |
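As a point of reference for the record above, here is a minimal sketch of the classical matrix factorization update that the SeqMF abstract says it starts from; it is not SeqMF itself (the sequential, federated, and privacy components are not reproduced), and all names and numbers are illustrative assumptions.

```python
import numpy as np

def mf_sgd_step(P, Q, user, item, rating, lr=0.1, reg=0.02):
    """One SGD step of classical matrix factorization on a single observation."""
    p = P[user].copy()                      # keep the old user factors for the item update
    err = rating - p @ Q[item]              # prediction error for this (user, item) pair
    P[user] += lr * (err * Q[item] - reg * p)
    Q[item] += lr * (err * p - reg * Q[item])
    return err ** 2

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(5, 8))      # user factors
Q = rng.normal(scale=0.1, size=(10, 8))     # item (app) factors
for _ in range(500):
    mf_sgd_step(P, Q, user=2, item=7, rating=1.0)
print(P[2] @ Q[7])                          # approaches the observed feedback value 1.0
```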
43,323 | 38 | Title: A multi-level analysis of data quality for formal software citation
Abstract: Software is a central part of modern science, and knowledge of its use is crucial for the scientific community with respect to reproducibility and attribution of its developers. Several studies have investigated in-text mentions of software and their quality, while the quality of formal software citations has only been analyzed superficially. This study performs an in-depth evaluation of formal software citation based on a set of manually annotated software references. It examines which resources are cited for software usage, to what extent they allow proper identification of software and its specific version, how this information is made available by scientific publishers, and how well it is represented in large-scale bibliographic databases. The results show that software articles are the most cited resource for software, while direct software citations are better suited for identification of software versions. Moreover, we found current practices by both publishers and bibliographic databases to be unsuited to representing these direct software citations, hindering large-scale analyses such as assessing software impact. We argue that current practices for representing software citations -- the recommended way to cite software by current citation standards -- stand in the way of their adoption by the scientific community, and urge providers of bibliographic data to explicitly model scientific software. | [] | Train |
43,324 | 34 | Title: Minimum Consistent Subset for Trees Revisited
Abstract: In a vertex-colored graph $G = (V, E)$, a subset $S \subseteq V$ is said to be consistent if every vertex has a nearest neighbor in $S$ with the same color. The problem of computing a minimum cardinality consistent subset of a graph is known to be NP-hard. On the positive side, Dey et al. (FCT 2021) show that this problem is solvable in polynomial time when input graphs are restricted to bi-colored trees. In this paper, we give a polynomial-time algorithm for this problem on $k$-colored trees with fixed $k$. | [
22850
] | Test |
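To make the definition in the record above concrete, the following is a small illustrative sketch (assuming an unweighted graph given as an adjacency list; the toy example and names are not from the paper) that brute-force checks whether a candidate subset is consistent. It is only a checker, not the paper's polynomial-time algorithm for $k$-colored trees.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source to all reachable vertices."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_consistent(adj, color, subset):
    """Check the condition from the abstract: every vertex must have, among its
    nearest vertices in `subset`, at least one with its own color."""
    subset = set(subset)
    for v in adj:
        dist = bfs_distances(adj, v)
        reachable = [s for s in subset if s in dist]
        if not reachable:
            return False                      # no subset vertex reachable from v
        d_min = min(dist[s] for s in reachable)
        nearest = [s for s in reachable if dist[s] == d_min]
        if not any(color[s] == color[v] for s in nearest):
            return False
    return True

# Toy bi-colored path 0-1-2-3: keeping only the two endpoints is consistent here.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
color = {0: "red", 1: "red", 2: "blue", 3: "blue"}
print(is_consistent(adj, color, {0, 3}))      # True
```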
43,325 | 27 | Title: Design and Characterization of Viscoelastic McKibben Actuators with Tunable Force-Velocity Curves
Abstract: The McKibben pneumatic artificial muscle is a commonly studied soft robotic actuator, and its quasistatic force-length properties have been well characterized and modeled. However, its damping and force-velocity properties are less well studied. Understanding these properties will allow for more robust dynamic modeling of soft robotic systems. The force-velocity response of these actuators is of particular interest because these actuators are often used as hardware models of skeletal muscles for bioinspired robots, and this force-velocity relationship is fundamental to muscle physiology. In this work, we investigated the force-velocity response of McKibben actuators and the ability to tune this response through the use of viscoelastic polymer sheaths. These viscoelastic McKibben actuators (VMAs) were characterized using iso-velocity experiments inspired by skeletal muscle physiology tests. A simplified 1D model of the actuators was developed to connect the shape of the force-velocity curve to the material parameters of the actuator and sheaths. Using these viscoelastic materials, we were able to modulate the shape and magnitude of the actuators' force-velocity curves, and using the developed model, these changes were connected back to the material properties of the sheaths. | [] | Train |
43,326 | 30 | Title: On the Origins of Bias in NLP through the Lens of the Jim Code
Abstract: In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities studies, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so. | [
8648,
13821
] | Train |
43,327 | 30 | Title: A Survey on In-context Learning
Abstract: With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few examples. It has become a new trend to explore ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques, including training strategies, demonstration design strategies, as well as related analysis. Finally, we discuss the challenges of ICL and provide potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and improving ICL. | [
10624,
24832,
27905,
8195,
43267,
43906,
45577,
45821,
39439,
41104,
21778,
44947,
8340,
8213,
42393,
13082,
33179,
12445,
14368,
30243,
1572,
32419,
36392,
26153,
5423,
32689,
19250,
9012,
15413,
17462,
21815,
33461,
44727,
24635,
6719,
12485,
... | Train |
43,328 | 16 | Title: Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
Abstract: Inspired by the remarkable success of Latent Diffusion Models (LDMs) for image synthesis, we study LDMs for text-to-video generation, which is a formidable challenge due to the computational and memory constraints during both model training and inference. A single LDM is usually only capable of generating a very limited number of video frames. Some existing works focus on separate prediction models for generating more video frames; however, these suffer from additional training cost and frame-level jittering. In this paper, we propose a framework called "Reuse and Diffuse", dubbed $\textit{VidRD}$, to produce more frames following the frames already generated by an LDM. Conditioned on an initial video clip with a small number of frames, additional frames are iteratively generated by reusing the original latent features and following the previous diffusion process. Besides, for the autoencoder used for translation between pixel space and latent space, we inject temporal layers into its decoder and fine-tune these layers for higher temporal consistency. We also propose a set of strategies for composing video-text data that involve diverse content from multiple existing datasets, including video datasets for action recognition and image-text datasets. Extensive experiments show that our method achieves good results in both quantitative and qualitative evaluations. Our project page is available $\href{https://anonymous0x233.github.io/ReuseAndDiffuse/}{here}$. | [
10624,
20960,
42272,
38696,
12841,
23563,
11820,
7597,
41104,
9364,
17526,
25368
] | Train |
43,329 | 24 | Title: Deep Semi-Supervised Anomaly Detection for Finding Fraud in the Futures Market
Abstract: Modern financial electronic exchanges are an exciting and fast-paced marketplace where billions of dollars change hands every day. They are also rife with manipulation and fraud. Detecting such activity is a major undertaking, which has historically been a job reserved exclusively for humans. Recently, more research and resources have been focused on automating these processes via machine learning and artificial intelligence. Fraud detection is overwhelmingly associated with the greater field of anomaly detection, which is usually performed via unsupervised learning techniques because of the lack of labeled data needed for supervised learning. However, a small quantity of labeled data does often exist. This research article aims to evaluate the efficacy of a deep semi-supervised anomaly detection technique, called Deep SAD, for detecting fraud in high-frequency financial data. We use exclusive proprietary limit order book data from the TMX exchange in Montréal, with a small set of true labeled instances of fraud, to evaluate Deep SAD against its unsupervised predecessor. We show that incorporating a small amount of labeled data into an unsupervised anomaly detection framework can greatly improve its accuracy. | [] | Test |
43,330 | 9 | Title: On the complexity of the approximate hypergraph homomorphism problem
Abstract: Understanding the computational complexity of fragments of the Constraint Satisfaction Problem (CSP) has been instrumental in the formulation of Feder-Vardi's Dichotomy Conjecture and its positive resolution by Bulatov and Zhuk. An approximation version of the CSP - known as the promise CSP - has recently gained prominence as an exciting generalisation of the CSP that captures many fundamental computational problems. In this work, we establish a computational complexity dichotomy for a natural fragment of the promise CSP consisting of homomorphism problems involving a class of 3-uniform hypergraphs. | [] | Validation |
43,331 | 16 | Title: BLiSS: Bootstrapped Linear Shape Space
Abstract: Morphable models are fundamental to numerous human-centered processes as they offer a simple yet expressive shape space. Creating such morphable models, however, is both tedious and expensive. The main challenge is establishing dense correspondences across raw scans that capture sufficient shape variation. This is often addressed using a mix of significant manual intervention and non-rigid registration. We observe that creating a shape space and solving for dense correspondence are tightly coupled -- while dense correspondence is needed to build shape spaces, an expressive shape space provides a reduced dimensional space to regularize the search. We introduce BLiSS, a method to solve both progressively. Starting from a small set of manually registered scans to bootstrap the process, we enrich the shape space and then use that to get new unregistered scans into correspondence automatically. The critical component of BLiSS is a non-linear deformation model that captures details missed by the low-dimensional shape space, thus allowing progressive enrichment of the space. | [] | Train |
43,332 | 5 | Title: Flash: An Asynchronous Payment System with Good-Case Linear Communication Complexity
Abstract: While the original purpose of blockchains was to realize a payment system, it has been shown that, in fact, such systems do not require consensus and can be implemented deterministically in asynchronous networks. State-of-the-art payment systems employ Reliable Broadcast to disseminate payments and prevent double spending, which entails $O(n^2)$ communication complexity per payment even if Byzantine behavior is scarce or non-existent. Here we present Flash, the first payment system to achieve $O(n)$ communication complexity per payment in the good case and $O(n^2)$ complexity in the worst case, matching the lower bound. This is made possible by sidestepping Reliable Broadcast and instead using the blocklace -- a DAG-like partially-ordered generalization of the blockchain -- for the tasks of recording transaction dependencies, block dissemination, and equivocation exclusion, which in turn prevents double spending. Flash has two variants: one for high congestion, when multiple blocks that contain multiple payments are issued concurrently, and one for low congestion, when payments are infrequent. | [
27445
] | Train |
43,333 | 24 | Title: Fast and Accurate Reduced-Order Modeling of a MOOSE-based Additive Manufacturing Model with Operator Learning
Abstract: One predominant challenge in additive manufacturing (AM) is to achieve specific material properties by manipulating manufacturing process parameters during the runtime. Such manipulation tends to increase the computational load imposed on existing simulation tools employed in AM. The goal of the present work is to construct a fast and accurate reduced-order model (ROM) for an AM model developed within the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework, ultimately reducing the time/cost of AM control and optimization processes. Our adoption of the operator learning (OL) approach enabled us to learn a family of differential equations produced by altering process variables in the laser's Gaussian point heat source. More specifically, we used the Fourier neural operator (FNO) and deep operator network (DeepONet) to develop ROMs for time-dependent responses. Furthermore, we benchmarked the performance of these OL methods against a conventional deep neural network (DNN)-based ROM. Ultimately, we found that OL methods offer comparable performance and, in terms of accuracy and generalizability, even outperform DNN at predicting scalar model responses. The DNN-based ROM afforded the fastest training time. Furthermore, all the ROMs were faster than the original MOOSE model yet still provided accurate predictions. FNO had a smaller mean prediction error than DeepONet, with a larger variance for time-dependent responses. Unlike DNN, both FNO and DeepONet were able to simulate time series data without the need for dimensionality reduction techniques. The present work can help facilitate the AM optimization process by enabling faster execution of simulation tools while still preserving evaluation accuracy. | [] | Test |
43,334 | 4 | Title: Area Efficient Modular Reduction in Hardware for Arbitrary Static Moduli
Abstract: Modular reduction is a crucial operation in many post-quantum cryptographic schemes, including the Kyber key exchange method or Dilithium signature scheme. However, it can be computationally expensive and pose a performance bottleneck in hardware implementations. To address this issue, we propose a novel approach for computing modular reduction efficiently in hardware for arbitrary static moduli. Unlike other commonly used methods such as Barrett or Montgomery reduction, the method does not require any multiplications. It is not dependent on properties of any particular choice of modulus for good performance and low area consumption. Its major strength lies in its low area consumption, which was reduced by 60% for optimized and up to 90% for generic Barrett implementations for Kyber and Dilithium. Additionally, it is well suited for parallelization and pipelining and scales linearly in hardware resource consumption with increasing operation width. All operations can be performed in the bit-width of the modulus, rather than the size of the number being reduced. This shortens carry chains and allows for faster clocking. Moreover, our method can be executed in constant time, which is essential for cryptography applications where timing attacks can be used to obtain information about the secret key. | [] | Validation |
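For context on the record above: the abstract contrasts the proposed multiplication-free method with Barrett reduction, whose details it does not give. Below is a minimal sketch of the standard Barrett baseline for a static modulus, assuming inputs below the square of the modulus; the Kyber-style modulus 3329 is used purely as an illustration.

```python
def barrett_precompute(n: int):
    """Precompute the shift and constant for Barrett reduction modulo a static n."""
    k = 2 * n.bit_length()
    return k, (1 << k) // n

def barrett_reduce(x: int, n: int, k: int, m: int) -> int:
    """Reduce x modulo n (for 0 <= x < n**2) using a multiplication by the
    precomputed constant m = floor(2**k / n) instead of a division."""
    q = (x * m) >> k          # approximate quotient, off by at most a couple of units
    r = x - q * n
    while r >= n:             # a few correction subtractions at most
        r -= n
    return r

# Example with a Kyber-style modulus (3329); illustrative only.
n = 3329
k, m = barrett_precompute(n)
x = 1234567
assert barrett_reduce(x, n, k, m) == x % n
```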
43,335 | 24 | Title: Learning Embeddings for Sequential Tasks Using Population of Agents
Abstract: We present an information-theoretic framework to learn fixed-dimensional embeddings for tasks in reinforcement learning. We leverage the idea that two tasks are similar to each other if observing an agent's performance on one task reduces our uncertainty about its performance on the other. This intuition is captured by our information-theoretic criterion which uses a diverse population of agents to measure similarity between tasks in sequential decision-making settings. In addition to qualitative assessment, we empirically demonstrate the effectiveness of our techniques based on task embeddings by quantitative comparisons against strong baselines on two application scenarios: predicting an agent's performance on a test task by observing its performance on a small quiz of tasks, and selecting tasks with desired characteristics from a given set of options. | [
16740
] | Train |
43,336 | 28 | Title: Asymptotic nonnegative rank of matrices
Abstract: The nonnegative rank of nonnegative matrices is an important quantity that appears in many fields, such as combinatorial optimization, communication complexity, and information theory. In this paper, we study the asymptotic growth of the nonnegative rank of a fixed nonnegative matrix under the Kronecker product. This quantity is called the asymptotic nonnegative rank, which has already been studied in information theory. By applying the theory of asymptotic spectra of V. Strassen (J. Reine Angew. Math. 1988), we introduce the asymptotic spectrum of nonnegative matrices and give a dual characterization of the asymptotic nonnegative rank. As the opposite of the nonnegative rank, we introduce the notion of the subrank of a nonnegative matrix and show that it is exactly equal to the size of the maximum induced matching of the bipartite graph defined on the support of the matrix (and is therefore independent of the values of the entries). Finally, we show that two matrix parameters, namely rank and fractional cover number, belong to the asymptotic spectrum of nonnegative matrices. | [] | Train |
43,337 | 24 | Title: A Data Driven Sequential Learning Framework to Accelerate and Optimize Multi-Objective Manufacturing Decisions
Abstract: Manufacturing advanced materials and products with a specific property or combination of properties is often warranted. To achieve that, it is crucial to find the optimum recipe or processing conditions that can generate the ideal combination of these properties. Most of the time, a sufficient number of experiments are needed to generate a Pareto front. However, manufacturing experiments are usually costly, and even conducting a single experiment can be a time-consuming process. It is therefore critical to determine the optimal location for data collection to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This paper presents a novel data-driven Bayesian optimization framework that utilizes sequential learning to efficiently optimize complex systems with multiple conflicting objectives. Additionally, this paper proposes a novel metric for evaluating multi-objective data-driven optimization approaches. This metric considers both the quality of the Pareto front and the amount of data used to generate it. The proposed framework is particularly beneficial in practical applications where acquiring data can be expensive and resource intensive. To demonstrate the effectiveness of the proposed algorithm and metric, the algorithm is evaluated on a manufacturing dataset. The results indicate that the proposed algorithm can achieve the actual Pareto front while processing significantly less data. This implies that the proposed data-driven framework can lead to similar manufacturing decisions with reduced costs and time. | [
40115
] | Train |
43,338 | 28 | Title: Scalable Cell-Free Massive MIMO Unsourced Random Access System
Abstract: Cell-Free Massive MIMO systems aim to expand the coverage area of wireless networks by replacing a single high-performance Access Point (AP) with multiple small, distributed APs connected to a Central Processing Unit (CPU) through a fronthaul. Another novel wireless approach, known as the unsourced random access (URA) paradigm, enables a large number of devices to communicate concurrently on the uplink. This article considers a quasi-static Rayleigh fading channel paired to a scalable cell-free system, wherein a small number of receive antennas in the distributed APs serve devices equipped with a single antenna each. The goal of the study is to extend previous URA results to more realistic channels by examining the performance of a scalable cell-free system. To achieve this goal, we propose a coding scheme that adapts the URA paradigm to various cell-free scenarios. Empirical evidence suggests that using a cell-free architecture can improve the performance of a URA system, especially when taking into account large-scale attenuation and fading. | [] | Train |
43,339 | 38 | Title: Slow, slow, quick, quick, slow: five altmetric sources observed over a decade show evolving trends, by research age, attention source maturity and open access status
Abstract: nan | [] | Validation |
43,340 | 24 | Title: Learning non-Markovian Decision-Making from State-only Sequences
Abstract: Conventional imitation learning assumes access to the actions of demonstrators, but these motor signals are often non-observable in naturalistic settings. Additionally, sequential decision-making behaviors in these settings can deviate from the assumptions of a standard Markov Decision Process (MDP). To address these challenges, we explore deep generative modeling of state-only sequences with a non-Markov Decision Process (nMDP), where the policy is an energy-based prior in the latent space of the state transition generator. We develop maximum likelihood estimation to achieve model-based imitation, which involves short-run MCMC sampling from the prior and importance sampling for the posterior. The learned model enables \textit{decision-making as inference}: model-free policy execution is equivalent to prior sampling, and model-based planning is posterior sampling initialized from the policy. We demonstrate the efficacy of the proposed method in a prototypical path planning task with non-Markovian constraints and show that the learned model exhibits strong performance in challenging domains from the MuJoCo suite. | [] | Train |
43,341 | 28 | Title: MDS and MHDR cyclic codes over finite chain rings
Abstract: In this work, a unique set of generators for a cyclic code over a finite chain ring has been established. The minimal spanning set and rank of the code have also been determined. Further, sufficient as well as necessary conditions for a cyclic code to be an MDS code and for a cyclic code to be an MHDR code have been obtained. Some examples of optimal cyclic codes have also been presented. | [] | Train |
43,342 | 30 | Title: Datasets for Portuguese Legal Semantic Textual Similarity: Comparing weak supervision and an annotation process approaches
Abstract: The Brazilian judiciary has a large workload, resulting in a long time to finish legal proceedings. The Brazilian National Council of Justice has established, in Resolution 469/2022, formal guidance for document and process digitalization, opening up the possibility of using automatic techniques to help with everyday tasks in the legal field, particularly given the large number of texts produced in routine legal procedures. Notably, Artificial Intelligence (AI) techniques allow for processing and extracting useful information from textual data, potentially speeding up the process. However, datasets from the legal domain required by several AI techniques are scarce and difficult to obtain, as they need labels from experts. To address this challenge, this article contributes four datasets from the legal domain: two with documents and metadata but unlabeled, and another two labeled with a heuristic aimed at their use in textual semantic similarity tasks. Also, to evaluate the effectiveness of the proposed heuristic labeling process, this article presents a small ground truth dataset generated from domain expert annotations. The analysis of ground truth labels highlights that semantic analysis of domain text can be challenging even for domain experts. Also, the comparison between ground truth and heuristic labels shows that heuristic labels are useful. | [] | Validation |
43,343 | 24 | Title: Auto-Differentiation of Relational Computations for Very Large Scale Machine Learning
Abstract: The relational data model was designed to facilitate large-scale data management and analytics. We consider the problem of how to differentiate computations expressed relationally. We show experimentally that a relational engine running an auto-differentiated relational algorithm can easily scale to very large datasets, and is competitive with state-of-the-art, special-purpose systems for large-scale distributed machine learning. | [] | Train |
43,344 | 23 | Title: Application of Systems Engineering Process in Building ML-Enabled Systems
Abstract: Machine learning (ML) components are being added to more and more critical and impactful software systems, but the software development process of real-world production systems from prototyped ML models remains challenging, with additional complexity and interdisciplinary collaboration challenges. This poses difficulties in using traditional software lifecycle models such as the waterfall, spiral, or agile model when building ML-enabled systems. By interviewing practitioners from multiple companies, we investigated the application of the systems engineering process to ML-enabled systems. We developed a set of propositions and proposed the V4ML process model for building products with ML components. We found that the V4ML process model requires more effort on documentation, system decomposition, and V&V, but it addresses the interdisciplinary collaboration challenges and additional complexity introduced by ML components. | [
26118
] | Train |
43,345 | 20 | Title: Learning topological operations on meshes with application to block decomposition of polygons
Abstract: We present a learning based framework for mesh quality improvement on unstructured triangular and quadrilateral meshes. Our model learns to improve mesh quality according to a prescribed objective function purely via self-play reinforcement learning with no prior heuristics. The actions performed on the mesh are standard local and global element operations. The goal is to minimize the deviation of the node degrees from their ideal values, which in the case of interior vertices leads to a minimization of irregular nodes. | [
1117
] | Validation |
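A small sketch of the kind of objective described in the record above: the deviation of vertex degrees from an ideal value on a triangular mesh. It assumes the mesh is given as an array of triangle indices and uses the common ideal interior valence of 6; the reinforcement learning components and any special boundary-vertex handling are not reproduced.

```python
import numpy as np

def degree_deviation(triangles: np.ndarray, ideal: int = 6) -> float:
    """Sum of |degree - ideal| over mesh vertices, where degree is the number
    of distinct neighbours of a vertex in the triangulation."""
    neighbours = {}
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbours.setdefault(u, set()).add(v)
            neighbours.setdefault(v, set()).add(u)
    return float(sum(abs(len(nbrs) - ideal) for nbrs in neighbours.values()))

# Two triangles sharing an edge; a tiny illustrative mesh.
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(degree_deviation(tris))  # every vertex is far from the ideal degree here
```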
43,346 | 24 | Title: Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
Abstract: As data-driven intelligent systems advance, the need for reliable and transparent decision-making mechanisms has become increasingly important. Therefore, it is essential to integrate uncertainty quantification and model explainability approaches to foster trustworthy business and operational process analytics. This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches, such as Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots. In addition, this study examines appropriate visualization analytics approaches to facilitate such methodological integration. By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability. Finally, the study includes expert interviews to assess the suitability of the proposed approach and designed interface for a real-world predictive process monitoring problem in the manufacturing domain. | [] | Validation |
43,347 | 27 | Title: E3-UAV: An Edge-based Energy-Efficient Object Detection System for Unmanned Aerial Vehicles
Abstract: Motivated by the advances in deep learning techniques, the application of Unmanned Aerial Vehicle (UAV)-based object detection has proliferated across a range of fields, including vehicle counting, fire detection, and city monitoring. While most existing research studies only a subset of the challenges inherent to UAV-based object detection, there are few studies that balance various aspects to design a practical system for energy consumption reduction. In response, we present the E3-UAV, an edge-based energy-efficient object detection system for UAVs. The system is designed to dynamically support various UAV devices, edge devices, and detection algorithms, with the aim of minimizing energy consumption by deciding the most energy-efficient flight parameters (including flight altitude, flight speed, detection algorithm, and sampling rate) required to fulfill the detection requirements of the task. We first present an effective evaluation metric for actual tasks and construct a transparent energy consumption model based on hundreds of actual flight records to formalize the relationship between energy consumption and flight parameters. Then we present a lightweight energy-efficient priority decision algorithm based on a large quantity of actual flight data to assist the system in deciding flight parameters. Finally, we evaluate the performance of the system, and our experimental results demonstrate that it can significantly decrease energy consumption in real-world scenarios. Additionally, we provide four insights that can assist researchers and engineers in their efforts to further study UAV-based object detection. | [] | Train |
43,348 | 30 | Title: NormMark: A Weakly Supervised Markov Model for Socio-cultural Norm Discovery
Abstract: Norms, which are culturally accepted guidelines for behaviours, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context. Existing methods for norm recognition tend to focus only on surface-level features of dialogues and do not take into account the interactions within a conversation. To address this issue, we propose NormMark, a probabilistic generative Markov model to carry the latent features throughout a dialogue. These features are captured by discrete and continuous latent variables conditioned on the conversation history, and improve the model's ability in norm recognition. The model is trainable on weakly annotated data using the variational technique. On a dataset with limited norm annotations, we show that our approach achieves higher F1 score, outperforming current state-of-the-art methods, including GPT3. | [
4670
] | Train |
43,349 | 16 | Title: Expressive Speech-Driven Facial Animation with Controllable Emotions
Abstract: Generating facial animation with high realism is in high demand, but it remains a challenging task. Existing approaches to speech-driven facial animation can produce satisfactory mouth movement and lip synchronization, but show weakness in dramatic emotional expression and offer little flexibility in emotion control. This paper presents a novel deep learning-based approach for expressive facial animation generation from speech that can exhibit wide-spectrum facial expressions with controllable emotion type and intensity. We propose an emotion controller module to learn the relationship between the emotion variations (e.g., type and intensity) and the corresponding facial expression parameters. It enables emotion-controllable facial animation, where the target expression can be continuously adjusted as desired. The qualitative and quantitative evaluations show that the animation generated by our method is rich in facial emotional expressiveness while retaining accurate lip movement, outperforming other state-of-the-art methods. | [] | Test |
43,350 | 16 | Title: DESOBAv2: Towards Large-scale Real-world Dataset for Shadow Generation
Abstract: Image composition refers to inserting a foreground object into a background image to obtain a composite image. In this work, we focus on generating a plausible shadow for the inserted foreground object to make the composite image more realistic. To supplement the existing small-scale dataset DESOBA, we create a large-scale dataset called DESOBAv2 by using object-shadow detection and inpainting techniques. Specifically, we collect a large number of outdoor scene images with object-shadow pairs. Then, we use a pretrained inpainting model to inpaint the shadow regions, resulting in deshadowed images. Based on real images and deshadowed images, we can construct pairs of synthetic composite images and ground-truth target images. The dataset is available at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2. | [] | Test |
43,351 | 30 | Title: Adapting Transformer Language Models for Predictive Typing in Brain-Computer Interfaces
Abstract: Brain-computer interfaces (BCI) are an important mode of alternative and augmentative communication for many people. Unlike keyboards, many BCI systems do not display even the 26 letters of English at one time, let alone all the symbols in more complex systems. Using language models to make character-level predictions, therefore, can greatly speed up BCI typing (Ghosh and Kristensson, 2017). While most existing BCI systems employ character n-gram models or no LM at all, this paper adapts several wordpiece-level Transformer LMs to make character predictions and evaluates them on typing tasks. GPT-2 fares best on clean text, but different LMs react differently to noisy histories. We further analyze the effect of character positions in a word and context lengths. | [] | Train |
43,352 | 28 | Title: Personalized Resource Allocation in Wireless Networks: An AI-Enabled and Big Data-Driven Multi-Objective Optimization
Abstract: The design and optimization of wireless networks have mostly been based on strong mathematical and theoretical modeling. Nonetheless, as novel applications emerge in the era of 5G and beyond, unprecedented levels of complexity will be encountered in the design and optimization of the network. As a result, the use of Artificial Intelligence (AI) is envisioned for wireless network design and optimization due to the flexibility and adaptability it offers in solving extremely complex problems in real-time. One of the main future applications of AI is enabling user-level personalization for numerous use cases. AI will revolutionize the way we interact with computers in which computers will be able to sense commands and emotions from humans in a non-intrusive manner, making the entire process transparent to users. By leveraging this capability, and accelerated by the advances in computing technologies, wireless networks can be redesigned to enable the personalization of network services to the user level in real-time. While current wireless networks are being optimized to achieve a predefined set of quality requirements, the personalization technology advocated in this article is supported by an intelligent big data-driven layer designed to micro-manage the scarce network resources. This layer provides the intelligence required to decide the necessary service quality that achieves the target satisfaction level for each user. Due to its dynamic and flexible design, personalized networks are expected to achieve unprecedented improvements in optimizing two contradicting objectives in wireless networks: saving resources and improving user satisfaction levels. This article presents some foundational background on the proposed network personalization technology and its enablers. Then, an AI-enabled big data-driven surrogate-assisted multi-objective optimization formulation is proposed and tested to illustrate the feasibility and prominence of this technology. | [
41250
] | Train |
43,353 | 23 | Title: Are We Ready to Embrace Generative AI for Software Q&A?
Abstract: Stack Overflow, the world's largest software Q&A (SQA) website, is facing a significant traffic drop due to the emergence of generative AI techniques. ChatGPT was banned by Stack Overflow only 6 days after its release. The main reason provided by Stack Overflow is that the answers generated by ChatGPT are of low quality. To verify this, we conduct a comparative evaluation of human-written and ChatGPT-generated answers. Our methodology employs both automatic comparison and a manual study. Our results suggest that human-written and ChatGPT-generated answers are semantically similar; however, human-written answers outperform ChatGPT-generated ones consistently across multiple aspects, specifically by 10% on the overall score. We release the data, analysis scripts, and detailed results at https://anonymous.4open.science/r/GAI4SQA-FD5C. | [
18984
] | Train |
43,354 | 30 | Title: Joint Representations of Text and Knowledge Graphs for Retrieval and Evaluation
Abstract: A key feature of neural models is that they can produce semantic vector representations of objects (texts, images, speech, etc.), ensuring that similar objects are close to each other in the vector space. While much work has focused on learning representations for other modalities, there are no aligned cross-modal representations for text and knowledge base (KB) elements. One challenge for learning such representations is the lack of parallel data, which we overcome by using contrastive training on heuristics-based datasets and data augmentation, training embedding models on (KB graph, text) pairs. On WebNLG, a cleaner manually crafted dataset, we show that the models learn aligned representations suitable for retrieval. We then fine-tune on annotated data to create EREDAT (Ensembled Representations for Evaluation of DAta-to-Text), a similarity metric between English text and KB graphs. EREDAT outperforms or matches state-of-the-art metrics in terms of correlation with human judgments on WebNLG even though, unlike them, it does not require a reference text to compare against. | [] | Train |
43,355 | 4 | Title: Machine Learning and Port Scans: A Systematic Review
Abstract: Port scanning is the process of attempting to connect to various network ports on a computing endpoint to determine which ports are open and which services are running on them. It is a common method used by hackers to identify vulnerabilities in a network or system. By determining which ports are open, an attacker can identify which services and applications are running on a device and potentially exploit any known vulnerabilities in those services. Consequently, it is important to detect port scanning because it is often the first step in a cyber attack. By identifying port scanning attempts, cybersecurity professionals can take proactive measures to protect the systems and networks before an attacker has a chance to exploit any vulnerabilities. Against this background, researchers have worked for over a decade to develop robust methods to detect port scanning. While there have been various surveys, none have focused solely on machine learning-based detection schemes specific to port scans. Accordingly, we provide a systematic review of 15 papers published between February 2021 and January 2023. We extract critical information such as training dataset, algorithm used, technique, and model accuracy. We also collect unresolved challenges and ideas for future work. The outcomes are significant for researchers looking to build on the latest work and for practitioners interested in novel mechanisms to detect the early stages of a cyber attack. | [
15279
] | Train |
43,356 | 16 | Title: Exploring Effective Mask Sampling Modeling for Neural Image Compression
Abstract: Image compression aims to reduce the information redundancy in images. Most existing neural image compression methods rely on side information from hyperprior or context models to eliminate spatial redundancy, but rarely address the channel redundancy. Inspired by the mask sampling modeling in recent self-supervised learning methods for natural language processing and high-level vision, we propose a novel pretraining strategy for neural image compression. Specifically, Cube Mask Sampling Module (CMSM) is proposed to apply both spatial and channel mask sampling modeling to image compression in the pre-training stage. Moreover, to further reduce channel redundancy, we propose the Learnable Channel Mask Module (LCMM) and the Learnable Channel Completion Module (LCCM). Our plug-and-play CMSM, LCMM, LCCM modules can apply to both CNN-based and Transformer-based architectures, significantly reduce the computational cost, and improve the quality of images. Experiments on the public Kodak and Tecnick datasets demonstrate that our method achieves competitive performance with lower computational complexity compared to state-of-the-art image compression methods. | [] | Train |
43,357 | 10 | Title: A Text-based Approach For Link Prediction on Wikipedia Articles
Abstract: This paper presents our work on the DSAA 2023 Challenge on Link Prediction for Wikipedia Articles. We use traditional machine learning models with part-of-speech (POS) tag features extracted from text to train a classification model that predicts whether two nodes are linked. We then test these features on various machine learning models. We obtained an F1 score of 0.99999 and placed 7th in the competition. Our source code is publicly available at this link: https://github.com/Tam1032/DSAA2023-Challenge-Link-prediction-DS-UIT_SAT | [] | Train |
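A toy sketch of the general approach in the record above (POS-tag count features fed to a traditional classifier); it is not the authors' released code, and the article names, tag counts, and feature layout are all hypothetical placeholders that a real part-of-speech tagger and the challenge data would supply in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-extracted POS-tag counts for three Wikipedia articles; in
# practice these would come from running a POS tagger over each article's text.
pos_counts = {
    "Paris":  {"NNP": 12, "NN": 8, "VBZ": 3, "JJ": 2},
    "France": {"NNP": 15, "NN": 6, "VBZ": 4, "JJ": 3},
    "Cat":    {"NN": 10, "VBZ": 5, "JJ": 6, "RB": 2},
}
TAGS = ["NNP", "NN", "VBZ", "JJ", "RB"]

def pair_features(a: str, b: str) -> np.ndarray:
    """Concatenate the POS-tag count vectors of the two candidate articles."""
    va = [pos_counts[a].get(t, 0) for t in TAGS]
    vb = [pos_counts[b].get(t, 0) for t in TAGS]
    return np.array(va + vb, dtype=float)

# Toy labelled pairs: 1 = the two articles are linked, 0 = they are not.
pairs = [("Paris", "France", 1), ("Paris", "Cat", 0), ("France", "Cat", 0)]
X = np.stack([pair_features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```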
43,358 | 9 | Title: On the complexity of isomorphism problems for tensors, groups, and polynomials IV: linear-length reductions and their applications
Abstract: Many isomorphism problems for tensors, groups, algebras, and polynomials were recently shown to be equivalent to one another under polynomial-time reductions, prompting the introduction of the complexity class TI (Grochow&Qiao, ITCS '21; SIAM J. Comp., '23). Using the tensorial viewpoint, Grochow&Qiao (CCC '21) then gave moderately exponential-time search- and counting-to-decision reductions for a class of $p$-groups. A significant issue was that the reductions usually incurred a quadratic increase in the length of the tensors involved. When the tensors represent $p$-groups, this corresponds to an increase in the order of the group of the form $|G|^{\Theta(\log |G|)}$, negating any asymptotic gains in the Cayley table model. In this paper, we present a new kind of tensor gadget that allows us to replace those quadratic-length reductions with linear-length ones, yielding the following consequences: 1. Combined with the recent breakthrough $|G|^{O((\log |G|)^{5/6})}$-time isomorphism-test for $p$-groups of class 2 and exponent $p$ (Sun, STOC '23), our reductions extend this runtime to $p$-groups of class $c$ and exponent $p$ where $c<p$. 2. Our reductions show that Sun's algorithm solves several TI-complete problems over $F_p$, such as isomorphism problems for cubic forms, algebras, and tensors, in time $p^{O(n^{1.8} \log p)}$. 3. Polynomial-time search- and counting-to-decision reduction for testing isomorphism of $p$-groups of class $2$ and exponent $p$ in the Cayley table model. This answers questions of Arvind and T\'oran (Bull. EATCS, 2005) for this group class, thought to be one of the hardest cases of Group Isomorphism. 4. If Graph Isomorphism is in P, then testing equivalence of cubic forms and testing isomorphism of algebra over a finite field $F_q$ can both be solved in time $q^{O(n)}$, improving from the brute-force upper bound $q^{O(n^2)}$. | [
28943
] | Train |
43,359 | 5 | Title: Memory Disaggregation: Advances and Open Challenges
Abstract: Compute and memory are tightly coupled within each server in traditional datacenters. Large-scale datacenter operators have identified this coupling as a root cause behind fleetwide resource underutilization and increasing Total Cost of Ownership (TCO). With the advent of ultra-fast networks and cache-coherent interfaces, memory disaggregation has emerged as a potential solution, whereby applications can leverage available memory even outside server boundaries. This paper summarizes the growing research landscape of memory disaggregation from a software perspective and introduces the challenges toward making it practical under current and future hardware trends. We also reflect on our seven-year journey in the SymbioticLab to build a comprehensive disaggregated memory system over ultra-fast networks. We conclude with some open challenges toward building next-generation memory disaggregation systems leveraging emerging cache-coherent interconnects. | [] | Test |
43,360 | 34 | Title: Approximately Optimal Core Shapes for Tensor Decompositions
Abstract: This work studies the combinatorial optimization problem of finding an optimal core tensor shape, also called multilinear rank, for a size-constrained Tucker decomposition. We give an algorithm with provable approximation guarantees for its reconstruction error via connections to higher-order singular values. Specifically, we introduce a novel Tucker packing problem, which we prove is NP-hard, and give a polynomial-time approximation scheme based on a reduction to the 2-dimensional knapsack problem with a matroid constraint. We also generalize our techniques to tree tensor network decompositions. We implement our algorithm using an integer programming solver, and show that its solution quality is competitive with (and sometimes better than) the greedy algorithm that uses the true Tucker decomposition loss at each step, while also running up to 1000x faster. | [
24977,
38701
] | Validation |
43,361 | 24 | Title: A variational autoencoder-based nonnegative matrix factorisation model for deep dictionary learning
Abstract: Construction of dictionaries using nonnegative matrix factorisation (NMF) has extensive applications in signal processing and machine learning. With the advances in deep learning, training compact and robust dictionaries using deep neural networks, i.e., dictionaries of deep features, has been proposed. In this study, we propose a probabilistic generative model which employs a variational autoencoder (VAE) to perform nonnegative dictionary learning. In contrast to the existing VAE models, we cast the model under a statistical framework with latent variables obeying a Gamma distribution and design a new loss function to guarantee the nonnegative dictionaries. We adopt an acceptance-rejection sampling reparameterization trick to update the latent variables iteratively. We apply the dictionaries learned from VAE-NMF to two signal processing tasks, i.e., enhancement of speech and extraction of muscle synergies. Experimental results demonstrate that VAE-NMF performs better in learning the latent nonnegative dictionaries in comparison with state-of-the-art methods. | [] | Test |
43,362 | 8 | Title: Security of RPL Based 6LoWPAN Networks in the Internet of Things: A Review
Abstract: Internet of Things (IoT) is one of the fastest emerging networking paradigms, enabling a large number of applications for the benefit of mankind. Advancements in embedded system technology and compressed IPv6 have enabled support for the IP stack in resource-constrained heterogeneous smart devices. However, the global connectivity and resource-constrained characteristics of smart devices have exposed them to different insider and outsider attacks, which put users' security and privacy at risk. The various risks associated with IoT slow down its growth and become an obstacle to the worldwide adoption of its applications. In RFC 6550, the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) is specified by IETF's ROLL working group for facilitating efficient routing in 6LoWPAN networks, while considering their limitations. Due to the resource-constrained nature of nodes in the IoT, RPL is vulnerable to many attacks that consume the nodes' resources and degrade the network's performance. In this paper, we present a study of various attacks and their existing defense solutions, particularly those targeting RPL. Open research issues, challenges, and future directions specific to RPL security are also discussed. A taxonomy of RPL attacks, considering essential attributes like resources, topology, and traffic, is presented for better understanding. In addition, a study of existing cross-layered and RPL-specific network-layer defense solutions suggested in the literature is also carried out. | [
22756,
35406,
16526,
18672,
34582
] | Validation |
43,363 | 24 | Title: Achieving Counterfactual Fairness with Imperfect Structural Causal Model
Abstract: Counterfactual fairness alleviates the discrepancy between the model prediction for an individual in the actual world (observational data) and that in the counterfactual world (i.e., if the individual belonged to another sensitive group). The existing studies need to pre-define the structural causal model that captures the correlations among variables for counterfactual inference; however, the underlying causal model is usually unknown and difficult to validate in real-world scenarios. Moreover, the misspecification of the causal model potentially leads to poor performance in model prediction and thus to unfair decisions. In this research, we propose a novel minimax game-theoretic model for counterfactual fairness that can produce accurate results while achieving a counterfactually fair decision under relaxed assumptions on the structural causal model. In addition, we theoretically prove the error bound of the proposed minimax model. Empirical experiments on multiple real-world datasets illustrate our superior performance in both accuracy and fairness. Source code is available at \url{https://github.com/tridungduong16/counterfactual_fairness_game_theoretic}. | [] | Train |
43,364 | 30 | Title: NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
Abstract: Reasoning has been a central topic in artificial intelligence from the beginning. The recent progress made on distributed representation and neural networks continues to improve the state-of-the-art performance of natural language inference. However, it remains an open question whether the models perform real reasoning to reach their conclusions or rely on spurious correlations. Adversarial attacks have proven to be an important tool to help evaluate the Achilles’ heel of the victim models. In this study, we explore the fundamental problem of developing attack models based on logic formalism. We propose NatLogAttack to perform systematic attacks centring around natural logic, a classical logic formalism that is traceable back to Aristotle’s syllogism and has been closely developed for natural language inference. The proposed framework renders both label-preserving and label-flipping attacks. We show that compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. The victim models are found to be more vulnerable under the label-flipping setting. NatLogAttack provides a tool to probe the existing and future NLI models’ capacity from a key viewpoint and we hope more logic-based attacks will be further explored for understanding the desired property of reasoning. | [] | Test |
43,365 | 24 | Title: Learnable Behavior Control: Breaking Atari Human World Records via Sample-Efficient Behavior Selection
Abstract: The exploration problem is one of the main challenges in deep reinforcement learning (RL). Recent promising works tried to handle the problem with population-based methods, which collect samples with diverse behaviors derived from a population of different exploratory policies. Adaptive policy selection has been adopted for behavior control. However, the behavior selection space is largely limited by the predefined policy population, which further limits behavior diversity. In this paper, we propose a general framework called Learnable Behavioral Control (LBC) to address the limitation, which a) enables a significantly enlarged behavior selection space via formulating a hybrid behavior mapping from all policies; b) constructs a unified learnable process for behavior selection. We introduce LBC into distributed off-policy actor-critic methods and achieve behavior control via optimizing the selection of the behavior mappings with bandit-based meta-controllers. Our agents have achieved 10077.52% mean human normalized score and surpassed 24 human world records within 1B training frames in the Arcade Learning Environment, which demonstrates our significant state-of-the-art (SOTA) performance without degrading the sample efficiency. | [
16784,
29283,
1252,
2070
] | Train |
43,366 | 23 | Title: Label Smoothing Improves Neural Source Code Summarization
Abstract: Label smoothing is a regularization technique for neural networks. Normally neural models are trained to an output distribution that is a vector with a single 1 for the correct prediction, and 0 for all other elements. Label smoothing converts the correct prediction location to something slightly less than 1, then distributes the remainder to the other elements such that they are slightly greater than 0. A conceptual explanation behind label smoothing is that it helps prevent a neural model from becoming "overconfident" by forcing it to consider alternatives, even if only slightly. Label smoothing has been shown to help several areas of language generation, yet typically requires considerable tuning and testing to achieve the optimal results. This tuning and testing has not been reported for neural source code summarization – a growing research area in software engineering that seeks to generate natural language descriptions of source code behavior. In this paper, we demonstrate the effect of label smoothing on several baselines in neural code summarization, and conduct an experiment to find good parameters for label smoothing and make recommendations for its use. | [] | Train |
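As a rough illustration of the transform described in the row above (a minimal sketch, not code from the paper; the function name, class count, and smoothing factor `eps` are assumptions made only for this example), a one-hot target can be smoothed as follows:

```python
import numpy as np

def smooth_one_hot(correct_idx: int, num_classes: int, eps: float = 0.1) -> np.ndarray:
    """Turn a one-hot target into a smoothed distribution: the correct class
    gets 1 - eps, and the remaining eps is spread over the other classes."""
    target = np.full(num_classes, eps / (num_classes - 1))
    target[correct_idx] = 1.0 - eps
    return target

# Example: 5 output classes, class 2 is the correct prediction.
print(smooth_one_hot(2, 5, eps=0.1))  # [0.025 0.025 0.9   0.025 0.025]
```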
43,367 | 6 | Title: Exploring AI-enhanced Shared Control for an Assistive Robotic Arm
Abstract: Assistive technologies and in particular assistive robotic arms have the potential to enable people with motor impairments to live a self-determined life. More and more of these systems have become available for end users in recent years, such as the Kinova Jaco robotic arm. However, they mostly require complex manual control, which can overwhelm users. As a result, researchers have explored ways to let such robots act autonomously. However, at least for this specific group of users, such an approach has been shown to be futile. Here, users want to stay in control to achieve a higher level of personal autonomy, to which an autonomous robot runs counter. In our research, we explore how Artificial Intelligence (AI) can be integrated into a shared control paradigm. In particular, we focus on the consequential requirements for the interface between human and robot and how we can keep humans in the loop while still significantly reducing the mental load and required motor skills. | [
7696,
11308,
30880
] | Train |
43,368 | 6 | Title: MinMaxLTTB: Leveraging MinMax-Preselection to Scale LTTB
Abstract: Visualization plays an important role in analyzing and exploring time series data. To facilitate efficient visualization of large datasets, downsampling has emerged as a well-established approach. This work concentrates on LTTB (Largest-Triangle-Three-Buckets), a widely adopted downsampling algorithm for time series data point selection. Specifically, we propose MinMaxLTTB, a two-step algorithm that marks a significant enhancement in the scalability of LTTB. MinMaxLTTB entails the following two steps: (i) the MinMax algorithm preselects a certain ratio of minimum and maximum data points, followed by (ii) applying the LTTB algorithm on only these preselected data points, effectively reducing LTTB's time complexity. The low computational cost of the MinMax algorithm, along with its parallelization capabilities, facilitates efficient preselection of data points. Additionally, the competitive performance of MinMax in terms of visual representativeness also makes it an effective reduction method. Experiments show that MinMaxLTTB outperforms LTTB by more than an order of magnitude in terms of computation time. Furthermore, preselecting a small multiple of the desired output size already provides similar visual representativeness compared to LTTB. In summary, MinMaxLTTB leverages the computational efficiency of MinMax to scale LTTB, without compromising on LTTB's favored visualization properties. The accompanying code and experiments of this paper can be found at https://github.com/predict-idlab/MinMaxLTTB. | [] | Test |
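As a rough illustration of the preselection step described in the row above (a minimal sketch under assumed bucket handling, not the authors' implementation; the function name and bucket count are hypothetical), a MinMax pass can keep the per-bucket minimum and maximum samples before LTTB runs on the reduced series:

```python
import numpy as np

def minmax_preselect(y: np.ndarray, n_buckets: int) -> np.ndarray:
    """Return sorted indices of per-bucket min/max samples plus the endpoints."""
    n = len(y)
    edges = np.linspace(0, n, n_buckets + 1, dtype=int)
    keep = {0, n - 1}  # always keep the first and last sample
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi <= lo:
            continue
        bucket = y[lo:hi]
        keep.add(lo + int(np.argmin(bucket)))
        keep.add(lo + int(np.argmax(bucket)))
    return np.array(sorted(keep))

# LTTB would then be applied to y[idx] instead of the full series.
idx = minmax_preselect(np.sin(np.linspace(0, 20, 10_000)), n_buckets=200)
```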
43,369 | 8 | Title: Fresh-Fi: Enhancing Information Freshness in Commodity WiFi Systems via Customizing Lower Layers
Abstract: Enhancing information freshness in wireless networks has gained significant attention in recent years. To optimize or analyze information freshness, which is often characterized by the age of information (AoI) metric, extensive theoretical studies have been conducted on various wireless networks. Early research has demonstrated the significance of last-come-first-served (LCFS) packet scheduling and controlled status sampling (i.e., packet generation) in improving information freshness. These mechanisms have been widely adopted in subsequent studies. However, the effective implementation of these mechanisms in commercial off-the-shelf (COTS) wireless devices has not been thoroughly investigated, which could limit the practical application of information freshness-oriented protocols in real-world systems. Our work aims to address the gap by exploring the effective implementation of the information freshness-oriented mechanisms mentioned above in COTS WiFi devices that use the Linux operating system. Our attempts reveal that implementing these mechanisms in COTS systems is not a straightforward task. Specifically, we found that the physical layer queue of WiFi devices operates on a first-come-first-served (FCFS) basis, and the packet generation process cannot be precisely controlled by default. To overcome these challenges, we develop Fresh-Fi, an information freshness-oriented protocol stack that involves careful customization to the lower layers of the Linux networking protocol stack. Fresh-Fi mainly incorporates a mac80211 subsystem-based LCFS queue and a real-time kernel-based cross-layer tunnel between the mac80211 subsystem and the application layer for triggered packet generation. | [] | Train |
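To make the queueing distinction in the row above concrete, here is a toy sketch of a last-come-first-served buffer (purely illustrative; the paper's actual mechanism lives in the Linux mac80211 subsystem, and the class name here is a hypothetical stand-in):

```python
from collections import deque

class LCFSQueue:
    """Toy last-come-first-served queue: the freshest packet is served first,
    which helps keep the age of information (AoI) low."""

    def __init__(self):
        self._buf = deque()

    def push(self, packet):
        self._buf.append(packet)

    def pop(self):
        # A FCFS queue would use popleft(); LCFS serves the newest packet instead.
        return self._buf.pop()
```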
43,370 | 16 | Title: FER-former: Multi-modal Transformer for Facial Expression Recognition
Abstract: The ever-increasing demands for intuitive interactions in Virtual Reality have triggered a boom in the realm of Facial Expression Recognition (FER). To address the limitations in existing approaches (e.g., narrow receptive fields and homogeneous supervisory signals) and further cement the capacity of FER tools, a novel multifarious supervision-steering Transformer for FER in the wild is proposed in this paper. Referred to as FER-former, our approach features multi-granularity embedding integration, a hybrid self-attention scheme, and heterogeneous domain-steering supervision. Specifically, to dig deep into the merits of the combination of features provided by prevailing CNNs and Transformers, a hybrid stem is designed to cascade two types of learning paradigms simultaneously. Within it, a FER-specific transformer mechanism is devised to characterize conventional hard one-hot label-focusing and CLIP-based text-oriented tokens in parallel for final classification. To ease the issue of annotation ambiguity, a heterogeneous domains-steering supervision module is proposed to make image features also have text-space semantic correlations by supervising the similarity between image features and text features. On top of the collaboration of multifarious token heads, diverse global receptive fields with multi-modal semantic cues are captured, thereby delivering superb learning capability. Extensive experiments on popular benchmarks demonstrate the superiority of the proposed FER-former over existing state-of-the-art methods. | [] | Train |
43,371 | 30 | Title: Evaluating the Efficacy of Supervised Learning vs Large Language Models for Identifying Cognitive Distortions and Suicidal Risks in Chinese Social Media
Abstract: Large language models, particularly those akin to the rapidly progressing GPT series, are gaining traction for their expansive influence. While there is keen interest in their applicability within medical domains such as psychology, tangible explorations on real-world data remain scant. Concurrently, users on social media platforms are increasingly vocalizing personal sentiments; under specific thematic umbrellas, these sentiments often manifest as negative emotions, sometimes escalating to suicidal inclinations. Timely discernment of such cognitive distortions and suicidal risks is crucial to effectively intervene and potentially avert dire circumstances. Our study ventured into this realm by experimenting on two pivotal tasks: suicidal risk and cognitive distortion identification on Chinese social media platforms. Using supervised learning as a baseline, we examined and contrasted the efficacy of large language models via three distinct strategies: zero-shot, few-shot, and fine-tuning. Our findings revealed a discernible performance gap between the large language models and traditional supervised learning approaches, primarily attributed to the models' inability to fully grasp subtle categories. Notably, while GPT-4 outperforms its counterparts in multiple scenarios, GPT-3.5 shows significant enhancement in suicide risk classification after fine-tuning. To our knowledge, this investigation stands as the maiden attempt at gauging large language models on Chinese social media tasks. This study underscores the forward-looking and transformative implications of using large language models in the field of psychology. It lays the groundwork for future applications in psychological research and practice. | [
40192,
43267,
33220,
7628,
10221,
6700,
1269,
12602,
26973
] | Train |
43,372 | 25 | Title: Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song
Abstract: In this paper, we introduce a computational analysis of the field recording dataset of approximately 700 hours of Korean folk songs, which were recorded around 1980-90s. Because most of the songs were sung by non-expert musicians without accompaniment, the dataset provides several challenges. To address this challenge, we utilized self-supervised learning with convolutional neural network based on pitch contour, then analyzed how the musical concept of tori, a classification system defined by a specific scale, ornamental notes, and an idiomatic melodic contour, is captured by the model. The experimental result shows that our approach can better capture the characteristics of tori compared to traditional pitch histograms. Using our approaches, we have examined how musical discussions proposed in existing academia manifest in the actual field recordings of Korean folk songs. | [] | Validation |
43,373 | 11 | Title: ESP: Exploiting Symmetry Prior for Multi-Agent Reinforcement Learning
Abstract: Multi-agent reinforcement learning (MARL) has achieved promising results in recent years. However, most existing reinforcement learning methods require a large amount of data for model training. In addition, data-efficient reinforcement learning requires the construction of strong inductive biases, which are ignored in the current MARL approaches. Inspired by the symmetry phenomenon in multi-agent systems, this paper proposes a framework for exploiting prior knowledge by integrating data augmentation and a well-designed consistency loss into the existing MARL methods. In addition, the proposed framework is model-agnostic and can be applied to most of the current MARL algorithms. Experimental tests on multiple challenging tasks demonstrate the effectiveness of the proposed framework. Moreover, the proposed framework is applied to a physical multi-robot testbed to show its superiority. | [] | Validation |
43,374 | 24 | Title: Curiosity-Driven Reinforcement Learning based Low-Level Flight Control
Abstract: Curiosity is one of the main motives in many of the natural creatures with measurable levels of intelligence for exploration and, as a result, more efficient learning. It makes it possible for humans and many animals to explore efficiently by searching for being in states that make them surprised with the goal of learning more about what they do not know. As a result, while being curious, they learn better. In the machine learning literature, curiosity is mostly combined with reinforcement learning-based algorithms as an intrinsic reward. This work proposes an algorithm based on the drive of curiosity for autonomous learning to control by generating proper motor speeds from odometry data. The quadcopter controlled by our proposed algorithm can pass through obstacles while controlling the Yaw direction of the quad-copter toward the desired location. To achieve that, we also propose a new curiosity approach based on prediction error. We ran tests using on-policy, off-policy, on-policy plus curiosity, and the proposed algorithm and visualized the effect of curiosity in evolving exploration patterns. Results show the capability of the proposed algorithm to learn optimal policy and maximize reward where other algorithms fail to do so. | [] | Train |
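The prediction-error notion of curiosity mentioned in the row above can be sketched as follows (a hypothetical toy example with a linear forward model, not the paper's flight controller; the class name, dimensions, and learning rate are assumptions): the intrinsic reward is the error of a learned forward model on the observed transition.

```python
import numpy as np

class ForwardModelCuriosity:
    """Toy linear forward model whose prediction error serves as intrinsic reward."""

    def __init__(self, state_dim: int, action_dim: int, lr: float = 1e-2):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def intrinsic_reward(self, s: np.ndarray, a: np.ndarray, s_next: np.ndarray) -> float:
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        # Larger prediction error => more "surprising" transition => larger reward.
        reward = float(np.sum(err ** 2))
        # One gradient step so the model gradually becomes less surprised.
        self.W += self.lr * np.outer(err, x)
        return reward

# Usage on a single (state, action, next_state) transition.
fm = ForwardModelCuriosity(state_dim=3, action_dim=2)
r_int = fm.intrinsic_reward(np.zeros(3), np.ones(2), np.array([0.1, -0.2, 0.3]))
```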
43,375 | 16 | Title: Iterative Robust Visual Grounding with Masked Reference based Centerpoint Supervision
Abstract: Visual Grounding (VG) aims at localizing target objects from an image based on given expressions and has made significant progress with the development of detection and vision transformers. However, existing VG methods tend to generate false-alarm objects when presented with inaccurate or irrelevant descriptions, which commonly occur in practical applications. Moreover, existing methods fail to capture fine-grained features, accurate localization, and sufficient context comprehension from the whole image and textual descriptions. To address both issues, we propose an Iterative Robust Visual Grounding (IR-VG) framework with Masked Reference based Centerpoint Supervision (MRCS). The framework introduces iterative multi-level vision-language fusion (IMVF) for better alignment. We use MRCS to achieve more accurate localization with point-wise feature supervision. Then, to improve the robustness of VG, we also present a multi-stage false-alarm sensitive decoder (MFSD) to prevent the generation of false-alarm objects when presented with inaccurate expressions. The proposed framework is evaluated on five regular VG datasets and two newly constructed robust VG datasets. Extensive experiments demonstrate that IR-VG achieves new state-of-the-art (SOTA) results, with improvements of 25\% and 10\% compared to existing SOTA approaches on the two newly proposed robust VG datasets. Moreover, the proposed framework is also verified to be effective on five regular VG datasets. Codes and models will be made publicly available at https://github.com/cv516Buaa/IR-VG. | [
39584,
17097,
22562,
41109
] | Train |
43,376 | 10 | Title: LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?
Abstract: Large language models (LLMs) have exhibited impressive capabilities in comprehending complex instructions. However, their blind adherence to provided instructions has led to concerns regarding risks of malicious use. Existing defence mechanisms, such as model fine-tuning or output censorship using LLMs, have proven to be fallible, as LLMs can still generate problematic responses. Commonly employed censorship approaches treat the issue as a machine learning problem and rely on another LM to detect undesirable content in LLM outputs. In this paper, we present the theoretical limitations of such semantic censorship approaches. Specifically, we demonstrate that semantic censorship can be perceived as an undecidable problem, highlighting the inherent challenges in censorship that arise due to LLMs' programmatic and instruction-following capabilities. Furthermore, we argue that the challenges extend beyond semantic censorship, as knowledgeable attackers can reconstruct impermissible outputs from a collection of permissible ones. As a result, we propose that the problem of censorship needs to be reevaluated; it should be treated as a security problem which warrants the adaptation of security-based approaches to mitigate potential risks. | [
7936,
35170,
21127,
18764,
45275,
10800,
497,
5522,
29396,
5909,
8853,
43925,
4408,
634,
4027,
10749
] | Test |
43,377 | 24 | Title: Novel batch active learning approach and its application to synthetic aperture radar datasets
Abstract: Active learning improves the performance of machine learning methods by judiciously selecting a limited number of unlabeled data points to query for labels, with the aim of maximally improving the underlying classifier’s performance. Recent gains have been made using sequential active learning for synthetic aperture radar (SAR) data. In each iteration, sequential active learning selects a query set of size one while batch active learning selects a query set of multiple data points. While batch active learning methods exhibit greater efficiency, the challenge lies in maintaining model accuracy relative to sequential active learning methods. We developed a novel, two-part approach for batch active learning: Dijkstra’s Annulus Core-Set (DAC) for core-set generation and LocalMax for batch sampling. The batch active learning process that combines DAC and LocalMax achieves nearly identical accuracy to sequential active learning but is more efficient, proportional to the batch size. As an application, a pipeline is built based on transfer learning feature embedding, graph learning, DAC, and LocalMax to classify the FUSAR-Ship and OpenSARShip datasets. Our pipeline outperforms the state-of-the-art CNN-based methods. | [] | Train |