id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.08536 | Watch or Listen: Robust Audio-Visual Speech Recognition with Visual Corruption Modeling and Reliability Scoring | This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal input corruption situations where audio inputs and visual inputs are both corrupted, which is not well addressed in previous research directions. Previous studies have focused on how to complement the corrupted audio inputs with the clean visual inputs with the assumption of the availability of clean visual inputs. However, in real life, clean visual inputs are not always accessible and can even be corrupted by occluded lip regions or noises. Thus, we firstly analyze that the previous AVSR models are not indeed robust to the corruption of multimodal input streams, the audio and the visual inputs, compared to uni-modal models. Then, we design multimodal input corruption modeling to develop robust AVSR models. Lastly, we propose a novel AVSR framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that is robust to the corrupted multimodal inputs. The AV-RelScore can determine which input modal stream is reliable or not for the prediction and also can exploit the more reliable streams in prediction. The effectiveness of the proposed method is evaluated with comprehensive experiments on popular benchmark databases, LRS2 and LRS3. We also show that the reliability scores obtained by AV-RelScore well reflect the degree of corruption and make the proposed model focus on the reliable multimodal representations. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 351,679 |
2111.06038 | Hybrid Saturation Restoration for LDR Images of HDR Scenes | There are shadow and highlight regions in a low dynamic range (LDR) image which is captured from a high dynamic range (HDR) scene. It is an ill-posed problem to restore the saturated regions of the LDR image. In this paper, the saturated regions of the LDR image are restored by fusing model-based and data-driven approaches. With such a neural augmentation, two synthetic LDR images are first generated from the underlying LDR image via the model-based approach. One is brighter than the input image to restore the shadow regions and the other is darker than the input image to restore the high-light regions. Both synthetic images are then refined via a novel exposedness aware saturation restoration network (EASRN). Finally, the two synthetic images and the input image are combined together via an HDR synthesis algorithm or a multi-scale exposure fusion algorithm. The proposed algorithm can be embedded in any smart phones or digital cameras to produce an information-enriched LDR image. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 265,968 |
2002.10537 | Video Monitoring Queries | Recent advances in video processing utilizing deep learning primitives achieved breakthroughs in fundamental problems in video analysis such as frame classification and object detection enabling an array of new applications. In this paper we study the problem of interactive declarative query processing on video streams. In particular we introduce a set of approximate filters to speed up queries that involve objects of specific type (e.g., cars, trucks, etc.) on video frames with associated spatial relationships among them (e.g., car left of truck). The resulting filters are able to assess quickly if the query predicates are true to proceed with further analysis of the frame or otherwise not consider the frame further avoiding costly object detection operations. We propose two classes of filters $IC$ and $OD$, that adapt principles from deep image classification and object detection. The filters utilize extensible deep neural architectures and are easy to deploy and utilize. In addition, we propose statistical query processing techniques to process aggregate queries involving objects with spatial constraints on video streams and demonstrate experimentally the resulting increased accuracy on the resulting aggregate estimation. Combined these techniques constitute a robust set of video monitoring query processing techniques. We demonstrate that the application of the techniques proposed in conjunction with declarative queries on video streams can dramatically increase the frame processing rate and speed up query processing by at least two orders of magnitude. We present the results of a thorough experimental study utilizing benchmark video data sets at scale demonstrating the performance benefits and the practical relevance of our proposals. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | 165,426 |
1907.09110 | Strategic Voting Under Uncertainty About the Voting Method | Much of the theoretical work on strategic voting makes strong assumptions about what voters know about the voting situation. A strategizing voter is typically assumed to know how other voters will vote and to know the rules of the voting method. A growing body of literature explores strategic voting when there is uncertainty about how others will vote. In this paper, we study strategic voting when there is uncertainty about the voting method. We introduce three notions of manipulability for a set of voting methods: sure, safe, and expected manipulability. With the help of a computer program, we identify voting scenarios in which uncertainty about the voting method may reduce or even eliminate a voter's incentive to misrepresent her preferences. Thus, it may be in the interest of an election designer who wishes to reduce strategic voting to leave voters uncertain about which of several reasonable voting methods will be used to determine the winners of an election. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 139,270 |
2307.07876 | Real-time goal recognition using approximations in Euclidean space | While recent work on online goal recognition efficiently infers goals under low observability, comparatively less work focuses on online goal recognition that works in both discrete and continuous domains. Online goal recognition approaches often rely on repeated calls to the planner at each new observation, incurring high computational costs. Recognizing goals online in continuous space quickly and reliably is critical for any trajectory planning problem since the real physical world is fast-moving, e.g. robot applications. We develop an efficient method for goal recognition that relies either on a single call to the planner for each possible goal in discrete domains or a simplified motion model that reduces the computational burden in continuous ones. The resulting approach performs the online component of recognition orders of magnitude faster than the current state of the art, making it the first online method effectively usable for robotics applications that require sub-second recognition. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 379,575 |
2007.10878 | DeepNetQoE: Self-adaptive QoE Optimization Framework of Deep Networks | Future advances in deep learning and its impact on the development of artificial intelligence (AI) in all fields depends heavily on data size and computational power. Sacrificing massive computing resources in exchange for better precision rates of the network model is recognized by many researchers. This leads to huge computing consumption and satisfactory results are not always expected when computing resources are limited. Therefore, it is necessary to find a balance between resources and model performance to achieve satisfactory results. This article proposes a self-adaptive quality of experience (QoE) framework, DeepNetQoE, to guide the training of deep networks. A self-adaptive QoE model is set up that relates the model's accuracy with the computing resources required for training which will allow the experience value of the model to improve. To maximize the experience value when computer resources are limited, a resource allocation model and solutions need to be established. In addition, we carry out experiments based on four network models to analyze the experience values with respect to the crowd counting example. Experimental results show that the proposed DeepNetQoE is capable of adaptively obtaining a high experience value according to user needs and therefore guiding users to determine the computational resources allocated to the network models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 188,405 |
2102.01807 | Building population models for large-scale neural recordings: opportunities and pitfalls | Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 218,223 |
1511.08899 | Applying deep learning to classify pornographic images and videos | It is no secret that pornographic material is now a one-click-away from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier including one or more of the popular computer vision feature descriptors. We propose to build a classifier based on one of the recently flourishing deep learning techniques. Convolutional neural networks contain many layers for both automatic features extraction and classification. The benefit is an easier system to build (no need for hand-crafting features and classifiers). Additionally, our experiments show that it is even more accurate than the state of the art methods on the most recent benchmark dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | true | 49,592 |
2304.07711 | Obstacle-Transformer: A Trajectory Prediction Network Based on Surrounding Trajectories | Recurrent Neural Network, Long Short-Term Memory, and Transformer have made great progress in predicting the trajectories of moving objects. Although the trajectory element with the surrounding scene features has been merged to improve performance, there still exist some problems to be solved. One is that the time series processing models will increase the inference time with the increase of the number of prediction sequences. Another lies in which the features can not be extracted from the scene's image and point cloud in some situations. Therefore, this paper proposes an Obstacle-Transformer to predict trajectory in a constant inference time. An ``obstacle'' is designed by the surrounding trajectory rather than images or point clouds, making Obstacle-Transformer more applicable in a wider range of scenarios. Experiments are conducted on ETH and UCY data sets to verify the performance of our model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 358,451 |
2407.20152 | Hierarchically Disentangled Recurrent Network for Factorizing System Dynamics of Multi-scale Systems | We present a knowledge-guided machine learning (KGML) framework for modeling multi-scale processes, and study its performance in the context of streamflow forecasting in hydrology. Specifically, we propose a novel hierarchical recurrent neural architecture that factorizes the system dynamics at multiple temporal scales and captures their interactions. This framework consists of an inverse and a forward model. The inverse model is used to empirically resolve the system's temporal modes from data (physical model simulations, observed data, or a combination of them from the past), and these states are then used in the forward model to predict streamflow. In a hydrological system, these modes can represent different processes, evolving at different temporal scales (e.g., slow: groundwater recharge and baseflow vs. fast: surface runoff due to extreme rainfall). A key advantage of our framework is that once trained, it can incorporate new observations into the model's context (internal state) without expensive optimization approaches (e.g., EnKF) that are traditionally used in physical sciences for data assimilation. Experiments with several river catchments from the NWS NCRFC region show the efficacy of this ML-based data assimilation framework compared to standard baselines, especially for basins that have a long history of observations. Even for basins that have a shorter observation history, we present two orthogonal strategies of training our FHNN framework: (a) using simulation data from imperfect simulations and (b) using observation data from multiple basins to build a global model. We show that both of these strategies (that can be used individually or together) are highly effective in mitigating the lack of training data. The improvement in forecast accuracy is particularly noteworthy for basins where local models perform poorly because of data sparsity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 477,061 |
2305.09795 | The Value of Competing Energy Storage in Decarbonized Power Systems | As the world seeks to transition to a sustainable energy future, energy storage technologies are increasingly recognized as critical enablers. However, the macro-energy system assessment of energy storage has often focused on isolated storage technologies and neglected competition between them, thus leaving out which energy storage to prioritise. The article applies a systematic deployment analysis method that enables system-value evaluation in perfect competitive markets and demonstrates its application to 20 different energy storage technologies across 40 distinct scenarios for a representative future power system in Africa. Here, each storage solution is explored alone and in competition with others, examining specific total system costs, deployment configuration, and cost synergies between the storage technologies. The results demonstrate the significant benefits of optimizing energy storage with competition compared to without (+10% cost savings), and highlight the relevance of several energy storage technologies in different scenarios. This work provides insights into the role of energy storage in decarbonizing power systems and informs future research and policy decisions. There is no one-size-fits-all energy storage, but rather an ideal combination of multiple energy storage options designed and operated in symbiosis. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 364,777 |
1701.03918 | Marked Temporal Dynamics Modeling based on Recurrent Neural Network | We are now witnessing the increasing availability of event stream data, i.e., a sequence of events with each event typically being denoted by the time it occurs and its mark information (e.g., event type). A fundamental problem is to model and predict such kind of marked temporal dynamics, i.e., when the next event will take place and what its mark will be. Existing methods either predict only the mark or the time of the next event, or predict both of them, yet separately. Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them. To tackle this problem, in this paper, we propose to model marked temporal dynamics by using a mark-specific intensity function to explicitly capture the dependency between the mark and the time of the next event. Extensive experiments on two datasets demonstrate that the proposed method outperforms state-of-the-art methods at predicting marked temporal dynamics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 66,778 |
2111.05174 | CAESynth: Real-Time Timbre Interpolation and Pitch Control with Conditional Autoencoders | In this paper, we present a novel audio synthesizer, CAESynth, based on a conditional autoencoder. CAESynth synthesizes timbre in real-time by interpolating the reference sounds in their shared latent feature space, while controlling a pitch independently. We show that training a conditional autoencoder based on accuracy in timbre classification together with adversarial regularization of pitch content allows timbre distribution in latent space to be more effective and stable for timbre interpolation and pitch conditioning. The proposed method is applicable not only to creation of musical cues but also to exploration of audio affordance in mixed reality based on novel timbre mixtures with environmental sounds. We demonstrate by experiments that CAESynth achieves smooth and high-fidelity audio synthesis in real-time through timbre interpolation and independent yet accurate pitch control for musical cues as well as for audio affordance with environmental sound. A Python implementation along with some generated samples are shared online. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,713 |
2501.01371 | CLIP-UP: CLIP-Based Unanswerable Problem Detection for Visual Question Answering | Recent Vision-Language Models (VLMs) have demonstrated remarkable capabilities in visual understanding and reasoning, and in particular on multiple-choice Visual Question Answering (VQA). Still, these models can make distinctly unnatural errors, for example, providing (wrong) answers to unanswerable VQA questions, such as questions asking about objects that do not appear in the image. To address this issue, we propose CLIP-UP: CLIP-based Unanswerable Problem detection, a novel lightweight method for equipping VLMs with the ability to withhold answers to unanswerable questions. By leveraging CLIP to extract question-image alignment information, CLIP-UP requires only efficient training of a few additional layers, while keeping the original VLMs' weights unchanged. Tested across LLaVA models, CLIP-UP achieves state-of-the-art results on the MM-UPD benchmark for assessing unanswerability in multiple-choice VQA, while preserving the original performance on other tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 522,037 |
1501.03924 | On cyclic codes over $\mathbb{Z}_q+u\mathbb{Z}_q$ | Let $R=\mathbb{Z}_q+u\mathbb{Z}_q$, where $q=p^s$ and $u^2=0$. In this paper, some structural properties of cyclic codes over the ring $R$ are considered. A necessary and sufficient condition for cyclic codes over the ring $R$ to be free is obtained and a BCH-type bound on the minimum Hamming distance for them is also given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 39,305 |
2112.09279 | Robust Upper Bounds for Adversarial Training | Many state-of-the-art adversarial training methods for deep learning leverage upper bounds of the adversarial loss to provide security guarantees against adversarial attacks. Yet, these methods rely on convex relaxations to propagate lower and upper bounds for intermediate layers, which affect the tightness of the bound at the output layer. We introduce a new approach to adversarial training by minimizing an upper bound of the adversarial loss that is based on a holistic expansion of the network instead of separate bounds for each layer. This bound is facilitated by state-of-the-art tools from Robust Optimization; it has closed-form and can be effectively trained using backpropagation. We derive two new methods with the proposed approach. The first method (Approximated Robust Upper Bound or aRUB) uses the first order approximation of the network as well as basic tools from Linear Robust Optimization to obtain an empirical upper bound of the adversarial loss that can be easily implemented. The second method (Robust Upper Bound or RUB), computes a provable upper bound of the adversarial loss. Across a variety of tabular and vision data sets we demonstrate the effectiveness of our approach -- RUB is substantially more robust than state-of-the-art methods for larger perturbations, while aRUB matches the performance of state-of-the-art methods for small perturbations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 272,094 |
2411.13908 | Hybrid Physics-ML Modeling for Marine Vehicle Maneuvering Motions in the Presence of Environmental Disturbances | A hybrid physics-machine learning modeling framework is proposed for the surface vehicles' maneuvering motions to address the modeling capability and stability in the presence of environmental disturbances. From a deep learning perspective, the framework is based on a variant version of residual networks with additional feature extraction. Initially, an imperfect physical model is derived and identified to capture the fundamental hydrodynamic characteristics of marine vehicles. This model is then integrated with a feedforward network through a residual block. Additionally, feature extraction from trigonometric transformations is employed in the machine learning component to account for the periodic influence of currents and waves. The proposed method is evaluated using real navigational data from the 'JH7500' unmanned surface vehicle. The results demonstrate the robust generalizability and accurate long-term prediction capabilities of the nonlinear dynamic model in specific environmental conditions. This approach has the potential to be extended and applied to develop a comprehensive high-fidelity simulator. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 509,968 |
2312.09877 | Distributed Learning of Mixtures of Experts | In modern machine learning problems we deal with datasets that are either distributed by nature or potentially large for which distributing the computations is usually a standard way to proceed, since centralized algorithms are in general ineffective. We propose a distributed learning approach for mixtures of experts (MoE) models with an aggregation strategy to construct a reduction estimator from local estimators fitted parallelly to distributed subsets of the data. The aggregation is based on an optimal minimization of an expected transportation divergence between the large MoE composed of local estimators and the unknown desired MoE model. We show that the provided reduction estimator is consistent as soon as the local estimators to be aggregated are consistent, and its construction is performed by a proposed majorization-minimization (MM) algorithm that is computationally effective. We study the statistical and numerical properties for the proposed reduction estimator on experiments that demonstrate its performance compared to namely the global estimator constructed in a centralized way from the full dataset. For some situations, the computation time is more than ten times faster, for a comparable performance. Our source codes are publicly available on Github. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 415,915 |
2112.13353 | Novel Hybrid DNN Approaches for Speaker Verification in Emotional and Stressful Talking Environments | In this work, we conducted an empirical comparative study of the performance of text-independent speaker verification in emotional and stressful environments. This work combined deep models with shallow architecture, which resulted in novel hybrid classifiers. Four distinct hybrid models were utilized: deep neural network-hidden Markov model (DNN-HMM), deep neural network-Gaussian mixture model (DNN-GMM), Gaussian mixture model-deep neural network (GMM-DNN), and hidden Markov model-deep neural network (HMM-DNN). All models were based on novel implemented architecture. The comparative study used three distinct speech datasets: a private Arabic dataset and two public English databases, namely, Speech Under Simulated and Actual Stress (SUSAS) and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). The test results of the aforementioned hybrid models demonstrated that the proposed HMM-DNN leveraged the verification performance in emotional and stressful environments. Results also showed that HMM-DNN outperformed all other hybrid models in terms of equal error rate (EER) and area under the curve (AUC) evaluation metrics. The average resulting verification system based on the three datasets yielded EERs of 7.19%, 16.85%, 11.51%, and 11.90% based on HMM-DNN, DNN-HMM, DNN-GMM, and GMM-DNN, respectively. Furthermore, we found that the DNN-GMM model demonstrated the least computational complexity compared to all other hybrid models in both talking environments. Conversely, the HMM-DNN model required the greatest amount of training time. Findings also demonstrated that EER and AUC values depended on the database when comparing average emotional and stressful performances. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,219 |
2107.13743 | Malware Classification Using Transfer Learning | With the rapid growth of the number of devices on the Internet, malware poses a threat not only to the affected devices but also their ability to use said devices to launch attacks on the Internet ecosystem. Rapid malware classification is an important tools to combat that threat. One of the successful approaches to classification is based on malware images and deep learning. While many deep learning architectures are very accurate they usually take a long time to train. In this work we perform experiments on multiple well known, pre-trained, deep network architectures in the context of transfer learning. We show that almost all them classify malware accurately with a very short training period. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 248,290 |
2411.16164 | Text-to-Image Synthesis: A Decade Survey | When humans read a specific text, they often visualize the corresponding images, and we hope that computers can do the same. Text-to-image synthesis (T2I), which focuses on generating high-quality images from textual descriptions, has become a significant aspect of Artificial Intelligence Generated Content (AIGC) and a transformative direction in artificial intelligence research. Foundation models play a crucial role in T2I. In this survey, we review over 440 recent works on T2I. We start by briefly introducing how GANs, autoregressive models, and diffusion models have been used for image generation. Building on this foundation, we discuss the development of these models for T2I, focusing on their generative capabilities and diversity when conditioned on text. We also explore cutting-edge research on various aspects of T2I, including performance, controllability, personalized generation, safety concerns, and consistency in content and spatial relationships. Furthermore, we summarize the datasets and evaluation metrics commonly used in T2I research. Finally, we discuss the potential applications of T2I within AIGC, along with the challenges and future research opportunities in this field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,926 |
1503.08223 | A System View of the Recognition and Interpretation of Observed Human Shape, Pose and Action | There is physiological evidence that our ability to interpret human pose and action from 2D visual imagery (binocular or monocular) engages the circuitry of the motor cortices as well as the visual areas of the brain. This implies that the capability of the motor cortices to solve inverse kinematics is flexible enough to apply to both motion planning as well as serving as a generative model for the visual processing of human figures, despite the differing functional requirements of the two tasks. This paper provides a computational model of the cooperation between visual and motor areas: in other words, a system view of an important class of brain computations. The model unifies the solution of the separate inverse problems involved in the task, visual transformation discovery, inverse kinematics, and adaptation to morphology variations, using several instances of the Map-seeking Circuit algorithm. While the paper is weighted toward the exposition of a neurobiological hypothesis, from mathematical formalization of the problem to neuronal circuitry, the algorithmic expression of the solution is also a functional machine vision system for human figure recognition, and 3D pose and body morphology reconstruction from monocular, perspective-less input imagery. With an inverse kinematic generative model capable of imposing a variety of endogenous and exogenous constraints the machine vision implementation acquires characteristics currently unique among such systems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 41,559 |
1312.6949 | Joint Phase Tracking and Channel Decoding for OFDM Physical-Layer Network Coding | This paper investigates the problem of joint phase tracking and channel decoding in OFDM based Physical-layer Network Coding (PNC) systems. OFDM signaling can obviate the need for tight time synchronization among multiple simultaneous transmissions in the uplink of PNC systems. However, OFDM PNC systems are susceptible to phase drifts caused by residual carrier frequency offsets (CFOs). In the traditional OFDM system in which a receiver receives from only one transmitter, pilot tones are employed to aid phase tracking. In OFDM PNC systems, multiple transmitters transmit to a receiver, and these pilot tones must be shared among the multiple transmitters. This reduces the number of pilots that can be used by each transmitting node. Phase tracking in OFDM PNC is more challenging as a result. To overcome the degradation due to the reduced number of per-node pilots, this work supplements the pilots with the channel information contained in the data. In particular, we propose to solve the problems of phase tracking and channel decoding jointly. Our solution consists of the use of the expectation-maximization (EM) algorithm for phase tracking and the use of the belief propagation (BP) algorithm for channel decoding. The two problems are solved jointly through iterative processing between the EM and BP algorithms. Simulations and real experiments based on software-defined radio show that the proposed method can improve phase tracking as well as channel decoding performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 29,424 |
1904.06535 | Look More Than Once: An Accurate Detector for Text of Arbitrary Shapes | Previous scene text detection methods have progressed substantially over the past years. However, limited by the receptive field of CNNs and the simple representations like rectangle bounding box or quadrangle adopted to describe text, previous methods may fall short when dealing with more challenging text instances, such as extremely long text and arbitrarily shaped text. To address these two problems, we present a novel text detector namely LOMO, which localizes the text progressively multiple times (or in other words, LOok More than Once). LOMO consists of a direct regressor (DR), an iterative refinement module (IRM) and a shape expression module (SEM). At first, text proposals in the form of quadrangle are generated by DR branch. Next, IRM progressively perceives the entire long text by iterative refinement based on the extracted feature blocks of preliminary proposals. Finally, a SEM is introduced to reconstruct more precise representation of irregular text by considering the geometry properties of text instance, including text region, text center line and border offsets. The state-of-the-art results on several public benchmarks including ICDAR2017-RCTW, SCUT-CTW1500, Total-Text, ICDAR2015 and ICDAR17-MLT confirm the striking robustness and effectiveness of LOMO. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 127,574
2410.15804 | Deep Learning and Data Augmentation for Detecting Self-Admitted
Technical Debt | Self-Admitted Technical Debt (SATD) refers to circumstances where developers use textual artifacts to explain why the existing implementation is not optimal. Past research in detecting SATD has focused on either identifying SATD (classifying SATD items as SATD or not) or categorizing SATD (labeling instances as SATD that pertain to requirement, design, code, test debt, etc.). However, the performance of these approaches remains suboptimal, particularly for specific types of SATD, such as test and requirement debt, primarily due to extremely imbalanced datasets. To address these challenges, we build on earlier research by utilizing BiLSTM architecture for the binary identification of SATD and BERT architecture for categorizing different types of SATD. Despite their effectiveness, both architectures struggle with imbalanced data. Therefore, we employ a large language model data augmentation strategy to mitigate this issue. Furthermore, we introduce a two-step approach to identify and categorize SATD across various datasets derived from different artifacts. Our contributions include providing a balanced dataset for future SATD researchers and demonstrating that our approach significantly improves SATD identification and categorization performance compared to baseline methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 500,735 |
2410.14262 | Good Parenting is all you need -- Multi-agentic LLM Hallucination
Mitigation | This study explores the ability of Large Language Model (LLM) agents to detect and correct hallucinations in AI-generated content. A primary agent was tasked with creating a blog about a fictional Danish artist named Flipfloppidy, which was then reviewed by another agent for factual inaccuracies. Most LLMs hallucinated the existence of this artist. Across 4,900 test runs involving various combinations of primary and reviewing agents, advanced AI models such as Llama3-70b and GPT-4 variants demonstrated near-perfect accuracy in identifying hallucinations and successfully revised outputs in 85% to 100% of cases following feedback. These findings underscore the potential of advanced AI models to significantly enhance the accuracy and reliability of generated content, providing a promising approach to improving AI workflow orchestration. | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | 499,961 |
1705.03430 | Analysis of Channel-Based User Authentication by Key-Less and Key-Based
Approaches | User authentication (UA) supports the receiver in deciding whether a message comes from the claimed transmitter or from an impersonating attacker. In cryptographic approaches messages are signed with either an asymmetric or symmetric key, and a source of randomness is required to generate the key. In physical layer authentication (PLA) instead the receiver checks if received messages presumably coming from the same source undergo the same channel. We compare these solutions by considering the physical-layer channel features as randomness source for generating the key, thus allowing an immediate comparison with PLA (that already uses these features). For the symmetric-key approach we use secret key agreement, while for asymmetric-key the channel is used as entropy source at the transmitter. We focus on the asymptotic case of an infinite number of independent and identically distributed channel realizations, showing the correctness of all schemes and analyzing the secure authentication rate, that dictates the rate at which the probability that UA security is broken goes to zero as the number of used channel resources (to generate the key or for PLA) goes to infinity. Both passive and active attacks are considered and by numerical results we compare the various systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 73,185 |
2305.14575 | Towards Early Prediction of Human iPSC Reprogramming Success | This paper presents advancements in automated early-stage prediction of the success of reprogramming human induced pluripotent stem cells (iPSCs) as a potential source for regenerative cell therapies. The minuscule success rate of iPSC-reprogramming of around $0.01\%$ to $0.1\%$ makes it labor-intensive, time-consuming, and exorbitantly expensive to generate a stable iPSC line, since that requires culturing millions of cells and intense biological scrutiny of multiple clones to identify a single optimal clone. The ability to reliably predict which cells are likely to establish as an optimal iPSC line at an early stage of pluripotency would therefore be ground-breaking in rendering this a practical and cost-effective approach to personalized medicine. Temporal information about changes in cellular appearance over time is crucial for predicting future growth outcomes. In order to generate this data, we first performed continuous time-lapse imaging of iPSCs in culture using an ultra-high resolution microscope. We then annotated the locations and identities of cells in late-stage images where reliable manual identification is possible. Next, we propagated these labels backwards in time using a semi-automated tracking system to obtain labels for early stages of growth. Finally, we used this data to train deep neural networks to perform automatic cell segmentation and classification. Our code and data are available at https://github.com/abhineet123/ipsc_prediction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 367,101
2210.17012 | GotFlow3D: Recurrent Graph Optimal Transport for Learning 3D Flow Motion
in Particle Tracking | Flow visualization technologies such as particle tracking velocimetry (PTV) are broadly used in understanding the all-pervasive three-dimensional (3D) turbulent flow in nature and industrial processes. Despite the advances in 3D acquisition techniques, existing motion estimation algorithms for particle tracking still face the great challenges of large particle displacements, dense particle distributions and high computational cost. By introducing a novel deep neural network based on recurrent Graph Optimal Transport, called GotFlow3D, we present an end-to-end solution to learn the 3D fluid flow motion from double-frame particle sets. The proposed network constructs two graphs in the geometric and feature space and further enriches the original particle representations with the fused intrinsic and extrinsic features learnt from a graph neural network. The extracted deep features are subsequently utilized to make optimal transport plans indicating the correspondences of particle pairs, which are then iteratively and adaptively retrieved to guide the recurrent flow learning. Experimental evaluations, including assessments on numerical experiments and validations on real-world experiments, demonstrate that the proposed GotFlow3D achieves state-of-the-art performance against both recently-developed scene flow learners and particle tracking algorithms, with impressive accuracy, robustness and generalization ability, which can provide deeper insight into the complex dynamics of broad physical and biological systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 327,535
2306.13681 | Estimating the Value of Evidence-Based Decision Making | Business/policy decisions are often based on evidence from randomized experiments and observational studies. In this article we propose an empirical framework to estimate the value of evidence-based decision making (EBDM) and the return on the investment in statistical precision. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 375,360 |
2411.11479 | Value-Spectrum: Quantifying Preferences of Vision-Language Models via
Value Decomposition in Social Media Contexts | The recent progress in Vision-Language Models (VLMs) has broadened the scope of multimodal applications. However, evaluations often remain limited to functional tasks, neglecting abstract dimensions such as personality traits and human values. To address this gap, we introduce Value-Spectrum, a novel Visual Question Answering (VQA) benchmark aimed at assessing VLMs based on Schwartz's value dimensions that capture core values guiding people's preferences and actions. We designed a VLM agent pipeline to simulate video browsing and constructed a vector database comprising over 50,000 short videos from TikTok, YouTube Shorts, and Instagram Reels. These videos span multiple months and cover diverse topics, including family, health, hobbies, society, technology, etc. Benchmarking on Value-Spectrum highlights notable variations in how VLMs handle value-oriented content. Beyond identifying VLMs' intrinsic preferences, we also explored the ability of VLM agents to adopt specific personas when explicitly prompted, revealing insights into the adaptability of the model in role-playing scenarios. These findings highlight the potential of Value-Spectrum as a comprehensive evaluation set for tracking VLM alignments in value-based tasks and abilities to simulate diverse personas. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 509,068 |
2307.13977 | Formal Verification of Robotic Contact Tasks via Reachability Analysis | Verifying the correct behavior of robots in contact tasks is challenging due to model uncertainties associated with contacts. Standard methods for testing often fall short since all (uncountable many) solutions cannot be obtained. Instead, we propose to formally and efficiently verify robot behaviors in contact tasks using reachability analysis, which enables checking all the reachable states against user-provided specifications. To this end, we extend the state of the art in reachability analysis for hybrid (mixed discrete and continuous) dynamics subject to discrete-time input trajectories. In particular, we present a novel and scalable guard intersection approach to reliably compute the complex behavior caused by contacts. We model robots subject to contacts as hybrid automata in which crucial time delays are included. The usefulness of our approach is demonstrated by verifying safe human-robot interaction in the presence of constrained collisions, which was out of reach for existing methods. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 381,767 |
2305.05355 | Turning Privacy-preserving Mechanisms against Federated Learning | Recently, researchers have successfully employed Graph Neural Networks (GNNs) to build enhanced recommender systems due to their capability to learn patterns from the interaction between involved entities. In addition, previous studies have investigated federated learning as the main solution to enable a native privacy-preserving mechanism for the construction of global GNN models without collecting sensitive data into a single computation unit. Still, privacy issues may arise as the analysis of local model updates produced by the federated clients can return information related to sensitive local data. For this reason, experts proposed solutions that combine federated learning with Differential Privacy strategies and community-driven approaches, which involve combining data from neighbor clients to make the individual local updates less dependent on local sensitive data. In this paper, we identify a crucial security flaw in such a configuration, and we design an attack capable of deceiving state-of-the-art defenses for federated learning. The proposed attack includes two operating modes, the first one focusing on convergence inhibition (Adversarial Mode), and the second one aiming at building a deceptive rating injection on the global federated model (Backdoor Mode). The experimental results show the effectiveness of our attack in both its modes, returning on average 60% performance detriment in all the tests on Adversarial Mode and fully effective backdoors in 93% of cases for the tests performed on Backdoor Mode. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 363,116 |
1609.06666 | Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient
Convolutional Neural Networks | This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | true | false | false | 61,330 |
2401.05535 | Theoretical and Empirical Advances in Forest Pruning | Decades after their inception, regression forests continue to provide state-of-the-art accuracy, outperforming in this respect alternative machine learning models such as regression trees or even neural networks. However, being an ensemble method, the one aspect where regression forests tend to severely underperform regression trees is interpretability. In the present work, we revisit forest pruning, an approach that aims to have the best of both worlds: the accuracy of regression forests and the interpretability of regression trees. This pursuit, whose foundation lies at the core of random forest theory, has seen vast success in empirical studies. In this paper, we contribute theoretical results that support and qualify those empirical findings; namely, we prove the asymptotic advantage of a Lasso-pruned forest over its unpruned counterpart under extremely weak assumptions, as well as high-probability finite-sample generalization bounds for regression forests pruned according to the main methods, which we then validate by way of simulation. Then, we test the accuracy of pruned regression forests against their unpruned counterparts on 19 different datasets (16 synthetic, 3 real). We find that in the vast majority of scenarios tested, there is at least one forest-pruning method that yields equal or better accuracy than the original full forest (in expectation), while just using a small fraction of the trees. We show that, in some cases, the reduction in the size of the forest is so dramatic that the resulting sub-forest can be meaningfully merged into a single tree, obtaining a level of interpretability that is qualitatively superior to that of the original regression forest, which remains a black box. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 420,822 |
2208.06616 | Self-supervised Contrastive Representation Learning for Semi-supervised
Time-Series Classification | Learning time-series representations when only unlabeled data or few labeled samples are available can be a challenging task. Recently, contrastive self-supervised learning has shown great improvement in extracting useful representations from unlabeled data via contrasting different augmented views of data. In this work, we propose a novel Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) that learns representations from unlabeled data with contrastive learning. Specifically, we propose time-series-specific weak and strong augmentations and use their views to learn robust temporal relations in the proposed temporal contrasting module, besides learning discriminative representations by our proposed contextual contrasting module. Additionally, we conduct a systematic study of time-series data augmentation selection, which is a key part of contrastive learning. We also extend TS-TCC to the semi-supervised learning settings and propose a Class-Aware TS-TCC (CA-TCC) that benefits from the available few labeled data to further improve representations learned by TS-TCC. Specifically, we leverage the robust pseudo labels produced by TS-TCC to realize a class-aware contrastive loss. Extensive experiments show that the linear evaluation of the features learned by our proposed framework performs comparably with the fully supervised training. Additionally, our framework shows high efficiency in the few labeled data and transfer learning scenarios. The code is publicly available at \url{https://github.com/emadeldeen24/CA-TCC}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,773 |
2204.05490 | Continuous-Time User Preference Modelling for Temporal Sets Prediction | Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies for temporal sets prediction mainly focus on the modelling of elements and implicitly represent each user's preference based on his/her interacted elements. However, user preferences are often continuously evolving and the evolutionary trend cannot be fully captured with the indirect learning paradigm of user preferences. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank to store the states of all the users and elements. Specifically, we first construct a universal sequence by arranging all the user-set interactions in a non-descending temporal order, and then chronologically learn from each user-set interaction. For each interaction, we continuously update the memories of the related user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module to discover user-specific characteristics based on each user's historical sequence, which aggregates the previously interacted elements from dual perspectives according to the user and elements. Finally, we develop a set-batch algorithm to improve the model efficiency, which can create time-consistent batches in advance and achieve 3.5x and 3.0x speedups in the training and evaluation process on average. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings. The good interpretability of our method is also shown. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 291,040
2002.00518 | Efficiency Analysis of the Simplified Refined Instrumental Variable
Method for Continuous-time Systems | In this paper, we derive the asymptotic Cram\'er-Rao lower bound for the continuous-time output error model structure and provide an analysis of the statistical efficiency of the Simplified Refined Instrumental Variable method for Continuous-time systems (SRIVC) based on sampled data. It is shown that the asymptotic Cram\'er-Rao lower bound is independent of the intersample behaviour of the noise-free system output and hence only depends on the intersample behaviour of the system input. We have also shown that, at the converging point of the SRIVC algorithm, the estimates do not depend on the intersample behaviour of the measured output. It is then proven that the SRIVC estimator is asymptotically efficient for the output error model structure under mild conditions. Monte Carlo simulations are performed to verify the asymptotic Cram\'er-Rao lower bound and the asymptotic covariance of the SRIVC estimates. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 162,372
2311.02894 | Design and Performance Analysis of a Class of Generalized Predictive
Controllers | The design and structure of generalized predictive control (GPC) are not simple and intuitive. Existing performance analyses do not deeply examine how the controller parameters affect the system characteristics, nor the relationship between the tracking error caused by noise and the selected controller parameters. This paper proposes a generalized predictive controller whose design is simple and intuitive, since it does not require solving the Diophantine equation. Then the relationship between the desired output, the disturbance, and the system output is analyzed via the characteristic equation and a steady-state analysis. Based on this, the study presents research findings on the steady state of the system and verifies them through simulations. Furthermore, this paper introduces GPC with disturbance compensation and incremental generalized minimum variance control (IGMVC) with disturbance compensation. The conditions for the elimination of disturbance are presented in theory and simulation for the first time. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 405,633
2209.12771 | Hamiltonian Monte Carlo for efficient Gaussian sampling: long and random
steps | Hamiltonian Monte Carlo (HMC) is a Markov chain algorithm for sampling from a high-dimensional distribution with density $e^{-f(x)}$, given access to the gradient of $f$. A particular case of interest is that of a $d$-dimensional Gaussian distribution with covariance matrix $\Sigma$, in which case $f(x) = x^\top \Sigma^{-1} x$. We show that HMC can sample from a distribution that is $\varepsilon$-close in total variation distance using $\widetilde{O}(\sqrt{\kappa} d^{1/4} \log(1/\varepsilon))$ gradient queries, where $\kappa$ is the condition number of $\Sigma$. Our algorithm uses long and random integration times for the Hamiltonian dynamics. This contrasts with (and was motivated by) recent results that give an $\widetilde\Omega(\kappa d^{1/2})$ query lower bound for HMC with fixed integration times, even for the Gaussian case. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 319,649 |
2404.01842 | Semi-Supervised Domain Adaptation for Wildfire Detection | Recently, both the frequency and intensity of wildfires have increased worldwide, primarily due to climate change. In this paper, we propose a novel protocol for wildfire detection, leveraging semi-supervised Domain Adaptation for object detection, accompanied by a corresponding dataset designed for use by both academics and industries. Our dataset encompasses 30 times more diverse labeled scenes for the current largest benchmark wildfire dataset, HPWREN, and introduces a new labeling policy for wildfire detection. Inspired by CoordConv, we propose a robust baseline, Location-Aware Object Detection for Semi-Supervised Domain Adaptation (LADA), utilizing a teacher-student based framework capable of extracting translational variance features characteristic of wildfires. With only using 1% target domain labeled data, our framework significantly outperforms our source-only baseline by a notable margin of 3.8% in mean Average Precision on the HPWREN wildfire dataset. Our dataset is available at https://github.com/BloomBerry/LADA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,609 |
2010.04947 | Double Forward Propagation for Memorized Batch Normalization | Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs). Although the standard BN can significantly accelerate the training of DNNs and improve the generalization performance, it has several underlying limitations which may hamper the performance in both training and inference. In the training stage, BN relies on estimating the mean and variance of data using a single minibatch. Consequently, BN can be unstable when the batch size is very small or the data is poorly sampled. In the inference stage, BN often uses the so called moving mean and moving variance instead of batch statistics, i.e., the training and inference rules in BN are not consistent. Regarding these issues, we propose a memorized batch normalization (MBN), which considers multiple recent batches to obtain more accurate and robust statistics. Note that after the SGD update for each batch, the model parameters will change, and the features will change accordingly, leading to the Distribution Shift before and after the update for the considered batch. To alleviate this issue, we present a simple Double-Forward scheme in MBN which can further improve the performance. Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference. Empirical results show that the MBN based models trained with the Double-Forward scheme greatly reduce the sensitivity of data and significantly improve the generalization performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 199,933 |
2403.10663 | Not Just Change the Labels, Learn the Features: Watermarking Deep Neural
Networks with Multi-View Data | With the increasing prevalence of Machine Learning as a Service (MLaaS) platforms, there is a growing focus on deep neural network (DNN) watermarking techniques. These methods are used to facilitate the verification of ownership for a target DNN model to protect intellectual property. One of the most widely employed watermarking techniques involves embedding a trigger set into the source model. Unfortunately, existing methodologies based on trigger sets are still susceptible to functionality-stealing attacks, potentially enabling adversaries to steal the functionality of the source model without a reliable means of verifying ownership. In this paper, we first introduce a novel perspective on trigger set-based watermarking methods from a feature learning perspective. Specifically, we demonstrate that by selecting data exhibiting multiple features, also referred to as \emph{multi-view data}, it becomes feasible to effectively defend functionality stealing attacks. Based on this perspective, we introduce a novel watermarking technique based on Multi-view dATa, called MAT, for efficiently embedding watermarks within DNNs. This approach involves constructing a trigger set with multi-view data and incorporating a simple feature-based regularization method for training the source model. We validate our method across various benchmarks and demonstrate its efficacy in defending against model extraction attacks, surpassing relevant baselines by a significant margin. The code is available at: \href{https://github.com/liyuxuan-github/MAT}{https://github.com/liyuxuan-github/MAT}. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 438,292 |
2305.18927 | Evaluating the feasibility of using Generative Models to generate Chest
X-Ray Data | In this paper, we explore the feasibility of using generative models, specifically Progressive Growing GANs (PG-GANs) and Stable Diffusion fine-tuning, to generate synthetic chest X-ray images for medical diagnosis purposes. Due to ethical concerns, obtaining sufficient medical data for machine learning is a challenge, which our approach aims to address by synthesising more data. We utilised the Chest X-ray 14 dataset for our experiments and evaluated the performance of our models through qualitative and quantitative analysis. Our results show that the generated images are visually convincing and can be used to improve the accuracy of classification models. However, further work is needed to address issues such as overfitting and the limited availability of real data for training and testing. The potential of our approach to contribute to more effective medical diagnosis through deep learning is promising, and we believe that continued advancements in image generation technology will lead to even more promising results in the future. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 369,302 |
2212.00186 | Multi-Task Imitation Learning for Linear Dynamical Systems | We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left( \frac{k n_x}{HN_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}}\right)$, where $n_x > k$ is the state dimension, $n_u$ is the input dimension, $N_{\mathrm{shared}}$ denotes the total amount of data collected for each policy during representation learning, and $N_{\mathrm{target}}$ is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 333,963 |
2306.16906 | Numerical Data Imputation for Multimodal Data Sets: A Probabilistic
Nearest-Neighbor Kernel Density Approach | Numerical data imputation algorithms replace missing values by estimates to leverage incomplete data sets. Current imputation methods seek to minimize the error between the unobserved ground truth and the imputed values. But this strategy can create artifacts leading to poor imputation in the presence of multimodal or complex distributions. To tackle this problem, we introduce the $k$NN$\times$KDE algorithm: a data imputation method combining nearest neighbor estimation ($k$NN) and density estimation with Gaussian kernels (KDE). We compare our method with previous data imputation methods using artificial and real-world data with different data missing scenarios and various data missing rates, and show that our method can cope with complex original data structure, yields lower data imputation errors, and provides probabilistic estimates with higher likelihood than current methods. We release the code in open-source for the community: https://github.com/DeltaFloflo/knnxkde | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 376,526 |
2311.11278 | Transcending Forgery Specificity with Latent Space Augmentation for
Generalizable Deepfake Detection | Deepfake detection faces a critical generalization hurdle, with performance deteriorating when there is a mismatch between the distributions of training and testing data. A broadly received explanation is the tendency of these detectors to be overfitted to forgery-specific artifacts, rather than learning features that are widely applicable across various forgeries. To address this issue, we propose a simple yet effective detector called LSDA (\underline{L}atent \underline{S}pace \underline{D}ata \underline{A}ugmentation), which is based on a heuristic idea: representations with a wider variety of forgeries should be able to learn a more generalizable decision boundary, thereby mitigating the overfitting of method-specific features (see Fig.~\ref{fig:toy}). Following this idea, we propose to enlarge the forgery space by constructing and simulating variations within and across forgery features in the latent space. This approach encompasses the acquisition of enriched, domain-specific features and the facilitation of smoother transitions between different forgery types, effectively bridging domain gaps. Our approach culminates in refining a binary classifier that leverages the distilled knowledge from the enhanced features, striving for a generalizable deepfake detector. Comprehensive experiments show that our proposed method is surprisingly effective and transcends state-of-the-art detectors across several widely used benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 408,883 |
2402.02399 | FreDF: Learning to Forecast in Frequency Domain | Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences. Current research predominantly focuses on handling autocorrelation within the historical sequence but often neglects its presence in the label sequence. Specifically, emerging forecast models mainly conform to the direct forecast (DF) paradigm, generating multi-step forecasts under the assumption of conditional independence within the label sequence. This assumption disregards the inherent autocorrelation in the label sequence, thereby limiting the performance of DF-based models. In response to this gap, we introduce the Frequency-enhanced Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain. Our experiments demonstrate that FreDF substantially outperforms existing state-of-the-art methods including iTransformer and is compatible with a variety of forecast models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 426,542 |
2206.06537 | A software toolkit and hardware platform for investigating and comparing
robot autonomy algorithms in simulation and reality | We describe a software framework and a hardware platform used in tandem for the design and analysis of robot autonomy algorithms in simulation and reality. The software, which is open source, containerized, and operating system (OS) independent, has three main components: a ROS 2 interface to a C++ vehicle simulation framework (Chrono), which provides high-fidelity wheeled/tracked vehicle and sensor simulation; a basic ROS 2-based autonomy stack for algorithm design and testing; and, a development ecosystem which enables visualization, and hardware-in-the-loop experimentation in perception, state estimation, path planning, and controls. The accompanying hardware platform is a 1/6th scale vehicle augmented with reconfigurable mountings for computing, sensing, and tracking. Its purpose is to allow algorithms and sensor configurations to be physically tested and improved. Since this vehicle platform has a digital twin within the simulation environment, one can test and compare the same algorithms and autonomy stack in simulation and reality. This platform has been built with an eye towards characterizing and managing the simulation-to-reality gap. Herein, we describe how this platform is set up, deployed, and used to improve autonomy for mobility applications. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 302,408 |
2101.06919 | Link Prediction and Unlink Prediction on Dynamic Networks | Link prediction on dynamic networks has been extensively studied and widely applied in various applications. However, temporal unlink prediction, which also plays an important role in the evolution of social networks, has not received much attention. Accurately predicting the links and unlinks on the future network greatly contributes to the network analysis that uncovers more latent relations between nodes. In this work, we assume that there are two kinds of relations between nodes, namely long-term relation and short-term relation, and we propose an effective algorithm called LULS for temporal link prediction and unlink prediction based on such relations. Specifically, for each snapshot of a dynamic network, LULS first collects higher-order structure as two topological matrices by applying short random walks. Then, LULS initializes and optimizes a global matrix and a sequence of temporary matrices for all the snapshots by using non-negative matrix factorization (NMF) based on the topological matrices, where the global matrix denotes long-term relation and the temporary matrices represent short-term relations of snapshots. Finally, LULS calculates the similarity matrix of the future snapshot and predicts the links and unlinks for the future network. Additionally, we further improve the prediction results by using graph regularization constraints to enhance the global matrix, so that the global matrix contains a wealth of topological and temporal information. Experiments conducted on real-world networks illustrate that LULS outperforms other baselines for both link prediction and unlink prediction tasks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 215,886
nlin/0611054 | A Model of a Trust-based Recommendation System on a Social Network | In this paper, we present a model of a trust-based recommendation system on a social network. The idea of the model is that agents use their social network to reach information and their trust relationships to filter it. We investigate how the dynamics of trust among agents affect the performance of the system by comparing it to a frequency-based recommendation system. Furthermore, we identify the impact of network density, preference heterogeneity among agents, and knowledge sparseness to be crucial factors for the performance of the system. The system self-organises in a state with performance near to the optimum; the performance on the global level is an emergent property of the system, achieved without explicit coordination from the local interactions of agents. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 540,796 |
2204.13091 | Attention Consistency on Visual Corruptions for Single-Source Domain
Generalization | Generalizing visual recognition models trained on a single distribution to unseen input distributions (i.e. domains) requires making them robust to superfluous correlations in the training set. In this work, we achieve this goal by altering the training images to simulate new domains and imposing consistent visual attention across the different views of the same sample. We discover that the first objective can be simply and effectively met through visual corruptions. Specifically, we alter the content of the training images using the nineteen corruptions of the ImageNet-C benchmark and three additional transformations based on Fourier transform. Since these corruptions preserve object locations, we propose an attention consistency loss to ensure that class activation maps across original and corrupted versions of the same training sample are aligned. We name our model Attention Consistency on Visual Corruptions (ACVC). We show that ACVC consistently achieves the state of the art on three single-source domain generalization benchmarks, PACS, COCO, and the large-scale DomainNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 293,702 |
2010.00979 | BOSS: Bayesian Optimization over String Spaces | This article develops a Bayesian optimization (BO) method which acts directly over raw strings, proposing the first uses of string kernels and genetic algorithms within BO loops. Recent applications of BO over strings have been hindered by the need to map inputs into a smooth and unconstrained latent space. Learning this projection is computationally and data-intensive. Our approach instead builds a powerful Gaussian process surrogate model based on string kernels, naturally supporting variable length inputs, and performs efficient acquisition function maximization for spaces with syntactical constraints. Experiments demonstrate considerably improved optimization over existing approaches across a broad range of constraints, including the popular setting where syntax is governed by a context-free grammar. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 198,462 |
2208.07744 | Secrecy Performance Analysis of RIS-aided Communication System with
Randomly Flying Eavesdroppers | In this letter, we analyze the secrecy performance of a reconfigurable intelligent surface (RIS)-aided communication system with spatially random unmanned aerial vehicles (UAVs) acting as eavesdroppers. We consider the scenarios where the base station (BS) is equipped with a single antenna and with multiple antennas. The signal-to-noise ratios (SNRs) of the legitimate user and the eavesdroppers are derived analytically and approximated through a computationally effective method. The ergodic secrecy capacity is approximated and derived in closed-form expressions. Simulation results validate the accuracy of the analytical and approximate expressions and show the security-enhancing effect of deploying the RIS. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 313,142
2004.04596 | Global Public Health Surveillance using Media Reports: Redesigning GPHIN | Global public health surveillance relies on reporting structures and transmission of trustworthy health reports. But in practice, these processes may not always be fast enough, or are hindered by procedural, technical, or political barriers. GPHIN, the Global Public Health Intelligence Network, was designed in the late 1990s to scour mainstream news for health events, as that travels faster and more freely. This paper outlines the next generation of GPHIN, which went live in 2017, and reports on design decisions underpinning its new functions and innovations. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 171,926 |
2110.06884 | ConditionalQA: A Complex Reading Comprehension Dataset with Conditional
Answers | We describe a Question Answering (QA) dataset that contains complex questions with conditional answers, i.e. the answers are only applicable when certain conditions apply. We call this dataset ConditionalQA. In addition to conditional answers, the dataset also features: (1) long context documents with information that is related in logically complex ways; (2) multi-hop questions that require compositional logical reasoning; (3) a combination of extractive questions, yes/no questions, questions with multiple answers, and not-answerable questions; (4) questions asked without knowing the answers. We show that ConditionalQA is challenging for many of the existing QA models, especially in selecting answer conditions. We believe that this dataset will motivate further research in answering complex questions over long documents. Data and leaderboard are publicly available at \url{https://github.com/haitian-sun/ConditionalQA}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 260,775 |
1811.01090 | Value-based Search in Execution Space for Mapping Instructions to
Programs | Training models to map natural language instructions to programs given target world supervision only requires searching for good programs at training time. Search is commonly done using beam search in the space of partial programs or program trees, but as the length of the instructions grows finding a good program becomes difficult. In this work, we propose a search algorithm that uses the target world state, known at training time, to train a critic network that predicts the expected reward of every search state. We then score search states on the beam by interpolating their expected reward with the likelihood of programs represented by the search state. Moreover, we search not in the space of programs but in a more compressed state of program executions, augmented with recent entities and actions. On the SCONE dataset, we show that our algorithm dramatically improves performance on all three domains compared to standard beam search and other baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 112,265 |
2401.06183 | End to end Hindi to English speech conversion using Bark, mBART and a
finetuned XLSR Wav2Vec2 | Speech has long been a barrier to effective communication and connection, persisting as a challenge in our increasingly interconnected world. This research paper introduces a transformative solution to this persistent obstacle an end-to-end speech conversion framework tailored for Hindi-to-English translation, culminating in the synthesis of English audio. By integrating cutting-edge technologies such as XLSR Wav2Vec2 for automatic speech recognition (ASR), mBART for neural machine translation (NMT), and a Text-to-Speech (TTS) synthesis component, this framework offers a unified and seamless approach to cross-lingual communication. We delve into the intricate details of each component, elucidating their individual contributions and exploring the synergies that enable a fluid transition from spoken Hindi to synthesized English audio. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 421,064 |
1212.1752 | Hybrid Optimized Back propagation Learning Algorithm For Multi-layer
Perceptron | Standard neural networks based on general back propagation learning using the delta method or gradient descent method have some serious faults, such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back propagation learning algorithm which uses the trust-region method of unconstrained optimization of the error objective function via a quasi-Newton method. This optimization leads to a more accurate weight update system for minimizing the learning error during the learning phase of a multi-layer perceptron.[13][14][15] In this paper, an augmented line search is used for finding points which satisfy the Wolfe conditions. This hybrid back propagation algorithm has strong global convergence properties and is robust and efficient in practice. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 20,195
2402.18405 | Multi-cell Coordinated Joint Sensing and Communications | This paper proposes block-level precoder (BLP) designs for a multi-input single-output (MISO) system that performs joint sensing and communication across multiple cells and users. The Cramer-Rao-Bound for estimating a target's azimuth angle is determined for coordinated beamforming (CBF) and coordinated multi-point (CoMP) scenarios while considering inter-cell communication and sensing links. The formulated optimization problems to minimize the CRB and maximize the minimum-signal-to-interference-plus-noise-ratio (SINR) are non-convex and are represented in the semidefinite relaxed (SDR) form to solve using an alternate optimization algorithm. The proposed solutions show improved performance compared to the baseline scenario that neglects the signal component from neighboring cells. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 433,420 |
1207.7245 | Autofocus Correction of Azimuth Phase Error and Residual Range Cell
Migration in Spotlight SAR Polar Format Imagery | Synthetic aperture radar (SAR) images are often blurred by phase perturbations induced by uncompensated sensor motion and/or unknown propagation effects caused by turbulent media. To get refocused images, autofocus proves to be a useful post-processing technique applied to estimate and compensate for the unknown phase errors. However, a severe drawback of the conventional autofocus algorithms is that they are only capable of removing one-dimensional azimuth phase errors (APE). As the resolution becomes finer, residual range cell migration (RCM), which makes the defocus inherently two-dimensional, becomes a new challenge. In this paper, correction of APE and residual RCM is presented in the framework of the polar format algorithm (PFA). First, an insight into the underlying mathematical mechanism of polar reformatting is presented. Then, based on this new formulation, the effect of polar reformatting on the uncompensated APE and residual RCM is investigated in detail. By using the derived analytical relationship between APE and residual RCM, an efficient two-dimensional (2-D) autofocus method is proposed. Experimental results indicate the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 17,839
1902.00342 | Tree-Sliced Variants of Wasserstein Distances | Optimal transport (\OT) theory defines a powerful set of tools to compare probability distributions. \OT~suffers however from a few drawbacks, computational and statistical, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the \textit{sliced} formulation, which exploits the closed-form formula between univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely \textit{tree metrics}, which also yield fast closed-form computations and negative definite, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between these measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,384 |
1906.07760 | Tumor Saliency Estimation for Breast Ultrasound Images via Breast
Anatomy Modeling | Tumor saliency estimation aims to localize tumors by modeling the visual stimuli in medical images. However, it is a challenging task for breast ultrasound due to the complicated anatomic structure of the breast and poor image quality; and existing saliency estimation approaches only model generic visual stimuli, e.g., local and global contrast, location, and feature correlation, and achieve poor performance for tumor saliency estimation. In this paper, we propose a novel optimization model to estimate tumor saliency by utilizing breast anatomy. First, we model breast anatomy and decompose breast ultrasound image into layers using Neutro-Connectedness; then utilize the layers to generate the foreground and background maps; and finally propose a novel objective function to estimate the tumor saliency by integrating the foreground map, background map, adaptive center bias, and region-based correlation cues. The extensive experiments demonstrate that the proposed approach obtains more accurate foreground and background maps with the assistance of the breast anatomy; especially, for the images having large or small tumors; meanwhile, the new objective function can handle the images without tumors. The newly proposed method achieves state-of-the-art performance when compared to eight tumor saliency estimation approaches using two breast ultrasound datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 135,673 |
2408.10998 | Audio Match Cutting: Finding and Creating Matching Audio Transitions in
Movies and Videos | A "match cut" is a common video editing technique where a pair of shots that have a similar composition transition fluidly from one to another. Although match cuts are often visual, certain match cuts involve the fluid transition of audio, where sounds from different sources merge into one indistinguishable transition between two shots. In this paper, we explore the ability to automatically find and create "audio match cuts" within videos and movies. We create a self-supervised audio representation for audio match cutting and develop a coarse-to-fine audio match pipeline that recommends matching shots and creates the blended audio. We further annotate a dataset for the proposed audio match cut task and compare the ability of multiple audio representations to find audio match cut candidates. Finally, we evaluate multiple methods to blend two matching audio candidates with the goal of creating a smooth transition. Project page and examples are available at: https://denfed.github.io/audiomatchcut/ | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 482,112 |
2303.10895 | Leapfrog Diffusion Model for Stochastic Trajectory Prediction | To model the indeterminacy of human behaviors, stochastic trajectory prediction requires a sophisticated multi-modal distribution of future trajectories. Emerging diffusion models have revealed their tremendous representation capacities in numerous generation tasks, showing potential for stochastic trajectory prediction. However, expensive time consumption prevents diffusion models from real-time prediction, since a large number of denoising steps are required to assure sufficient representation ability. To resolve the dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model, which provides real-time, precise, and diverse predictions. The core of the proposed LED is to leverage a trainable leapfrog initializer to directly learn an expressive multi-modal distribution of future trajectories, which skips a large number of denoising steps, significantly accelerating inference speed. Moreover, the leapfrog initializer is trained to appropriately allocate correlated samples to provide a diversity of predicted future trajectories, significantly improving prediction performances. Extensive experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL. The proposed LED also speeds up the inference 19.3/30.8/24.3/25.1 times compared to the standard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at https://github.com/MediaBrain-SJTU/LED. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,624 |
2105.14370 | BAAI-VANJEE Roadside Dataset: Towards the Connected Automated Vehicle
Highway technologies in Challenging Environments of China | As roadside perception plays an increasingly significant role in Connected Automated Vehicle Highway (CAVH) technologies, there is an immediate need for challenging real-world roadside datasets for benchmarking and training various computer vision tasks such as 2D/3D object detection and multi-sensor fusion. In this paper, we first introduce a challenging BAAI-VANJEE roadside dataset which consists of LiDAR data and RGB images collected by a VANJEE smart base station placed on the roadside about 4.5 m high. This dataset contains 2500 frames of LiDAR data and 5000 frames of RGB images, including 20% collected at the same time. It also contains 12 classes of objects, 74K 3D object annotations and 105K 2D object annotations. By providing real, complex urban intersection and highway scenes, we expect the BAAI-VANJEE roadside dataset will actively assist the academic and industrial communities in accelerating innovative research and achievement transformation in the field of intelligent transportation in the big data era. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 237,644
2411.13281 | VideoAutoArena: An Automated Arena for Evaluating Large Multimodal
Models in Video Analysis through User Simulation | Large multimodal models (LMMs) with advanced video analysis capabilities have recently garnered significant attention. However, most evaluations rely on traditional methods like multiple-choice questions in benchmarks such as VideoMME and LongVideoBench, which are prone to lack the depth needed to capture the complex demands of real-world users. To address this limitation-and due to the prohibitive cost and slow pace of human annotation for video tasks-we introduce VideoAutoArena, an arena-style benchmark inspired by LMSYS Chatbot Arena's framework, designed to automatically assess LMMs' video analysis abilities. VideoAutoArena utilizes user simulation to generate open-ended, adaptive questions that rigorously assess model performance in video understanding. The benchmark features an automated, scalable evaluation framework, incorporating a modified ELO Rating System for fair and continuous comparisons across multiple LMMs. To validate our automated judging system, we construct a 'gold standard' using a carefully curated subset of human annotations, demonstrating that our arena strongly aligns with human judgment while maintaining scalability. Additionally, we introduce a fault-driven evolution strategy, progressively increasing question complexity to push models toward handling more challenging video analysis scenarios. Experimental results demonstrate that VideoAutoArena effectively differentiates among state-of-the-art LMMs, providing insights into model strengths and areas for improvement. To further streamline our evaluation, we introduce VideoAutoBench as an auxiliary benchmark, where human annotators label winners in a subset of VideoAutoArena battles. We use GPT-4o as a judge to compare responses against these human-validated answers. Together, VideoAutoArena and VideoAutoBench offer a cost-effective and scalable framework for evaluating LMMs in user-centric video analysis. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | true | 509,740
2101.04804 | Embedded Computer Vision System Applied to a Four-Legged Line Follower
Robot | Robotics can be defined as the connection of perception to action. Taking this further, this project aims to drive a robot using an automated computer vision embedded system, connecting the robot's vision to its behavior. In order to implement a color recognition system on the robot, open source tools are chosen, such as Processing language, Android system, Arduino platform and Pixy camera. The constraints are clear: simplicity, replicability and financial viability. In order to integrate Robotics, Computer Vision and Image Processing, the robot is applied on a typical mobile robot's issue: line following. The problem of distinguishing the path from the background is analyzed through different approaches: the popular Otsu's Method, thresholding based on color combinations through experimentation and color tracking via hue and saturation. Decision making of where to move next is based on the line center of the path and is fully automated. Using a four-legged robot as platform and a camera as its only sensor, the robot is capable of successfully follow a line. From capturing the image to moving the robot, it's evident how integrative Robotics can be. The issue of this paper alone involves knowledge of Mechanical Engineering, Electronics, Control Systems and Programming. Everything related to this work was documented and made available on an open source online page, so it can be useful in learning and experimenting with robotics. | false | false | false | false | false | false | false | true | false | false | true | true | false | false | false | false | false | false | 215,238 |
2405.04691 | Carbon Filter: Real-time Alert Triage Using Large Scale Clustering and
Fast Search | "Alert fatigue" is one of the biggest challenges faced by the Security Operations Center (SOC) today, with analysts spending more than half of their time reviewing false alerts. Endpoint detection products raise alerts by pattern matching on event telemetry against behavioral rules that describe potentially malicious behavior, but can suffer from high false positives that distract from actual attacks. While alert triage techniques based on data provenance may show promise, these techniques can take over a minute to inspect a single alert, while EDR customers may face tens of millions of alerts per day; the current reality is that these approaches aren't nearly scalable enough for production environments. We present Carbon Filter, a statistical learning based system that dramatically reduces the number of alerts analysts need to manually review. Our approach is based on the observation that false alert triggers can be efficiently identified and separated from suspicious behaviors by examining the process initiation context (e.g., the command line) that launched the responsible process. Through the use of fast-search algorithms for training and inference, our approach scales to millions of alerts per day. Through batching queries to the model, we observe a theoretical maximum throughput of 20 million alerts per hour. Based on the analysis of tens of million alerts from customer deployments, our solution resulted in a 6-fold improvement in the Signal-to-Noise ratio without compromising on alert triage performance. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 452,642 |
1509.00714 | Dictionary based Approach to Edge Detection | Edge detection is a very essential part of image processing, as the quality and accuracy of detection determine the success of further processing. We have developed a new self-learning technique for edge detection using a dictionary comprised of eigenfilters constructed using features of the input image. The dictionary-based method eliminates the need for pre- or post-processing of the image and accounts for noise, blurriness, class of image and variation of illumination during the detection process itself. Since this method depends on the characteristics of the image, the new technique can detect edges more accurately and capture greater detail than existing algorithms such as Sobel, Prewitt, Laplacian of Gaussian and Canny, which use generic filters and operators. We have demonstrated its application on various classes of images such as text, face, barcode, traffic and cell images. An application of this technique to cell counting in a microscopic image is also presented.
2502.05147 | LP-DETR: Layer-wise Progressive Relations for Object Detection | This paper presents LP-DETR (Layer-wise Progressive DETR), a novel approach that enhances DETR-based object detection through multi-scale relation modeling. Our method introduces learnable spatial relationships between object queries through a relation-aware self-attention mechanism, which adaptively learns to balance different scales of relations (local, medium and global) across decoder layers. This progressive design enables the model to effectively capture evolving spatial dependencies throughout the detection pipeline. Extensive experiments on COCO 2017 dataset demonstrate that our method improves both convergence speed and detection accuracy compared to standard self-attention module. The proposed method achieves competitive results, reaching 52.3\% AP with 12 epochs and 52.5\% AP with 24 epochs using ResNet-50 backbone, and further improving to 58.0\% AP with Swin-L backbone. Furthermore, our analysis reveals an interesting pattern: the model naturally learns to prioritize local spatial relations in early decoder layers while gradually shifting attention to broader contexts in deeper layers, providing valuable insights for future research in object detection. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,459 |
cs/0609133 | An application-oriented terminology evaluation: the case of back-of-the
book indexes | This paper addresses the problem of computational terminology evaluation not per se but in a specific application context. This paper describes the evaluation procedure that has been used to assess the validity of our overall indexing approach and the quality of the IndDoc indexing tool. Even if user-oriented extended evaluation is irreplaceable, we argue that early evaluations are possible and they are useful for development guidance. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 539,723 |
2306.17395 | Real-time Optimization for Wind-to-H2 Driven Critical Infrastructures:
High-fidelity Active Constraints and Integer Variables Prediction Enhanced by
Feature Space Expansion | This paper focuses on developing a real-time optimal operation model for a new engineering system, wind-to-hydrogen-driven low-carbon critical infrastructure (W2H-LCCI), that utilizes wind power to generate hydrogen through electrolysis and combines it with carbon capture to reduce carbon emissions from the power sector. First, a convex mathematical model for W2H-LCCI is proposed, and then optimization models for its real-time decision-making are developed, which are mixed-integer convex programs (MICPs). Furthermore, since this large-scale MICP problem must be solved in real-time, a fast solution method based on active constraint and integer variable prediction (ACIVP) is presented. ACIVP method predicts the binary variable values and the set of limited-number constraints, which most likely contain all of the active constraints, based on historical optimization data. It results in only a small-scale continuous convex optimization problem needing to be solved by optimization solvers for W2H-LCCI real-time optimal operation. To increase the accuracy of the ACIVP method, feature space expansion (FSE) is employed, and a multi-stage ACIVP-FSE method is proposed. The effects of stage design and stage ordering on ACIVP-FSE performance are also discussed. We validate the effectiveness of the developed system and solution method using two water-energy nexus case studies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 376,684 |
2306.11714 | Meta-Analysis of Transfer Learning for Segmentation of Brain Lesions | A major challenge in stroke research and stroke recovery predictions is the determination of a stroke lesion's extent and its impact on relevant brain systems. Manual segmentation of stroke lesions from 3D magnetic resonance (MR) imaging volumes, the current gold standard, is not only very time-consuming, but its accuracy also highly depends on the operator's experience. As a result, there is a need for a fully automated segmentation method that can efficiently and objectively measure lesion extent and the impact of each lesion to predict impairment and recovery potential, which might be beneficial for clinical, translational, and research settings. We have implemented and tested a fully automatic method for stroke lesion segmentation which was developed using eight different 2D-model architectures trained via transfer learning (TL) and mixed data approaches. Additionally, the final prediction was made using a novel ensemble method involving stacking and an agreement window. Our novel method was evaluated on a novel in-house dataset containing 22 T1w brain MR images, which were challenging from various perspectives, but mostly because they included T1w MR images from the subacute stroke phase (which typically has less well-defined T1 lesions) and the chronic stroke phase (which typically has well-defined T1 lesions). Cross-validation results indicate that our new method can automatically segment lesions quickly and with high accuracy compared to ground truth. In addition to segmentation, we provide lesion volume and weighted lesion load of relevant brain systems based on the lesions' overlap with a canonical structural motor system that stretches from the cortical motor region to the lowest end of the brain stem.
2310.19046 | Large Language Models as Evolutionary Optimizers | Evolutionary algorithms (EAs) have achieved remarkable success in tackling complex combinatorial optimization problems. However, EAs often demand carefully-designed operators with the aid of domain expertise to achieve satisfactory performance. In this work, we present the first study on large language models (LLMs) as evolutionary combinatorial optimizers. The main advantage is that it requires minimal domain knowledge and human effort, as well as no additional training of the model. This approach is referred to as LLM-driven EA (LMEA). Specifically, in each generation of the evolutionary search, LMEA instructs the LLM to select parent solutions from the current population, and to perform crossover and mutation to generate offspring solutions. Then, LMEA evaluates these new solutions and includes them in the population for the next generation. LMEA is equipped with a self-adaptation mechanism that controls the temperature of the LLM. This enables it to balance exploration and exploitation and prevents the search from getting stuck in local optima. We investigate the power of LMEA on the classical traveling salesman problems (TSPs) widely used in combinatorial optimization research. Notably, the results show that LMEA performs competitively with traditional heuristics in finding high-quality solutions on TSP instances with up to 20 nodes. Additionally, we also study the effectiveness of LLM-driven crossover/mutation and the self-adaptation mechanism in evolutionary search. In summary, our results reveal the great potential of LLMs as evolutionary optimizers for solving combinatorial problems. We hope our research shall inspire future explorations on LLM-driven EAs for complex optimization challenges.
2301.04655 | ChatGPT is not all you need. A State of the Art Review of large
Generative AI models | During the last two years a plethora of large generative models, such as ChatGPT or Stable Diffusion, have been published. Concretely, these models are able to perform tasks such as serving as a general question-answering system or automatically creating artistic images that are revolutionizing several sectors. Consequently, the implications that these generative models have for industry and society are enormous, as several job positions may be transformed. For example, Generative AI is capable of transforming texts to images effectively and creatively, like the DALLE-2 model; text to 3D images, like the Dreamfusion model; images to text, like the Flamingo model; texts to video, like the Phenaki model; texts to audio, like the AudioLM model; texts to other texts, like ChatGPT; texts to code, like the Codex model; texts to scientific texts, like the Galactica model; or even create algorithms, like AlphaTensor. This work is an attempt to concisely describe the main models and sectors affected by generative AI and to provide a taxonomy of the main generative models published recently.
2103.02843 | Pandemic Drugs at Pandemic Speed: Infrastructure for Accelerating
COVID-19 Drug Discovery with Hybrid Machine Learning- and Physics-based
Simulations on High Performance Computers | The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient and slow. There is a major bottleneck screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods, in this case developed for linear accelerators, and physics-based methods. The two in silico methods, each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the potential resulting workflow is such that it is dependent on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors for four COVID-19 target proteins and our ability to perform the required large-scale calculations to identify lead antiviral compounds through repurposing on a variety of supercomputers. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 223,080 |
2110.02316 | Prediction of the Facial Growth Direction is Challenging | Facial dysmorphology or malocclusion is frequently associated with abnormal growth of the face. The ability to predict facial growth (FG) direction would allow clinicians to prepare individualized therapy to increase the chance of successful treatment. Prediction of FG direction is a novel problem in the machine learning (ML) domain. In this paper, we perform feature selection and point out the attribute that plays a central role in the abovementioned problem. Then we successfully apply data augmentation (DA) methods and improve the previously reported classification accuracy by 2.81%. Finally, we present the results of two experienced clinicians that were asked to solve a similar task to ours and show how tough solving this problem is for human experts.
2309.13064 | InvestLM: A Large Language Model for Investment using Financial Domain
Instruction Tuning | We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023), using a carefully curated instruction dataset related to financial investment. Inspired by less-is-more-for-alignment (Zhou et al., 2023), we manually curate a small yet diverse instruction dataset, covering a wide range of financial related topics, from Chartered Financial Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative finance discussions. InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment related questions. Financial experts, including hedge fund managers and research analysts, rate InvestLM's response as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of financial NLP benchmarks demonstrates strong generalizability. From a research perspective, this work suggests that a high-quality domain specific LLM can be tuned using a small set of carefully curated instructions on a well-trained foundation model, which is consistent with the Superficial Alignment Hypothesis (Zhou et al., 2023). From a practical perspective, this work develops a state-of-the-art financial domain LLM with superior capability in understanding financial texts and providing helpful investment advice, potentially enhancing the work efficiency of financial professionals. We release the model parameters to the research community. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 394,039 |
2307.03440 | A review of dynamics design methods for high-speed and high-precision
CNC machine tool feed systems | With the development of CNC machine tools toward high speed and high precision, the traditional static design methods can hardly meet the demand. Hence, in this paper, the dynamics matching design methods of existing CNC machine tool feed systems were investigated and analyzed. Further, sub-system coupling mechanisms and optimization design studies were carried out for each sub-system. First, the required kinematic indexes must be achieved when designing the feed system dynamics of high-speed, high-precision CNC machine tools. Second, the CNC machine tool feed systems generally have four sub-systems: motion process, control system, motor, and mechanical structure. The coupling effect between the sub-systems should also be considered in the design. Based on the dynamics design, each sub-system should be optimized to maximize the system dynamic performance with minimum resource allocation. Finally, based on the review, future research directions within the field were detected. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 378,044 |
1207.4155 | Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy
Clustering | In this paper, a similarity-driven cluster merging method is proposed for unsupervised fuzzy clustering. The cluster merging method is used to resolve the problem of cluster validation. Starting with an overspecified number of clusters in the data, pairs of similar clusters are merged based on the proposed similarity-driven cluster merging criterion. The similarity between clusters is calculated by a fuzzy cluster similarity matrix, while an adaptive threshold is used for merging. In addition, a modified generalized objective function is used for prototype-based fuzzy clustering. The function includes the p-norm distance measure as well as principal components of the clusters. The number of the principal components is determined automatically from the data being clustered. The properties of this unsupervised fuzzy clustering algorithm are illustrated by several experiments.
2404.16375 | List Items One by One: A New Data Source and Learning Paradigm for
Multimodal LLMs | Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of GPT-4V, by enabling the model to associate visual objects with tags inserted on the image. These tags, marked with alphanumerics, can be indexed via text tokens for easy reference. Despite the extraordinary performance from GPT-4V, we observe that other Multimodal Large Language Models (MLLMs) struggle to understand these visual tags. To promote the learning of SoM prompting for open-source models, we propose a new learning paradigm: "list items one by one," which asks the model to enumerate and describe all visual tags placed on the image following the alphanumeric orders of tags. By integrating our curated dataset with other visual instruction tuning datasets, we are able to equip existing MLLMs with the SoM prompting ability. Furthermore, we evaluate our finetuned SoM models on five MLLM benchmarks. We find that this new dataset, even in a relatively small size (10k-30k images with tags), significantly enhances visual reasoning capabilities and reduces hallucinations for MLLMs. Perhaps surprisingly, these improvements persist even when the visual tags are omitted from input images during inference. This suggests the potential of "list items one by one" as a new paradigm for training MLLMs, which strengthens the object-text alignment through the use of visual tags in the training stage. Finally, we conduct analyses by probing trained models to understand the working mechanism of SoM. Our code and data are available at \url{https://github.com/zzxslp/SoM-LLaVA}. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 449,467 |
2001.06370 | Approximating Activation Functions | ReLU is widely seen as the default choice for activation functions in neural networks. However, there are cases where more complicated functions are required. In particular, recurrent neural networks (such as LSTMs) make extensive use of both hyperbolic tangent and sigmoid functions. These functions are expensive to compute. We used function approximation techniques to develop replacements for these functions and evaluated them empirically on three popular network configurations. We find safe approximations that yield a 10% to 37% improvement in training times on the CPU. These approximations were suitable for all cases we considered and we believe are appropriate replacements for all networks using these activation functions. We also develop ranged approximations which only apply in some cases due to restrictions on their input domain. Our ranged approximations yield a performance improvement of 20% to 53% in network training time. Our functions also match or considerably outperform the ad-hoc approximations used in Theano and the implementation of Word2Vec.
2206.02391 | Automated Circuit Sizing with Multi-objective Optimization based on
Differential Evolution and Bayesian Inference | With the ever increasing complexity of specifications, manual sizing for analog circuits recently became very challenging. Especially for innovative, large-scale circuits designs, with tens of design variables, operating conditions and conflicting objectives to be optimized, design engineers spend many weeks, running time-consuming simulations, in their attempt at finding the right configuration. Recent years brought machine learning and optimization techniques to the field of analog circuits design, with evolutionary algorithms and Bayesian models showing good results for circuit sizing. In this context, we introduce a design optimization method based on Generalized Differential Evolution 3 (GDE3) and Gaussian Processes (GPs). The proposed method is able to perform sizing for complex circuits with a large number of design variables and many conflicting objectives to be optimized. While state-of-the-art methods reduce multi-objective problems to single-objective optimization and potentially induce a prior bias, we search directly over the multi-objective space using Pareto dominance and ensure that diverse solutions are provided to the designers to choose from. To the best of our knowledge, the proposed method is the first to specifically address the diversity of the solutions, while also focusing on minimizing the number of simulations required to reach feasible configurations. We evaluate the introduced method on two voltage regulators showing different levels of complexity and we highlight that the proposed innovative candidate selection method and survival policy leads to obtaining feasible solutions, with a high degree of diversity, much faster than with GDE3 or Bayesian Optimization-based algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 300,877 |
1508.02977 | A massively parallel multi-level approach to a domain decomposition
method for the optical flow estimation with varying illumination | We consider a variational method to solve the optical flow problem with varying illumination. We apply an adaptive control of the regularization parameter which allows us to preserve the edges and fine features of the computed flow. To reduce the complexity of the estimation for high resolution images and the time of computations, we implement a multi-level parallel approach based on the domain decomposition with the Schwarz overlapping method. The second level of parallelism uses the massively parallel solver MUMPS. We perform some numerical simulations to show the efficiency of our approach and to validate it on classical and real-world image sequences. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 45,957 |
1711.06815 | WAKE: Wavelet Decomposition Coupled with Adaptive Kalman Filtering for
Pathological Tremor Extraction | Pathological Hand Tremor (PHT) is among common symptoms of several neurological movement disorders, which can significantly degrade quality of life of affected individuals. Beside pharmaceutical and surgical therapies, mechatronic technologies have been utilized to control PHTs. Most of these technologies function based on estimation, extraction, and characterization of tremor movement signals. Real-time extraction of tremor signal is of paramount importance because of its application in assistive and rehabilitative devices. In this paper, we propose a novel on-line adaptive method which can adjust the hyper-parameters of the filter to the variable characteristics of the tremor. The proposed "WAKE: Wavelet decomposition coupled with Adaptive Kalman filtering technique for pathological tremor Extraction, referred to as the WAKE framework" is composed of a new adaptive Kalman filter and a wavelet transform core to provide indirect prediction of the tremor, one sample ahead of time, to be used for its suppression. In this paper, the design, implementation and evaluation of WAKE are given. The performance is evaluated based on three different datasets, the first one is a synthetic dataset, developed in this work, that simulates hand tremor under ten different conditions. The second and third ones are real datasets recorded from patients with PHTs. The results obtained from the proposed WAKE framework demonstrate significant improvements in the estimation accuracy in comparison with two well regarded techniques in the literature. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 84,859 |
2407.02124 | Data-Driven Subsynchronous Oscillation Suppression for Renewable Energy
Integrated Power Systems Based on Koopman Operator | Recently, subsynchronous oscillations (SSOs) have emerged frequently worldwide, with the high penetration of renewable power generation in modern power systems. The SSO introduced by renewables has become a prominent new stability problem, seriously threatening the stable operation of systems. This paper proposes a data-driven dynamic optimal controller for renewable energy integrated power systems, to suppress SSOs with the control of renewables. The challenges of the controller design are the nonlinearity, complexity and hard accessibility of the system models. Using Koopman operator, the system dynamics are accurately extracted from data and utilized to the linear model predictive control (MPC). Firstly, the globally linear representation of the system dynamics is obtained by lifting, and the key states are selected as control signals by analyzing Koopman participation factors. Subsequently, augmented with the control term, the Koopman linear parameter-varying predictor of the controlled system is constructed. Finally, using MPC, the proposed controller computes control signals online in a moving horizon fashion. Case studies show that the proposed controller is effective, adaptive and robust in various conditions, surpassing other controllers with reliable control performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 469,596 |
2306.13154 | Communication-Free Distributed Charging Control for Electric Vehicle
Group | The disordered charging of electric vehicles (EVs) in residential areas leads to a rapid increase of the peak load, causing transformer overload, but the charging control of EV group can effectively alleviate this phenomenon. However, existing charging control methods need reliable two-way communication infrastructure, which brings high operation costs and security risks. To offer a backup strategy for charging control of EVs after communication facilities fail, this paper proposes a communication-free charging control scheme to provide a decentralized on-site charging strategy for EV group. First, an uncontrollable EV group baseline estimation considering charging behaviors enabled by Gaussian mixture model (GMM) is proposed to acquire the capacity margin forecasting for controllable EVs. Next, this paper proposes a probabilistic distributed control method to assist users formulate the charging plan autonomously. Here, the charging behavior of EV group is regulated from an optimization with uncertain boundary conditions to a sampling with uncertain feasible regions expressed by a probability distribution. Finally, the scheme is verified via real-world EV charging data from a residential area in Hangzhou, China. The results show that this method can reduce the probability of transformer overload caused by out-of-order EV charging after a communication failure. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 375,181 |
2404.04197 | Convex MPC and Thrust Allocation with Deadband for Spacecraft Rendezvous | This paper delves into a rendezvous scenario involving a chaser and a target spacecraft, focusing on the application of Model Predictive Control (MPC) to design a controller capable of guiding the chaser toward the target. The operational principle of spacecraft thrusters, requiring a minimum activation time that leads to the existence of a control deadband, introduces mixed-integer constraints into the optimization, posing a considerable computational challenge due to the exponential complexity on the number of integer constraints. We address this complexity by presenting two solver algorithms that efficiently approximate the optimal solution in significantly less time than standard solvers, making them well-suited for real-time applications. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 444,540 |
2001.07688 | Estimating international trade status of countries from global liner
shipping networks | Maritime shipping is a backbone of international trade and, thus, the world economy. Cargo-loaded vessels travel from one country's port to another via an underlying port-to-port transport network, contributing to international trade values of countries en route. We hypothesize that ports that involve trans-shipment activities serve as a third-party broker to mediate trade between two foreign countries and contribute to the corresponding country's status in international trade. We test this hypothesis using a port-level dataset of global liner shipping services. We propose two indices that quantify the importance of countries in the global liner shipping network and show that they explain a large amount of variation in individual countries' international trade values and related measures. These results support a long-standing view in maritime economics, which has yet to be directly tested, that countries that are strongly integrated into the global maritime transportation network have enhanced access to global markets and trade opportunities. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 161,101 |
2302.06247 | Continuous-time convolutions model of event sequences | Event sequences often emerge in data mining. Modeling these sequences presents two main challenges: methodological and computational. Methodologically, event sequences are non-uniform and sparse, making traditional models unsuitable. Computationally, the vast amount of data and the significant length of each sequence necessitate complex and efficient models. Existing solutions, such as recurrent and transformer neural networks, rely on parametric intensity functions defined at each moment. These functions are either limited in their ability to represent complex event sequences or notably inefficient. We propose COTIC, a method based on an efficient convolution neural network designed to handle the non-uniform occurrence of events over time. Our paper introduces a continuous convolution layer, allowing a model to capture complex dependencies, including, e.g., the self-excitement effect, with little computational expense. COTIC outperforms existing models in predicting the next event time and type, achieving an average rank of 1.5 compared to 3.714 for the nearest competitor. Furthermore, COTIC's ability to produce effective embeddings demonstrates its potential for various downstream tasks. Our code is open and available at: https://github.com/VladislavZh/COTIC.
2404.09735 | Equipping Diffusion Models with Differentiable Spatial Entropy for
Low-Light Image Enhancement | Image restoration, which aims to recover high-quality images from their corrupted counterparts, often faces the challenge of being an ill-posed problem that allows multiple solutions for a single input. However, most deep learning based works simply employ l1 loss to train their network in a deterministic way, resulting in over-smoothed predictions with inferior perceptual quality. In this work, we propose a novel method that shifts the focus from a deterministic pixel-by-pixel comparison to a statistical perspective, emphasizing the learning of distributions rather than individual pixel values. The core idea is to introduce spatial entropy into the loss function to measure the distribution difference between predictions and targets. To make this spatial entropy differentiable, we employ kernel density estimation (KDE) to approximate the probabilities for specific intensity values of each pixel with their neighbor areas. Specifically, we equip the entropy with diffusion models and aim for superior accuracy and enhanced perceptual quality over l1 based noise matching loss. In the experiments, we evaluate the proposed method for low light enhancement on two datasets and the NTIRE challenge 2024. All these results illustrate the effectiveness of our statistic-based entropy loss. Code is available at https://github.com/shermanlian/spatial-entropy-loss. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,811 |
2401.17738 | Harnessing Smartwatch Microphone Sensors for Cough Detection and Classification | This study investigates the potential of using smartwatches with built-in microphone sensors for monitoring coughs and detecting various cough types. We conducted a study involving 32 participants and collected 9 hours of audio data in a controlled manner. Afterward, we processed this data using a structured approach, resulting in 223 positive cough samples. We further improved the dataset through augmentation techniques and employed a specialized 1D CNN model. This model achieved an impressive accuracy rate of 98.49% while not walking and 98.2% while walking, showing that smartwatches can detect coughs. Moreover, our research successfully identified four distinct types of coughs using clustering techniques. | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,305 |
2410.18727 | Breaking Down the Barriers: Investigating Non-Expert User Experiences in Robotic Teleoperation in UK and Japan | Robots are being created each year with the goal of integrating them into our daily lives. As such, there is an interest in research in evaluating the trust of humans toward robots. In addition, teleoperating robotic arms can be challenging for non-experts. To reduce the strain put on the user, we created TELESIM, a modular and plug-and-play framework that enables direct teleoperation of any robotic arm using a digital twin as the interface between users and the robotic system. We evaluated our framework using a user survey with three robots and control methods and recorded the user's workload and performance at completing a tower stacking task. However, an analysis of the strain on the user and their ability to trust robots was omitted. This paper addresses these omissions by presenting the additional results of our user survey of 37 participants carried out in the United Kingdom. In addition, we present the results of an additional user survey, under similar conditions performed in Japan, with the goal of addressing the limitations of our previous approach, by interfacing a VR controller with a UR5e. Our experimental results show that participants built the most towers with the UR5e. Additionally, the UR5e induces the least cognitive stress, while the combination of Senseglove and UR3 places the highest physical strain on the user and causes the user to feel more frustrated. Finally, the Japanese participants seem more trusting of robots than the British participants. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 502,009 |
1911.05940 | Distributional Clustering: A distribution-preserving clustering method | One key use of k-means clustering is to identify cluster prototypes which can serve as representative points for a dataset. However, a drawback of using k-means cluster centers as representative points is that such points distort the distribution of the underlying data. This can be highly disadvantageous in problems where the representative points are subsequently used to gain insights on the data distribution, as these points do not mimic the distribution of the data. To this end, we propose a new clustering method called "distributional clustering", which ensures cluster centers capture the distribution of the underlying data. We first prove the asymptotic convergence of the proposed cluster centers to the data generating distribution, then present an efficient algorithm for computing these cluster centers in practice. Finally, we demonstrate the effectiveness of distributional clustering on synthetic and real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,415 |
1612.09027 | On Covert Communication with Noise Uncertainty | Prior studies on covert communication with noise uncertainty adopted a worst-case approach from the warden's perspective. That is, the worst-case detection performance of the warden is used to assess covertness, which is overly optimistic. Instead of simply considering the worst limit, in this work, we take the distribution of noise uncertainty into account to evaluate the overall covertness in a statistical sense. Specifically, we define new metrics for measuring the covertness, which are then adopted to analyze the maximum achievable rate for a given covertness requirement under both bounded and unbounded noise uncertainty models. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,141 |
2302.06083 | Universal Agent Mixtures and the Geometry of Intelligence | Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is the weighted average of the original agents' intelligences. This operation enables various interesting new theorems that shed light on the geometry of RL agent intelligence, namely: results about symmetries, convex agent-sets, and local extrema. We also show that any RL agent intelligence measure based on average performance across environments, subject to certain weak technical conditions, is identical (up to a constant factor) to performance within a single environment dependent on said intelligence measure. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 345,282 |
2209.06656 | Syndrome decoding meets multiple instances | The NP-hard problem of decoding random linear codes is crucial to both coding theory and cryptography. In particular, this problem underpins the security of many code based post-quantum cryptographic schemes. The state-of-the-art algorithms for solving this problem are the information syndrome decoding algorithm and its advanced variants. In this work, we consider syndrome decoding in the multiple instances setting. Two strategies are applied for different scenarios. The first strategy is to solve all instances with the aid of the precomputation technique. We adjust the current framework and distinguish the offline phase and online phase to reduce the amortized complexity. Further, we discuss the impact on the concrete security of some post-quantum schemes. The second strategy is to solve one out of many instances. Adapting the analysis for some earlier algorithm, we discuss the effectiveness of using advanced variants and confirm a related folklore conjecture. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 317,473 |
2410.24071 | Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs | Achieving the no-regret property for Reinforcement Learning (RL) problems in continuous state and action-space environments is one of the major open problems in the field. Existing solutions either work under very specific assumptions or achieve bounds that are vacuous in some regimes. Furthermore, many structural assumptions are known to suffer from a provably unavoidable exponential dependence on the time horizon $H$ in the regret, which makes any possible solution unfeasible in practice. In this paper, we identify local linearity as the feature that makes Markov Decision Processes (MDPs) both learnable (sublinear regret) and feasible (regret that is polynomial in $H$). We define a novel MDP representation class, namely Locally Linearizable MDPs, generalizing other representation classes like Linear MDPs and MDPs with low inherent Bellman error. Then, i) we introduce Cinderella, a no-regret algorithm for this general representation class, and ii) we show that all known learnable and feasible MDP families are representable in this class. We first show that all known feasible MDPs belong to a family that we call Mildly Smooth MDPs. Then, we show how any mildly smooth MDP can be represented as a Locally Linearizable MDP by an appropriate choice of representation. This way, Cinderella is shown to achieve state-of-the-art regret bounds for all previously known (and some new) continuous MDPs for which RL is learnable and feasible. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 504,316 |
2407.18480 | Scalable Graph Compressed Convolutions | Designing effective graph neural networks (GNNs) with message passing has two fundamental challenges, i.e., determining optimal message-passing pathways and designing local aggregators. Previous methods of designing optimal pathways are limited with information loss on the input features. On the other hand, existing local aggregators generally fail to extract multi-scale features and approximate diverse operators under limited parameter scales. In contrast to these methods, Euclidean convolution has been proven as an expressive aggregator, making it a perfect candidate for GNN construction. However, the challenges of generalizing Euclidean convolution to graphs arise from the irregular structure of graphs. To bridge the gap between Euclidean space and graph topology, we propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution. The permutations constrain all nodes in a row regardless of their input order and therefore enable the flexible generalization of Euclidean convolution to graphs. Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning. CoCN follows local feature-learning and global parameter-sharing mechanisms of convolution neural networks. The whole model can be trained end-to-end, with compressed convolution applied to learn individual node features and their corresponding structure features. CoCN can further borrow successful practices from Euclidean convolution, including residual connection and inception mechanism. We validate CoCN on both node-level and graph-level benchmarks. CoCN achieves superior performance over competitive GNN baselines. Codes are available at https://github.com/sunjss/CoCN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 476,387 |