Dataset schema (column name, type, value range):

  id                   string   (length 9 to 16)
  title                string   (length 4 to 278)
  abstract             string   (length 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool     (2 classes each; one boolean flag per category)
  __index_level_0__    int64    (0 to 541k)
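The one-hot boolean layout above can be collapsed into a per-record label list. A minimal sketch in plain Python, assuming a record is a dict keyed by the schema's column names (the sample values mirror the first record below; `active_labels` is an illustrative helper, not part of the dataset):

```python
# Column order as given in the schema above.
LABEL_COLS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
              "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
              "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other"]

# Illustrative record mirroring the first row of the dataset:
# every label column is False except cs.LG.
record = {
    "id": "2502.03776",
    "title": "StarMAP: Global Neighbor Embedding for Faithful Data Visualization",
    **{c: (c == "cs.LG") for c in LABEL_COLS},
    "__index_level_0__": 530855,
}

def active_labels(row, cols=LABEL_COLS):
    """Collapse the one-hot boolean columns into the list of true labels."""
    return [c for c in cols if row[c]]

print(active_labels(record))  # ['cs.LG']
```

The same helper applies unchanged to every record, since all rows share the schema's column order.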
2502.03776
StarMAP: Global Neighbor Embedding for Faithful Data Visualization
Neighbor embedding is widely employed to visualize high-dimensional data; however, it frequently overlooks the global structure, e.g., intercluster similarities, thereby impeding accurate visualization. To address this problem, this paper presents Star-attracted Manifold Approximation and Projection (StarMAP), which incorporates the advantage of principal component analysis (PCA) in neighbor embedding. Inspired by the property of PCA embedding, which can be viewed as the largest shadow of the data, StarMAP introduces the concept of "star attraction" by leveraging the PCA embedding. This approach yields faithful global structure preservation while maintaining the interpretability and computational efficiency of neighbor embedding. StarMAP was compared with existing methods in the visualization tasks of toy datasets, single-cell RNA sequencing data, and deep representations. The experimental results show that StarMAP is simple but effective in realizing faithful visualizations.
labels: cs.LG | __index_level_0__: 530,855
2006.01325
Optimal Non-Adaptive Probabilistic Group Testing in General Sparsity Regimes
In this paper, we consider the problem of noiseless non-adaptive probabilistic group testing, in which the goal is high-probability recovery of the defective set. We show that in the case of $n$ items among which $k$ are defective, the smallest possible number of tests equals $\min\{ C_{k,n} k \log n, n\}$ up to lower-order asymptotic terms, where $C_{k,n}$ is a uniformly bounded constant (varying depending on the scaling of $k$ with respect to $n$) with a simple explicit expression. The algorithmic upper bound follows from a minor adaptation of an existing analysis of the Definite Defectives (DD) algorithm, and the algorithm-independent lower bound builds on existing works for the regimes $k \le n^{1-\Omega(1)}$ and $k = \Theta(n)$. In sufficiently sparse regimes (including $k = o\big( \frac{n}{\log n} \big)$), our main result generalizes that of Coja-Oghlan {\em et al.} (2020) by avoiding the assumption $k \le n^{1-\Omega(1)}$, whereas in sufficiently dense regimes (including $k = \omega\big( \frac{n}{\log n} \big)$), our main result shows that individual testing is asymptotically optimal for any non-zero target success probability, thus strengthening an existing result of Aldridge (2019) in terms of both the error probability and the assumed scaling of $k$.
labels: cs.IT | __index_level_0__: 179,743
2402.01511
Simulation-based optimization of a production system topology -- a neural network-assisted genetic algorithm
There is an abundance of prior research on the optimization of production systems, but there is a research gap when it comes to optimizing which components should be included in a design, and how they should be connected. To overcome this gap, a novel approach is presented for topology optimization of production systems using a genetic algorithm (GA). This GA employs similarity-based mutation and recombination for the creation of offspring, and discrete-event simulation for fitness evaluation. To reduce computational cost, an extension to the GA is presented in which a neural network functions as a surrogate model for simulation. Three types of neural networks are compared, and the type most effective as a surrogate model is chosen based on its optimization performance and computational cost. Both the unassisted GA and neural network-assisted GA are applied to an industrial case study and a scalability case study. These show that both approaches are effective at finding the optimal solution in industrial settings, and both scale well as the number of potential solutions increases, with the neural network-assisted GA having the better scalability of the two.
labels: cs.SY, cs.NE | __index_level_0__: 426,055
2312.12936
Concept-based Explainable Artificial Intelligence: A Survey
The field of explainable artificial intelligence emerged in response to the growing need for more transparent and reliable models. However, using raw features to provide explanations has been disputed in several recent works, which advocate for more user-understandable explanations. To address this issue, a wide range of papers proposing Concept-based eXplainable Artificial Intelligence (C-XAI) methods have arisen in recent years. Nevertheless, a unified categorization and precise field definition are still missing. This paper fills the gap by offering a thorough review of C-XAI approaches. We define and identify different concepts and explanation types. We provide a taxonomy identifying nine categories and propose guidelines for selecting a suitable category based on the development context. Additionally, we report common evaluation strategies, including metrics, human evaluations, and datasets employed, aiming to assist the development of future methods. We believe this survey will serve researchers, practitioners, and domain experts in comprehending and advancing this innovative field.
labels: cs.HC, cs.AI | __index_level_0__: 417,160
0901.3608
A remark on higher order RUE-resolution with EXTRUE
We show that a prominent counterexample for the completeness of first order RUE-resolution does not apply to the higher order RUE-resolution approach EXTRUE.
labels: cs.AI, Other | __index_level_0__: 3,036
2405.19212
Partial Information Decomposition for Data Interpretability and Feature Selection
In this paper, we introduce Partial Information Decomposition of Features (PIDF), a new paradigm for simultaneous data interpretability and feature selection. Contrary to traditional methods that assign a single importance value, our approach is based on three metrics per feature: the mutual information shared with the target variable, the feature's contribution to synergistic information, and the amount of this information that is redundant. In particular, we develop a novel procedure based on these three metrics, which reveals not only how features are correlated with the target but also the additional and overlapping information provided by considering them in combination with other features. We extensively evaluate PIDF using both synthetic and real-world data, demonstrating its potential applications and effectiveness, by considering case studies from genetics and neuroscience.
labels: cs.AI, cs.LG, cs.IT | __index_level_0__: 458,792
2005.10091
Label Efficient Visual Abstractions for Autonomous Driving
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
labels: cs.AI, cs.RO, cs.CV | __index_level_0__: 178,094
1905.00496
A Unified Deep Learning Formalism For Processing Graph Signals
Convolutional Neural Networks are very efficient at processing signals defined on a discrete Euclidean space (such as images). However, since they cannot be used on signals defined on an arbitrary graph, other models have emerged that aim to extend their properties. We propose to review some of the major deep learning models designed to exploit the underlying graph structure of signals. We express them in a unified formalism, giving them a new and comparative reading.
labels: cs.LG | __index_level_0__: 129,481
1905.01574
A Joint Convolutional Neural Networks and Context Transfer for Street Scenes Labeling
Street scene understanding is an essential task for autonomous driving. One important step in this direction is scene labeling, which annotates each pixel in an image with a correct class label. Although many approaches have been developed, there are still some weak points. Firstly, many methods are based on hand-crafted features, whose image representation ability is limited. Secondly, they cannot label foreground objects accurately due to dataset bias. Thirdly, in the refinement stage, traditional Markov Random Field (MRF) inference is prone to over-smoothness. To address these problems, this paper proposes a joint method of priori convolutional neural networks at the superpixel level (called ``priori s-CNNs'') and soft restricted context transfer. Our contributions are threefold: (1) A priori s-CNNs model that learns priori location information at the superpixel level is proposed to describe various objects discriminatively; (2) A hierarchical data augmentation method is presented to alleviate dataset bias in the priori s-CNNs training stage, which improves foreground object labeling significantly; (3) A soft restricted MRF energy function is defined to improve the priori s-CNNs model's labeling performance and reduce over-smoothness at the same time. The proposed approach is verified on the CamVid dataset (11 classes) and the SIFT Flow Street dataset (16 classes) and achieves competitive performance.
labels: cs.CV | __index_level_0__: 129,750
2302.14689
Robust one-shot estimation over shared networks in the presence of denial-of-service attacks
Multi-agent systems often communicate over low-power shared wireless networks in unlicensed spectrum, prone to denial-of-service attacks. We consider the following scenario: multiple pairs of agents communicating strategically over shared communication networks in the presence of a jammer who may launch a denial-of-service attack. We cast this problem as a game between a coordinator, who optimizes the transmission and estimation policies jointly, and a jammer, who optimizes its probability of performing an attack. We consider two cases: point-to-point channels and large-scale networks with a countably infinite number of sensor-receiver pairs. When the jammer proactively attacks the channel, the game is nonconvex from the coordinator's perspective. However, despite the lack of convexity, we construct a saddle point equilibrium solution for any multi-variate Gaussian distribution for the observations. When the jammer is reactive, we obtain an algorithm based on sequential convex optimization, which converges swiftly to first-order Nash equilibria. Interestingly, when the jammer is reactive, blocking the channel is often optimal even when it is idle, to create ambiguity at the receiver.
labels: cs.IT, cs.SY, Other | __index_level_0__: 348,396
1512.01003
Weighted Schatten $p$-Norm Minimization for Image Denoising and Background Subtraction
Low rank matrix approximation (LRMA), which aims to recover the underlying low rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to using the nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely the Weighted Schatten $p$-Norm Minimization (WSNM), to generalize the NNM to the Schatten $p$-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under certain weights permutation, WSNM can be equivalently transformed into independent non-convex $l_p$-norm subproblems, whose global optimum can be efficiently solved by generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM can more effectively remove noise, and model complex and dynamic scenes compared with state-of-the-art methods.
labels: cs.CV | __index_level_0__: 49,766
2308.13936
Learning Human-arm Reaching Motion Using IMU in Human-Robot Collaboration
Many tasks performed by two humans require mutual interaction between arms, such as handing over tools and objects. For a robotic arm to interact with a human in the same way, it must reason about the location of the human arm in real time. Furthermore, to interact in a timely manner, the robot must be able to predict the final target of the human in order to plan and initiate motion beforehand. In this paper, we explore the use of a low-cost wearable device equipped with two inertial measurement units (IMUs) for learning reaching motion for real-time applications of Human-Robot Collaboration (HRC). A wearable device can replace or complement visual perception in cases of bad lighting or occlusions in a cluttered environment. We first train a neural-network model to estimate the current location of the arm. Then, we propose a novel model based on a recurrent neural network to predict the future target of the human arm during motion in real time. Early prediction of the target grants the robot sufficient time to plan and initiate motion while the human is still moving. The accuracy of the models is analyzed with respect to the features included in the motion representation. Through experiments and real demonstrations with a robotic arm, we show that sufficient accuracy is achieved for feasible HRC without any visual perception. Once trained, the system can be deployed in various spaces with no additional effort. The models exhibit high accuracy for various initial poses of the human arm. Moreover, the trained models are shown to provide high success rates with additional human participants not included in the model training.
labels: cs.RO | __index_level_0__: 388,113
2409.18298
Causality-based Subject and Task Fingerprints using fMRI Time-series Data
Recently, there has been a revived interest in system neuroscience causation models due to their unique capability to unravel complex relationships in multi-scale brain networks. In this paper, our goal is to verify the feasibility and effectiveness of using a causality-based approach for fMRI fingerprinting. Specifically, we propose an innovative method that utilizes the causal dynamics activities of the brain to identify the unique cognitive patterns of individuals (e.g., subject fingerprint) and fMRI tasks (e.g., task fingerprint). The key novelty of our approach stems from the development of a two-timescale linear state-space model to extract 'spatio-temporal' (aka causal) signatures from an individual's fMRI time series data. To the best of our knowledge, we pioneer and subsequently quantify, in this paper, the concept of 'causal fingerprint.' Our method is well-separated from other fingerprint studies as we quantify fingerprints from a cause-and-effect perspective, which are then incorporated with a modal decomposition and projection method to perform subject identification and a GNN-based (Graph Neural Network) model to perform task identification. Finally, we show that the experimental results and comparisons with non-causality-based methods demonstrate the effectiveness of the proposed methods. We visualize the obtained causal signatures and discuss their biological relevance in light of the existing understanding of brain functionalities. Collectively, our work paves the way for further studies on causal fingerprints with potential applications in both healthy controls and neurodegenerative diseases.
labels: cs.LG, cs.SY | __index_level_0__: 492,180
2202.07136
Debiased Self-Training for Semi-Supervised Learning
Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet these datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises both from the problem itself and from inappropriate training with potentially incorrect pseudo labels, which accumulates error in the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, in which the pseudo labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding the worst case. Extensive experiments show that DST achieves an average improvement of 6.3% over state-of-the-art methods on standard semi-supervised learning benchmark datasets and 18.9% over FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes both when training from scratch and when finetuning from pre-trained models.
labels: cs.LG, cs.CV | __index_level_0__: 280,445
2304.09574
The State-of-the-Art in Air Pollution Monitoring and Forecasting Systems using IoT, Big Data, and Machine Learning
The quality of air is closely linked with the quality of life of humans, vegetation, and wildlife, and it needs to be monitored and preserved continuously. Transportation, industry, construction sites, generators, fireworks, and waste burning account for a major share of air-quality degradation, so these sources must be operated in a safe and controlled manner. Using traditional laboratory analysis or installing bulky and expensive monitoring stations every few miles is no longer efficient; smart devices are needed for collecting and analyzing air data. The quality of air depends on various factors, including location, traffic, and time. Recent research uses machine learning algorithms, big data technologies, and the Internet of Things to propose stable and efficient models for this purpose. This review paper studies and compiles recent research in this field, with emphasis on data sources and on monitoring and forecasting models. Its main objective is to provide insight into ongoing research on improving the various aspects of air pollution models. Further, it casts light on open research issues and challenges.
labels: cs.LG, Other | __index_level_0__: 359,098
1805.05009
Deep Decision Trees for Discriminative Dictionary Learning with Adversarial Multi-Agent Trajectories
With the explosion in the availability of spatio-temporal tracking data in modern sports, there is an enormous opportunity to better analyse, learn and predict important events in adversarial group environments. In this paper, we propose a deep decision tree architecture for discriminative dictionary learning from adversarial multi-agent trajectories. We first build up a hierarchy for the tree structure by adding each layer and performing feature-weight-based clustering in the forward pass. We then fine-tune the player role weights using back propagation. The hierarchical architecture ensures the interpretability and the integrity of the group representation. The resulting architecture is a decision tree, with leaf nodes capturing a dictionary of multi-agent group interactions. Due to the ample volume of data available, we focus on soccer tracking data, although our approach can be used in any adversarial multi-agent domain. We present applications of the proposed method for simulating soccer games as well as for evaluating and quantifying team strategies.
labels: cs.CV | __index_level_0__: 97,360
2403.10783
StableGarment: Garment-Centric Generation via Stable Diffusion
In this paper, we introduce StableGarment, a unified framework to tackle garment-centric (GC) generation tasks, including GC text-to-image, controllable GC text-to-image, stylized GC text-to-image, and robust virtual try-on. The main challenge lies in retaining the intricate textures of the garment while maintaining the flexibility of pre-trained Stable Diffusion. Our solution involves the development of a garment encoder, a trainable copy of the denoising UNet equipped with additive self-attention (ASA) layers. These ASA layers are specifically devised to transfer detailed garment textures, also facilitating the integration of stylized base models for the creation of stylized images. Furthermore, the incorporation of a dedicated try-on ControlNet enables StableGarment to execute virtual try-on tasks with precision. We also build a novel data engine that produces high-quality synthesized data to preserve the model's ability to follow prompts. Extensive experiments demonstrate that our approach delivers state-of-the-art (SOTA) results among existing virtual try-on methods and exhibits high flexibility with broad potential applications in various garment-centric image generation.
labels: cs.CV | __index_level_0__: 438,355
1103.5586
Use of Devolved Controllers in Data Center Networks
In a data center network, for example, it is common to use controllers to manage resources in a centralized manner. Centralized control, however, imposes a scalability problem. In this paper, we investigate the use of multiple independent controllers instead of a single omniscient controller to manage resources. Each controller looks after only a portion of the network, but together they cover the whole network, which solves the scalability problem. We use flow allocation as an example of how this approach can manage bandwidth use in a distributed manner. The focus is on how to assign components of a network to the controllers so that (1) each controller needs to look after only a small part of the network, but (2) there is at least one controller that can answer any request. We outline a way to configure the controllers to fulfill these requirements as a proof that the use of devolved controllers is possible. We also discuss several issues related to such an implementation.
labels: cs.SY, Other | __index_level_0__: 9,798
2103.15990
An Overview of Human Activity Recognition Using Wearable Sensors: Healthcare and Artificial Intelligence
With the rapid development of the internet of things (IoT) and artificial intelligence (AI) technologies, human activity recognition (HAR) has been applied in a variety of domains such as security and surveillance, human-robot interaction, and entertainment. Even though a number of surveys and review papers have been published, there is a lack of HAR overview papers focusing on healthcare applications that use wearable sensors. Therefore, we fill in the gap by presenting this overview paper. In particular, we present our projects to illustrate the system design of HAR applications for healthcare. Our projects include early mobility identification of human activities for intensive care unit (ICU) patients and gait analysis of Duchenne muscular dystrophy (DMD) patients. We cover essential components of designing HAR systems including sensor factors (e.g., type, number, and placement location), AI model selection (e.g., classical machine learning models versus deep learning models), and feature engineering. In addition, we highlight the challenges of such healthcare-oriented HAR systems and propose several research opportunities for both the medical and the computer science community.
labels: cs.HC, cs.LG | __index_level_0__: 227,408
1910.04308
Gromov-Wasserstein Averaging in a Riemannian Framework
We introduce a theoretical framework for performing statistical tasks---including, but not limited to, averaging and principal component analysis---on the space of (possibly asymmetric) matrices with arbitrary entries and sizes. This is carried out under the lens of the Gromov-Wasserstein (GW) distance, and our methods translate the Riemannian framework of GW distances developed by Sturm into practical, implementable tools for network data analysis. Our methods are illustrated on datasets of letter graphs, asymmetric stochastic blockmodel networks, and planar shapes viewed as metric spaces. On the theoretical front, we supplement the work of Sturm by producing additional results on the tangent structure of this "space of spaces", as well as on the gradient flow of the Fr\'{e}chet functional on this space.
labels: cs.LG, Other | __index_level_0__: 148,728
2303.01513
Safe AI for health and beyond -- Monitoring to transform a health service
Machine learning techniques are effective for building predictive models because they identify patterns in large datasets. Development of a model for complex real-life problems often stops at the point of publication, proof of concept, or when it is made accessible through some mode of deployment. However, a model in the medical domain risks becoming obsolete as patient demographics, systems, and clinical practices change. The maintenance and monitoring of predictive model performance post-publication is crucial to enable safe and effective long-term use. We assess the infrastructure required to monitor the outputs of a machine learning algorithm and present two scenarios with examples of monitoring and updates of models: first, a breast cancer prognosis model trained on public longitudinal data, and second, a neurodegenerative stratification algorithm that is currently being developed and tested in clinic.
labels: cs.AI, cs.LG | __index_level_0__: 349,003
2304.13348
TextDeformer: Geometry Manipulation using Text Guidance
We present a technique for automatically producing a deformation of an input triangle mesh, guided solely by a text prompt. Our framework is capable of deformations that produce both large, low-frequency shape changes, and small high-frequency details. Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO. Notably, updating mesh geometry by taking gradient steps through differentiable rendering is notoriously challenging, commonly resulting in deformed meshes with significant artifacts. These difficulties are amplified by noisy and inconsistent gradients from CLIP. To overcome this limitation, we opt to represent our mesh deformation through Jacobians, which updates deformations in a global, smooth manner (rather than locally-sub-optimal steps). Our key observation is that Jacobians are a representation that favors smoother, large deformations, leading to a global relation between vertices and pixels, and avoiding localized noisy gradients. Additionally, to ensure the resulting shape is coherent from all 3D viewpoints, we encourage the deep features computed on the 2D encoding of the rendering to be consistent for a given vertex from all viewpoints. We demonstrate that our method is capable of smoothly-deforming a wide variety of source mesh and target text prompts, achieving both large modifications to, e.g., body proportions of animals, as well as adding fine semantic details, such as shoe laces on an army boot and fine details of a face.
labels: cs.CV | __index_level_0__: 360,546
2410.16795
Traj-Explainer: An Explainable and Robust Multi-modal Trajectory Prediction Approach
Navigating complex traffic environments has been significantly enhanced by advancements in intelligent technologies, enabling accurate environment perception and trajectory prediction for automated vehicles. However, existing research often neglects the joint reasoning of scenario agents and lacks interpretability in trajectory prediction models, limiting their practical application in real-world scenarios. To this end, an explainability-oriented trajectory prediction model, named Traj-Explainer (Explainable Conditional Diffusion based Multimodal Trajectory Prediction), is designed in this work to retrieve the influencing factors of prediction and help understand the intrinsic mechanism of prediction. In Traj-Explainer, a modified conditional diffusion is designed to capture the multimodal trajectory pattern of a scenario, and a modified Shapley Value model is assembled to learn the importance of the global and scenario features. Numerical experiments are carried out on several trajectory prediction datasets, including the Waymo, NGSIM, HighD, and MoCAD datasets. Furthermore, we evaluate the identified input factors and find that they agree with human driving experience, indicating the capability of the proposed model to appropriately learn the prediction. Code is available in our open-source repository: \url{https://anonymous.4open.science/r/Interpretable-Prediction}.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
501,191
2412.03792
Safe Adaptive Cruise Control Under Perception Uncertainty: A Deep Ensemble and Conformal Tube Model Predictive Control Approach
Autonomous driving heavily relies on perception systems to interpret the environment for decision-making. To enhance robustness in these safety critical applications, this paper considers a Deep Ensemble of Deep Neural Network regressors integrated with Conformal Prediction to predict and quantify uncertainties. In the Adaptive Cruise Control setting, the proposed method performs state and uncertainty estimation from RGB images, informing the downstream controller of the DNN perception uncertainties. An adaptive cruise controller using Conformal Tube Model Predictive Control is designed to ensure probabilistic safety. Evaluations with a high-fidelity simulator demonstrate the algorithm's effectiveness in speed tracking and safe distance maintaining, including in Out-Of-Distribution scenarios.
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
514,106
2006.10386
SceneAdapt: Scene-based domain adaptation for semantic segmentation using adversarial learning
Semantic segmentation methods have achieved outstanding performance thanks to deep learning. Nevertheless, when such algorithms are deployed to new contexts not seen during training, it is necessary to collect and label scene-specific data in order to adapt them to the new domain using fine-tuning. This process is required whenever an already installed camera is moved or a new camera is introduced in a camera network, due to the different scene layouts induced by the different viewpoints. To limit the amount of additional training data to be collected, it would be ideal to train a semantic segmentation method using labeled data already available and only unlabeled data coming from the new camera. We formalize this problem as a domain adaptation task and introduce a novel dataset of urban scenes with the related semantic labels. As a first approach to address this challenging task, we propose SceneAdapt, a method for scene adaptation of semantic segmentation algorithms based on adversarial learning. Experiments and comparisons with state-of-the-art approaches to domain adaptation highlight that promising performance can be achieved using adversarial learning both when the two scenes have different points of view, and when they comprise images of completely different scenes. To encourage research on this topic, we made our code available at our web page: https://iplab.dmi.unict.it/ParkSmartSceneAdaptation/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
182,870
2309.10228
Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
The future of autonomous vehicles lies in the convergence of human-centric design and advanced AI capabilities. Autonomous vehicles of the future will not only transport passengers but also interact and adapt to their desires, making the journey comfortable, efficient, and pleasant. In this paper, we present a novel framework that leverages Large Language Models (LLMs) to enhance autonomous vehicles' decision-making processes. By integrating LLMs' natural language capabilities and contextual understanding, specialized tools usage, synergizing reasoning, and acting with various modules on autonomous vehicles, this framework aims to seamlessly integrate the advanced language and reasoning capabilities of LLMs into autonomous vehicles. The proposed framework holds the potential to revolutionize the way autonomous vehicles operate, offering personalized assistance, continuous learning, and transparent decision-making, ultimately contributing to safer and more efficient autonomous driving technologies.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
392,916
2307.15410
Towards a Fully Unsupervised Framework for Intent Induction in Customer Support Dialogues
State-of-the-art models in intent induction require annotated datasets. However, annotating dialogues is time-consuming, laborious and expensive. In this work, we propose a completely unsupervised framework for intent induction within a dialogue. In addition, we show how pre-processing the dialogue corpora can improve results. Finally, we show how to extract the dialogue flows of intentions by investigating the most common sequences. Although we test our work on the MultiWOZ dataset, the fact that this framework requires no prior knowledge makes it applicable to any possible use case, making it very relevant to real-world customer support applications across industry.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
382,257
2206.12912
Woodscape Fisheye Object Detection for Autonomous Driving -- CVPR 2022 OmniCV Workshop Challenge
Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored in the case of fisheye cameras. The strong radial distortion breaks the translation invariance inductive bias of Convolutional Neural Networks. Thus, we present the WoodScape fisheye object detection challenge for autonomous driving which was held as part of the CVPR 2022 Workshop on Omnidirectional Computer Vision (OmniCV). This is one of the first competitions focused on fisheye camera object detection. We encouraged the participants to design models which work natively on fisheye images without rectification. We used CodaLab to host the competition based on the publicly available WoodScape fisheye dataset. In this paper, we provide a detailed analysis on the competition which attracted the participation of 120 global teams and a total of 1492 submissions. We briefly discuss the details of the winning methods and analyze their qualitative and quantitative results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
304,774
1302.5906
Achieving AWGN Channel Capacity With Lattice Gaussian Coding
We propose a new coding scheme using only one lattice that achieves the $\frac{1}{2}\log(1+\SNR)$ capacity of the additive white Gaussian noise (AWGN) channel with lattice decoding, when the signal-to-noise ratio $\SNR>e-1$. The scheme applies a discrete Gaussian distribution over an AWGN-good lattice, but otherwise does not require a shaping lattice or dither. Thus, it significantly simplifies the default lattice coding scheme of Erez and Zamir which involves a quantization-good lattice as well as an AWGN-good lattice. Using the flatness factor, we show that the error probability of the proposed scheme under minimum mean-square error (MMSE) lattice decoding is almost the same as that of Erez and Zamir, for any rate up to the AWGN channel capacity. We introduce the notion of good constellations, which carry almost the same mutual information as that of continuous Gaussian inputs. We also address the implementation of Gaussian shaping for the proposed lattice Gaussian coding scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
22,330
2304.03520
On the Suitability of Representations for Quality Diversity Optimization of Shapes
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance. Examination of the suitability of widely used representations for quality diversity optimization (QD) in robotic domains has yielded inconsistent results regarding the most appropriate encoding method. Given the domain-dependent nature of QD, additional evidence from other domains is necessary. This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes in an architecture setting. The results reveal that some indirect encodings outperform direct encodings and can generate more diverse solution sets, especially when considering full phenotypic diversity. The paper introduces a multi-encoding QD approach that incorporates all evaluated representations in the same archive. Species of encodings compete on the basis of phenotypic features, leading to an approach that demonstrates similar performance to the best single-encoding QD approach. This is noteworthy, as it does not always require the contribution of the best-performing single encoding.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
356,846
2304.01500
Physics-aware Roughness Optimization for Diffractive Optical Neural Networks
As a representative next-generation device/circuit technology beyond CMOS, diffractive optical neural networks (DONNs) have shown promising advantages over conventional deep neural networks due to extremely fast computation speed (light speed) and low energy consumption. However, there is a mismatch, i.e., significant prediction accuracy loss, between the DONN numerical modelling and physical optical device deployment, because of the interpixel interaction within the diffractive layers. In this work, we propose a physics-aware diffractive optical neural network training framework to reduce the performance difference between numerical modeling and practical deployment. Specifically, we propose the roughness modeling regularization in the training process and integrate the physics-aware sparsification method to introduce sparsity to the phase masks to reduce sharp phase changes between adjacent pixels in diffractive layers. We further develop $2\pi$ periodic optimization to reduce the roughness of the phase masks to preserve the performance of DONN. Experiment results demonstrate that, compared to the state of the art, our physics-aware optimization can provide $35.7\%$, $34.2\%$, $28.1\%$, and $27.3\%$ reduction in roughness with only marginal accuracy loss on MNIST, FMNIST, KMNIST, and EMNIST, respectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
356,099
2405.14369
RoPINN: Region Optimized Physics-Informed Neural Networks
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm as region optimization. Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances optimization and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,390
2111.05894
Graph Neural Network Training with Data Tiering
Graph Neural Networks (GNNs) have shown success in learning from graph-structured data, with applications to fraud detection, recommendation, and knowledge graph reasoning. However, training GNNs efficiently is challenging because: 1) GPU memory capacity is limited and can be insufficient for large datasets, and 2) the graph-based data structure causes irregular data access patterns. In this work, we provide a method to statistically analyze and identify the more frequently accessed data ahead of GNN training. Our data tiering method not only utilizes the structure of the input graph, but also an insight gained from the actual GNN training process, to achieve a higher prediction result. With our data tiering method, we additionally provide a new data placement and access strategy to further minimize the CPU-GPU communication overhead. We also take into account multi-GPU GNN training and we demonstrate the effectiveness of our strategy in a multi-GPU system. The evaluation results show that our work reduces CPU-GPU traffic by 87-95% and improves the training speed of GNN over the existing solutions by 1.6-2.1x on graphs with hundreds of millions of nodes and billions of edges.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
265,907
2309.05900
Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning
Machine learning is at the center of mainstream technology and outperforms classical approaches to handcrafted feature design. Aside from its learning process for artificial feature extraction, it has an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention since its prediction can be changed entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective but whose trustworthiness represents a significant issue requiring analysis and solutions to hackers' attacks. Brain programming is a kind of symbolic learning in the vein of good old-fashioned artificial intelligence. This work provides evidence that symbolic learning robustness is crucial in designing reliable visual attention systems since it can withstand even the most intense perturbations. We test this evolutionary computation methodology against several adversarial attacks and noise perturbations using standard databases and a real-world problem of a shorebird called the Snowy Plover portraying a visual attention task. We compare our methodology with five different deep learning approaches, proving that they do not match the symbolic paradigm regarding robustness. All neural networks suffer significant performance losses, while brain programming stands its ground and remains unaffected. Also, by studying the Snowy Plover, we remark on the importance of security in surveillance activities regarding wildlife protection and conservation.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
true
false
false
391,231
2307.15363
Robotic Vision for Human-Robot Interaction and Collaboration: A Survey and Systematic Review
Robotic vision for human-robot interaction and collaboration is a critical process for robots to collect and interpret detailed information related to human actions, goals, and preferences, enabling robots to provide more useful services to people. This survey and systematic review presents a comprehensive analysis on robotic vision in human-robot interaction and collaboration over the last 10 years. From a detailed search of 3850 articles, systematic extraction and evaluation was used to identify and explore 310 papers in depth. These papers described robots with some level of autonomy using robotic vision for locomotion, manipulation and/or visual communication to collaborate or interact with people. This paper provides an in-depth analysis of current trends, common domains, methods and procedures, technical processes, data sets and models, experimental testing, sample populations, performance metrics and future challenges. This manuscript found that robotic vision was often used in action and gesture recognition, robot movement in human spaces, object handover and collaborative actions, social communication and learning from demonstration. Few high-impact and novel techniques from the computer vision field had been translated into human-robot interaction and collaboration. Overall, notable advancements have been made on how to develop and deploy robots to assist people.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
382,241
2409.09099
S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training
Training deep neural networks (DNNs) is costly. Fortunately, Nvidia Ampere and Hopper GPUs can accelerate matrix multiplications twice as fast as a dense equivalent by implementing 2:4 sparsity. However, previous STE-based 2:4 pre-training methods (e.g. STE with hard-thresholding, SR-STE) suffer from optimization difficulties because of the discontinuous pruning function. In this study, we comprehensively analyse the bottleneck of traditional N:M sparse training and recognize three drawbacks of discontinuity: incorrect descending direction, inability to predict the amount of descent, and sparse mask oscillation. In light of this, we propose S-STE, a simple yet powerful 2:4 training method that contains two parts: to continuously project weights to be 2:4 sparse, and to rescale sparse weights with a per-tensor fixed scaling factor. Besides, we adopt minimum-variance unbiased estimation for the activation gradient and FP8 quantization for the whole process. Results show that our method surpasses previous 2:4 pre-training recipes and is comparable even with full-parameter models. Our toolkit is available at https://github.com/huyz2023/2by4-pretrain.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
488,183
2305.13141
Tight conditions for when the NTK approximation is valid
We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. 2019, we show that rescaling the model by a factor of $\alpha = O(T)$ suffices for the NTK approximation to be valid until training time $T$. Our bound is tight and improves on the previous bound of Chizat et al. 2019, which required a larger rescaling factor of $\alpha = O(T^2)$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
366,366
1607.02754
Hybrid Recommender System Based on Personal Behavior Mining
Recommender systems are mostly well known for their applications in e-commerce sites and are mostly static models. Classical personalized recommender algorithms include the item-based collaborative filtering method applied in Amazon, the matrix factorization based collaborative filtering algorithm from Netflix, etc. In this article, we hope to combine a traditional model with a behavior pattern extraction method. We use desensitized mobile transaction records provided by T-mall, Alibaba to build a hybrid dynamic recommender system. Sequential pattern mining aims to find frequent sequential patterns in a sequence database and is applied in this hybrid model to predict customers' payment behavior, thus contributing to the accuracy of the model.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
58,409
2112.13177
Algorithmic Information Dynamics of Cellular Automata
We illustrate an application of Algorithmic Information Dynamics to Cellular Automata (CA) demonstrating how this digital calculus is able to quantify change in discrete dynamical systems. We demonstrate the sensitivity of the Block Decomposition Method on 1D and 2D CA, including Conway's Game of Life, against measures of statistical nature such as compression (LZW) and Shannon Entropy in two different contexts (1) perturbation analysis and (2) dynamic-state colliding CA. The approach is interesting because it analyses a quintessential object native to software space (CA) in software space itself by using algorithmic information dynamics through a model-driven universal search instead of a traditional statistical approach e.g. LZW compression or Shannon entropy. The colliding example of two state-independent (if not three as one is regulating the collision itself) discrete dynamical systems offers a potential proof of concept for the development of a multivariate version of the AID calculus.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
273,160
1704.00389
Hidden Two-Stream Convolutional Networks for Action Recognition
Analyzing videos of human actions involves understanding the temporal relationships among video frames. State-of-the-art action recognition approaches rely on traditional optical flow estimation methods to pre-compute motion information for CNNs. Such a two-stage approach is computationally expensive, storage demanding, and not end-to-end trainable. In this paper, we present a novel CNN architecture that implicitly captures motion information between adjacent frames. We name our approach hidden two-stream CNNs because it only takes raw video frames as input and directly predicts action classes without explicitly computing optical flow. Our end-to-end approach is 10x faster than its two-stage baseline. Experimental results on four challenging action recognition datasets: UCF101, HMDB51, THUMOS14 and ActivityNet v1.2 show that our approach significantly outperforms the previous best real-time approaches.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
71,073
2105.02968
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Deep neural networks that yield human interpretable decisions by architectural design have lately become an increasingly popular alternative to post hoc interpretation of traditional black-box models. Among these networks, the arguably most widespread approach is so-called prototype learning, where similarities to learned latent prototypes serve as the basis of classifying an unseen data point. In this work, we point to an important shortcoming of such approaches. Namely, there is a semantic gap between similarity in latent space and similarity in input space, which can corrupt interpretability. We design two experiments that exemplify this issue on the so-called ProtoPNet. Specifically, we find that this network's interpretability mechanism can be led astray by intentionally crafted or even JPEG compression artefacts, which can produce incomprehensible decisions. We argue that practitioners ought to have this shortcoming in mind when deploying prototype-based models in practice.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
233,992
1805.07582
Fast Object Classification in Single-pixel Imaging
In single-pixel imaging (SPI), the target object is illuminated with varying patterns sequentially and an intensity sequence is recorded by a single-pixel detector without spatial resolution. A high quality object image can only be computationally reconstructed after a large number of illuminations, with disadvantages of long imaging time and high cost. Conventionally, object classification is performed after a reconstructed object image with good fidelity is available. In this paper, we propose to classify the target object with a small number of illuminations in a fast manner for Fourier SPI. A naive Bayes classifier is employed to classify the target objects based on the single-pixel intensity sequence without any image reconstruction and each sequence element is regarded as an object feature in the classifier. Simulation results demonstrate our proposed scheme can classify the number digit object images with high accuracy (e.g. 80% accuracy using only 13 illuminations, at a sampling ratio of 0.3%).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
97,875
2312.00622
Practical Path-based Bayesian Optimization
There has been a surge in interest in data-driven experimental design with applications to chemical engineering and drug manufacturing. Bayesian optimization (BO) has proven to be adaptable to such cases, since we can model the reactions of interest as expensive black-box functions. Sometimes, the cost of this black-box functions can be separated into two parts: (a) the cost of the experiment itself, and (b) the cost of changing the input parameters. In this short paper, we extend the SnAKe algorithm to deal with both types of costs simultaneously. We further propose extensions to the case of a maximum allowable input change, as well as to the multi-objective setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
412,112
2403.12534
ExACT: Language-guided Conceptual Reasoning and Uncertainty Estimation for Event-based Action Recognition and More
Event cameras have recently been shown beneficial for practical vision tasks, such as action recognition, thanks to their high temporal resolution, power efficiency, and reduced privacy concerns. However, current research is hindered by 1) the difficulty in processing events because of their prolonged duration and dynamic actions with complex and ambiguous semantics and 2) the redundant action depiction of the event frame representation with fixed stacks. We find language naturally conveys abundant semantic information, rendering it stunningly superior in reducing semantic uncertainty. In light of this, we propose ExACT, a novel approach that, for the first time, tackles event-based action recognition from a cross-modal conceptualizing perspective. Our ExACT brings two technical contributions. Firstly, we propose an adaptive fine-grained event (AFE) representation to adaptively filter out the repeated events for the stationary objects while preserving dynamic ones. This subtly enhances the performance of ExACT without extra computational cost. Then, we propose a conceptual reasoning-based uncertainty estimation module, which simulates the recognition process to enrich the semantic representation. In particular, conceptual reasoning builds the temporal relation based on the action semantics, and uncertainty estimation tackles the semantic uncertainty of actions based on the distributional representation. Experiments show that our ExACT achieves superior recognition accuracy of 94.83%(+2.23%), 90.10%(+37.47%) and 67.24% on PAF, HARDVS and our SeAct datasets respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
439,231
1801.09530
RGB image-based data analysis via discrete Morse theory and persistent homology
Understanding and comparing images for the purposes of data analysis is currently a very computationally demanding task. A group at Australian National University (ANU) recently developed open-source code that can detect fundamental topological features of a grayscale image in a computationally feasible manner. This is made possible by the fact that computers store grayscale images as cubical cellular complexes. These complexes can be studied using the techniques of discrete Morse theory. We expand the functionality of the ANU code by introducing methods and software for analyzing images encoded in red, green, and blue (RGB), because this image encoding is very popular for publicly available data. Our methods allow the extraction of key topological information from RGB images via informative persistence diagrams by introducing novel methods for transforming RGB-to-grayscale. This paradigm allows us to perform data analysis directly on RGB images representing water scarcity variability as well as crime variability. We introduce software enabling a user to predict future image properties, towards the eventual aim of more rapid image-based data behavior prediction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,131
1309.6379
Diffeomorphic Metric Mapping and Probabilistic Atlas Generation of Hybrid Diffusion Imaging based on BFOR Signal Basis
We propose a large deformation diffeomorphic metric mapping algorithm to align multiple b-value diffusion weighted imaging (mDWI) data, specifically acquired via hybrid diffusion imaging (HYDI), denoted as LDDMM-HYDI. We then propose a Bayesian model for estimating the white matter atlas from HYDIs. We adopt the work given in Hosseinbor et al. (2012) and represent the q-space diffusion signal with the Bessel Fourier orientation reconstruction (BFOR) signal basis. The BFOR framework provides the representation of mDWI in the q-space and thus reduces memory requirement. In addition, since the BFOR signal basis is orthonormal, the L2 norm that quantifies the differences in the q-space signals of any two mDWI datasets can be easily computed as the sum of the squared differences in the BFOR expansion coefficients. In this work, we show that the reorientation of the $q$-space signal due to spatial transformation can be easily defined on the BFOR signal basis. We incorporate the BFOR signal basis into the LDDMM framework and derive the gradient descent algorithm for LDDMM-HYDI with explicit orientation optimization. Additionally, we extend the previous Bayesian atlas estimation framework for scalar-valued images to HYDIs and derive the expectation-maximization algorithm for solving the HYDI atlas estimation problem. Using real HYDI datasets, we show the Bayesian model generates the white matter atlas with anatomical details. Moreover, we show that it is important to consider the variation of mDWI reorientation due to a small change in diffeomorphic transformation in the LDDMM-HYDI optimization and to incorporate the full information of HYDI for aligning mDWI.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
27,241
2203.15209
OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for any graph neural networks (GNNs) based on learned latent causal factors. Specifically, we construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations. This is achieved by isolating the causal factors in the latent space of graphs by maximizing the information flow measurements. We theoretically analyze the cause-effect relationships in the proposed causal graph, identify node attributes as confounders between graphs and GNN predictions, and circumvent such confounder effect by leveraging the backdoor adjustment formula. Our framework is compatible with any GNNs, and it does not require access to the process by which the target GNN produces its predictions. In addition, it does not rely on the linear-independence assumption of the explained features, nor require prior knowledge on the graph learning tasks. We show a proof-of-concept of OrphicX on canonical classification problems on graph data. In particular, we analyze the explanatory subgraphs obtained from explanations for molecular graphs (i.e., Mutag) and quantitatively evaluate the explanation performance with frequently occurring subgraph patterns. Empirically, we show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
288,288
2106.13852
Decomposition of transition systems into sets of synchronizing state machines
Transition systems (TS) and Petri nets (PN) are important models of computation ubiquitous in formal methods for modeling systems. An important problem is how to extract from a given TS a PN whose reachability graph is equivalent (with a suitable notion of equivalence) to the original TS. This paper addresses the decomposition of transition systems into synchronizing state machines (SMs), which are a class of Petri nets where each transition has one incoming and one outgoing arc and all markings have exactly one token. This is an important case of the general problem of extracting a PN from a TS. The decomposition is based on the theory of regions, and it is shown that a property of regions called excitation-closure is a sufficient condition to guarantee the equivalence between the original TS and a decomposition into SMs. An efficient algorithm is provided which solves the problem by reducing its critical steps to the maximal independent set problem (to compute a minimal set of irredundant SMs) or to satisfiability (to merge the SMs). We report experimental results that show a good trade-off between quality of results vs. computation time.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
243,201
2409.16667
A Character-Centric Creative Story Generation via Imagination
Creative story generation has long been a goal of NLP research. While existing methodologies have aimed to generate long and coherent stories, they fall significantly short of human capabilities in terms of diversity and character depth. To address this, we introduce a novel story generation framework called CCI (Character-centric Creative story generation via Imagination). CCI features two modules for creative story generation: IG (Image-Guided Imagination) and MW (Multi-Writer model). In the IG module, we utilize a text-to-image model to create visual representations of key story elements, such as characters, backgrounds, and main plots, in a more novel and concrete manner than text-only approaches. The MW module uses these story elements to generate multiple persona-description candidates and selects the best one to insert into the story, thereby enhancing the richness and depth of the narrative. We compared the stories generated by CCI and baseline models through statistical analysis, as well as human and LLM evaluations. The results showed that the IG and MW modules significantly improve various aspects of the stories' creativity. Furthermore, our framework enables interactive multi-modal story generation with users, opening up new possibilities for human-LLM integration in cultural development. Project page : https://www.2024cci.p-e.kr/
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
491,448
1705.08475
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. We show in this paper for the first time formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods resp. neural networks improves the robustness of the classifier without any loss in prediction performance.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
74,024
2203.15691
Improved Counting and Localization from Density Maps for Object Detection in 2D and 3D Microscopy Imaging
Object counting and localization are key steps for quantitative analysis in large-scale microscopy applications. This procedure becomes challenging when target objects are overlapping, are densely clustered, and/or present fuzzy boundaries. Previous methods producing density maps based on deep learning have reached a high level of accuracy for object counting by assuming that object counting is equivalent to the integration of the density map. However, this model fails when objects show significant overlap regarding accurate localization. We propose an alternative method to count and localize objects from the density map to overcome this limitation. Our procedure includes the following three key aspects: 1) Proposing a new counting method based on the statistical properties of the density map, 2) optimizing the counting results for those objects which are well-detected based on the proposed counting method, and 3) improving localization of poorly detected objects using the proposed counting method as prior information. Validation includes processing of microscopy data with known ground truth and comparison with other models that use conventional processing of the density map. Our results show improved performance in counting and localization of objects in 2D and 3D microscopy data. Furthermore, the proposed method is generic, considering various applications that rely on the density map approach. Our code will be released post-review.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
288,498
2411.01814
Enhancing Social Robot Navigation with Integrated Motion Prediction and Trajectory Planning in Dynamic Human Environments
Navigating safely in dynamic human environments is crucial for mobile service robots, and social navigation is a key aspect of this process. In this paper, we propose an integrative approach that combines motion prediction and trajectory planning to enable safe and socially-aware robot navigation. The main idea of the proposed method is to leverage the advantages of Socially Acceptable trajectory prediction and Timed Elastic Band (TEB) by incorporating human interactive information including position, orientation, and motion into the objective function of the TEB algorithms. In addition, we designed social constraints to ensure the safety of robot navigation. The proposed system is evaluated through physical simulation using both quantitative and qualitative metrics, demonstrating its superior performance in avoiding humans and dynamic obstacles, thereby ensuring safe navigation. The implementations are open source at: \url{https://github.com/thanhnguyencanh/SGan-TEB.git}
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
505,238
2202.10649
On Local Distributions in Graph Signal Processing
Graph filtering is the cornerstone operation in graph signal processing (GSP). Thus, understanding it is key in developing potent GSP methods. Graph filters are local and distributed linear operations, whose output depends only on the local neighborhood of each node. Moreover, a graph filter's output can be computed separately at each node by carrying out repeated exchanges with immediate neighbors. Graph filters can be compactly written as polynomials of a graph shift operator (typically, a sparse matrix description of the graph). This has led to relating the properties of the filters with the spectral properties of the corresponding matrix -- which encodes global structure of the graph. In this work, we propose a framework that relies solely on the local distribution of the neighborhoods of a graph. The crux of this approach is to describe graphs and graph signals in terms of a measurable space of rooted balls. Leveraging this, we are able to seamlessly compare graphs of different sizes and coming from different models, yielding results on the convergence of spectral densities, transferability of filters across arbitrary graphs, and continuity of graph signal properties with respect to the distribution of local substructures.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
281,606
2201.05848
TWikiL -- The Twitter Wikipedia Link Dataset
Recent research has shown how strongly Wikipedia and other web services or platforms are connected. For example, search engines rely heavily on surfacing Wikipedia links to satisfy their users' information needs and volunteer-created Wikipedia content frequently gets re-used on other social media platforms like Reddit. However, publicly accessible datasets that enable researchers to study the interrelationship between Wikipedia and other platforms are sparse. In addition to that, most studies only focus on certain points in time and don't consider the historical perspective. To begin solving these problems we developed TWikiL, the Twitter Wikipedia Link Dataset, which contains all Wikipedia links posted on Twitter in the period 2006 to January 2021. We extract Wikipedia links from Tweets and enrich the referenced articles with their respective Wikidata identifiers and Wikipedia topic categories, which will make this dataset immediately useful for a large range of scholarly use cases. In this paper, we describe the data collection process, perform an initial exploratory analysis and present a comprehensive overview of how this dataset can be useful for the research community.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
275,524
2412.00994
DSSRNN: Decomposition-Enhanced State-Space Recurrent Neural Network for Time-Series Analysis
Time series forecasting is a crucial yet challenging task in machine learning, requiring domain-specific knowledge due to its wide-ranging applications. While recent Transformer models have improved forecasting capabilities, they come with high computational costs. Linear-based models have shown better accuracy than Transformers but still fall short of ideal performance. To address these challenges, we introduce the Decomposition State-Space Recurrent Neural Network (DSSRNN), a novel framework designed for both long-term and short-term time series forecasting. DSSRNN uniquely combines decomposition analysis to capture seasonal and trend components with state-space models and physics-based equations. We evaluate DSSRNN's performance on indoor air quality datasets, focusing on CO2 concentration prediction across various forecasting horizons. Results demonstrate that DSSRNN consistently outperforms state-of-the-art models, including transformer-based architectures, in terms of both Mean Squared Error (MSE) and Mean Absolute Error (MAE). For example, at the shortest horizon (T=96) in Office 1, DSSRNN achieved an MSE of 0.378 and an MAE of 0.401, significantly lower than competing models. Additionally, DSSRNN exhibits superior computational efficiency compared to more complex models. While not as lightweight as the DLinear model, DSSRNN achieves a balance between performance and efficiency, with only 0.11G MACs and 437MiB memory usage, and an inference time of 0.58ms for long-term forecasting. This work not only showcases DSSRNN's success but also establishes a new benchmark for physics-informed machine learning in environmental forecasting and potentially other domains.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
512,898
1803.06604
A Robust AUC Maximization Framework with Simultaneous Outlier Detection and Feature Selection for Positive-Unlabeled Classification
The positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples. Building a robust classifier for the PU problem is very challenging, especially for complex data where the negative samples overwhelm and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies AUC maximization (a robust metric for biased labels), outlier detection (for excluding wrong labels), and feature selection (for excluding corrupted features). The generalization error bounds are provided for the proposed model that give valuable insight into the theoretical performance of the method and lead to useful practical guidance, e.g., to train a model, we find that the included unlabeled samples are sufficient as long as the sample size is comparable to the number of positive samples in the training process. Empirical comparisons and two real-world applications on surgical site infection (SSI) and EEG seizure detection are also conducted to show the effectiveness of the proposed model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
92,877
1905.01127
Uncertainty-Aware Principal Component Analysis
We present a technique to perform dimensionality reduction on data that is subject to uncertainty. Our method is a generalization of traditional principal component analysis (PCA) to multivariate probability distributions. In comparison to non-linear methods, linear dimensionality reduction techniques have the advantage that the characteristics of such probability distributions remain intact after projection. We derive a representation of the PCA sample covariance matrix that respects potential uncertainty in each of the inputs, building the mathematical foundation of our new method: uncertainty-aware PCA. In addition to the accuracy and performance gained by our approach over sampling-based strategies, our formulation allows us to perform sensitivity analysis with regard to the uncertainty in the data. For this, we propose factor traces as a novel visualization that enables a better understanding of the influence of uncertainty on the chosen principal components. We provide multiple examples of our technique using real-world datasets. As a special case, we show how to propagate multivariate normal distributions through PCA in closed form. Furthermore, we discuss extensions and limitations of our approach.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
129,646
1411.7798
Cross-Modal Learning via Pairwise Constraints
In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint, and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to deal with the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a cross-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ2,1 regularization along with an iteratively reweighted algorithm to find the global optimum. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/retrieval accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
37,962
2403.10384
Coordination in Noncooperative Multiplayer Matrix Games via Reduced Rank Correlated Equilibria
Coordination in multiplayer games enables players to avoid the lose-lose outcome that often arises at Nash equilibria. However, designing a coordination mechanism typically requires the consideration of the joint actions of all players, which becomes intractable in large-scale games. We develop a novel coordination mechanism, termed reduced rank correlated equilibria, which reduces the number of joint actions to be considered and thereby mitigates computational complexity. The idea is to approximate the set of all joint actions with the actions used in a set of pre-computed Nash equilibria via a convex hull operation. In a game with n players and each player having m actions, the proposed mechanism reduces the number of joint actions considered from O(m^n) to O(mn). We demonstrate the application of the proposed mechanism to an air traffic queue management problem. Compared with the correlated equilibrium, a popular benchmark coordination mechanism, the proposed approach is capable of solving a problem involving four thousand times more joint actions while yielding similar or better performance in terms of a fairness indicator and showing a maximum optimality gap of 0.066% in terms of the average delay cost. In the meantime, it yields a solution that shows up to 99.5% improvement in a fairness indicator and up to 50.4% reduction in average delay cost compared to the Nash solution, which does not involve coordination.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
true
438,178
1505.06812
Optimizing Non-decomposable Performance Measures: A Tale of Two Classes
Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure. Such measures have spurred much interest and pose specific challenges to learning algorithms since their non-additive nature precludes a direct application of well-studied large scale optimization methods such as stochastic gradient descent. In this paper we reveal that for two large families of performance measures that can be expressed as functions of true positive/negative rates, it is indeed possible to implement point stochastic updates. The families we consider are concave and pseudo-linear functions of TPR, TNR which cover several popularly used performance measures such as F-measure, G-mean and H-mean. Our core contribution is an adaptive linearization scheme for these families, using which we develop optimization techniques that enable truly point-based stochastic updates. For concave performance measures we propose SPADE, a stochastic primal dual solver; for pseudo-linear measures we propose STAMP, a stochastic alternate maximization procedure. Both methods have crisp convergence guarantees, demonstrate significant speedups over existing methods - often by an order of magnitude or more, and give similar or more accurate predictions on test data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
43,480
2408.10323
SDP bounds on quantum codes
This paper provides a semidefinite programming hierarchy based on state polynomial optimization to determine the existence of quantum codes with given parameters. The hierarchy is complete, in the sense that if a $(\!(n,K,\delta)\!)_2$ code does not exist then a level of the hierarchy is infeasible. It is not limited to stabilizer codes and thus applicable generally. While it is formally dimension-free, we restrict it to qubit codes through quasi-Clifford algebras. We derive the quantum analog of a range of classical results: first, from an intermediate level a Lov\'asz bound for self-dual quantum codes is recovered. Second, a symmetrization of a minor variation of this Lov\'asz bound recovers the quantum Delsarte bound. Third, a symmetry reduction using the Terwilliger algebra leads to semidefinite programming bounds of size $O(n^4)$. With this we give an alternative proof that there is no $(\!(7,1,4)\!)_2$ quantum code, and show that $(\!(8,9,3)\!)_2$ and $(\!(10,5,4)\!)_2$ codes do not exist.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
481,806
2501.07473
Quantifying Polarization: A Comparative Study of Measures and Methods
Political polarization, a key driver of social fragmentation, has drawn increasing attention for its role in shaping online and offline discourse. Despite significant efforts, accurately measuring polarization within ideological distributions remains a challenge. This study evaluates five widely used polarization measures, testing their strengths and weaknesses with synthetic datasets and a real-world case study on YouTube discussions during the 2020 U.S. Presidential Election. Building on these findings, we present a novel adaptation of Kleinberg's burst detection algorithm to improve mode detection in polarized distributions. By offering both a critical review and an innovative methodological tool, this work advances the analysis of ideological patterns in social media discourse.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
524,407
2109.03375
Malware Squid: A Novel IoT Malware Traffic Analysis Framework using Convolutional Neural Network and Binary Visualisation
Internet of Things devices have seen rapid growth and popularity in recent years, with many more ordinary devices gaining network capability and becoming part of the ever growing IoT network. With this exponential growth and the limitation of resources, it is becoming increasingly harder to protect against security threats such as malware, which evolves faster than defence mechanisms can keep up with. Traditional security systems are not able to detect unknown malware as they use signature-based methods. In this paper, we aim to address this issue by introducing a novel IoT malware traffic analysis approach using a neural network and binary visualisation. The prime motivation of the proposed approach is to detect and classify new malware (zero-day malware) faster. The experimental results show that our method can satisfy the accuracy requirements of practical applications.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
254,039
2403.08813
Federated Deep Q-Learning and 5G load balancing
Despite advances in cellular network technology, base station (BS) load balancing remains a persistent problem. Although centralized resource allocation methods can address the load balancing problem, it still remains an NP-hard problem. In this research, we study how federated deep Q learning can be used to inform each user equipment (UE) of each BS's load conditions. Federated deep Q learning's load balancing enables intelligent UEs to independently select the best BS while also limiting the amount of private information exposed to the network. In this study, we propose and analyze a federated deep Q learning load balancing system, which is implemented using the Open-RAN xAPP framework and the near-Real Time Radio Interface Controller (near-RT RIC). Our simulation results indicate that compared to the maximum Signal-To-Noise-Ratio (MAX-SINR) method currently used by UEs, our proposed deep Q learning model can consistently provide a better average UE quality of service.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
true
437,492
2410.01072
Generating Seamless Virtual Immunohistochemical Whole Slide Images with Content and Color Consistency
Immunohistochemical (IHC) stains play a vital role in a pathologist's analysis of medical images, providing crucial diagnostic information for various diseases. Virtual staining from hematoxylin and eosin (H&E)-stained whole slide images (WSIs) allows the automatic production of other useful IHC stains without the expensive physical staining process. However, current virtual WSI generation methods based on tile-wise processing often suffer from inconsistencies in content, texture, and color at tile boundaries. These inconsistencies lead to artifacts that compromise image quality and potentially hinder accurate clinical assessment and diagnoses. To address this limitation, we propose a novel consistent WSI synthesis network, CC-WSI-Net, that extends GAN models to produce seamless synthetic whole slide images. Our CC-WSI-Net integrates a content- and color-consistency supervisor, ensuring consistency across tiles and facilitating the generation of seamless synthetic WSIs while ensuring Sox10 immunohistochemistry accuracy in melanocyte detection. We validate our method through extensive image-quality analyses, objective detection assessments, and a subjective survey with pathologists. By generating high-quality synthetic WSIs, our method opens doors for advanced virtual staining techniques with broader applications in research and clinical care.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
493,593
1811.06187
Intervention Aided Reinforcement Learning for Safe and Practical Policy Optimization in Navigation
Combining deep neural networks with reinforcement learning has shown great potential in the next-generation intelligent control. However, there are challenges in terms of safety and cost in practical applications. In this paper, we propose the Intervention Aided Reinforcement Learning (IARL) framework, which utilizes human intervened robot-environment interaction to improve the policy. We used the Unmanned Aerial Vehicle (UAV) as the test platform. We built neural networks as our policy to map sensor readings to control signals on the UAV. Our experiment scenarios cover both simulation and reality. We show that our approach substantially reduces the human intervention and improves the performance in autonomous navigation, at the same time it ensures safety and keeps training cost acceptable.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
113,474
2208.09825
Hilti-Oxford Dataset: A Millimetre-Accurate Benchmark for Simultaneous Localization and Mapping
Simultaneous Localization and Mapping (SLAM) is being deployed in real-world applications; however, many state-of-the-art solutions still struggle in common scenarios. A key necessity in progressing SLAM research is the availability of high-quality datasets and fair and transparent benchmarking. To this end, we have created the Hilti-Oxford Dataset, to push state-of-the-art SLAM systems to their limits. The dataset has a variety of challenges ranging from sparse and regular construction sites to a 17th century neoclassical building with fine details and curved surfaces. To encourage multi-modal SLAM approaches, we designed a data collection platform featuring a lidar, five cameras, and an IMU (Inertial Measurement Unit). With the goal of benchmarking SLAM algorithms for tasks where accuracy and robustness are paramount, we implemented a novel ground truth collection method that enables our dataset to accurately measure SLAM pose errors with millimeter accuracy. To further ensure accuracy, the extrinsics of our platform were verified with a micrometer-accurate scanner, and temporal calibration was managed online using hardware time synchronization. The multi-modality and diversity of our dataset attracted a large field of academic and industrial researchers to enter the second edition of the Hilti SLAM challenge, which concluded in June 2022. The results of the challenge show that while the top three teams could achieve an accuracy of 2cm or better for some sequences, the performance dropped off in more difficult sequences.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
313,848
2307.16259
Communication-Sensing Region for Cell-Free Massive MIMO ISAC Systems
This paper investigates the system model and the transmit beamforming design for the Cell-Free massive multi-input multi-output (MIMO) integrated sensing and communication (ISAC) system. The impact of the uncertainty of the target locations on the propagation of wireless signals is considered during both uplink and downlink phases, and especially, the main statistics of the MIMO channel estimation error are theoretically derived in the closed-form fashion. A fundamental performance metric, termed communication-sensing (C-S) region, is defined for the considered system via three cases, i.e., the sensing-only case, the communication-only case and the ISAC case. The transmit beamforming design problems for the three cases are respectively carried out through different reformulations, e.g., the Lagrangian dual transform and the quadratic fractional transform, and some combinations of the block coordinate descent method and the successive convex approximation method. Numerical results present a 3-dimensional C-S region with a dynamic number of access points to illustrate the trade-off between communication and radar sensing. The advantage for radar sensing of the Cell-Free massive MIMO system is also studied via a comparison with the traditional cellular system. Finally, the efficacy of the proposed beamforming scheme is validated in comparison with zero-forcing and maximum ratio transmission schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
382,543
2409.18038
MMDVS-LF: A Multi-Modal Dynamic-Vision-Sensor Line Following Dataset
Dynamic Vision Sensors (DVS) offer a unique advantage in control applications due to their high temporal resolution and asynchronous event-based data. Still, their adoption in machine learning algorithms remains limited. To address this gap, and promote the development of models that leverage the specific characteristics of DVS data, we introduce the Multi-Modal Dynamic-Vision-Sensor Line Following dataset (MMDVS-LF). This comprehensive dataset is the first to integrate multiple sensor modalities, including DVS recordings, RGB video, odometry, and Inertial Measurement Unit (IMU) data, from a small-scale standardized vehicle. Additionally, the dataset includes eye-tracking and demographic data of drivers performing a Line Following task on a track. With its diverse range of data, MMDVS-LF opens new opportunities for developing deep learning algorithms and conducting data science projects across various domains, supporting innovation in autonomous systems and control applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
492,081
2208.02803
Semantic Data Augmentation based Distance Metric Learning for Domain Generalization
Domain generalization (DG) aims to learn a model on one or more different but related source domains that could be generalized into an unseen target domain. Existing DG methods try to prompt the diversity of source domains for the model's generalization ability, while they may have to introduce auxiliary networks or striking computational costs. On the contrary, this work applies the implicit semantic augmentation in feature space to capture the diversity of source domains. Concretely, an additional loss function of distance metric learning (DML) is included to optimize the local geometry of data distribution. Besides, the logits from the cross entropy loss with infinite augmentations are adopted as input features for the DML loss in lieu of the deep features. We also provide a theoretical analysis to show that the logits can approximate the distances defined on original features well. Further, we provide an in-depth analysis of the mechanism and rationale behind our approach, which gives us a better understanding of why leveraging logits in lieu of features can help domain generalization. The proposed DML loss with the implicit augmentation is incorporated into a recent DG method, that is, the Fourier Augmented Co-Teacher framework (FACT). Meanwhile, our method can also be easily plugged into various DG methods. Extensive experiments on three benchmarks (Digits-DG, PACS and Office-Home) have demonstrated that the proposed method is able to achieve the state-of-the-art performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
311,572
2409.12677
(Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains, including machine learning models and human decision-makers in real-world applications. This involves calculating the disparities between probabilistic outcomes among social groups, such as acceptance rates between male and female applicants. However, traditional fairness metrics do not account for the uncertainty in these processes and lack comparability when two decision-makers exhibit the same disparity. Using Bayesian statistics, we quantify the uncertainty of the disparity to enhance discrimination assessments. We represent each decision-maker, whether a machine learning model or a human, by its disparity and the corresponding uncertainty in that disparity. We define preferences over decision-makers and utilize brute-force search to choose the optimal decision-maker according to a utility function that ranks decision-makers based on these preferences. The decision-maker with the highest utility score can be interpreted as the one for whom we are most certain that it is fair.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
489,678
2403.04551
Dissecting Sample Hardness: A Fine-Grained Analysis of Hardness Characterization Methods for Data-Centric AI
Characterizing samples that are difficult to learn from is crucial to developing highly performant ML models. This has led to numerous Hardness Characterization Methods (HCMs) that aim to identify "hard" samples. However, there is a lack of consensus regarding the definition and evaluation of "hardness". Unfortunately, current HCMs have only been evaluated on specific types of hardness and often only qualitatively or with respect to downstream performance, overlooking the fundamental quantitative identification task. We address this gap by presenting a fine-grained taxonomy of hardness types. Additionally, we propose the Hardness Characterization Analysis Toolkit (H-CAT), which supports comprehensive and quantitative benchmarking of HCMs across the hardness taxonomy and can easily be extended to new HCMs, hardness types, and datasets. We use H-CAT to evaluate 13 different HCMs across 8 hardness types. This comprehensive evaluation encompassing over 14K setups uncovers strengths and weaknesses of different HCMs, leading to practical tips to guide HCM selection and future development. Our findings highlight the need for more comprehensive HCM evaluation, while we hope our hardness taxonomy and toolkit will advance the principled evaluation and uptake of data-centric AI methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
435,641
0705.0959
The Kinematic Analysis of a Symmetrical Three-Degree-of-Freedom Planar Parallel Manipulator
Presented in this paper is the kinematic analysis of a symmetrical three-degree-of-freedom planar parallel manipulator. In contrast to serial manipulators, parallel manipulators can admit not only multiple inverse kinematic solutions, but also multiple direct kinematic solutions. This property produces more complicated kinematic models but allows more flexibility in trajectory planning. To take this property into account, the notion of aspects, i.e., the maximal singularity-free domains, was introduced, based on the notion of working modes, which makes it possible to separate the inverse kinematic solutions. The aim of this paper is to show that a non-singular assembly-mode changing trajectory exists for a symmetrical planar parallel manipulator with equilateral base and platform triangles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
171
2407.08159
Model-agnostic clean-label backdoor mitigation in cybersecurity environments
The training phase of machine learning models is a delicate step, especially in cybersecurity contexts. Recent research has surfaced a series of insidious training-time attacks that inject backdoors in models designed for security classification tasks without altering the training labels. With this work, we propose new techniques that leverage insights in cybersecurity threat models to effectively mitigate these clean-label poisoning attacks, while preserving the model utility. By performing density-based clustering on a carefully chosen feature subspace, and progressively isolating the suspicious clusters through a novel iterative scoring procedure, our defensive mechanism can mitigate the attacks without requiring many of the common assumptions in the existing backdoor defense literature. To show the generality of our proposed mitigation, we evaluate it on two clean-label model-agnostic attacks on two different classic cybersecurity data modalities: network flows classification and malware classification, using gradient boosting and neural network models.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
472,040
2407.01292
Preserving Relative Localization of FoV-Limited Drone Swarm via Active Mutual Observation
Relative state estimation is crucial for vision-based swarms to estimate and compensate for the unavoidable drift of visual odometry. For autonomous drones equipped with the most compact sensor setting -- a stereo camera that provides a limited field of view (FoV) -- the demand for mutual observation for relative state estimation conflicts with the demand for environment observation. To balance the two demands for FoV-limited swarms by acquiring mutual observations with a safety guarantee, this paper proposes an active localization correction system, which plans camera orientations via a yaw planner during the flight. The yaw planner manages the contradiction by calculating suitable timing and yaw angle commands based on an evaluation of localization uncertainty estimated by a Kalman filter. Simulation validates the scalability of our algorithm. In real-world experiments, we reduce positioning drift by up to 65% and manage to maintain a given formation in both indoor and outdoor GPS-denied flight, from which the accuracy, efficiency, and robustness of the proposed system are verified.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
469,230
1909.11687
Extremely Small BERT Models from Mixed-Vocabulary Training
Pretrained language models like BERT have achieved good results on NLP tasks, but are impractical on resource-limited devices due to memory footprint. A large fraction of this footprint comes from the input embeddings with large input vocabulary and embedding dimensions. Existing knowledge distillation methods used for model compression cannot be directly applied to train student models with reduced vocabulary sizes. To this end, we propose a distillation method to align the teacher and student embeddings via mixed-vocabulary training. Our method compresses BERT-LARGE to a task-agnostic model with smaller vocabulary and hidden dimensions, which is an order of magnitude smaller than other distilled BERT models and offers a better size-accuracy trade-off on language understanding benchmarks as well as a practical dialogue task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
146,882
2305.05293
On the Limitations of Model Stealing with Uncertainty Quantification Models
Model stealing aims at inferring a victim model's functionality at a fraction of the original training cost. While the goal is clear, in practice the model's architecture, weight dimensions, and original training data cannot be determined exactly, leading to mutual uncertainty during stealing. In this work, we explicitly tackle this uncertainty by generating multiple possible networks and combining their predictions to improve the quality of the stolen model. For this, we compare five popular uncertainty quantification models in a model stealing task. Surprisingly, our results indicate that the considered models only lead to marginal improvements in terms of label agreement (i.e., fidelity) with the stolen model. To find the cause of this, we inspect the diversity of the models' predictions by looking at the prediction variance as a function of training iterations. We realize that during training, the models tend to have similar predictions, indicating that the network diversity we wanted to leverage using uncertainty quantification models is not high enough for improvements on the model stealing task.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
363,087
2010.05037
Cross-Stack Workload Characterization of Deep Recommendation Systems
Deep learning based recommendation systems form the backbone of most personalized cloud services. Though the computer architecture community has recently started to take notice of deep recommendation inference, the resulting solutions have taken wildly different approaches - ranging from near memory processing to at-scale optimizations. To better design future hardware systems for deep recommendation inference, we must first systematically examine and characterize the underlying systems-level impact of design decisions across the different levels of the execution stack. In this paper, we characterize eight industry-representative deep recommendation models at three different levels of the execution stack: algorithms and software, systems platforms, and hardware microarchitectures. Through this cross-stack characterization, we first show that system deployment choices (i.e., CPUs or GPUs, batch size granularity) can give us up to 15x speedup. To better understand the bottlenecks for further optimization, we look at both software operator usage breakdown and CPU frontend and backend microarchitectural inefficiencies. Finally, we model the correlation between key algorithmic model architecture features and hardware bottlenecks, revealing the absence of a single dominant algorithmic component behind each hardware bottleneck.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
199,968
1710.02733
Combinatorial Miller-Hagberg Algorithm for Randomization of Dense Networks
We propose a slightly revised Miller-Hagberg (MH) algorithm that efficiently generates a random network from a given expected degree sequence. The revision was to replace the approximated edge probability between a pair of nodes with a combinatorially calculated edge probability that better captures the likelihood of edge presence, especially where edges are dense. The computational complexity of this combinatorial MH algorithm is still of the same order as the original one. We evaluated the proposed algorithm through several numerical experiments. The results demonstrated that the proposed algorithm was particularly good at accurately representing high-degree nodes in dense, heterogeneous networks. This algorithm may be a useful alternative to other more established network randomization methods, given that data are increasingly becoming larger and denser in today's network science research.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
82,214
2308.16890
TouchStone: Evaluating Vision-Language Models by Language Models
Large vision-language models (LVLMs) have recently witnessed rapid advancements, exhibiting a remarkable capacity for perceiving, understanding, and processing visual information by connecting visual receptors with large language models (LLMs). However, current assessments mainly focus on recognizing and reasoning abilities, lacking direct evaluation of conversational skills and neglecting visual storytelling abilities. In this paper, we propose an evaluation method that uses strong LLMs as judges to comprehensively evaluate the various abilities of LVLMs. Firstly, we construct a comprehensive visual dialogue dataset, TouchStone, consisting of open-world images and questions, covering five major categories of abilities and 27 subtasks. This dataset not only covers fundamental recognition and comprehension but also extends to literary creation. Secondly, by integrating detailed image annotations, we effectively transform the multimodal input content into a form understandable by LLMs. This enables us to employ advanced LLMs for directly evaluating the quality of the multimodal dialogue without requiring human intervention. Through validation, we demonstrate that powerful LVLMs, such as GPT-4, can effectively score dialogue quality by leveraging their textual capabilities alone, aligning with human preferences. We hope our work can serve as a touchstone for LVLMs' evaluation and pave the way for building stronger LVLMs. The evaluation code is available at https://github.com/OFA-Sys/TouchStone.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
389,151
1807.04183
Optimization over Continuous and Multi-dimensional Decisions with Observational Data
We consider the optimization of an uncertain objective over continuous and multi-dimensional decision spaces in problems in which we are only provided with observational data. We propose a novel algorithmic framework that is tractable, asymptotically consistent, and superior to comparable methods on example problems. Our approach leverages predictive machine learning methods and incorporates information on the uncertainty of the predicted outcomes for the purpose of prescribing decisions. We demonstrate the efficacy of our method on examples involving both synthetic and real data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
102,684
2407.13842
Language-Driven 6-DoF Grasp Detection Using Negative Prompt Guidance
6-DoF grasp detection has been a fundamental and challenging problem in robotic vision. While previous works have focused on ensuring grasp stability, they often do not consider human intention conveyed through natural language, hindering effective collaboration between robots and users in complex 3D environments. In this paper, we present a new approach for language-driven 6-DoF grasp detection in cluttered point clouds. We first introduce Grasp-Anything-6D, a large-scale dataset for the language-driven 6-DoF grasp detection task with 1M point cloud scenes and more than 200M language-associated 3D grasp poses. We further introduce a novel diffusion model that incorporates a new negative prompt guidance learning strategy. The proposed negative prompt strategy directs the detection process toward the desired object while steering away from unwanted ones given the language input. Our method enables an end-to-end framework where humans can command the robot to grasp desired objects in a cluttered scene using natural language. Intensive experimental results show the effectiveness of our method in both benchmarking experiments and real-world scenarios, surpassing other baselines. In addition, we demonstrate the practicality of our approach in real-world robotic applications. Our project is available at https://airvlab.github.io/grasp-anything.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
474,538
2408.12060
Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs
Given the widespread dissemination of misinformation on social media, implementing fact-checking mechanisms for online claims is essential. Manually verifying every claim is very challenging, underscoring the need for an automated fact-checking system. This paper presents our system designed to address this issue. We utilize the Averitec dataset (Schlichtkrull et al., 2023) to assess the performance of our fact-checking system. In addition to veracity prediction, our system provides supporting evidence, which is extracted from the dataset. We develop a Retrieve and Generate (RAG) pipeline to extract relevant evidence sentences from a knowledge base, which are then inputted along with the claim into a large language model (LLM) for classification. We also evaluate the few-shot In-Context Learning (ICL) capabilities of multiple LLMs. Our system achieves an 'Averitec' score of 0.33, which is a 22% absolute improvement over the baseline. Our Code is publicly available on https://github.com/ronit-singhal/evidence-backed-fact-checking-using-rag-and-few-shot-in-context-learning-with-llms.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
482,562
2501.15700
Adapting Biomedical Abstracts into Plain language using Large Language Models
A vast amount of medical knowledge is available for public use through online health forums and question-answering platforms on social media. The majority of the population in the United States does not have sufficient health literacy to make the best use of that information. Health literacy means the ability to obtain and comprehend basic health information to make appropriate health decisions. To bridge this gap, organizations advocate adapting this medical knowledge into plain language. Building robust systems to automate the adaptations helps both medical and non-medical professionals best leverage the information available online. The goal of the Plain Language Adaptation of Biomedical Abstracts (PLABA) track is to adapt English-language biomedical abstracts extracted from PubMed, based on the questions asked in MedlinePlus, for the general public using plain language at the sentence level. As part of this track, we leveraged the best open-source Large Language Models suitable for and fine-tuned on dialog use cases. We compare and present the results for all of our systems and our ranking among the other participants' submissions. Our top-performing GPT-4-based model ranked first on the average simplicity measure and third on the average accuracy measure.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
527,669
2205.07260
Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks
L2 regularization for weights in neural networks is widely used as a standard training trick. However, L2 regularization for gamma, a trainable parameter of batch normalization, remains an undiscussed mystery and is applied in different ways depending on the library and practitioner. In this paper, we study whether L2 regularization for gamma is valid. To explore this issue, we consider two approaches: 1) variance control to make the residual network behave like identity mapping and 2) stable optimization through the improvement of effective learning rate. Through two analyses, we specify the desirable and undesirable gamma to apply L2 regularization and propose four guidelines for managing them. In several experiments, we observed the increase and decrease in performance caused by applying L2 regularization to gamma of four categories, which is consistent with our four guidelines. Our proposed guidelines were validated through various tasks and architectures, including variants of residual networks and transformers.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,536
2406.18884
Sequential three-way group decision-making for double hierarchy hesitant fuzzy linguistic term set
Group decision-making (GDM), characterized by complexity and uncertainty, is an essential part of various life scenarios. Most existing research lacks tools to fuse information quickly and interpret decision results for partially formed decisions. This limitation is particularly noticeable when there is a need to improve the efficiency of GDM. To address this issue, a novel multi-level sequential three-way decision for group decision-making (S3W-GDM) method is constructed from the perspective of granular computing. This method simultaneously considers the vagueness, hesitation, and variation of GDM problems under the double hierarchy hesitant fuzzy linguistic term set (DHHFLTS) environment. First, to fuse information efficiently, a novel multi-level expert information fusion method is proposed, and the concepts of the expert decision table and the extraction/aggregation of decision-leveled information based on multi-level granularity are defined. Second, neighborhood theory, the outranking relation, and regret theory (RT) are utilized to redesign the calculations of the conditional probability and relative loss function. Then, the granular structure of DHHFLTS based on the sequential three-way decision (S3WD) is defined to improve decision-making efficiency, and the decision-making strategy and interpretation of each decision level are proposed. Furthermore, the algorithm of S3W-GDM is given. Finally, an illustrative example of diagnosis is presented, and comparative and sensitivity analyses with other methods are performed to verify the efficiency and rationality of the proposed method.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
468,215
2106.09533
Author Clustering and Topic Estimation for Short Texts
Analysis of short text, such as social media posts, is extremely difficult because of their inherent brevity. In addition to classifying topics of such posts, a common downstream task is grouping the authors of these documents for subsequent analyses. We propose a novel model that expands on the Latent Dirichlet Allocation by modeling strong dependence among the words in the same document, with user-level topic distributions. We also simultaneously cluster users, removing the need for post-hoc cluster estimation and improving topic estimation by shrinking noisy user-level topic distributions towards typical values. Our method performs as well as -- or better -- than traditional approaches, and we demonstrate its usefulness on a dataset of tweets from United States Senators, recovering both meaningful topics and clusters that reflect partisan ideology. We also develop a novel measure of echo chambers among these politicians by characterizing insularity of topics discussed by groups of Senators and provide uncertainty quantification.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
241,694
2406.11025
Large Language Models for Dysfluency Detection in Stuttered Speech
Accurately detecting dysfluencies in spoken language can help to improve the performance of automatic speech and language processing components and support the development of more inclusive speech and language technologies. Inspired by the recent trend towards the deployment of large language models (LLMs) as universal learners and processors of non-lexical inputs, such as audio and video, we approach the task of multi-label dysfluency detection as a language modeling problem. We present hypotheses candidates generated with an automatic speech recognition system and acoustic representations extracted from an audio encoder model to an LLM, and finetune the system to predict dysfluency labels on three datasets containing English and German stuttered speech. The experimental results show that our system effectively combines acoustic and lexical information and achieves competitive results on the multi-label stuttering detection task.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
464,676
1401.5791
Advanced Signal Processing Techniques to Study Normal and Epileptic EEG
EEG monitoring is an important milestone that provides valuable information about candidates who suffer from epilepsy. In this paper, normal and epileptic human electroencephalogram (EEG) signals are analyzed with popular and efficient signal processing techniques such as the Fourier and wavelet transforms. The delta, theta, alpha, beta, and gamma sub-bands of the EEG are obtained and studied for the detection of seizures and epilepsy. The extracted features are then applied to an ANN for classification of the EEG signals.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
30,242
2409.17524
JoyType: A Robust Design for Multilingual Visual Text Creation
Generating images with accurately represented text, especially in non-Latin languages, poses a significant challenge for diffusion models. Existing approaches, such as the integration of hint condition diagrams via auxiliary networks (e.g., ControlNet), have made strides towards addressing this issue. However, diffusion models often fall short in tasks requiring controlled text generation, such as specifying particular fonts or producing text in small fonts. In this paper, we introduce a novel approach for multilingual visual text creation, named JoyType, designed to maintain the font style of text during the image generation process. Our methodology begins with assembling a training dataset, JoyType-1M, comprising 1 million pairs of data. Each pair includes an image, its description, and glyph instructions corresponding to the font style within the image. We then developed a text control network, Font ControlNet, tasked with extracting font style information to steer the image generation. To further enhance our model's ability to maintain font style, notably in generating small-font text, we incorporated a multi-layer OCR-aware loss into the diffusion process. This enhancement allows JoyType to direct text rendering using low-level descriptors. Our evaluations, based on both visual and accuracy metrics, demonstrate that JoyType significantly outperforms existing state-of-the-art methods. Additionally, JoyType can function as a plugin, facilitating the creation of varied image styles in conjunction with other stable diffusion models on HuggingFace and CivitAI. Our project is open-sourced on https://jdh-algo.github.io/JoyType/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
491,847
2305.17018
Formal Modelling for Multi-Robot Systems Under Uncertainty
Purpose of Review: To effectively synthesise and analyse multi-robot behaviour, we require formal task-level models which accurately capture multi-robot execution. In this paper, we review modelling formalisms for multi-robot systems under uncertainty, and discuss how they can be used for planning, reinforcement learning, model checking, and simulation. Recent Findings: Recent work has investigated models which more accurately capture multi-robot execution by considering different forms of uncertainty, such as temporal uncertainty and partial observability, and modelling the effects of robot interactions on action execution. Other strands of work have presented approaches for reducing the size of multi-robot models to admit more efficient solution methods. This can be achieved by decoupling the robots under independence assumptions, or reasoning over higher level macro actions. Summary: Existing multi-robot models demonstrate a trade-off between accurately capturing robot dependencies and uncertainty, and being small enough to tractably solve real world problems. Therefore, future research should exploit realistic assumptions over multi-robot behaviour to develop smaller models which retain accurate representations of uncertainty and robot interactions; and exploit the structure of multi-robot problems, such as factored state spaces, to develop scalable solution methods.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
true
false
false
false
368,374
2501.14356
Causal-Inspired Multitask Learning for Video-Based Human Pose Estimation
Video-based human pose estimation has long been a fundamental yet challenging problem in computer vision. Previous studies focus on spatio-temporal modeling through the enhancement of architecture design and optimization strategies. However, they overlook the causal relationships in the joints, leading to models that may be overly tailored and thus generalize poorly to challenging scenes. Therefore, adequate causal reasoning capability, coupled with good interpretability of the model, are both indispensable and prerequisite for achieving reliable results. In this paper, we pioneer a causal perspective on pose estimation and introduce a causal-inspired multitask learning framework, consisting of two stages. \textit{In the first stage}, we try to endow the model with causal spatio-temporal modeling ability by introducing two self-supervised auxiliary tasks. Specifically, these auxiliary tasks enable the network to infer challenging keypoints based on observed keypoint information, thereby imbuing causal reasoning capabilities into the model and making it robust to challenging scenes. \textit{In the second stage}, we argue that not all feature tokens contribute equally to pose estimation. Prioritizing causal (keypoint-relevant) tokens is crucial to achieving reliable results, which could improve the interpretability of the model. To this end, we propose a Token Causal Importance Selection module to identify the causal tokens and non-causal tokens (\textit{e.g.}, background and objects). Additionally, non-causal tokens could provide potentially beneficial cues but may be redundant. We further introduce a non-causal token clustering module to merge the similar non-causal tokens. Extensive experiments show that our method outperforms state-of-the-art methods on three large-scale benchmark datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
527,093
2406.17484
MedCare: Advancing Medical LLMs through Decoupling Clinical Alignment and Knowledge Aggregation
Large language models (LLMs) have shown substantial progress in natural language understanding and generation, proving valuable especially in the medical field. Despite advancements, challenges persist due to the complexity and diversity inherent in medical tasks, which can be categorized as knowledge-intensive tasks and alignment-required tasks. Previous approaches either ignore the latter task or focus on a minority of tasks and hence lose generalization. To address these drawbacks, we propose a progressive fine-tuning pipeline. This pipeline employs a Knowledge Aggregator and a Noise aggregator to encode diverse knowledge in the first stage and filter out detrimental information. In the second stage, we drop the Noise Aggregator to avoid the interference of suboptimal representation and leverage an additional alignment module optimized towards an orthogonal direction to the knowledge space to mitigate knowledge forgetting. Based on this two-stage paradigm, we proposed a Medical LLM through decoupling Clinical Alignment and Knowledge Aggregation (MedCare), which is designed to achieve state-of-the-art (SOTA) performance on over 20 medical tasks, as well as SOTA results on specific medical alignment tasks. Various model sizes of MedCare (1.8B, 7B, 14B) all demonstrate significant improvements over existing models with similar model sizes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
467,593
2407.18903
Using high-fidelity discrete element simulation to calibrate an expeditious terramechanics model in a multibody dynamics framework
The wheel-soil interaction has great impact on the dynamics of off-road vehicles in terramechanics applications. The Soil Contact Model (SCM), which anchors an empirical method to characterize the frictional contact between a wheel and soil, has been widely used in off-road vehicle dynamics simulations because it quickly produces adequate results for many terramechanics applications. The SCM approach calls for a set of model parameters that are obtained via a bevameter test. This test is expensive and time consuming to carry out, and in some cases difficult to set up, e.g., in extraterrestrial applications. We propose an approach to address these concerns by conducting the bevameter test in simulation, using a model that captures the physics of the actual experiment with high fidelity. To that end, we model the bevameter test rig as a multibody system, while the dynamics of the soil is captured using a discrete element model (DEM). The multibody dynamics--soil dynamics co-simulation is used to replicate the bevameter test, producing high-fidelity ground truth test data that is subsequently used to calibrate the SCM parameters within a Bayesian inference framework. To test the accuracy of the resulting SCM terramechanics, we run single wheel and full rover simulations using both DEM and SCM terrains. The SCM results match well with those produced by the DEM solution, and the simulation time for SCM is two to three orders of magnitude lower than that of DEM. All simulations in this work are performed using Chrono, an open-source, publicly available simulator. The scripts and models used are available in a public repository for reproducibility studies and further research.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
476,558
2009.11465
Model Identification and Control of a Low-Cost Wheeled Mobile Robot Using Differentiable Physics
We present the design of a low-cost wheeled mobile robot, and an analytical model for predicting its motion under the influence of motor torques and friction forces. Using our proposed model, we show how to analytically compute the gradient of an appropriate loss function, that measures the deviation between predicted motion trajectories and real-world trajectories, which are estimated using Apriltags and an overhead camera. These analytical gradients allow us to automatically infer the unknown friction coefficients, by minimizing the loss function using gradient descent. Motion trajectories that are predicted by the optimized model are in excellent agreement with their real-world counterparts. Experiments show that our proposed approach is computationally superior to existing black-box system identification methods and other data-driven techniques, and also requires very few real-world samples for accurate trajectory prediction. The proposed approach combines the data efficiency of analytical models based on first principles, with the flexibility of data-driven methods, which makes it appropriate for low-cost robots. Using the learned model and our gradient-based optimization approach, we show how to automatically compute motor control signals for driving the robot along pre-specified curves.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
197,178
2305.14706
PruMUX: Augmenting Data Multiplexing with Model Compression
As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques like model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods -- structured pruning and data multiplexing -- to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over BERT-base model with accuracy threshold from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that can predict the high-performance parameters for pruning and multiplexing given a desired accuracy loss budget, providing a practical method to leverage the combination effectively.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
367,198
1501.01563
A New Proof for the DoF Region of the MIMO Networks with No CSIT
In this paper, a new proof for the degrees of freedom (DoF) region of the K-user multiple-input multiple-output (MIMO) broadcast channel (BC) with no channel state information at the transmitter (CSIT) and perfect channel state information at the receivers (CSIR) is provided. Based on this proof, the capacity region of a certain class of MIMO BC with channel distribution information at the transmitter (CDIT) and perfect CSIR is derived. Finally, an outer bound for the DoF region of the K-user MIMO interference channel (IC) with no CSIT is provided.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,094
2403.17678
Hierarchical Light Transformer Ensembles for Multimodal Trajectory Forecasting
Accurate trajectory forecasting is crucial for the performance of various systems, such as advanced driver-assistance systems and self-driving vehicles. These forecasts allow us to anticipate events that lead to collisions and, therefore, to mitigate them. Deep Neural Networks have excelled in motion forecasting, but overconfidence and weak uncertainty quantification persist. Deep Ensembles address these concerns, yet applying them to multimodal distributions remains challenging. In this paper, we propose a novel approach named Hierarchical Light Transformer Ensembles (HLT-Ens) aimed at efficiently training an ensemble of Transformer architectures using a novel hierarchical loss function. HLT-Ens leverages grouped fully connected layers, inspired by grouped convolution techniques, to capture multimodal distributions effectively. We demonstrate that HLT-Ens achieves state-of-the-art performance levels through extensive experimentation, offering a promising avenue for improving trajectory forecasting techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
441,571
2412.19153
Sketch-MoMa: Teleoperation for Mobile Manipulator via Interpretation of Hand-Drawn Sketches
To use assistive robots in everyday life, a remote control system with common devices, such as 2D devices, is helpful to control the robots anytime and anywhere as intended. Hand-drawn sketches are one of the intuitive ways to control robots with 2D devices. However, since similar sketches have different intentions from scene to scene, existing work needs additional modalities to set the sketches' semantics. This requires complex operations for users and leads to decreasing usability. In this paper, we propose Sketch-MoMa, a teleoperation system using the user-given hand-drawn sketches as instructions to control a robot. We use Vision-Language Models (VLMs) to understand the user-given sketches superimposed on an observation image and infer drawn shapes and low-level tasks of the robot. We utilize the sketches and the generated shapes for recognition and motion planning of the generated low-level tasks for precise and intuitive operations. We validate our approach using state-of-the-art VLMs with 7 tasks and 5 sketch shapes. We also demonstrate that our approach effectively specifies the detailed motions, such as how to grasp and how much to rotate. Moreover, we show the competitive usability of our approach compared with the existing 2D interface through a user experiment with 14 participants.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
520,738
2004.14439
EER: Enterprise Expert Ranking using Employee Reputation
The emergence of online enterprises spread across continents has given rise to the need for expert identification in this domain. Scenarios that include the intention of the employer to find tacit expertise and knowledge of an employee that is not documented or self-disclosed have been addressed in this article. The existing reputation-based approaches towards expertise ranking in enterprises utilize PageRank, normal distribution, and hidden Markov models for expertise ranking. These models suffer from issues of negative referral, collusion, reputation inflation, and dynamism. The authors have, however, proposed a Bayesian approach utilizing a beta probability distribution based reputation model for employee ranking in enterprises. The experimental results reveal improved performance compared to previous techniques in terms of Precision and Mean Average Error (MAE), with almost 7% improvement in precision on average for the three data sets. The proposed technique is able to differentiate categories of interactions in a dynamic context. The results reveal that the technique is independent of the rating pattern and density of data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
174,881