Columns:
- id: string (9 to 16 chars)
- title: string (4 to 278 chars)
- abstract: string (3 to 4.08k chars)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)

1306.0733
Fast Gradient-Based Inference with Continuous Latent Variable Models in Auxiliary Form
We propose a technique for increasing the efficiency of gradient-based inference and learning in Bayesian networks with multiple layers of continuous latent variables. We show that, in many cases, it is possible to express such models in an auxiliary form, where continuous latent variables are conditionally deterministic given their parents and a set of independent auxiliary variables. Variables of models in this auxiliary form have much larger Markov blankets, leading to significant speedups in gradient-based inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based optimization. The relative efficiency is confirmed in experiments.
labels: cs.LG
index: 24,985

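The auxiliary form described in the abstract above, where a latent variable becomes a deterministic function of its parents plus independent auxiliary noise, can be illustrated with a one-variable Gaussian case (a minimal numpy sketch with illustrative values, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Parent-dependent parameters of a conditional Gaussian p(z | parents).
mu, sigma = 2.0, 0.5

# Auxiliary form: z is conditionally deterministic given its parents
# (via mu, sigma) and an independent standard-normal auxiliary variable eps.
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# Per-sample gradients of z are now well-defined: dz/dmu = 1, dz/dsigma = eps,
# which is what enables efficient gradient-based inference.
sample_mean = z.mean()
sample_std = z.std()
```

The samples follow the intended conditional distribution while every sample is a differentiable function of the parameters.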
2012.04483
A Novel Transformation Approach of Shared-link Coded Caching Schemes for Multiaccess Networks
This paper considers the multiaccess coded caching systems formulated by Hachem et al., including a central server containing $N$ files connected to $K$ cache-less users through an error-free shared link, and $K$ cache-nodes, each equipped with a cache memory size of $M$ files. Each user has access to $L$ neighbouring cache-nodes with a cyclic wrap-around topology. The coded caching scheme proposed by Hachem et al. suffers from the case that $L$ does not divide $K$, where the needed number of transmissions (a.k.a. load) is at most four times the load expression for the case where $L$ divides $K$. Our main contribution is to propose a novel {\it transformation} approach to smartly extend the schemes satisfying some conditions for the well known shared-link caching systems to the multiaccess caching systems. Then we can get many coded caching schemes with different subpacketizations for multiaccess coded caching system. These resulting schemes have the maximum local caching gain (i.e., the cached contents stored at any $L$ neighbouring cache-nodes are different such that the number of retrieval packets by each user from the connected cache-nodes is maximal) and the same coded caching gain as the original schemes. Applying the transformation approach to the well-known shared-link coded caching scheme proposed by Maddah-Ali and Niesen, we obtain a new multiaccess coded caching scheme that achieves the same load as the scheme of Hachem et al. but for any system parameters. Under the constraint of the cache placement used in this new multiaccess coded caching scheme, our delivery strategy is approximately optimal when $K$ is sufficiently large. Finally, we also show that the transmission load of the proposed scheme can be further reduced by compressing the multicast message.
labels: cs.IT
index: 210,473

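The shared-link scheme of Maddah-Ali and Niesen, which the transformation approach above takes as its starting point, uses a combinatorial cache placement: with $t = KM/N$, each file is split into $\binom{K}{t}$ subfiles indexed by $t$-subsets of users, and user $k$ caches every subfile whose index contains $k$. A small sketch of that placement (parameters chosen for illustration):

```python
from itertools import combinations

K, N, M = 4, 4, 2          # users, files, cache size in files
t = K * M // N             # each subfile is cached by t users; here t = 2

# Split each file into C(K, t) subfiles, one per t-subset of users.
subsets = list(combinations(range(K), t))

# cache[k] holds the (file, subset) pairs stored at user k.
cache = {k: [(f, T) for f in range(N) for T in subsets if k in T]
         for k in range(K)}

# Each user stores N * C(K-1, t-1) of the N * C(K, t) subfiles,
# i.e. exactly a fraction t/K = M/N of the library.
subfiles_per_user = len(cache[0])
total_subfiles = N * len(subsets)
```

In the delivery phase, one coded multicast per (t+1)-subset of users then serves all of them simultaneously, which is the coded caching gain the abstract refers to.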
1710.06608
Cell Segmentation in 3D Confocal Images using Supervoxel Merge-Forests with CNN-based Hypothesis Selection
Automated segmentation approaches are crucial to quantitatively analyze large-scale 3D microscopy images. Particularly in deep tissue regions, automatic methods still fail to provide error-free segmentations. To improve the segmentation quality throughout imaged samples, we present a new supervoxel-based 3D segmentation approach that outperforms current methods and reduces the manual correction effort. The algorithm consists of gentle preprocessing and a conservative supervoxel generation method followed by supervoxel agglomeration based on local signal properties and a postprocessing step to fix under-segmentation errors using a Convolutional Neural Network. We validate the functionality of the algorithm on manually labeled 3D confocal images of the plant Arabidopsis thaliana and compare the results to a state-of-the-art meristem segmentation algorithm.
labels: cs.CV
index: 82,804

2111.09013
Image Super-Resolution Using T-Tetromino Pixels
For modern high-resolution imaging sensors, pixel binning is performed in low-lighting conditions and when high frame rates are required. To recover the original spatial resolution, single-image super-resolution techniques can be applied for upscaling. To achieve a higher image quality after upscaling, we propose a novel binning concept using tetromino-shaped pixels. It is embedded into the field of compressed sensing and the coherence is calculated to motivate the sensor layouts used. Next, we investigate the reconstruction quality using tetromino pixels for the first time in the literature. Instead of using different types of tetrominoes as proposed elsewhere, we show that using a small repeating cell consisting of only four T-tetrominoes is sufficient. For reconstruction, we use a locally fully connected reconstruction (LFCR) network as well as two classical reconstruction methods from the field of compressed sensing. Using the LFCR network in combination with the proposed tetromino layout, we achieve superior image quality in terms of PSNR, SSIM, and visually compared to conventional single-image super-resolution using the very deep super-resolution (VDSR) network. For PSNR, a gain of up to +1.92 dB is achieved.
labels: cs.CV
index: 266,875

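The T-tetromino binning described above can be sketched as follows: a 4x4 cell is tiled by four T-tetrominoes, and each low-resolution measurement averages the four pixels of one tetromino. The particular tiling below is one valid choice made for illustration, not necessarily the paper's layout:

```python
import numpy as np

# One 4x4 cell tiled by four T-tetrominoes (labels 0..3).
cell = np.array([[0, 0, 0, 1],
                 [2, 0, 1, 1],
                 [2, 2, 3, 1],
                 [2, 3, 3, 3]])

def tetromino_bin(img):
    """Average the pixels within each T-tetromino of each 4x4 cell."""
    h, w = img.shape
    out = []
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            block = img[y:y+4, x:x+4]
            out.append([block[cell == t].mean() for t in range(4)])
    return np.array(out)  # one row of 4 measurements per 4x4 cell

img = np.arange(16, dtype=float).reshape(4, 4)
measurements = tetromino_bin(img)
```

Each cell yields 4 measurements for 16 pixels, the same 4x compression as conventional 2x2 binning, but with an irregular sampling pattern that is more favorable for the subsequent super-resolution reconstruction.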
2002.10670
Exploring BERT Parameter Efficiency on the Stanford Question Answering Dataset v2.0
In this paper we explore the parameter efficiency of BERT arXiv:1810.04805 on version 2.0 of the Stanford Question Answering dataset (SQuAD2.0). We evaluate the parameter efficiency of BERT while freezing a varying number of final transformer layers as well as including the adapter layers proposed in arXiv:1902.00751. Additionally, we experiment with the use of context-aware convolutional (CACNN) filters, as described in arXiv:1709.08294v3, as a final augmentation layer for the SQuAD2.0 tasks. This exploration is motivated in part by arXiv:1907.10597, which made a compelling case for broadening the evaluation criteria of artificial intelligence models to include various measures of resource efficiency. While we do not evaluate these models based on their floating point operation efficiency as proposed in arXiv:1907.10597, we examine efficiency with respect to training time, inference time, and total number of model parameters. Our results largely corroborate those of arXiv:1902.00751 for adapter modules, while also demonstrating that gains in F1 score from adding context-aware convolutional filters are not practical due to the increase in training and inference time.
labels: cs.LG, cs.CL
index: 165,476

2201.09910
Learning Neural Contextual Bandits Through Perturbed Rewards
Thanks to the power of representation learning, neural contextual bandit algorithms demonstrate remarkable performance improvement against their classical counterparts. But because their exploration has to be performed in the entire neural network parameter space to obtain nearly optimal regret, the resulting computational cost is prohibitively high. We perturb the rewards when updating the neural network to eliminate the need for explicit exploration and the corresponding computational overhead. We prove that a $\tilde{O}(\tilde{d}\sqrt{T})$ regret upper bound is still achievable under standard regularity conditions, where $T$ is the number of rounds of interactions and $\tilde{d}$ is the effective dimension of a neural tangent kernel matrix. Extensive comparisons with several benchmark contextual bandit algorithms, including two recent neural contextual bandit models, demonstrate the effectiveness and computational efficiency of our proposed neural bandit algorithm.
labels: cs.LG
index: 276,812

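The perturbed-reward idea above can be sketched in a simplified linear-bandit setting (a linear model stands in for the neural network; all names and parameters are illustrative): instead of adding an exploration bonus, the learner refits on rewards corrupted by fresh Gaussian noise and then acts greedily.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, sigma = 5, 500, 0.5
theta_true = rng.normal(size=d)

X_hist, y_hist, regrets = [], [], []
for t in range(T):
    arms = rng.normal(size=(10, d))          # 10 candidate contexts
    if len(X_hist) >= d:
        X, y = np.array(X_hist), np.array(y_hist)
        # Perturbed rewards replace explicit exploration bonuses:
        # resample the noise at every update, then act greedily.
        y = y + sigma * rng.normal(size=len(y))
        theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    else:
        theta_hat = rng.normal(size=d)       # warm-up: random scoring
    a = arms[np.argmax(arms @ theta_hat)]    # greedy w.r.t. perturbed fit
    r = a @ theta_true + 0.1 * rng.normal()
    X_hist.append(a); y_hist.append(r)
    regrets.append(np.max(arms @ theta_true) - a @ theta_true)

late_regret = float(np.mean(regrets[-100:]))
```

The randomness injected into the rewards plays the role of exploration, so no confidence-set construction over the parameter space is needed.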
2012.10758
Consolidated Dataset and Metrics for High-Dynamic-Range Image Quality
Increasing popularity of high-dynamic-range (HDR) image and video content brings the need for metrics that could predict the severity of image impairments as seen on displays of different brightness levels and dynamic range. Such metrics should be trained and validated on a sufficiently large subjective image quality dataset to ensure robust performance. As the existing HDR quality datasets are limited in size, we created a Unified Photometric Image Quality dataset (UPIQ) with over 4,000 images by realigning and merging existing HDR and standard-dynamic-range (SDR) datasets. The realigned quality scores share the same unified quality scale across all datasets. Such realignment was achieved by collecting additional cross-dataset quality comparisons and re-scaling data with a psychometric scaling method. Images in the proposed dataset are represented in absolute photometric and colorimetric units, corresponding to light emitted from a display. We use the new dataset to retrain existing HDR metrics and show that the dataset is sufficiently large for training deep architectures. We show the utility of the dataset on brightness aware image compression.
labels: cs.CV
index: 212,431

2402.07192
Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a noncontact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method taking into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries during resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. To evaluate the proposed approach, five hyperspectral images of the surface of the brain affected by glioblastoma tumor in vivo from five different patients have been used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, obtaining an accurate delineation of the tumor area.
labels: cs.CV
index: 428,603

2004.00503
Deep Learning Approach for Enhanced Cyber Threat Indicators in Twitter Stream
In recent years, the amount of cyber security text data shared via social media resources, mainly Twitter, has increased. An accurate analysis of this data can help to develop a cyber threat situational awareness framework. This work proposes a deep learning based approach for tweet data analysis. To convert the tweets into numerical representations, various text representations are employed. These features are fed into a deep learning architecture for optimal feature extraction as well as classification. Various hyperparameter tuning approaches are used for identifying the optimal text representation method as well as optimal network parameters and network structures for the deep learning models. For comparative analysis, a classical text representation method with a classical machine learning algorithm is employed. From the detailed analysis of experiments, we found that the deep learning architecture with advanced text representation methods performed better than the classical text representation and classical machine learning algorithms. The primary reason for this is that the advanced text representation methods can learn the sequential properties that exist among the textual data, and deep learning architectures learn the optimal features while decreasing the feature size.
labels: cs.SI, cs.LG, cs.CL, cs.CR, cs.NE
index: 170,660

2007.05566
Contrastive Training for Improved Out-of-Distribution Detection
Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks. By introducing and employing the Confusion Log Probability (CLP) score, which quantifies the difficulty of the OOD detection task by capturing the similarity of inlier and outlier datasets, we show that our method especially improves performance in the `near OOD' classes -- a particularly challenging setting for previous methods.
labels: cs.LG
index: 186,714

1912.01454
Representation Learning on Unit Ball with 3D Roto-Translational Equivariance
Convolution is an integral operation that defines how the shape of one function is modified by another function. This powerful concept forms the basis of hierarchical feature learning in deep neural networks. Although performing convolution in Euclidean geometries is fairly straightforward, its extension to other topological spaces, such as a sphere ($\mathbb{S}^2$) or a unit ball ($\mathbb{B}^3$), entails unique challenges. In this work, we propose a novel `\emph{volumetric convolution}' operation that can effectively model and convolve arbitrary functions in $\mathbb{B}^3$. We develop a theoretical framework for \emph{volumetric convolution} based on Zernike polynomials and efficiently implement it as a differentiable, easily pluggable layer in deep networks. By construction, our formulation leads to the derivation of a novel formula to measure the symmetry of a function in $\mathbb{B}^3$ around an arbitrary axis, which is useful in function analysis tasks. We demonstrate the efficacy of the proposed volumetric convolution operation on one viable use case, i.e., 3D object recognition.
labels: cs.CV
index: 156,094

2110.05041
Provenance in Temporal Interaction Networks
In temporal interaction networks, vertices correspond to entities, which exchange data quantities (e.g., money, bytes, messages) over time. Tracking the origin of data that have reached a given vertex at any time can help data analysts to understand the reasons behind the accumulated quantity at the vertex or behind the interactions between entities. In this paper, we study data provenance in a temporal interaction network. We investigate alternative propagation models that may apply to different application scenarios. For each such model, we propose annotation mechanisms that track the origin of propagated data in the network and the routes of data quantities. Besides analyzing the space and time complexity of these mechanisms, we propose techniques that reduce their cost in practice, by either (i) limiting provenance tracking to a subset of vertices or groups of vertices, or (ii) tracking provenance only for quantities that were generated in the near past or limiting the provenance data in each vertex by a budget constraint. Our experimental evaluation on five real datasets shows that quantity propagation models based on generation time or receipt order scale well on large graphs; on the other hand, a model that propagates quantities proportionally has high space and time requirements and can benefit from the aforementioned cost reduction techniques.
labels: cs.DB
index: 260,144

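One of the propagation models discussed above, proportional propagation, admits a simple annotation mechanism: each vertex keeps a map from origin vertices to quantities, and every transfer splits proportionally across origins. A toy sketch (vertex names and amounts are hypothetical):

```python
from collections import defaultdict

# provenance[v] maps origin -> amount of the quantity at v that came from it.
provenance = defaultdict(lambda: defaultdict(float))

def generate(v, amount):
    """Quantity created at v has v itself as its origin."""
    provenance[v][v] += amount

def transfer(u, v, amount):
    """Move `amount` from u to v, splitting proportionally by origin."""
    total = sum(provenance[u].values())
    assert amount <= total + 1e-9
    for origin, q in list(provenance[u].items()):
        share = amount * q / total
        provenance[u][origin] -= share
        provenance[v][origin] += share

generate('a', 100.0)
generate('b', 50.0)
transfer('a', 'b', 40.0)   # b now holds 50 of its own + 40 originating at a
transfer('b', 'c', 45.0)   # proportional split: 25 from b, 20 from a
at_c = dict(provenance['c'])
```

The space cost is visible here: each vertex's annotation can grow with the number of origins, which is why the abstract proposes limiting tracking to a subset of vertices or bounding the per-vertex provenance with a budget.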
1512.03549
Words are not Equal: Graded Weighting Model for building Composite Document Vectors
Despite the success of distributional semantics, composing phrases from word vectors remains an important challenge. Several methods have been tried for benchmark tasks such as sentiment classification, including word vector averaging, matrix-vector approaches based on parsing, and on-the-fly learning of paragraph vectors. Most models usually omit stop words from the composition. Instead of such a yes-no decision, we consider several graded schemes where words are weighted according to their discriminatory relevance with respect to their use in the document (e.g., idf). Some of these methods (particularly tf-idf) are seen to result in a significant improvement in performance over the prior state of the art. Further, combining such approaches into an ensemble based on alternate classifiers such as the RNN model results in a 1.6% performance improvement on the standard IMDB movie review dataset, and a 7.01% improvement on Amazon product reviews. Since these are language-free models and can be obtained in an unsupervised manner, they are also of interest for under-resourced languages such as Hindi. We demonstrate the language-free aspects by showing a gain of 12% for two review datasets over earlier results, and also release a new larger dataset for future testing (Singh, 2015).
labels: cs.LG, cs.CL, cs.NE
index: 50,053

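The graded weighting idea above, weighting each word vector by its discriminatory relevance (e.g., idf) instead of a hard stop-word cut, can be sketched with a toy corpus (the 2-d vectors are made up for illustration; in practice they would come from word2vec or GloVe):

```python
import math
from collections import Counter

docs = [["the", "good", "movie", "great", "acting"],
        ["the", "bad", "movie", "poor", "acting"],
        ["the", "movie", "the", "plot"]]

# Toy 2-d word vectors (hypothetical).
vec = {"good": (1.0, 0.2), "great": (0.9, 0.1), "bad": (-1.0, 0.3),
       "poor": (-0.8, 0.2), "movie": (0.0, 1.0), "acting": (0.1, 0.8),
       "the": (0.0, 0.0), "plot": (0.0, 0.9)}

n_docs = len(docs)
df = Counter(w for d in docs for w in set(d))        # document frequency
idf = {w: math.log(n_docs / df[w]) for w in vec}

def compose(doc):
    """idf-weighted average: discriminative words dominate the doc vector."""
    wsum = sum(idf[w] for w in doc)
    return tuple(sum(idf[w] * vec[w][i] for w in doc) / wsum
                 for i in range(2))

v_pos = compose(docs[0])
v_neg = compose(docs[1])
```

Words appearing in every document ("the", "movie") get idf 0 and drop out automatically, a graded version of stop-word removal, while sentiment-bearing words dominate the composite vectors.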
2401.13462
Growing from Exploration: A self-exploring framework for robots based on foundation models
Intelligent robots are the ultimate goal of the robotics field. Existing works leverage learning-based or optimization-based methods to accomplish human-defined tasks. However, the challenge of enabling robots to explore various environments autonomously remains unresolved. In this work, we propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention. To achieve this goal, we devise modules including self-exploration, knowledge-base-building, and closed-loop feedback based on foundation models. Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks. During the process of exploration, the robot acquires skills from beneficial experiences that are useful in the future. GExp provides robots with the ability to solve complex tasks through self-exploration. GExp is independent of prior interactive knowledge and human intervention, allowing it to adapt directly to different scenarios, unlike previous studies that provide in-context examples for few-shot learning. In addition, we propose a workflow for deploying the real-world robot system with self-learned skills as an embodied assistant.
labels: cs.AI, cs.RO
index: 423,736

2312.07174
Investigation into the Training Dynamics of Learned Optimizers
Optimization is an integral part of modern deep learning. Recently, the concept of learned optimizers has emerged as a way to accelerate this optimization process by replacing traditional, hand-crafted algorithms with meta-learned functions. Despite the initial promising results of these methods, issues with stability and generalization still remain, limiting their practical use. Moreover, their inner workings and behavior under different conditions are not yet fully understood, making it difficult to come up with improvements. For this reason, our work examines their optimization trajectories from the perspective of network architecture symmetries and parameter update distributions. Furthermore, by contrasting the learned optimizers with their manually designed counterparts, we identify several key insights that demonstrate how each approach can benefit from the strengths of the other.
labels: cs.LG
index: 414,824

2203.05132
Compilable Neural Code Generation with Compiler Feedback
Automatically generating compilable programs with (or without) natural language descriptions has always been a touchstone problem for computational linguistics and automated software engineering. Existing deep-learning approaches model code generation as text generation, either constrained by grammar structures in the decoder, or driven by pre-trained language models on large-scale code corpora (e.g., CodeGPT, PLBART, and CodeT5). However, few of them account for the compilability of the generated programs. To improve the compilability of the generated programs, this paper proposes COMPCODER, a three-stage pipeline utilizing compiler feedback for compilable code generation, including language model fine-tuning, compilability reinforcement, and compilability discrimination. Comprehensive experiments on two code generation tasks demonstrate the effectiveness of our proposed approach, improving the average success rate of compilation from 44.18 to 89.18 in code completion and from 70.3 to 96.2 in text-to-code generation, when compared with the state-of-the-art CodeGPT.
labels: cs.AI, cs.CL, Other
index: 284,717

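The compiler-feedback signal at the heart of the pipeline above can be reduced to its simplest form: check whether generated code compiles and emit a scalar reward. A Python stand-in for illustration (COMPCODER targets real compilers and a fine-tuned language model; this only uses Python's built-in bytecode compiler):

```python
def compilability_reward(src: str) -> float:
    """Return 1.0 if the generated program compiles, else 0.0.

    A stand-in for the compiler-feedback reward used during the
    compilability-reinforcement stage of such a pipeline.
    """
    try:
        compile(src, "<generated>", "exec")
        return 1.0
    except SyntaxError:
        return 0.0

ok = compilability_reward("def add(a, b):\n    return a + b\n")
bad = compilability_reward("def add(a, b)\n    return a + b\n")  # missing colon
```

In a reinforcement setup, this reward would be combined with a likelihood objective so that the model is pushed toward compilable outputs without forgetting the task semantics.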
1607.00436
Terminal-Set-Enhanced Community Detection in Social Networks
Community detection, one of the fundamental problems in social network analysis, aims to reveal the community structure of a social network. In this paper we investigate the community detection problem based on the concept of a terminal set. A terminal set is a group of users within which any two users belong to different communities. Although community detection is hard in general, a terminal set can be very helpful in designing effective community detection algorithms. We first present a polynomial-time 2-approximation algorithm for the original community detection problem. Furthermore, to better support real applications, we consider the case where extra restrictions are imposed on feasible partitions. For such customized community detection problems, we provide two randomized algorithms that are able to find the optimal partition with high probability. Experiments performed on benchmark networks demonstrate that the proposed algorithms are able to produce high-quality communities.
labels: cs.SI
index: 58,073

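Not the paper's 2-approximation algorithm, but a minimal illustration of how a terminal set can seed a partition: since terminals pairwise belong to different communities, every remaining vertex can be assigned to its nearest terminal by BFS distance (the graph and terminal choice are hypothetical):

```python
from collections import deque

# Toy undirected graph: two dense groups {0,1,2} and {4,5,6} joined via 3.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
terminals = [0, 5]   # by definition, terminals lie in different communities

def bfs_dist(src):
    """Hop distances from src to every vertex."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

dists = {t: bfs_dist(t) for t in terminals}
# Assign every vertex to its nearest terminal (ties go to the first terminal).
community = {v: min(terminals, key=lambda t: dists[t][v]) for v in graph}
```

This nearest-terminal assignment recovers the two groups on the toy graph; the paper's algorithms refine this intuition with approximation guarantees.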
2410.03791
People are poorly equipped to detect AI-powered voice clones
As generative artificial intelligence (AI) continues its ballistic trajectory, everything from text to audio, image, and video generation continues to improve at mimicking human-generated content. Through a series of perceptual studies, we report on the realism of AI-generated voices in terms of identity matching and naturalness. We find human participants cannot consistently identify recordings of AI-generated voices. Specifically, participants perceived the identity of an AI-voice to be the same as its real counterpart approximately 80% of the time, and correctly identified a voice as AI generated only about 60% of the time.
labels: cs.HC, cs.SD, cs.AI, cs.CY
index: 494,972

2103.05950
FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding
There is emerging interest in recognizing previously unseen objects from very few training examples, a task known as few-shot object detection (FSOD). Recent research demonstrates that good feature embedding is the key to reaching favorable few-shot learning performance. We observe that object proposals with different Intersection-over-Union (IoU) scores are analogous to the intra-image augmentation used in contrastive approaches. We exploit this analogy and incorporate supervised contrastive learning to achieve more robust object representations in FSOD. We present Few-Shot object detection via Contrastive proposals Encoding (FSCE), a simple yet effective approach to learning contrastive-aware object proposal encodings that facilitate the classification of detected objects. We notice that the degradation of average precision (AP) for rare objects mainly comes from misclassifying novel instances as confusable classes. We ease the misclassification issues by promoting instance-level intra-class compactness and inter-class variance via our contrastive proposal encoding loss (CPE loss). Our design outperforms current state-of-the-art works in any shot and all data splits, with up to +8.8% on the standard benchmark PASCAL VOC and +2.7% on the challenging COCO benchmark. Code is available at: https://github.com/MegviiDetection/FSCE
labels: cs.CV
index: 224,143

2412.07182
An Enhancement of CNN Algorithm for Rice Leaf Disease Image Classification in Mobile Applications
This study focuses on enhancing rice leaf disease image classification algorithms, which have traditionally relied on Convolutional Neural Network (CNN) models. We employed transfer learning with MobileViTV2_050 using ImageNet-1k weights, a lightweight model that integrates CNN's local feature extraction with Vision Transformers' global context learning through a separable self-attention mechanism. Our approach resulted in a significant 15.66% improvement in classification accuracy for MobileViTV2_050-A, our first enhanced model trained on the baseline dataset, achieving 93.14%. Furthermore, MobileViTV2_050-B, our second enhanced model trained on a broader rice leaf dataset, demonstrated a 22.12% improvement, reaching 99.6% test accuracy. Additionally, MobileViTV2-A attained an F1-score of 93% across four rice labels and a Receiver Operating Characteristic (ROC) curve ranging from 87% to 97%. In terms of resource consumption, our enhanced models reduced the total parameters of the baseline CNN model by up to 92.50%, from 14 million to 1.1 million. These results indicate that MobileViTV2_050 not only improves computational efficiency through its separable self-attention mechanism but also enhances global context learning. Consequently, it offers a lightweight and robust solution suitable for mobile deployment, advancing the interpretability and practicality of models in precision agriculture.
labels: cs.AI, cs.CV
index: 515,546

2112.03860
Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models
Deep generative models such as GANs, normalizing flows, and diffusion models are powerful regularizers for inverse problems. They exhibit great potential for helping reduce ill-posedness and attain high-quality results. However, the latent tensors of such deep generative models can fall out of the desired high-dimensional standard Gaussian distribution during inversion, particularly in the presence of data noise and inaccurate forward models, leading to low-fidelity solutions. To address this issue, we propose to reparameterize and Gaussianize the latent tensors using novel differentiable data-dependent layers wherein custom operators are defined by solving optimization problems. These proposed layers constrain inverse problems to obtain high-fidelity in-distribution solutions. We validate our technique on three inversion tasks: compressive-sensing MRI, image deblurring, and eikonal tomography (a nonlinear PDE-constrained inverse problem) using two representative deep generative models: StyleGAN2 and Glow. Our approach achieves state-of-the-art performance in terms of accuracy and consistency.
labels: cs.LG, cs.CV
index: 270,360

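The Gaussianization step above can be approximated, non-differentiably, with a rank-based transform: map each latent's empirical quantile through the standard-normal inverse CDF. This is only a stand-in for the paper's differentiable layers, using the stdlib's NormalDist:

```python
import random
from statistics import NormalDist

rng = random.Random(0)

# Latent values that drifted away from N(0, 1) during inversion
# (toy example: a uniform distribution on [-3, 3]).
latents = [rng.uniform(-3.0, 3.0) for _ in range(1001)]

# Rank-based Gaussianization: send the value of rank r to the
# standard-normal quantile at probability (r + 0.5) / n.
n = len(latents)
order = sorted(range(n), key=lambda i: latents[i])
nd = NormalDist()
gaussianized = [0.0] * n
for rank, i in enumerate(order):
    gaussianized[i] = nd.inv_cdf((rank + 0.5) / n)

mean = sum(gaussianized) / n
var = sum(g * g for g in gaussianized) / n
```

The transformed latents have approximately standard-normal statistics regardless of how the originals were distributed; the paper's contribution is making an operation of this kind differentiable so it can sit inside the inversion loop.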
2301.04243
Robust Human Identity Anonymization using Pose Estimation
Many outdoor autonomous mobile platforms require large amounts of human-identity-anonymized data to power their data-driven algorithms. Human identity anonymization should be robust so that less manual intervention is needed, which remains a challenge for current face detection and anonymization systems. In this paper, we propose to use the skeleton generated from a state-of-the-art human pose estimation model to help localize human heads. We develop criteria to evaluate the performance and compare it with the face detection approach. We demonstrate that the proposed algorithm can reduce missed faces and thus better protect the identity information of pedestrians. We also develop a confidence-based fusion method to further improve the performance.
labels: cs.CV
index: 340,000

2210.10360
Adaptive Neural Network Ensemble Using Frequency Distribution
Neural network (NN) ensembles can reduce the large prediction variance of NNs and improve prediction accuracy. For highly nonlinear problems with insufficient data, the prediction accuracy of NN models becomes unstable, resulting in a decrease in the accuracy of ensembles. Therefore, this study proposes a frequency distribution-based ensemble that identifies core prediction values, which are expected to be concentrated near the true prediction value. The frequency distribution-based ensemble classifies core prediction values supported by multiple prediction values by conducting statistical analysis with a frequency distribution, which is based on various prediction values obtained at a given prediction point. The frequency distribution-based ensemble can improve predictive performance by excluding prediction values with low accuracy and coping with the uncertainty of the most frequent value. An adaptive sampling strategy that sequentially adds samples based on the core prediction variance, calculated as the variance of the core prediction values, is proposed to improve the predictive performance of the frequency distribution-based ensemble efficiently. Results of various case studies show that the prediction accuracy of the frequency distribution-based ensemble is higher than that of Kriging and other existing ensemble methods. In addition, the proposed adaptive sampling strategy effectively improves the predictive performance of the frequency distribution-based ensemble compared with the previously developed space-filling and prediction variance-based strategies.
labels: cs.LG
index: 324,891

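The core idea above, identify the most frequent region of the ensemble's prediction values and average only those "core" predictions, can be sketched with a histogram (toy numbers; the bin count and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Predictions from 30 ensemble members at one point: most cluster near the
# true value 5.0, while a few unstable members land far away.
preds = np.concatenate([rng.normal(5.0, 0.1, 24), rng.normal(9.0, 0.5, 6)])

# Frequency-distribution ensemble: histogram the predictions, keep only the
# values in the most frequent bin ("core" predictions), average those.
counts, edges = np.histogram(preds, bins=10)
k = np.argmax(counts)
core = preds[(preds >= edges[k]) & (preds <= edges[k + 1])]
core_mean = core.mean()

# A plain average, by contrast, is dragged toward the outliers.
plain_mean = preds.mean()
```

The core prediction variance used by the adaptive sampling strategy would then be `core.var()`, flagging points where even the core values disagree.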
1708.02271
Implementation of Torque Controller for Brushless Motors on the Omni-directional Wheeled Mobile Robot
A major issue for wheeled mobile robots is tuning the low-level controller gains, especially in robot competitions. The floor surface can be damaged by the robot wheels during the competition, so the friction coefficient can change over time, and the PI gains have to be retuned before every match. In this research, a torque controller is defined and implemented to solve this problem. The torque controller consists of a PI controller for the robot wheel's angular velocity and a dynamic equation of the brushless motor. The motor dynamics can be derived from the energy conservation law. Three different carpets, which have different friction coefficients, are used in the experiments. The robot wheel's angular velocity profiles are generated from the robot kinematics with different initial conditions. The output paths of the robot with the torque controller are compared with the output paths of the robot with a regular PI controller when the same wheel angular velocity profiles are applied. The results show that the torque controller provides a better robot path than the normal PI controller.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
78,552
2406.19825
Reinforcement Learning for Efficient Design and Control Co-optimisation of Energy Systems
The ongoing energy transition drives the development of decentralised renewable energy sources, which are heterogeneous and weather-dependent, complicating their integration into energy systems. This study tackles this issue by introducing a novel reinforcement learning (RL) framework tailored for the co-optimisation of design and control in energy systems. Traditionally, the integration of renewable sources in the energy sector has relied on complex mathematical modelling and sequential processes. By leveraging RL's model-free capabilities, the framework eliminates the need for explicit system modelling. By optimising both control and design policies jointly, the framework enhances the integration of renewable sources and improves system efficiency. This contribution paves the way for advanced RL applications in energy management, leading to more efficient and effective use of renewable energy sources.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,581
2303.08812
Trigger-Level Event Reconstruction for Neutrino Telescopes Using Sparse Submanifold Convolutional Neural Networks
Convolutional neural networks (CNNs) have seen extensive applications in scientific data analysis, including in neutrino telescopes. However, the data from these experiments present numerous challenges to CNNs, such as non-regular geometry, sparsity, and high dimensionality. Consequently, CNNs are highly inefficient on neutrino telescope data, and require significant pre-processing that results in information loss. We propose sparse submanifold convolutions (SSCNNs) as a solution to these issues and show that the SSCNN event reconstruction performance is comparable to or better than traditional and machine learning algorithms. Additionally, our SSCNN runs approximately 16 times faster than a traditional CNN on a GPU. As a result of this speedup, it is expected to be capable of handling the trigger-level event rate of IceCube-scale neutrino telescopes. These networks could be used to improve the first estimation of the neutrino energy and direction to seed more advanced reconstructions, or to provide this information to an alert-sending system to quickly follow-up interesting events.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
351,784
2406.08206
Sources of Gain: Decomposing Performance in Conditional Average Dose Response Estimation
Estimating conditional average dose responses (CADR) is an important but challenging problem. Estimators must correctly model the potentially complex relationships between covariates, interventions, doses, and outcomes. In recent years, the machine learning community has shown great interest in developing tailored CADR estimators that target specific challenges. Their performance is typically evaluated against other methods on (semi-) synthetic benchmark datasets. Our paper analyses this practice and shows that using popular benchmark datasets without further analysis is insufficient to judge model performance. Established benchmarks entail multiple challenges, whose impacts must be disentangled. Therefore, we propose a novel decomposition scheme that allows the evaluation of the impact of five distinct components contributing to CADR estimator performance. We apply this scheme to eight popular CADR estimators on four widely-used benchmark datasets, running nearly 1,500 individual experiments. Our results reveal that most established benchmarks are challenging for reasons different from their creators' claims. Notably, confounding, the key challenge tackled by most estimators, is not an issue in any of the considered datasets. We discuss the major implications of our findings and present directions for future research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
463,388
1902.01975
Age of Information in Multiple Sensing
Having timely and fresh knowledge about the current state of information sources is critical in a variety of applications. In particular, a status update may arrive at the destination much later than its generation time due to processing and communication delays. The freshness of the status update at the destination is captured by the notion of age of information. In this study, we first analyze a network with a single source, $n$ servers, and the monitor (destination). The servers independently sense the source of information and send the status update to the monitor. We then extend our result to multiple independent sources of information in the presence of $n$ servers. We assume that updates arrive at the servers according to Poisson random processes. Each server sends its update to the monitor through a direct link, which is modeled as a queue. The service time to transmit an update is considered to be an exponential random variable. We examine both homogeneous and heterogeneous service and arrival rates for the single-source case, and only homogeneous arrival and service rates for the multiple sources case. We derive a closed-form expression for the average age of information under a last-come-first-serve (LCFS) queue for a single source and arbitrary $n$ homogeneous servers. For $n=2,3$, we derive the explicit average age of information for arbitrary sources and homogeneous servers, and for a single source and heterogeneous servers. For $n=2$ we find the optimal arrival rates given a fixed sum arrival rate and service rates.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
120,779
2303.09678
Neural Lyapunov Control for Nonlinear Systems with Unstructured Uncertainties
Stabilizing controller design and region of attraction (RoA) estimation are essential in nonlinear control. Moreover, it is challenging to implement a control Lyapunov function (CLF) in practice when only partial knowledge of the system is available. We propose a learning framework that can synthesize state-feedback controllers and a CLF for control-affine nonlinear systems with unstructured uncertainties. Based on a regularity condition on these uncertainties, we model them as bounded disturbances and prove that a CLF for the nominal system (estimate of the true system) is an input-to-state stable control Lyapunov function (ISS-CLF) for the true system when the CLF's gradient is bounded. We integrate the robust Lyapunov analysis with the learning of both the control law and CLF. We demonstrate the effectiveness of our learning framework on several examples, such as an inverted pendulum system, a strict-feedback system, and a cart-pole system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
352,140
2412.10457
Explaining Model Overfitting in CNNs via GMM Clustering
Convolutional Neural Networks (CNNs) have demonstrated remarkable prowess in the field of computer vision. However, their opaque decision-making processes pose significant challenges for practical applications. In this study, we provide quantitative metrics for assessing CNN filters by clustering the feature maps corresponding to individual filters in the model via Gaussian Mixture Model (GMM). By analyzing the clustering results, we screen out some anomaly filters associated with outlier samples. We further analyze the relationship between the anomaly filters and model overfitting, proposing three hypotheses. This method is universally applicable across diverse CNN architectures without modifications, as evidenced by its successful application to models like AlexNet and LeNet-5. We present three meticulously designed experiments demonstrating our hypotheses from the perspectives of model behavior, dataset characteristics, and filter impacts. Through this work, we offer a novel perspective for evaluating the CNN performance and gain new insights into the operational behavior of model overfitting.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
516,945
1707.09441
A compressive channel estimation technique robust to synchronization impairments
Initial access at millimeter wave frequencies is a challenging problem due to hardware non-idealities and low SNR measurements prior to beamforming. Prior work has exploited the observation that mmWave MIMO channels are sparse in the spatial angle domain and has used compressed sensing based algorithms for channel estimation. Most of them, however, ignore hardware impairments like carrier frequency offset and phase noise, and fail to perform well when such impairments are considered. In this paper, we develop a compressive channel estimation algorithm for narrowband mmWave systems, which is robust to such non idealities. We address this problem by constructing a tensor that models both the mmWave channel and CFO, and estimate the tensor while still exploiting the sparsity of the mmWave channel. Simulation results show that under the same settings, our method performs better than comparable algorithms that are robust to phase errors.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
78,004
2411.15109
Effective Littlestone Dimension
Delle Rose et al.~(COLT'23) introduced an effective version of the Vapnik-Chervonenkis dimension, and showed that it characterizes improper PAC learning with total computable learners. In this paper, we introduce and study a similar effectivization of the notion of Littlestone dimension. Finite effective Littlestone dimension is a necessary condition for computable online learning but is not a sufficient one -- which we already establish for classes of the effective Littlestone dimension 2. However, the effective Littlestone dimension equals the optimal mistake bound for computable learners in two special cases: a) for classes of Littlestone dimension 1 and b) when the learner receives as additional information an upper bound on the numbers to be guessed. Interestingly, finite effective Littlestone dimension also guarantees that the class consists only of computable functions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
510,441
2111.15181
Zero-Shot Semantic Segmentation via Spatial and Multi-Scale Aware Visual Class Embedding
Fully supervised semantic segmentation technologies bring a paradigm shift in scene understanding. However, the burden of expensive labeling cost remains as a challenge. To solve the cost problem, recent studies proposed language model based zero-shot semantic segmentation (L-ZSSS) approaches. In this paper, we show that L-ZSSS has a limitation in generalization, which is a virtue of zero-shot learning. To tackle this limitation, we propose a language-model-free zero-shot semantic segmentation framework, Spatial and Multi-scale aware Visual Class Embedding Network (SM-VCENet). Furthermore, by leveraging vision-oriented class embedding, SM-VCENet enriches the visual information of the class embedding through multi-scale attention and spatial attention. We also propose a novel benchmark (PASCAL2COCO) for zero-shot semantic segmentation, which provides generalization evaluation by domain adaptation and contains visually challenging samples. In experiments, our SM-VCENet outperforms the zero-shot semantic segmentation state-of-the-art by a relative margin in the PASCAL-5i benchmark and shows generalization-robustness in the PASCAL2COCO benchmark.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
268,848
2205.03454
Structure Learning in Graphical Models from Indirect Observations
This paper considers learning of the graphical structure of a $p$-dimensional random vector $X \in R^p$ using both parametric and non-parametric methods. Unlike the previous works which observe $x$ directly, we consider the indirect observation scenario in which samples $y$ are collected via a sensing matrix $A \in R^{d\times p}$, and corrupted with some additive noise $w$, i.e, $Y = AX + W$. For the parametric method, we assume $X$ to be Gaussian, i.e., $x\in R^p\sim N(\mu, \Sigma)$ and $\Sigma \in R^{p\times p}$. For the first time, we show that the graphical structure can be correctly recovered under the indefinite sensing system ($d < p$) using insufficient samples ($n < p$). In particular, we show that for the exact recovery, we require dimension $d = \Omega(p^{0.8})$ and sample number $n = \Omega(p^{0.8}\log^3 p)$. For the nonparametric method, we assume a nonparanormal distribution for $X$ rather than Gaussian. Under mild conditions, we show that our graph-structure estimator can obtain the correct structure. We derive the minimum sample number $n$ and dimension $d$ as $n\gtrsim (deg)^4 \log^4 n$ and $d \gtrsim p + (deg\cdot\log(d-p))^{\beta/4}$, respectively, where deg is the maximum Markov blanket in the graphical model and $\beta > 0$ is some fixed positive constant. Additionally, we obtain a non-asymptotic uniform bound on the estimation error of the CDF of $X$ from indirect observations with inexact knowledge of the noise distribution. To the best of our knowledge, this bound is derived for the first time and may serve as an independent interest. Numerical experiments on both real-world and synthetic data are provided to confirm the theoretical results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
295,283
2204.13536
Reconciling the Quality vs Popularity Dichotomy in Online Cultural Markets
We propose a simple model of an idealized online cultural market in which $N$ items, endowed with a hidden quality metric, are recommended to users by a ranking algorithm possibly biased by the current items' popularity. Our goal is to better understand the underlying mechanisms of the well-known fact that popularity bias can prevent higher-quality items from becoming more popular than lower-quality items, producing an undesirable misalignment between quality and popularity rankings. We do so under the assumption that users, having limited time/attention, are able to discriminate the best-quality only within a random subset of the items. We discover the existence of a harmful regime in which improper use of popularity can seriously compromise the emergence of quality, and a benign regime in which wise use of popularity, coupled with a small discrimination effort on behalf of users, guarantees the perfect alignment of quality and popularity ranking. Our findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
false
293,845
2309.07248
Geometric Gait Optimization for Inertia-Dominated Systems With Nonzero Net Momentum
Inertia-dominated mechanical systems can achieve net displacement by 1) periodically changing their shape (known as kinematic gait) and 2) adjusting their inertia distribution to utilize the existing nonzero net momentum (known as momentum gait). Therefore, finding the gait that most effectively utilizes the two types of locomotion in terms of the magnitude of the net momentum is a significant topic in the study of locomotion. For kinematic locomotion with zero net momentum, the geometry of optimal gaits is expressed as the equilibria of system constraint curvature flux through the surface bounded by the gait, and the cost associated with executing the gait in the metric space. In this paper, we identify the geometry of optimal gaits with nonzero net momentum effects by lifting the gait description to a time-parameterized curve in shape-time space. We also propose the variational gait optimization algorithm corresponding to the lifted geometric structure, and identify two distinct patterns in the optimal motion, determined by whether or not the kinematic and momentum gaits are concentric. The examples of systems with and without fluid-added mass demonstrate that the proposed algorithm can efficiently solve forward and turning locomotion gaits in the presence of nonzero net momentum. At any given momentum and effort limit, the proposed optimal gait that takes into account both momentum and kinematic effects outperforms the reference gaits that each only considers one of these effects.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
391,711
1507.03049
Forecasting the cost of processing multi-join queries via hashing for main-memory databases (Extended version)
Database management systems (DBMSs) carefully optimize complex multi-join queries to avoid expensive disk I/O. As servers today feature tens or hundreds of gigabytes of RAM, a significant fraction of many analytic databases becomes memory-resident. Even after careful tuning for an in-memory environment, a linear disk I/O model such as the one implemented in PostgreSQL may make query response time predictions that are up to 2X slower than the optimal multi-join query plan over memory-resident data. This paper introduces a memory I/O cost model to identify good evaluation strategies for complex query plans with multiple hash-based equi-joins over memory-resident data. The proposed cost model is carefully validated for accuracy using three different systems, including an Amazon EC2 instance, to control for hardware-specific differences. Prior work in parallel query evaluation has advocated right-deep and bushy trees for multi-join queries due to their greater parallelization and pipelining potential. A surprising finding is that the conventional wisdom from shared-nothing disk-based systems does not directly apply to the modern shared-everything memory hierarchy. As corroborated by our model, the performance gap between the optimal left-deep and right-deep query plan can grow to about 10X as the number of joins in the query increases.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
45,044
1906.04940
CogCompTime: A Tool for Understanding Time in Natural Language Text
Automatic extraction of temporal information in text is an important component of natural language understanding. It involves two basic tasks: (1) Understanding time expressions that are mentioned explicitly in text (e.g., February 27, 1998 or tomorrow), and (2) Understanding temporal information that is conveyed implicitly via relations. In this paper, we introduce CogCompTime, a system that has these two important functionalities. It incorporates the most recent progress, achieves state-of-the-art performance, and is publicly available. We believe that this demo will be useful for multiple time-aware applications and provide valuable insight for future research in temporal understanding.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
134,885
2402.10086
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing challenge of safety assurance of AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
true
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
429,809
2409.19315
Analog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models
Transformer networks, driven by self-attention, are central to Large Language Models. In generative Transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks. We present a custom self-attention in-memory computing architecture based on emerging charge-based memories called gain cells, which can be efficiently written to store new tokens during sequence generation and enable parallel analog dot-product computation required for self-attention. However, the analog gain cell circuits introduce non-idealities and constraints preventing the direct mapping of pre-trained models. To circumvent this problem, we design an initialization algorithm achieving text processing performance comparable to GPT-2 without training from scratch. Our architecture respectively reduces attention latency and energy consumption by up to two and five orders of magnitude compared to GPUs, marking a significant step toward ultra-fast, low-power generative Transformers.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
492,626
1006.5686
Geometric Approximations of Some Aloha-like Stability Regions
Most bounds on the stability region of Aloha give necessary and sufficient conditions for the stability of an arrival rate vector under a specific contention probability (control) vector. But such results do not yield easy-to-check bounds on the overall Aloha stability region because they potentially require checking membership in an uncountably infinite number of sets parameterized by each possible control vector. In this paper we consider an important specific inner bound on the Aloha stability region for which membership is likewise difficult to check. We provide ellipsoids (for which membership is easy to check) that we conjecture are inner and outer bounds on this set. We also study the set of controls that stabilize a fixed arrival rate vector; this set is shown to be a convex set.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,918
2311.00797
Tipping Points of Evolving Epidemiological Networks: Machine Learning-Assisted, Data-Driven Effective Modeling
We study the tipping point collective dynamics of an adaptive susceptible-infected-susceptible (SIS) epidemiological network in a data-driven, machine learning-assisted manner. We identify a parameter-dependent effective stochastic differential equation (eSDE) in terms of physically meaningful coarse mean-field variables through a deep-learning ResNet architecture inspired by numerical stochastic integrators. We construct an approximate effective bifurcation diagram based on the identified drift term of the eSDE and contrast it with the mean-field SIS model bifurcation diagram. We observe a subcritical Hopf bifurcation in the evolving network's effective SIS dynamics, that causes the tipping point behavior; this takes the form of large amplitude collective oscillations that spontaneously -- yet rarely -- arise from the neighborhood of a (noisy) stationary state. We study the statistics of these rare events both through repeated brute force simulations and by using established mathematical/computational tools exploiting the right-hand-side of the identified SDE. We demonstrate that such a collective SDE can also be identified (and the rare events computations also performed) in terms of data-driven coarse observables, obtained here via manifold learning techniques, in particular Diffusion Maps. The workflow of our study is straightforwardly applicable to other complex dynamics problems exhibiting tipping point dynamics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
404,776
cs/0212038
The Identification of Context-Sensitive Features: A Formal Definition of Context for Concept Learning
A large body of research in machine learning is concerned with supervised learning from examples. The examples are typically represented as vectors in a multi-dimensional feature space (also known as attribute-value descriptions). A teacher partitions a set of training examples into a finite number of classes. The task of the learning algorithm is to induce a concept from the training examples. In this paper, we formally distinguish three types of features: primary, contextual, and irrelevant features. We also formally define what it means for one feature to be context-sensitive to another feature. Context-sensitive features complicate the task of the learner and potentially impair the learner's performance. Our formal definitions make it possible for a learner to automatically identify context-sensitive features. After context-sensitive features have been identified, there are several strategies that the learner can employ for managing the features; however, a discussion of these strategies is outside of the scope of this paper. The formal definitions presented here correct a flaw in previously proposed definitions. We discuss the relationship between our work and a formal definition of relevance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
537,767
2407.17277
From Data to Predictive Control: A Framework for Stochastic Linear Systems with Output Measurements
We introduce data to predictive control, D2PC, a framework to facilitate the design of robust and predictive controllers from data. The proposed framework is designed for discrete-time stochastic linear systems with output measurements and provides a principled design of a predictive controller based on data. The framework starts with a parameter identification method based on the Expectation-Maximization algorithm, which incorporates pre-defined structural constraints. Additionally, we provide an asymptotically correct method to quantify uncertainty in parameter estimates. Next, we develop a strategy to synthesize robust dynamic output-feedback controllers tailored to the derived uncertainty characterization. Finally, we introduce a predictive control scheme that guarantees recursive feasibility and satisfaction of chance constraints. This framework marks a significant advancement in integrating data into robust and predictive control schemes. We demonstrate the efficacy of D2PC through a numerical example involving a $10$-dimensional spring-mass-damper system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
475,908
2005.12592
GECToR -- Grammatical Error Correction: Tag, Not Rewrite
In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful corpora, and second on a combination of errorful and error-free parallel corpora. We design custom token-level transformations to map input tokens to target corrections. Our best single-model/ensemble GEC tagger achieves an $F_{0.5}$ of 65.3/66.5 on CoNLL-2014 (test) and $F_{0.5}$ of 72.4/73.6 on BEA-2019 (test). Its inference speed is up to 10 times as fast as a Transformer-based seq2seq GEC system. The code and trained models are publicly available.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
178,782
2410.11199
Isambard-AI: a leadership class supercomputer optimised specifically for Artificial Intelligence
Isambard-AI is a new, leadership-class supercomputer, designed to support AI-related research. Based on the HPE Cray EX4000 system, and housed in a new, energy efficient Modular Data Centre in Bristol, UK, Isambard-AI employs 5,448 NVIDIA Grace-Hopper GPUs to deliver over 21 ExaFLOP/s of 8-bit floating point performance for LLM training, and over 250 PetaFLOP/s of 64-bit performance, for under 5MW. Isambard-AI integrates two, all-flash storage systems: a 20 PiByte Cray ClusterStor and a 3.5 PiByte VAST solution. Combined these give Isambard-AI flexibility for training, inference and secure data accesses and sharing. But it is the software stack where Isambard-AI will be most different from traditional HPC systems. Isambard-AI is designed to support users who may have been using GPUs in the cloud, and so access will more typically be via Jupyter notebooks, MLOps, or other web-based, interactive interfaces, rather than the approach used on traditional supercomputers of sshing into a system before submitting jobs to a batch scheduler. Its stack is designed to be quickly and regularly upgraded to keep pace with the rapid evolution of AI software, with full support for containers. Phase 1 of Isambard-AI is due online in May/June 2024, with the full system expected in production by the end of the year.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
498,432
2402.04211
Variational Shapley Network: A Probabilistic Approach to Self-Explaining Shapley values with Uncertainty Quantification
Shapley values have emerged as a foundational tool in machine learning (ML) for elucidating model decision-making processes. Despite their widespread adoption and unique ability to satisfy essential explainability axioms, computational challenges persist in their estimation when ($i$) evaluating a model over all possible subset of input feature combinations, ($ii$) estimating model marginals, and ($iii$) addressing variability in explanations. We introduce a novel, self-explaining method that simplifies the computation of Shapley values significantly, requiring only a single forward pass. Recognizing the deterministic treatment of Shapley values as a limitation, we explore incorporating a probabilistic framework to capture the inherent uncertainty in explanations. Unlike alternatives, our technique does not rely directly on the observed data space to estimate marginals; instead, it uses adaptable baseline values derived from a latent, feature-specific embedding space, generated by a novel masked neural network architecture. Evaluations on simulated and real datasets underscore our technique's robust predictive and explanatory performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
427,363
2502.06149
Reward-Based Collision-Free Algorithm for Trajectory Planning of Autonomous Robots
This paper introduces a new mission planning algorithm for autonomous robots that enables the reward-based selection of an optimal waypoint sequence from a predefined set. The algorithm computes a feasible trajectory and corresponding control inputs for a robot to navigate between waypoints while avoiding obstacles, maximizing the total reward, and adhering to constraints on state, input and its derivatives, mission time window, and maximum distance. This also solves a generalized prize-collecting traveling salesman problem. The proposed algorithm employs a new genetic algorithm that evolves solution candidates toward the optimal solution based on a fitness function and crossover. During fitness evaluation, a penalty method enforces constraints, and the differential flatness property with clothoid curves efficiently penalizes infeasible trajectories. The Euler spiral method showed promising results for trajectory parameterization compared to minimum snap and jerk polynomials. Due to the discrete exploration space, crossover is performed using a dynamic time-warping-based method and extended convex combination with projection. A mutation step enhances exploration. Results demonstrate the algorithm's ability to find the optimal waypoint sequence, fulfill constraints, avoid infeasible waypoints, and prioritize high-reward ones. Simulations and experiments with a ground vehicle, quadrotor, and quadruped are presented, complemented by benchmarking and a time-complexity analysis.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
531,951
2007.00924
IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template Reconstruction Strategy for ComVE
This paper introduces our systems for the first two subtasks of SemEval-2020 Task 4: Commonsense Validation and Explanation. To clarify the intention for judgment and inject contrastive information for selection, we propose an input reconstruction strategy with prompt templates. Specifically, we formalize the subtasks into the multiple-choice question answering format and construct the input with the prompt templates; the final prediction of question answering is then considered as the result of the subtasks. Experimental results show that our approaches significantly outperform the baseline systems. Our approaches secure the third rank on both official test sets of the first two subtasks, with accuracies of 96.4% and 94.3%, respectively.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
185,268
2302.06239
Finite element hybridization of port-Hamiltonian systems
In this contribution, we extend the hybridization framework for the Hodge Laplacian [Awanou et al., Hybridization and postprocessing in finite element exterior calculus, 2023] to port-Hamiltonian systems describing linear wave propagation phenomena. To this aim, a dual field mixed Galerkin discretization is introduced, in which one variable is approximated via conforming finite element spaces, whereas the second is completely local. This scheme is equivalent to the second order mixed Galerkin formulation and retains a discrete power balance and discrete conservation laws. The mixed formulation is also equivalent to the hybrid formulation. The hybrid system can be efficiently solved using a static condensation procedure in discrete time. The size reduction achieved thanks to the hybridization is greater than the one obtained for the Hodge Laplacian as one field is completely discarded. Numerical experiments on the 3D wave and Maxwell equations show the convergence of the method and the size reduction achieved by the hybridization.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
345,337
0807.1211
Flux: FunctionaL Updates for XML (extended report)
XML database query languages have been studied extensively, but XML database updates have received relatively little attention, and pose many challenges to language design. We are developing an XML update language called Flux, which stands for FunctionaL Updates for XML, drawing upon ideas from functional programming languages. In prior work, we have introduced a core language for Flux with a clear operational semantics and a sound, decidable static type system based on regular expression types. Our initial proposal had several limitations. First, it lacked support for recursive types or update procedures. Second, although a high-level source language can easily be translated to the core language, it is difficult to propagate meaningful type errors from the core language back to the source. Third, certain updates are well-formed yet contain path errors, or ``dead'' subexpressions which never do any useful work. It would be useful to detect path errors, since they often represent errors or optimization opportunities. In this paper, we address all three limitations. Specifically, we present an improved, sound type system that handles recursion. We also formalize a source update language and give a translation to the core language that preserves and reflects typability. We also develop a path-error analysis (a form of dead-code analysis) for updates.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
2,040
1912.11856
A Comparative Study on Machine Learning Algorithms for the Control of a Wall Following Robot
A comparison of the performance of various machine learning models to predict the direction of a wall following robot is presented in this paper. The models were trained using an open-source dataset that contains 24 ultrasound sensor readings and the corresponding direction for each sample. This dataset was captured using a SCITOS G5 mobile robot by placing the sensors on the robot's waist. In addition to the full format with 24 sensors per record, the dataset has two simplified formats with 4 and 2 input sensor readings per record. Several control models were proposed previously for this dataset using all three dataset formats. In this paper, two primary research contributions are presented. First, presenting machine learning models with accuracies higher than all previously proposed models for this dataset using all three formats. A perfect solution for the 4- and 2-input sensor formats is presented using a Decision Tree Classifier, achieving a mean accuracy of 100%. On the other hand, a mean accuracy of 99.82% was achieved using the 24 sensor inputs by employing the Gradient Boost Classifier. Second, presenting a comparative study on the performance of different machine learning and deep learning algorithms on this dataset, therefore providing an overall insight into the performance of these algorithms for similar sensor fusion problems. All the models in this paper were evaluated using Monte-Carlo cross-validation.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
158,685
2201.00199
The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling
There is an increasing interest in the application of deep learning architectures to tabular data. One of the state-of-the-art solutions is TabTransformer, which incorporates an attention mechanism to better track relationships between categorical features and then makes use of a standard MLP to output its final logits. In this paper we propose multiple modifications to the original TabTransformer that perform better on binary classification tasks for three separate datasets, with more than 1% AUROC gains. Inspired by gated MLP, linear projections are implemented in the MLP block and multiple activation functions are tested. We also evaluate the importance of specific hyperparameters during training.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
273,893
2301.11201
Relative-Interior Solution for the (Incomplete) Linear Assignment Problem with Applications to the Quadratic Assignment Problem
We study the set of optimal solutions of the dual linear programming formulation of the linear assignment problem (LAP) to propose a method for computing a solution from the relative interior of this set. Assuming that an arbitrary dual-optimal solution and an optimal assignment are available (for which many efficient algorithms already exist), our method computes a relative-interior solution in linear time. Since the LAP occurs as a subproblem in the linear programming (LP) relaxation of the quadratic assignment problem (QAP), we employ our method as a new component in the family of dual-ascent algorithms that provide bounds on the optimal value of the QAP. To make our results applicable to the incomplete QAP, which is of interest in practical use-cases, we also provide a linear-time reduction from the incomplete LAP to the complete LAP along with a mapping that preserves optimality and membership in the relative interior. Our experiments on publicly available benchmarks indicate that our approach with relative-interior solution can frequently provide bounds near the optimum of the LP relaxation and its runtime is much lower when compared to a commercial LP solver.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
342,053
1606.02213
Energy-Aware Relay Selection and Power Allocation for Multiple-User Cooperative Networks
This paper investigates the relay assignment and power allocation problem for two different network power management policies: group lifetime maximization (GLM) and minimum weighted total power (MWTP), with the aim of lifetime maximization in a symbol-error-rate (SER)-constrained multiple-user cooperative network. With the optimal power allocation solution obtained for each policy, we show that the optimal relay assignment can be obtained using the bottleneck matching (BM) algorithm and the minimum weighted matching (MWM) algorithm for the GLM and MWTP policies, respectively. Since relay assignment with the BM algorithm is not power efficient, we propose a novel minimum bottleneck matching (MBM) algorithm to solve the relay assignment problem optimally for the GLM policy. To further reduce the complexity of the relay assignment, we propose a suboptimal relay selection (SRS) algorithm which has linear complexity in the number of source and relay nodes. Simulation results demonstrate that the proposed relay selection and power allocation strategies based on the GLM policy have better network lifetime performance than the strategies based on the MWTP policy. Compared to the MBM and SRS algorithms, relay assignment based on the BM algorithm for the GLM policy has inferior network lifetime performance at low update intervals. We show that relay assignment based on the MBM algorithm achieves maximum network lifetime performance and that relay assignment with the SRS algorithm has performance very close to it.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
56,920
2006.07281
Algorithms and Learning for Fair Portfolio Design
We consider a variation on the classical finance problem of optimal portfolio design. In our setting, a large population of consumers is drawn from some distribution over risk tolerances, and each consumer must be assigned to a portfolio of lower risk than her tolerance. The consumers may also belong to underlying groups (for instance, of demographic properties or wealth), and the goal is to design a small number of portfolios that are fair across groups in a particular and natural technical sense. Our main results are algorithms for optimal and near-optimal portfolio design for both social welfare and fairness objectives, both with and without assumptions on the underlying group structure. We describe an efficient algorithm based on an internal two-player zero-sum game that learns near-optimal fair portfolios ex ante and show experimentally that it can be used to obtain a small set of fair portfolios ex post as well. For the special but natural case in which group structure coincides with risk tolerances (which models the reality that wealthy consumers generally tolerate greater risk), we give an efficient and optimal fair algorithm. We also provide generalization guarantees for the underlying risk distribution that has no dependence on the number of portfolios and illustrate the theory with simulation results.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
181,746
1808.03143
A Hybrid Dynamic-regenerative Damping Scheme for Energy Regeneration in Variable Impedance Actuators
Increasing research efforts have been made to improve the energy efficiency of variable impedance actuators (VIAs) through reduction of energy consumption. However, the harvesting of dissipated energy in such systems remains underexplored. This study proposes a novel variable damping module design enabling energy regeneration in VIAs by exploiting the regenerative braking effect of DC motors. The proposed damping module uses four switches to combine regenerative and dynamic braking, in a hybrid approach that enables energy regeneration without reduction in the range of damping achievable. Numerical simulations and a physical experiment are presented in which the proposed module shows an optimal trade-off between task performance and energy efficiency.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
104,886
2312.15868
Video Frame Interpolation with Region-Distinguishable Priors from SAM
In existing Video Frame Interpolation (VFI) approaches, the motion estimation between neighboring frames plays a crucial role. However, the estimation accuracy in existing methods remains a challenge, primarily due to the inherent ambiguity in identifying corresponding areas in adjacent frames for interpolation. Therefore, enhancing accuracy by distinguishing different regions before motion estimation is of utmost importance. In this paper, we introduce a novel solution involving the utilization of open-world segmentation models, e.g., SAM (Segment Anything Model), to derive Region-Distinguishable Priors (RDPs) in different frames. These RDPs are represented as spatially varying Gaussian mixtures, distinguishing an arbitrary number of areas with a unified modality. RDPs can be integrated into existing motion-based VFI methods to enhance features for motion estimation, facilitated by our designed plug-and-play Hierarchical Region-aware Feature Fusion Module (HRFFM). HRFFM incorporates RDP into various hierarchical stages of VFI's encoder, using RDP-guided Feature Normalization (RDPFN) in a residual learning manner. With HRFFM and RDP, the features within VFI's encoder exhibit similar representations for matched regions in neighboring frames, thus improving the synthesis of intermediate frames. Extensive experiments demonstrate that HRFFM consistently enhances VFI performance across various scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
418,168
2210.03506
What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps
In healthcare, AI systems support clinicians and patients in diagnosis, treatment, and monitoring, but many systems' poor explainability remains challenging for practical application. Overcoming this barrier is the goal of explainable AI (XAI). However, an explanation can be perceived differently and, thus, not solve the black-box problem for everyone. The domain of Human-Centered AI deals with this problem by adapting AI to users. We present a user-centered persona concept to evaluate XAI and use it to investigate end-users' preferences for various explanation styles and contents in a mobile health stress monitoring application. The results of our online survey show that users' demographics and personality, as well as the type of explanation, impact explanation preferences, indicating that these are essential features for XAI design. We subsumed the results in three prototypical user personas: power-, casual-, and privacy-oriented users. Our insights bring an interactive, human-centered XAI closer to practical application.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
322,063
1510.08382
Flexibly Mining Better Subgroups
In subgroup discovery, also known as supervised pattern mining, discovering high quality one-dimensional subgroups and refinements of these is a crucial task. For nominal attributes, this is relatively straightforward, as we can consider individual attribute values as binary features. For numerical attributes, the task is more challenging as individual numeric values are not reliable statistics. Instead, we can consider combinations of adjacent values, i.e. bins. Existing binning strategies, however, are not tailored for subgroup discovery. That is, they do not directly optimize for the quality of subgroups, therewith potentially degrading the mining result. To address this issue, we propose FLEXI. In short, with FLEXI we propose to use optimal binning to find high quality binary features for both numeric and ordinal attributes. We instantiate FLEXI with various quality measures and show how to achieve efficiency accordingly. Experiments on both synthetic and real-world data sets show that FLEXI outperforms the state of the art with up to a 25-fold improvement in subgroup quality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
48,281
2011.05931
Transmission of a Bit over a Discrete Poisson Channel with Memory
A coding scheme for transmission of a bit maps a given bit to a sequence of channel inputs (called the codeword associated to the transmitted bit). In this paper, we study the problem of designing the best code for a discrete Poisson channel with memory (under peak-power and total-power constraints). The outputs of a discrete Poisson channel with memory are Poisson distributed random variables with a mean comprising of a fixed additive noise and a linear combination of past input symbols. Assuming a maximum-likelihood (ML) decoder, we search for a codebook that has the smallest possible error probability. This problem is challenging because error probability of a code does not have a closed-form analytical expression. For the case of having only a total-power constraint, the optimal code structure is obtained, provided that the blocklength is greater than the memory length of the channel. For the case of having only a peak-power constraint, the optimal code is derived for arbitrary memory and blocklength in the high-power regime. For the case of having both the peak-power and total-power constraints, the optimal code is derived for memoryless Poisson channels when both the total-power and the peak-power bounds are large.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
206,086
2102.03302
Self-Supervised Deep Graph Embedding with High-Order Information Fusion for Community Discovery
Deep graph embedding is an important approach for community discovery. A deep graph neural network with a self-supervised mechanism can obtain low-dimensional embedding vectors of nodes from unlabeled and unstructured graph data. The high-order information of a graph can provide more abundant structure information for the representation learning of nodes. However, most self-supervised graph neural networks only use the adjacency matrix as the input topology information of the graph and cannot obtain very high-order information since the number of layers of a graph neural network is fairly limited. If there are too many layers, the phenomenon of over-smoothing will appear. Therefore, how to obtain and fuse high-order information of a graph with a shallow graph neural network is an important problem. In this paper, a deep graph embedding algorithm with a self-supervised mechanism for community discovery is proposed. The proposed algorithm uses the self-supervised mechanism and different high-order information of the graph to train multiple deep graph convolution neural networks. The outputs of the multiple graph convolution neural networks are fused to extract representations of nodes which include the attribute and structure information of a graph. In addition, data augmentation and negative sampling are introduced into the training process to facilitate the improvement of the embedding result. The proposed algorithm and the comparison algorithms are evaluated on five experimental data sets. The experimental results show that the proposed algorithm outperforms the comparison algorithms on most of the experimental data sets, demonstrating that the proposed algorithm is an effective algorithm for community discovery.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
218,699
2008.10236
Strawberry Detection using Mixed Training on Simulated and Real Data
This paper demonstrates how simulated images can be useful for object detection tasks in the agricultural sector, where labeled data can be scarce and costly to collect. We consider training on mixed datasets with real and simulated data for strawberry detection in real images. Our results show that using the real dataset augmented by the simulated dataset resulted in slightly higher accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
192,943
2003.05375
On the Effect of Correlation on the Capacity of Backscatter Communication Systems
We analyse the effect of correlation between the forward and backward links on the capacity of backscatter communication systems. To that aim, we obtain an analytical expression for the average capacity under a correlated Rayleigh product fading channel, as well as closed-form asymptotic expressions for the high and low signal-to-noise ratio (SNR) regimes. Our results show that correlation is indeed detrimental for a fixed target SNR; contrarily to the common belief, we also see that correlation can be actually beneficial in some instances when a fixed power budget is considered.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
167,837
2011.01374
Synthetic Data Generation for Economists
As more tech companies engage in rigorous economic analyses, we are confronted with a data problem: in-house papers cannot be replicated due to use of sensitive, proprietary, or private data. Readers are left to assume that the obscured true data (e.g., internal Google information) indeed produced the results given, or they must seek out comparable public-facing data (e.g., Google Trends) that yield similar results. One way to ameliorate this reproducibility issue is to have researchers release synthetic datasets based on their true data; this allows external parties to replicate an internal researcher's methodology. In this brief overview, we explore synthetic data generation at a high level for economic analyses.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
204,557
1907.04913
Prediction of Compression Index of Fine-Grained Soils Using a Gene Expression Programming Model
In construction projects, estimation of the settlement of fine-grained soils is of critical importance, and yet is a challenging task. The coefficient of consolidation for the compression index (Cc) is a key parameter in modeling the settlement of fine-grained soil layers. However, the estimation of this parameter is costly, time-consuming, and requires skilled technicians. To overcome these drawbacks, we aimed to predict Cc through other soil parameters, i.e., the liquid limit (LL), plastic limit (PL), and initial void ratio (e0). Using these parameters is more convenient and requires substantially less time and cost compared to the conventional tests to estimate Cc. This study presents a novel prediction model for the Cc of fine-grained soils using gene expression programming (GEP). A database consisting of 108 different data points was used to develop the model. A closed-form equation solution was derived to estimate Cc based on LL, PL, and e0. The performance of the developed GEP-based model was evaluated through the coefficient of determination (R2), the root mean squared error (RMSE), and the mean average error (MAE). The proposed model performed better in terms of R2, RMSE, and MAE compared to the other models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
138,230
2307.11166
Exploring reinforcement learning techniques for discrete and continuous control tasks in the MuJoCo environment
We leverage the fast physics simulator MuJoCo to run tasks in a continuous control environment and reveal details like the observation space, action space, rewards, etc. for each task. We benchmark value-based methods for continuous control by comparing Q-learning and SARSA through a discretization approach, and using them as baselines, progressively moving into one of the state-of-the-art deep policy gradient methods, DDPG. Over a large number of episodes, Q-learning outscored SARSA, but DDPG outperformed both in a small number of episodes. Lastly, we also fine-tuned the model hyper-parameters expecting to squeeze more performance while using less time and resources. We anticipated that the new design for DDPG would vastly improve performance, yet after only a few episodes, we were able to achieve decent average rewards. We expect to improve the performance provided adequate time and computational resources.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
380,818
1108.5212
Deinterleaving Finite Memory Processes via Penalized Maximum Likelihood
We study the problem of deinterleaving a set of finite-memory (Markov) processes over disjoint finite alphabets, which have been randomly interleaved by a finite-memory switch. The deinterleaver has access to a sample of the resulting interleaved process, but no knowledge of the number or structure of the component Markov processes, or of the switch. We study conditions for uniqueness of the interleaved representation of a process, showing that certain switch configurations, as well as memoryless component processes, can cause ambiguities in the representation. We show that a deinterleaving scheme based on minimizing a penalized maximum-likelihood cost function is strongly consistent, in the sense of reconstructing, almost surely as the observed sequence length tends to infinity, a set of component and switch Markov processes compatible with the original interleaved process. Furthermore, under certain conditions on the structure of the switch (including the special case of a memoryless switch), we show that the scheme recovers \emph{all} possible interleaved representations of the original process. Experimental results are presented demonstrating that the proposed scheme performs well in practice, even for relatively short input samples.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
11,820
2204.01966
Time Efficient Joint UAV-BS Deployment and User Association based on Machine Learning
This paper proposes a time-efficient mechanism to decrease the on-line computing time of solving the joint unmanned aerial vehicle base station (UAV-BS) deployment and user/sensor association (UDUA) problem, aiming at maximizing the downlink sum transmission throughput. The joint UDUA problem is decoupled into two sub-problems: one is the user association sub-problem, which gets the optimal matching strategy between aerial and ground nodes for certain UAV-BS positions; and the other is the UAV-BS deployment sub-problem, which tries to find the best position combination of the UAV-BSs that makes the solution of the first sub-problem optimal among all the possible position combinations of the UAV-BSs. In the proposed mechanism, we transform the user association sub-problem into an equivalent bipartite matching problem and solve it using the Kuhn-Munkres algorithm. For the UAV-BS deployment sub-problem, we theoretically prove that adopting the best UAV-BS deployment strategy of a previous user distribution for each new user distribution will introduce little performance decline compared with the new user distribution's ground-truth best strategy if the two user distributions are similar enough. Based on our mathematical analyses, the similarity level between user distributions is well defined and becomes the key to solving the second sub-problem. Numerical results indicate that the proposed UDUA mechanism can achieve near-optimal system performance in terms of average downlink sum transmission throughput and failure rate with enormously reduced computing time compared with benchmark approaches.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
true
289,781
2502.02967
Demonstrating a Control Framework for Physical Human-Robot Interaction Toward Industrial Applications
Physical Human-Robot Interaction (pHRI) is critical for implementing Industry 5.0, which focuses on human-centric approaches. However, few studies explore the practical alignment of pHRI to industrial-grade performance. This paper introduces a versatile control framework designed to bridge this gap by incorporating the torque-based control modes: compliance control, null-space compliance, and dual compliance, all in static and dynamic scenarios. Thanks to our second-order Quadratic Programming (QP) formulation, strict kinematic and collision constraints are integrated into the system as safety features, and a weighted hierarchy guarantees singularity-robust task tracking performance. The framework is implemented on a Kinova Gen3 collaborative robot (cobot) equipped with a Bota force/torque sensor. A DualShock 4 game controller is attached at the robot's end-effector to demonstrate the framework's capabilities. This setup enables seamless dynamic switching between the modes and real-time adjustment of parameters, such as transitioning between position and torque control or selecting a more robust custom-developed low-level torque controller over the default one. Built on the open-source robotic control software mc_rtc, to ensure reproducibility for both research and industrial deployment, this framework demonstrates industrial-grade performance and repeatability, showcasing its potential as a robust pHRI control system for industrial environments.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
530,548
2306.03354
Simulation-Based Counterfactual Causal Discovery on Real World Driver Behaviour
Being able to reason about how one's behaviour can affect the behaviour of others is a core skill required of intelligent driving agents. Despite this, the state of the art struggles to meet the need of agents to discover causal links between themselves and others. Observational approaches struggle because of the non-stationarity of causal links in dynamic environments, and the sparsity of causal interactions while requiring the approaches to work in an online fashion. Meanwhile interventional approaches are impractical as a vehicle cannot experiment with its actions on a public road. To counter the issue of non-stationarity we reformulate the problem in terms of extracted events, while the previously mentioned restriction upon interventions can be overcome with the use of counterfactual simulation. We present three variants of the proposed counterfactual causal discovery method and evaluate these against state of the art observational temporal causal discovery methods across 3396 causal scenes extracted from a real world driving dataset. We find that the proposed method significantly outperforms the state of the art on the proposed task quantitatively and can offer additional insights by comparing the outcome of an alternate series of decisions in a way that observational and interventional approaches cannot.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
371,283
2207.08380
Visual Representations of Physiological Signals for Fake Video Detection
Realistic fake videos are a potential tool for spreading harmful misinformation given our increasing online presence and information intake. This paper presents a multimodal learning-based method for detection of real and fake videos. The method combines information from three modalities: audio, video, and physiology. We investigate two strategies for combining the video and physiology modalities, either by augmenting the video with information from the physiology or by learning the fusion of those two modalities with a proposed Graph Convolutional Network architecture. Both strategies for combining the two modalities rely on a novel method for generating visual representations of physiological signals. The detection of real and fake videos is then based on the dissimilarity between the audio and modified video modalities. The proposed method is evaluated on two benchmark datasets and the results show a significant increase in detection performance compared to previous methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
308,572
2404.11537
SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening
Pansharpening is a significant image fusion technique that merges the spatial content and spectral characteristics of remote sensing images to generate high-resolution multispectral images. Recently, denoising diffusion probabilistic models have been gradually applied to visual tasks, enhancing controllable image generation through low-rank adaptation (LoRA). In this paper, we introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff, which considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition. Specifically, SSDiff utilizes spatial and spectral branches to learn spatial details and spectral features separately, then employs a designed alternating projection fusion module (APFM) to accomplish the fusion. Furthermore, we propose a frequency modulation inter-branch module (FMIM) to modulate the frequency distribution between branches. The two components of SSDiff can perform favorably against the APFM when utilizing a LoRA-like branch-wise alternative fine-tuning method. It refines SSDiff to capture component-discriminating features more sufficiently. Finally, extensive experiments on four commonly used datasets, i.e., WorldView-3, WorldView-2, GaoFen-2, and QuickBird, demonstrate the superiority of SSDiff both visually and quantitatively. The code will be made open source after possible acceptance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
447,534
2201.12569
Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes
We consider a sequential decision making problem where the agent faces the environment characterized by the stochastic discrete events and seeks an optimal intervention policy such that its long-term reward is maximized. This problem exists ubiquitously in social media, finance and health informatics but is rarely investigated by the conventional research in reinforcement learning. To this end, we present a novel framework of the model-based reinforcement learning where the agent's actions and observations are asynchronous stochastic discrete events occurring in continuous-time. We model the dynamics of the environment by Hawkes process with external intervention control term and develop an algorithm to embed such process in the Bellman equation which guides the direction of the value gradient. We demonstrate the superiority of our method in both synthetic simulator and real-world problem.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
277,693
1811.03734
Close and ordinary social contacts: how important are they in promoting large-scale contagion?
An outstanding problem of interdisciplinary interest is to understand quantitatively the role of social contacts in contagion dynamics. In general, there are two types of contacts: close ones among friends, colleagues and family members, etc., and ordinary contacts from encounters with strangers. Typically, social reinforcement occurs for close contacts. Taking into account both types of contacts, we develop a contact-based model for social contagion. We find that, associated with the spreading dynamics, for random networks there is coexistence of continuous and discontinuous phase transitions, but for heterogeneous networks the transition is continuous. We also find that ordinary contacts play a crucial role in promoting large scale spreading, and the number of close contacts determines not only the nature of the phase transitions but also the value of the outbreak threshold in random networks. For heterogeneous networks from the real world, the abundance of close contacts affects the epidemic threshold, while its role in facilitating the spreading depends on the adoption threshold assigned to it. We uncover two striking phenomena. First, a strong interplay between ordinary and close contacts is necessary for generating prevalent spreading. In fact, only when there are propagation paths of reasonable length which involve both close and ordinary contacts are large scale outbreaks of social contagions possible. Second, abundant close contacts in heterogeneous networks promote both outbreak and spreading of the contagion through the transmission channels among the hubs, when both values of the threshold and transmission rate among ordinary contacts are small. We develop a theoretical framework to obtain an analytic understanding of the main findings on random networks, with support from extensive numerical computations.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
112,912
1909.00889
Domain Randomization and Pyramid Consistency: Simulation-to-Real Generalization without Accessing Target Domain Data
We propose to harness the potential of simulation for the semantic segmentation of real-world self-driving scenes in a domain generalization fashion. The segmentation network is trained without any data of target domains and tested on the unseen target domains. To this end, we propose a new approach of domain randomization and pyramid consistency to learn a model with high generalizability. First, we propose to randomize the synthetic images with the styles of real images in terms of visual appearances using auxiliary datasets, in order to effectively learn domain-invariant representations. Second, we further enforce pyramid consistency across different "stylized" images and within an image, in order to learn domain-invariant and scale-invariant features, respectively. Extensive experiments are conducted on the generalization from GTA and SYNTHIA to Cityscapes, BDDS and Mapillary; and our method achieves superior results over the state-of-the-art techniques. Remarkably, our generalization results are on par with or even better than those obtained by state-of-the-art simulation-to-real domain adaptation methods, which access the target domain data at training time.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
143,743
1401.3520
Adaptive Mode Selection for Bidirectional Relay Networks -- Fixed Rate Transmission
In this paper, we consider the problem of sum throughput maximization for bidirectional relay networks with block fading. Thereby, user 1 and user 2 exchange information only via a relay node, i.e., a direct link between both users is not present. We assume that channel state information at the transmitter (CSIT) is not available and/or only one coding and modulation scheme is used at the transmitters due to complexity constraints. Thus, the nodes transmit with a fixed predefined rate regardless of the channel state information (CSI). In general, the nodes in the network can assume one of three possible states in each time slot, namely the transmit, receive, and silent state. Most of the existing protocols assume a fixed schedule for the sequence of the states of the nodes. In this paper, we abandon the restriction of having a fixed and predefined schedule and propose a new protocol which, based on the CSI at the receiver (CSIR), selects the optimal states of the nodes in each time slot such that the sum throughput is maximized. To this end, the relay has to be equipped with two buffers for storage of the information received from the two users. Numerical results show that the proposed protocol significantly outperforms the existing protocols.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
29,901
1903.11502
Numerical modeling of neutron transport in SP3 approximation by finite element method
The SP3 approximation of the neutron transport equation allows improving the accuracy for both static and transient simulations for reactor core analysis compared with the neutron diffusion theory. Besides, the SP3 calculation costs are much less than higher order transport methods (SN or PN). Another advantage of the SP3 approximation is a similar structure of equations that is used in the diffusion method. Therefore, there is no difficulty to implement the SP3 solution option to the multi-group neutron diffusion codes. In this work, the application of the SP3 methodology based on solution of the {\lambda}- and {\alpha}-spectral problems has been tested for the IAEA-2D and HWR reactor benchmark tests. The FEM is chosen to achieve the 3D geometrical generality, using GMSH as a generic mesh generator. The results calculated with the diffusion and SP3 methods are compared with the reference transport calculation results. It was found for the HWR reactor test that some eigenvalues are complex when calculating using both diffusion and SP3 options.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
125,528
2001.11602
Safe Trajectory Tracking in Uncertain Environments
In Model Predictive Control (MPC) formulations of trajectory tracking problems, infeasible reference trajectories and a-priori unknown constraints can lead to cumbersome designs, aggressive tracking, and loss of recursive feasibility. This is the case, for example, in trajectory tracking applications for mobile systems in the presence of constraints which are not fully known a-priori. In this paper, we propose a new framework called Model Predictive Flexible trajectory Tracking Control (MPFTC), which relaxes the trajectory tracking requirement. Additionally, we accommodate recursive feasibility in the presence of a-priori unknown constraints, which might render the reference trajectory infeasible. In the proposed framework, constraint satisfaction is guaranteed at all times while the reference trajectory is tracked as good as constraint satisfaction allows, thus simplifying the controller design and reducing possibly aggressive tracking behavior. The proposed framework is illustrated with three numerical examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
162,114
2410.08146
Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning
A promising approach for improving reasoning in large language models is to use process reward models (PRMs). PRMs provide feedback at each step of a multi-step reasoning trace, potentially improving credit assignment over outcome reward models (ORMs) that only provide feedback at the final step. However, collecting dense, per-step human labels is not scalable, and training PRMs from automatically-labeled data has thus far led to limited gains. To improve a base policy by running search against a PRM or using it as dense rewards for reinforcement learning (RL), we ask: "How should we design process rewards?". Our key insight is that, to be effective, the process reward for a step should measure progress: a change in the likelihood of producing a correct response in the future, before and after taking the step, corresponding to the notion of step-level advantages in RL. Crucially, this progress should be measured under a prover policy distinct from the base policy. We theoretically characterize the set of good provers and our results show that optimizing process rewards from such provers improves exploration during test-time search and online RL. In fact, our characterization shows that weak prover policies can substantially improve a stronger base policy, which we also observe empirically. We validate our claims by training process advantage verifiers (PAVs) to predict progress under such provers, and show that compared to ORMs, test-time search against PAVs is $>8\%$ more accurate, and $1.5-5\times$ more compute-efficient. Online RL with dense rewards from PAVs enables one of the first results with $5-6\times$ gain in sample efficiency, and $>6\%$ gain in accuracy, over ORMs.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
496,974
2002.07354
Leveraging Linear Quadratic Regulator Cost and Energy Consumption for Ultra-Reliable and Low-Latency IoT Control Systems
To efficiently support the real-time control applications, networked control systems operating with ultra-reliable and low-latency communications (URLLCs) become fundamental technology for future Internet of things (IoT). However, the design of control, sensing and communications is generally isolated at present. In this paper, we propose the joint optimization of control cost and energy consumption for a centralized wireless networked control system. Specifically, with the ``sensing-then-control'' protocol, we first develop an optimization framework which jointly takes control, sensing and communications into account. In this framework, we derive the spectral efficiency, linear quadratic regulator cost and energy consumption. Then, a novel performance metric called the \textit{energy-to-control efficiency} is proposed for the IoT control system. In addition, we optimize the energy-to-control efficiency while guaranteeing the requirements of URLLCs, thereupon a general and complex max-min joint optimization problem is formulated for the IoT control system. To optimally solve the formulated problem by reasonable complexity, we propose two radio resource allocation algorithms. Finally, simulation results show that our proposed algorithms can significantly improve the energy-to-control efficiency for the IoT control system with URLLCs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
164,442
1210.5117
Distributed and Autonomous Resource and Power Allocation for Wireless Networks
In this paper, a distributed and autonomous technique for resource and power allocation in orthogonal frequency division multiple access (OFDMA) femto-cellular networks is presented. Here, resource blocks (RBs) and their corresponding transmit powers are assigned to the user(s) in each cell individually without explicit coordination between femto base stations (FBSs). The "allocatability" of each resource is determined utilising only locally available information of the following quantities: - the required rate of the user; - the quality (i.e., strength) of the desired signal; - the frequency-selective fading on each RB; and - the level of interference incident on each RB. Using a fuzzy logic system, the time-averaged values of each of these inputs are combined to determine which RBs are most suitable to be allocated in a particular cell, i.e., which resources can be allocated such that the user requested rate(s) in that cell are satisfied. Furthermore, link adaptation (LA) is included, enabling users to adjust to varying channel conditions. A comprehensive study of this system in a femto-cell environment is performed, yielding system performance improvements in terms of throughput, energy efficiency and coverage over state-of-the-art ICIC techniques.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
19,249
1404.1356
Optimal learning with Bernstein Online Aggregation
We introduce a new recursive aggregation procedure called Bernstein Online Aggregation (BOA). The exponential weights include an accuracy term and a second order term that is a proxy of the quadratic variation as in Hazan and Kale (2010). This second term stabilizes the procedure that is optimal in different senses. We first obtain optimal regret bounds in the deterministic context. Then, an adaptive version is the first exponential weights algorithm that exhibits a second order bound with excess losses that appears first in Gaillard et al. (2014). The second order bounds in the deterministic context are extended to a general stochastic context using the cumulative predictive risk. Such conversion provides the main result of the paper, an inequality of a novel type comparing the procedure with any deterministic aggregation procedure for an integrated criteria. Then we obtain an observable estimate of the excess of risk of the BOA procedure. To assert the optimality, we consider finally the iid case for strongly convex and Lipschitz continuous losses and we prove that the optimal rate of aggregation of Tsybakov (2003) is achieved. The batch version of the BOA procedure is then the first adaptive explicit algorithm that satisfies an optimal oracle inequality with high probability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
32,102
2312.09688
Shaping and Being Shaped by Drones: Supporting Perception-Action Loops
We report on a three-day challenge during which five teams each programmed a nanodrone to be piloted through an obstacle course using bodily movement, in a 3D transposition of the '80s video-game Pacman. Using a bricolage approach to analyse interviews, field notes, video recordings, and inspection of each team's code revealed how participants were shaping and, in turn, became shaped in bodily ways by the drones' limitations. We observed how teams adapted to compete by: 1) shifting from aiming for seamless human-drone interaction, to seeing drones as fragile, wilful, and prone to crashes; 2) engaging with intimate, bodily interactions to more precisely understand, probe, and delimit each drone's capabilities; 3) adopting different strategies, emphasising either training the drone or training the pilot. We contribute with an empirical, somaesthetically focused account of current challenges in HDI and call for programming environments that support action-feedback loops for design and programming purposes.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
415,843
1906.01299
Grid-based Localization Stack for Inspection Drones towards Automation of Large Scale Warehouse Systems
SLAM based techniques are often adopted for solving the navigation problem for the drones in GPS denied environment. Despite the widespread success of these approaches, they have not yet been fully exploited for automation in a warehouse system due to expensive sensors and setup requirements. This paper focuses on the use of low-cost monocular camera-equipped drones for performing warehouse management tasks like inventory scanning and position update. The methods introduced are at par with the existing state of warehouse environment present today, that is, the existence of a grid network for the ground vehicles, hence eliminating any additional infrastructure requirement for drone deployment. As we lack scale information, that in itself forbids us to use any 3D techniques, we focus more towards optimizing standard image processing algorithms like the thick line detection and further developing it into a fast and robust grid localization framework. In this paper, we show different line detection algorithms, their significance in grid localization and their limitations. We further extend our proposed implementation towards a real-time navigation stack for an actual warehouse inspection case scenario. Our line detection method using skeletonization and centroid strategy works considerably even with varying light conditions, line thicknesses, colors, orientations, and partial occlusions. A simple yet effective Kalman Filter has been used for smoothening the {\rho} and {\theta} outputs of the two different line detection methods for better drone control while grid following. A generic strategy that handles the navigation of the drone on a grid for completion of the allotted task is also developed. Based on the simulation and real-life experiments, the final developments on the drone localization and navigation in a structured environment are discussed.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
133,672
1811.03559
A Feature Complete SPIKE Banded Algorithm and Solver
New features and enhancements for the SPIKE banded solver are presented. Among all the SPIKE algorithm versions, we focus our attention on the recursive SPIKE technique, which provides the best trade-off between generality and parallel efficiency but was known for its lack of flexibility. Its application was essentially limited to a power-of-two number of cores/processors. This limitation is successfully addressed in this paper. In addition, we present a new transpose solve option, a standard feature of most numerical solver libraries which has never been addressed by the SPIKE algorithm so far. A pivoting recursive SPIKE strategy is finally presented as an alternative to the non-pivoting scheme for systems with large condition numbers. All these new enhancements combine to create a feature-complete SPIKE algorithm and a new black-box SPIKE-OpenMP package that significantly outperforms the performance and scalability obtained with other state-of-the-art banded solvers.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
112,869
2004.00234
Botnet Detection Using Recurrent Variational Autoencoder
Botnets are increasingly used by malicious actors, creating an increasing threat to a large number of internet users. To address this growing danger, we propose to study methods to detect botnets, especially those that are hard to capture with the commonly used methods, such as the signature based ones and the existing anomaly-based ones. More specifically, we propose a novel machine learning based method, named Recurrent Variational Autoencoder (RVAE), for detecting botnets through sequential characteristics of network traffic flow data including attacks by botnets. We validate the robustness of our method with the CTU-13 dataset, where we have chosen the testing dataset to have different types of botnets than those of the training dataset. Tests show that RVAE is able to detect botnets with the same accuracy as the best known results published in the literature. In addition, we propose an approach to assign an anomaly score based on probability distributions, which allows us to detect botnets in streaming mode as the new networking statistics become available. This on-line detection capability would enable real-time detection of unknown botnets.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
170,564
2107.02121
ParDen: Surrogate Assisted Hyper-Parameter Optimisation for Portfolio Selection
Portfolio optimisation is a multi-objective optimisation problem (MOP), where an investor aims to optimise the conflicting criteria of maximising a portfolio's expected return whilst minimising its risk and other costs. However, selecting a portfolio is a computationally expensive problem because of the cost associated with performing multiple evaluations on test data ("backtesting") rather than solving the convex optimisation problem itself. In this research, we present ParDen, an algorithm for the inclusion of any discriminative or generative machine learning model as a surrogate to mitigate the computationally expensive backtest procedure. In addition, we compare the performance of alternative metaheuristic algorithms: NSGA-II, R-NSGA-II, NSGA-III, R-NSGA-III, U-NSGA-III, MO-CMA-ES, and COMO-CMA-ES. We measure performance using multi-objective performance indicators, including Generational Distance Plus, Inverted Generational Distance Plus and Hypervolume. We also consider meta-indicators, Success Rate and Average Executions to Success Rate, of the Hypervolume to provide more insight into the quality of solutions. Our results show that ParDen can reduce the number of evaluations required by almost a third while obtaining an improved Pareto front over the state-of-the-art for the problem of portfolio selection.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
244,705
1907.00824
Designing Deep Reinforcement Learning for Human Parameter Exploration
Software tools for generating digital sound often present users with high-dimensional, parametric interfaces, that may not facilitate exploration of diverse sound designs. In this paper, we propose to investigate artificial agents using deep reinforcement learning to explore parameter spaces in partnership with users for sound design. We describe a series of user-centred studies to probe the creative benefits of these agents and adapting their design to exploration. Preliminary studies observing users' exploration strategies with parametric interfaces and testing different agent exploration behaviours led to the design of a fully-functioning prototype, called Co-Explorer, that we evaluated in a workshop with professional sound designers. We found that the Co-Explorer enables a novel creative workflow centred on human-machine partnership, which has been positively received by practitioners. We also highlight varied user exploration behaviors throughout partnering with our system. Finally, we frame design guidelines for enabling such co-exploration workflow in creative digital applications.
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,152
2305.04847
CaloClouds: Fast Geometry-Independent Highly-Granular Calorimeter Simulation
Simulating showers of particles in highly-granular detectors is a key frontier in the application of machine learning to particle physics. Achieving high accuracy and speed with generative machine learning models would enable them to augment traditional simulations and alleviate a major computing constraint. This work achieves a major breakthrough in this task by, for the first time, directly generating a point cloud of a few thousand space points with energy depositions in the detector in 3D space without relying on a fixed-grid structure. This is made possible by two key innovations: i) Using recent improvements in generative modeling we apply a diffusion model to generate photon showers as high-cardinality point clouds. ii) These point clouds of up to $6,000$ space points are largely geometry-independent as they are down-sampled from initial even higher-resolution point clouds of up to $40,000$ so-called Geant4 steps. We showcase the performance of this approach using the specific example of simulating photon showers in the planned electromagnetic calorimeter of the International Large Detector (ILD) and achieve overall good modeling of physically relevant distributions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
362,926
1803.00401
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially a black box method since it is not easy to mathematically formulate the functions that are learned within its many layers of representation. Realizing this, many researchers have started to design methods to exploit the drawbacks of deep learning based algorithms questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerabilities to attacks inspired by commonly observed distortions in the real world that are well handled by shallow learning methods along with learning based adversaries; (ii) detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks, including OpenFace and VGG-Face, and two publicly available databases (MEDS and PaSC) demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. The proposed method is also compared with existing detection algorithms and the results show that it is able to detect the attacks with very high accuracy by suitably designing a classifier using the response of the hidden layers in the network. Finally, we present several effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
91,671
2406.11906
NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing Methods in Proteomics
Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the high-throughput analysis of protein composition in biological tissues. Many deep learning methods have been developed for the \emph{de novo} peptide sequencing task, i.e., predicting the peptide sequence for the observed mass spectrum. However, two key challenges seriously hinder the further advancement of this important task. Firstly, since there is no consensus for the evaluation datasets, the empirical results in different research papers are often not comparable, leading to unfair comparison. Secondly, the current methods are usually limited to amino acid-level or peptide-level precision and recall metrics. In this work, we present the first unified benchmark NovoBench for \emph{de novo} peptide sequencing, which comprises diverse mass spectrum data, integrated models, and comprehensive evaluation metrics. Recent impressive methods, including DeepNovo, PointNovo, Casanovo, InstaNovo, AdaNovo and $\pi$-HelixNovo are integrated into our framework. In addition to amino acid-level and peptide-level precision and recall, we evaluate the models' performance in terms of identifying post-translational modifications (PTMs), efficiency and robustness to peptide length, noise peaks and missing fragment ratio, which are important influencing factors yet are seldom considered. Leveraging this benchmark, we conduct a large-scale study of current methods, and report many insightful findings that open up new possibilities for future development.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
465,135
1806.04957
Expression Empowered ResiDen Network for Facial Action Unit Detection
The paper explores the topic of Facial Action Unit (FAU) detection in the wild. In particular, we are interested in answering the following questions: (1) how useful are residual connections across dense blocks for face analysis? (2) how useful is the information from a network trained for categorical Facial Expression Recognition (FER) for the task of FAU detection? The proposed network (ResiDen) exploits dense blocks along with residual connections and uses auxiliary information from a FER network. The experiments are performed on the EmotionNet and DISFA datasets. The experiments show the usefulness of facial expression information for AU detection. The proposed network achieves state-of-art results on the two databases. Analysis of the results for cross database protocol shows the effectiveness of the network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
100,359
1903.07072
STNReID : Deep Convolutional Networks with Pairwise Spatial Transformer Networks for Partial Person Re-identification
Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7% and 54.6% rank-1 accuracies on the partial ReID and partial iLIDS datasets, respectively. These values are on par with those obtained with state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,531
2207.12405
Versatile Weight Attack via Flipping Limited Bits
To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage. Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack, where the effectiveness term could be customized depending on the attacker's purpose. Furthermore, we present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA). To this end, we formulate this problem as a mixed integer programming (MIP) to jointly determine the state of the binary bits (0 or 1) in the memory and learn the sample modification. Utilizing the latest technique in integer programming, we equivalently reformulate this MIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method. Consequently, the flipped critical bits can be easily determined through optimization, rather than using a heuristic strategy. Extensive experiments demonstrate the superiority of SSA and TSA in attacking DNNs.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
310,005
2304.05591
Semantic Feature Verification in FLAN-T5
This study evaluates the potential of a large language model for aiding in generation of semantic feature norms - a critical tool for evaluating conceptual structure in cognitive science. Building from an existing human-generated dataset, we show that machine-verified norms capture aspects of conceptual structure beyond what is expressed in human norms alone, and better explain human judgments of semantic similarity amongst items that are distally related. The results suggest that LLMs can greatly enhance traditional methods of semantic feature norm verification, with implications for our understanding of conceptual representation in humans and machines.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
357,679
1308.3509
Stochastic Optimization for Machine Learning
It has been found that stochastic algorithms often find good solutions much more rapidly than inherently-batch approaches. Indeed, a very useful rule of thumb is that, when solving a machine learning problem, an iterative technique which relies on performing a very large number of relatively-inexpensive updates will often outperform one which performs a smaller number of much "smarter" but computationally-expensive updates. In this thesis, we will consider the application of stochastic algorithms to two of the most important machine learning problems. Part I is concerned with the supervised problem of binary classification using kernelized linear classifiers, for which the data have labels belonging to exactly two classes (e.g. "has cancer" or "doesn't have cancer"), and the learning problem is to find a linear classifier which is best at predicting the label. In Part II, we will consider the unsupervised problem of Principal Component Analysis, for which the learning task is to find the directions which contain most of the variance of the data distribution. Our goal is to present stochastic algorithms for both problems which are, above all, practical--they work well on real-world data, in some cases better than all known competing algorithms. A secondary, but still very important, goal is to derive theoretical bounds on the performance of these algorithms which are at least competitive with, and often better than, those known for other approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
26,473
2401.10890
Event detection from novel data sources: Leveraging satellite imagery alongside GPS traces
Rapid identification and response to breaking events, particularly those that pose a threat to human life such as natural disasters or conflicts, is of paramount importance. The prevalence of mobile devices and the ubiquity of network connectivity has generated a massive amount of temporally- and spatially-stamped data. Numerous studies have used mobile data to derive individual human mobility patterns for various applications. Similarly, the increasing number of orbital satellites has made it easier to gather high-resolution images capturing a snapshot of a geographical area in sub-daily temporal frequency. We propose a novel data fusion methodology integrating satellite imagery with privacy-enhanced mobile data to augment the event inference task, whether in real-time or historical. In the absence of boots on the ground, mobile data is able to give an approximation of human mobility, proximity to one another, and the built environment. On the other hand, satellite imagery can provide visual information on physical changes to the built and natural environment. The expected use cases for our methodology include small-scale disaster detection (i.e., tornadoes, wildfires, and floods) in rural regions, search and rescue operation augmentation for lost hikers in remote wilderness areas, and identification of active conflict areas and population displacement in war-torn states. Our implementation is open-source on GitHub: https://github.com/ekinugurel/SatMobFusion.
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
422,806
2402.18998
COFT-AD: COntrastive Fine-Tuning for Few-Shot Anomaly Detection
Existing approaches towards anomaly detection (AD) often rely on a substantial amount of anomaly-free data to train representation and density models. However, large anomaly-free datasets may not always be available before the inference stage; in which case an anomaly detection model must be trained with only a handful of normal samples, a.k.a. few-shot anomaly detection (FSAD). In this paper, we propose a novel methodology to address the challenge of FSAD which incorporates two important techniques. Firstly, we employ a model pre-trained on a large source dataset to initialize model weights. Secondly, to ameliorate the covariate shift between source and target domains, we adopt contrastive training to fine-tune on the few-shot target domain data. To learn suitable representations for the downstream AD task, we additionally incorporate cross-instance positive pairs to encourage a tight cluster of the normal samples, and negative pairs for better separation between normal and synthesized negative samples. We evaluate few-shot anomaly detection on 3 controlled AD tasks and 4 real-world AD tasks to demonstrate the effectiveness of the proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
433,645
2012.11685
Neural Methods for Effective, Efficient, and Exposure-Aware Information Retrieval
Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking of documents--or short passages--in response to keyword-based queries. Effective IR systems must deal with query-document vocabulary mismatch problem, by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms--such as a person's name or a product model number--not seen during training, and to avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, the retrieval involves extremely large collections--such as the document index of a commercial Web search engine--containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as inverted index, to efficiently retrieve from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives, besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
212,698