Dataset schema:
- id: string (9–16 chars)
- title: string (4–278 chars)
- abstract: string (3–4.08k chars)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0–541k)
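Each record below stores one boolean column per arXiv category; a record's labels are simply the columns that are True, read in the schema's column order. A minimal sketch of collapsing those columns into a label list (plain Python; the function and variable names are illustrative, not part of any dataset tooling):

```python
# Category columns in the order given by the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the category columns that are True for this row,
    preserving the schema's column order."""
    return [c for c in CATEGORY_COLUMNS if row.get(c)]

# Example: the first record in this dump (1905.13561) has
# cs.SD, cs.LG, and cs.CL set to True and all other columns False.
row = {"id": "1905.13561", "cs.SD": True, "cs.LG": True, "cs.CL": True}
print(active_labels(row))  # ['cs.SD', 'cs.LG', 'cs.CL']
```

Missing columns are treated as False by `row.get`, so partially populated rows degrade gracefully.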
1905.13561
Speaker Anonymization Using X-vector and Neural Waveform Models
The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of such inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality speech utterances which reflect the same user's speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves the concealing of speaker identities before the sharing of speech data. For this purpose, we present a new approach to speaker anonymization. The idea is to extract linguistic and speaker identity features from an utterance and then to use these with neural acoustic and waveform models to synthesize anonymized speech. The original speaker identity, in the form of timbre, is suppressed and replaced with that of an anonymous pseudo identity. The approach exploits state-of-the-art x-vector speaker representations. These are used to derive anonymized pseudo speaker identities through the combination of multiple, random speaker x-vectors. Experimental results show that the proposed approach is effective in concealing speaker identities. It increases the equal error rate of a speaker verification system while maintaining high quality, anonymized speech.
labels: cs.SD, cs.LG, cs.CL
__index_level_0__: 133,172
1606.03695
Nearest Neighbour Distance Distribution in Hard-Core Point Processes
In this paper we present an analytic framework for formulating the statistical distribution of the nearest neighbour distance in hard-core point processes. We apply this framework to the Matérn hard-core point process (MHC) to derive the cumulative distribution function of the contact distance in three cases. The first case is between a point in an MHC process and its nearest neighbour from the same process. The second case is between a point in an independent Poisson point process and the nearest neighbour from an MHC process. The third case is between a point in the complement of an MHC process and its sibling MHC process. We test the analytic results against Monte-Carlo simulations to verify their consistency.
labels: cs.IT
__index_level_0__: 57,134
1610.05424
Modern WLAN Fingerprinting Indoor Positioning Methods and Deployment Challenges
Wireless Local Area Network (WLAN) has become a promising choice for indoor positioning, as the only existing and established infrastructure for localizing mobile and stationary users indoors. However, since WLAN was initially designed for wireless networking rather than positioning, localization based on WLAN signals poses several challenges. Amongst WLAN positioning methods, WLAN fingerprinting localization has recently received great attention due to its promising results. WLAN fingerprinting faces several challenges, and hence, in this paper, our goal is to overview these challenges and the state-of-the-art solutions. This paper consists of three main parts: 1) Conventional localization schemes; 2) State-of-the-art approaches; 3) Practical deployment challenges. Since the methods proposed in the WLAN literature have been conducted and tested in different settings, the reported results are not directly comparable. We therefore compare some of the main localization schemes in a single real environment and assess their localization accuracy, positioning error statistics, and complexity. Our results provide an illustrative evaluation of WLAN localization systems and point to future improvement opportunities.
labels: cs.LG, Other
__index_level_0__: 62,507
2404.15378
Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions
Sliced Wasserstein (SW) and Generalized Sliced Wasserstein (GSW) have been widely used in applications due to their computational and statistical scalability. However, the SW and the GSW are only defined between distributions supported on a homogeneous domain. This limitation prevents their usage in applications with heterogeneous joint distributions whose marginal distributions are supported on multiple different domains. Using SW and GSW directly on the joint domains cannot yield a meaningful comparison, since their homogeneous slicing operators, i.e., the Radon Transform (RT) and the Generalized Radon Transform (GRT), are not expressive enough to capture the structure of the joint support set. To address the issue, we propose two new slicing operators, i.e., the Partial Generalized Radon Transform (PGRT) and the Hierarchical Hybrid Radon Transform (HHRT). In greater detail, PGRT is a generalization of the Partial Radon Transform (PRT) that transforms a subset of function arguments non-linearly, while HHRT is the composition of PRT and multiple domain-specific PGRTs on marginal domain arguments. Using HHRT, we extend the SW into the Hierarchical Hybrid Sliced Wasserstein (H2SW) distance, which is designed specifically for comparing heterogeneous joint distributions. We then discuss the topological, statistical, and computational properties of H2SW. Finally, we demonstrate the favorable performance of H2SW in 3D mesh deformation, deep 3D mesh autoencoders, and datasets comparison.
labels: cs.AI, cs.LG, cs.CV, Other
__index_level_0__: 449,076
1908.03963
A Review of Cooperative Multi-Agent Deep Reinforcement Learning
Deep Reinforcement Learning has made significant progress in multi-agent systems in recent years. In this review article, we focus on recent approaches to Multi-Agent Reinforcement Learning (MARL) algorithms. In particular, we cover five common approaches to modeling and solving cooperative multi-agent reinforcement learning problems: (I) independent learners, (II) fully observable critic, (III) value function factorization, (IV) consensus, and (V) learn to communicate. First, we elaborate on each of these methods, possible challenges, and how these challenges were mitigated in the relevant papers. Where applicable, we further make connections among different papers in each category. Next, we cover some newly emerging research areas in MARL along with the relevant recent papers. Due to the recent success of MARL in real-world applications, we devote a section to reviewing these applications and the corresponding articles. A list of available environments for MARL research is also provided in this survey. Finally, the paper concludes with proposals for possible research directions.
labels: cs.AI, cs.LG, cs.MA
__index_level_0__: 141,365
2203.16518
Collaborative Transformers for Grounded Situation Recognition
Grounded situation recognition is the task of predicting the main activity, entities playing certain roles within the activity, and bounding-box groundings of the entities in the given image. To effectively deal with this challenging task, we introduce a novel approach where the two processes for activity classification and entity estimation are interactive and complementary. To implement this idea, we propose Collaborative Glance-Gaze TransFormer (CoFormer) that consists of two modules: Glance transformer for activity classification and Gaze transformer for entity estimation. Glance transformer predicts the main activity with the help of Gaze transformer that analyzes entities and their relations, while Gaze transformer estimates the grounded entities by focusing only on the entities relevant to the activity predicted by Glance transformer. Our CoFormer achieves the state of the art in all evaluation metrics on the SWiG dataset. Training code and model weights are available at https://github.com/jhcho99/CoFormer.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 288,816
2411.12573
Locomotion Mode Transitions: Tackling System- and User-Specific Variability in Lower-Limb Exoskeletons
Accurate detection of locomotion transitions, such as walk to sit, walk to stair ascent, and descent, is crucial to effectively control robotic assistive devices, such as lower-limb exoskeletons, as each locomotion mode requires specific assistance. Variability in collected sensor data introduced by user- or system-specific characteristics makes it challenging to maintain high transition detection accuracy while avoiding latency using non-adaptive classification models. In this study, we identified key factors influencing transition detection performance, including variations in user behavior, and different mechanical designs of the exoskeletons. To boost the transition detection accuracy, we introduced two methods for adapting a finite-state machine classifier to system- and user-specific variability: a Statistics-Based approach and Bayesian Optimization. Our experimental results demonstrate that both methods remarkably improve transition detection accuracy across diverse users, achieving up to an 80% increase in certain scenarios compared to the non-personalized threshold method. These findings emphasize the importance of personalization in adaptive control systems, underscoring the potential for enhanced user experience and effectiveness in assistive devices. By incorporating subject- and system-specific data into the model training process, our approach offers a precise and reliable solution for detecting locomotion transitions, catering to individual user needs, and ultimately improving the performance of assistive devices.
labels: cs.RO
__index_level_0__: 509,452
2407.11108
SSSD-ECG-nle: New Label Embeddings with Structured State-Space Models for ECG generation
An electrocardiogram (ECG) is vital for identifying cardiac diseases, offering crucial insights for diagnosing heart conditions and informing potentially life-saving treatments. However, like other types of medical data, ECGs are subject to privacy concerns when distributed and analyzed. Diffusion models have made significant progress in recent years, creating the possibility for synthesizing data comparable to the real one and allowing their widespread adoption without privacy concerns. In this paper, we use diffusion models with structured state spaces for generating digital 10-second 12-lead ECG signals. We propose the SSSD-ECG-nle architecture based on SSSD-ECG with a modified conditioning mechanism and demonstrate its efficiency on downstream tasks. We conduct quantitative and qualitative evaluations, including analyzing convergence speed, the impact of adding positive samples, and assessment with physicians' expert knowledge. Finally, we share the results of physician evaluations and also make synthetic data available to ensure the reproducibility of the experiments described.
labels: cs.LG
__index_level_0__: 473,314
2002.06048
AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks
Existing fine-tuning methods use a single learning rate over all layers. In this paper, we first show that the layer-wise weight variations observed when fine-tuning with a single learning rate do not match the well-known notion that lower-level layers extract general features and higher-level layers extract specific features. Based on this observation, we propose an algorithm that improves fine-tuning performance and reduces network complexity through layer-wise pruning and auto-tuning of layer-wise learning rates. The effectiveness of the proposed algorithm is verified by state-of-the-art performance on image retrieval benchmark datasets (CUB-200, Cars-196, Stanford Online Products, and In-shop). Code is available at https://github.com/youngminPIL/AutoLR.
labels: cs.CV
__index_level_0__: 164,074
2106.07353
Posthoc Verification and the Fallibility of the Ground Truth
Classifiers commonly make use of pre-annotated datasets, wherein a model is evaluated by pre-defined metrics on a held-out test set typically made of human-annotated labels. Metrics used in these evaluations are tied to the availability of well-defined ground truth labels, and these metrics typically do not allow for inexact matches. These noisy ground truth labels and strict evaluation metrics may compromise the validity and realism of evaluation results. In the present work, we discuss these concerns and conduct a systematic posthoc verification experiment on the entity linking (EL) task. Unlike traditional methodologies, which ask annotators to provide free-form annotations, we ask annotators to verify the correctness of annotations after the fact (i.e., posthoc). Compared to pre-annotation evaluation, state-of-the-art EL models performed extremely well according to the posthoc evaluation methodology. Posthoc validation also permits the validation of the ground truth dataset. Surprisingly, we find predictions from EL models had a similar or higher verification rate than the ground truth. We conclude with a discussion of these findings and recommendations for future evaluations.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 240,891
2212.06832
Multi-Target Decision Making under Conditions of Severe Uncertainty
The quality of consequences in a decision making problem under (severe) uncertainty must often be compared among different targets (goals, objectives) simultaneously. In addition, the evaluations of a consequence's performance under the various targets often differ in their scale of measurement, classically being either purely ordinal or perfectly cardinal. In this paper, we transfer recent developments from abstract decision theory with incomplete preferential and probabilistic information to this multi-target setting and show how -- by exploiting the (potentially) partial cardinal and partial probabilistic information -- more informative orders for comparing decisions can be given than the Pareto order. We discuss some interesting properties of the proposed orders between decision options and show how they can be concretely computed by linear optimization. We conclude the paper by demonstrating our framework in an artificial (but quite real-world) example in the context of comparing algorithms under different performance measures.
labels: cs.AI
__index_level_0__: 336,227
2209.03356
AST-GIN: Attribute-Augmented Spatial-Temporal Graph Informer Network for Electric Vehicle Charging Station Availability Forecasting
Electric Vehicle (EV) charging demand and charging station availability forecasting is one of the challenges in intelligent transportation systems. With accurate EV station availability prediction, suitable charging behaviors can be scheduled in advance to relieve range anxiety. Many existing deep learning methods have been proposed to address this issue; however, due to the complex road network structure and comprehensive external factors, such as points of interest (POIs) and weather effects, many commonly used algorithms extract only the historical usage information without considering the comprehensive influence of external factors. To enhance prediction accuracy and interpretability, the Attribute-Augmented Spatial-Temporal Graph Informer (AST-GIN) structure is proposed in this study by combining the Graph Convolutional Network (GCN) layer and the Informer layer to extract both external and internal spatial-temporal dependence of relevant transportation data. The external factors are modeled as dynamic attributes by the attribute-augmented encoder for training. The AST-GIN model is tested on data collected in Dundee City, and experimental results show the effectiveness of our model in accounting for the influence of external factors over various horizon settings compared with other baselines.
labels: cs.LG, cs.SY
__index_level_0__: 316,480
2207.11679
Affective Behaviour Analysis Using Pretrained Model with Facial Priori
Affective behaviour analysis has attracted researchers' attention due to its broad applications. However, it is labor-intensive to obtain accurate annotations for massive face images. Thus, we propose to utilize prior facial information via a Masked Auto-Encoder (MAE) pretrained on unlabeled face images. Furthermore, we combine the MAE-pretrained Vision Transformer (ViT) and an AffectNet-pretrained CNN to perform multi-task emotion recognition. We notice that expression and action unit (AU) scores are pure and intact features for valence-arousal (VA) regression. As a result, we utilize the AffectNet-pretrained CNN to extract expression scores, concatenating them with the expression and AU scores from ViT to obtain the final VA features. Moreover, we also propose a co-training framework with two parallel MAE-pretrained ViTs for the expression recognition task. To make the two views independent, we randomly mask most patches during training. Then, JS divergence is applied to make the predictions of the two views as consistent as possible. The results on ABAW4 show that our methods are effective.
labels: cs.CV
__index_level_0__: 309,735
2202.03078
Fair Interpretable Representation Learning with Correction Vectors
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information. Various representation debiasing techniques have been proposed in the literature. However, as neural networks are inherently opaque, these methods are hard to comprehend, which limits their usefulness. We propose a new framework for fair representation learning that is centered around the learning of "correction vectors", which have the same dimensionality as the given data vectors. Correction vectors may be computed either explicitly via architectural constraints or implicitly by training an invertible model based on Normalizing Flows. We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance. Furthermore, we demonstrate that state-of-the-art results can be achieved by the invertible model. Finally, we discuss the law standing of our methodology in light of recent legislation in the European Union.
labels: cs.LG, cs.CY
__index_level_0__: 279,067
2408.01737
Tightly Coupled SLAM with Imprecise Architectural Plans
Robots navigating indoor environments often have access to architectural plans, which can serve as prior knowledge to enhance their localization and mapping capabilities. While some SLAM algorithms leverage these plans for global localization in real-world environments, they typically overlook a critical challenge: the "as-planned" architectural designs frequently deviate from the "as-built" real-world environments. To address this gap, we present a novel algorithm that tightly couples LIDAR-based simultaneous localization and mapping with architectural plans in the presence of deviations. Our method utilizes a multi-layered semantic representation not only to localize the robot, but also to estimate global alignment and structural deviations between the "as-planned" and "as-built" environments in real time. To validate our approach, we performed experiments on simulated and real datasets demonstrating robustness to structural deviations of up to 35 cm and 15 degrees. On average, our method achieves 43% less localization error than baselines in simulated environments, while in real environments, the as-built 3D maps show 7% lower average alignment error.
labels: cs.RO
__index_level_0__: 478,356
2107.05315
Contrastive Learning for Cold-Start Recommendation
Recommending cold-start items is a long-standing and fundamental challenge in recommender systems. Without any historical interaction on cold-start items, the collaborative filtering (CF) scheme fails to use collaborative signals to infer user preference on these items. To solve this problem, extensive studies have been conducted to incorporate side information into the CF scheme. Specifically, they employ modern neural network techniques (e.g., dropout, consistency constraint) to discover and exploit the coalition effect of content features and collaborative representations. However, we argue that these works under-explore the mutual dependencies between content features and collaborative representations and lack sufficient theoretical support, thus resulting in unsatisfactory performance. In this work, we reformulate cold-start item representation learning from an information-theoretic standpoint. It aims to maximize the mutual dependencies between item content and collaborative signals. Specifically, the representation learning is theoretically lower-bounded by the integration of two terms: mutual information between collaborative embeddings of users and items, and mutual information between collaborative embeddings and feature representations of items. To model such a learning process, we devise a new objective function founded upon contrastive learning and develop a simple yet effective Contrastive Learning-based Cold-start Recommendation framework (CLCRec). In particular, CLCRec consists of three components: contrastive pair organization, contrastive embedding, and contrastive optimization modules. It allows us to preserve collaborative signals in the content representations for both warm and cold-start items. Through extensive experiments on four publicly accessible datasets, we observe that CLCRec achieves significant improvements over state-of-the-art approaches in both warm- and cold-start scenarios.
labels: cs.IR, Other
__index_level_0__: 245,739
1911.09816
Two-stage dimension reduction for noisy high-dimensional images and application to Cryogenic Electron Microscopy
Principal component analysis (PCA) is arguably the most widely used dimension-reduction method for vector-type data. When applied to a sample of images, PCA requires vectorization of the image data, which in turn entails solving an eigenvalue problem for the sample covariance matrix. We propose herein a two-stage dimension reduction (2SDR) method for image reconstruction from high-dimensional noisy image data. The first stage treats the image as a matrix, which is a tensor of order 2, and uses multilinear principal component analysis (MPCA) for matrix rank reduction and image denoising. The second stage vectorizes the reduced-rank matrix and achieves further dimension and noise reduction. Simulation studies demonstrate excellent performance of 2SDR, for which we also develop an asymptotic theory that establishes consistency of its rank selection. Applications to cryo-EM (cryogenic electronic microscopy), which has revolutionized structural biology, organic and medical chemistry, cellular and molecular physiology in the past decade, are also provided and illustrated with benchmark cryo-EM datasets. Connections to other contemporaneous developments in image reconstruction and high-dimensional statistical inference are also discussed.
labels: cs.CV
__index_level_0__: 154,626
2207.11158
SPRT-based Efficient Best Arm Identification in Stochastic Bandits
This paper investigates the best arm identification (BAI) problem in stochastic multi-armed bandits in the fixed confidence setting. The general class of the exponential family of bandits is considered. The existing algorithms for the exponential family of bandits face computational challenges. To mitigate these challenges, the BAI problem is viewed and analyzed as a sequential composite hypothesis testing task, and a framework is proposed that adopts the likelihood ratio-based tests known to be effective for sequential testing. Based on this test statistic, a BAI algorithm is designed that leverages the canonical sequential probability ratio tests for arm selection and is amenable to tractable analysis for the exponential family of bandits. This algorithm has two key features: (1) its sample complexity is asymptotically optimal, and (2) it is guaranteed to be $\delta$-PAC. Existing efficient approaches focus on the Gaussian setting and require Thompson sampling for the arm deemed the best and the challenger arm. Additionally, this paper analytically quantifies the computational expense of identifying the challenger in an existing approach. Finally, numerical experiments are provided to support the analysis.
labels: cs.LG
__index_level_0__: 309,521
1902.08061
Development of a classifiers/quantifiers dictionary towards French-Japanese MT
Although classifier/quantifier (CQ) expressions appear frequently in everyday communication and written documents, they are described neither in classical bilingual paper dictionaries nor in machine-readable dictionaries. This paper describes a CQ dictionary, edited from the corpus we have annotated, and its usage in the framework of French-Japanese machine translation (MT). CQ treatment in MT often causes problems of lexical ambiguity, polylexical phrase recognition difficulties in analysis, and doubtful output in transfer-generation, in particular for distant language pairs like French and Japanese. Our basic treatment of CQs is to annotate the corpus with UNL-UWs (Universal Networking Language - Universal Words), and then to produce a bilingual or multilingual dictionary of CQs, based on synonymy through identity of UWs.
labels: cs.CL
__index_level_0__: 122,118
2101.02897
Sequential Naive Learning
We analyze boundedly rational updating from aggregate statistics in a model with binary actions and binary states. Agents each take an irreversible action in sequence after observing the unordered set of previous actions. Each agent first forms her prior based on the aggregate statistic, then incorporates her signal with the prior based on Bayes rule, and finally applies a decision rule that assigns a (mixed) action to each belief. If priors are formed according to a discretized DeGroot rule, then actions converge to the state (in probability), i.e., \emph{asymptotic learning}, in any informative information structure if and only if the decision rule satisfies probability matching. This result generalizes to unspecified information settings where information structures differ across agents and agents know only the information structure generating their own signal. Also, the main result extends to the case of $n$ states and $n$ actions.
labels: cs.LG, Other
__index_level_0__: 214,764
2306.00021
Explaining Hate Speech Classification with Model Agnostic Methods
There have been remarkable breakthroughs in Machine Learning and Artificial Intelligence, notably in the areas of Natural Language Processing and Deep Learning. Additionally, hate speech detection in dialogues has been gaining popularity among Natural Language Processing researchers with the increased use of social media. However, as evidenced by recent trends, the need for the dimensions of explainability and interpretability in AI models has been deeply realised. Taking note of the factors above, the research goal of this paper is to bridge the gap between hate speech prediction and the explanations generated by the system to support its decision. This has been achieved by first predicting the classification of a text and then providing a posthoc, model-agnostic and surrogate interpretability approach for explainability and to prevent model bias. The bidirectional transformer model BERT has been used for prediction because of its state-of-the-art efficiency over other Machine Learning models. The model-agnostic algorithm LIME generates explanations for the output of a trained classifier and predicts the features that influence the model's decision. The predictions generated by the model were evaluated manually, and after thorough evaluation, we observed that the model performs efficiently in predicting and explaining its predictions. Lastly, we suggest further directions for the expansion of the provided research work.
labels: cs.HC, cs.SI, cs.AI, cs.CL
__index_level_0__: 369,836
2410.17735
New Insight in Cervical Cancer Diagnosis Using Convolution Neural Network Architecture
The Pap smear is a screening method for early cervical cancer diagnosis. The selection of the right optimizer for a convolutional neural network (CNN) model is key to the CNN's success in image classification, including the classification of cervical cancer Pap smear images. In this study, the stochastic gradient descent (SGD), RMSprop, Adam, AdaGrad, AdaDelta, Adamax, and Nadam optimizers were used to classify cervical cancer Pap smear images from the SipakMed dataset. ResNet-18, ResNet-34, and VGG-16 are the CNN architectures used in this study, and each architecture uses a transfer-learning model. Based on the test results, we conclude that the transfer-learning model performs better across all CNNs and optimization techniques, and that in the transfer-learning model, the choice of optimizer has little influence on the training of the model. Adamax, with accuracy values of 72.8% and 66.8%, had the best accuracy for the VGG-16 and ResNet-18 architectures, respectively. On ResNet-34, Adamax reached 54.0%, which is 0.034% lower than Nadam. Overall, Adamax is a suitable optimizer for CNNs in cervical cancer classification on the ResNet-18, ResNet-34, and VGG-16 architectures. This study provides new insights into the configuration of CNN models for Pap smear image analysis.
labels: cs.AI, cs.CV
__index_level_0__: 501,590
2212.14197
PointVST: Self-Supervised Pre-training for 3D Point Clouds via View-Specific Point-to-Image Translation
The past few years have witnessed the great success and prevalence of self-supervised representation learning within the language and 2D vision communities. However, such advancements have not been fully migrated to the field of 3D point cloud learning. Different from existing pre-training paradigms designed for deep point cloud feature extractors that fall into the scope of generative modeling or contrastive learning, this paper proposes a translative pre-training framework, namely PointVST, driven by a novel self-supervised pretext task of cross-modal translation from 3D point clouds to their corresponding diverse forms of 2D rendered images. More specifically, we begin with deducing view-conditioned point-wise embeddings through the insertion of the viewpoint indicator, and then adaptively aggregate a view-specific global codeword, which can be further fed into subsequent 2D convolutional translation heads for image generation. Extensive experimental evaluations on various downstream task scenarios demonstrate that our PointVST shows consistent and prominent performance superiority over current state-of-the-art approaches as well as satisfactory domain transfer capability. Our code will be publicly available at https://github.com/keeganhk/PointVST.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
338,540
2401.05928
Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback
An emotional support conversation system aims to alleviate users' emotional distress and assist them in addressing their challenges. To generate supportive responses, it is critical to consider multiple factors such as empathy, support strategies, and response coherence, as established in prior methods. Nonetheless, previous models occasionally generate unhelpful responses, which intend to provide support but display counterproductive effects. According to psychology and communication theories, poor performance in just one contributing factor might cause a response to be unhelpful. From the model training perspective, since these models have not been exposed to unhelpful responses during their training phase, they are unable to distinguish if the tokens they generate might result in unhelpful responses during inference. To address this issue, we introduce a novel model-agnostic framework named mitigating unhelpfulness with multifaceted AI feedback for emotional support (Muffin). Specifically, Muffin employs a multifaceted AI feedback module to assess the helpfulness of responses generated by a specific model with consideration of multiple factors. Using contrastive learning, it then reduces the likelihood of the model generating unhelpful responses compared to the helpful ones. Experimental results demonstrate that Muffin effectively mitigates the generation of unhelpful responses while slightly increasing response fluency and relevance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
420,963
1407.5396
Symblicit algorithms for optimal strategy synthesis in monotonic Markov decision processes
When treating Markov decision processes (MDPs) with large state spaces, using explicit representations quickly becomes infeasible. Lately, Wimmer et al. have proposed a so-called symblicit algorithm for the synthesis of optimal strategies in MDPs, in the quantitative setting of expected mean-payoff. This algorithm, based on the strategy iteration algorithm of Howard and Veinott, efficiently combines symbolic and explicit data structures, and uses binary decision diagrams as symbolic representation. The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs. We design efficient pseudo-antichain based symblicit algorithms (with open source implementations) for two quantitative settings: the expected mean-payoff and the stochastic shortest path. For two practical applications coming from automated planning and LTL synthesis, we report promising experimental results w.r.t. both the run time and the memory consumption.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
34,773
1904.07974
Discovering Episodes with Compact Minimal Windows
Discovering the most interesting patterns is the key problem in the field of pattern mining. While ranking or selecting patterns is well-studied for itemsets, it is surprisingly under-researched for other, more complex, pattern types. In this paper we propose a new quality measure for episodes. An episode is essentially a set of events with possible restrictions on the order of events. We say that an episode is significant if its occurrence is abnormally compact, that is, only a few gap events occur between the actual episode events, when compared to the expected length according to the independence model. We can apply this measure as a post-pruning step by first discovering frequent episodes and then ranking them according to this measure. In order to compute the score we will need to compute the mean and the variance according to the independence model. As a main technical contribution we introduce a technique that allows us to compute these values. Such a task is surprisingly complex and in order to solve it we develop intricate finite state machines that allow us to compute the needed statistics. We also show that asymptotically our score can be interpreted as a P-value. In our experiments we demonstrate that despite its intricacy our ranking is fast: we can rank tens of thousands of episodes in seconds. Our experiments with text data demonstrate that our measure ranks interpretable episodes high.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
127,927
1901.04292
Risk-Aware Resource Allocation for URLLC: Challenges and Strategies with Machine Learning
Supporting ultra-reliable low-latency communications (URLLC) is a major challenge of 5G wireless networks. Stringent delay and reliability requirements need to be satisfied for both scheduled and non-scheduled URLLC traffic to enable a diverse set of 5G applications. Although physical and media access control layer solutions have been investigated to satisfy only scheduled URLLC traffic, there is a lack of study on enabling transmission of non-scheduled URLLC traffic, especially in coexistence with the scheduled URLLC traffic. Machine learning (ML) is an important enabler for such a coexistence scenario due to its ability to exploit spatial/temporal correlation in user behaviors and use of radio resources. Hence, in this paper, we first study the coexistence design challenges, especially the radio resource management (RRM) problem, and propose a distributed risk-aware ML solution for RRM. The proposed solution benefits from hybrid orthogonal/non-orthogonal radio resource slicing, and proactively regulates the spectrum needed for satisfying the delay/reliability requirement of each URLLC traffic type. A case study is introduced to investigate the potential of the proposed RRM in serving coexisting URLLC traffic types. The results further provide insights on the benefits of leveraging intelligent RRM, e.g., a 75% increase in data rate with respect to the conservative design approach for the scheduled traffic is achieved, while the 99.99% reliability of both scheduled and non-scheduled traffic types is satisfied.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
118,581
2409.10864
Distributed Optimization for Traffic Light Control and Connected Automated Vehicle Coordination in Mixed-Traffic Intersections
In this paper, we consider the problem of coordinating traffic light systems and connected automated vehicles (CAVs) in mixed-traffic intersections. We aim to develop an optimization-based control framework that leverages both the coordination capabilities of CAVs at higher penetration rates and intelligent traffic management using traffic lights at lower penetration rates. Since the resulting optimization problem is a multi-agent mixed-integer quadratic program, we propose a penalization-enhanced maximum block improvement algorithm to solve the problem in a distributed manner. The proposed algorithm, under certain mild conditions, yields a feasible person-by-person optimal solution of the centralized problem. The performance of the control framework and the distributed algorithm is validated through simulations across various penetration rates and traffic volumes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
488,906
2204.06835
GloCAL: Glocalized Curriculum-Aided Learning of Multiple Tasks with Application to Robotic Grasping
Applying deep reinforcement learning in the domain of robotics is challenging due to the need for large amounts of data and for ensuring safety during learning. Curriculum learning has shown good performance in terms of sample-efficient deep learning. In this paper, we propose an algorithm (named GloCAL) that creates a curriculum for an agent to learn multiple discrete tasks, based on clustering tasks according to their evaluation scores. From the highest-performing cluster, a global task representative of the cluster is identified for learning a global policy that transfers to subsequently formed new clusters, while the remaining tasks in the cluster are learned as local policies. The efficacy and efficiency of our GloCAL algorithm are compared with other approaches in the domain of grasp learning for 49 objects with varied object complexity and grasp difficulty from the EGAD! dataset. The results show that GloCAL is able to learn to grasp 100% of the objects, whereas other approaches achieve at most 86% despite being given 1.5 times longer training time.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
291,464
2302.01364
Cycles in Impulsive Goodwin's Oscillators of Arbitrary Order
Existence of periodic solutions, i.e., cycles, in the Impulsive Goodwin's Oscillator (IGO) with a continuous part of arbitrary order m is considered. The original IGO with a third-order continuous part is a hybrid model that portrays a chemical or biochemical system composed of three substances represented by their concentrations and arranged in a cascade. The first substance in the chain is introduced via an impulsive feedback where both the impulse frequency and weights are modulated by the measured output of the continuous part. It is shown that, under the standard assumptions on the IGO, a positive periodic solution with one firing of the pulse-modulated feedback in the least period also exists in models with any m >= 1. Furthermore, the uniqueness of this 1-cycle is proved for the IGO with m <= 10 whereas, for m > 10, the uniqueness can still be guaranteed under mild assumptions on the frequency modulation function.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
343,567
1308.5673
Nonlocal linear compression of two-photon time interval distribution
We propose a linear compression technique for the time interval distribution of photon pairs. Using a partially frequency-entangled two-photon (TP) state with appropriate mean time width, the compressed TP time interval width can be kept at the minimum limit set by the phase modulation, and is independent of its initial width. As a result of this effect, an ultra-narrow TP time interval distribution can be achieved with relatively slow phase modulators, reducing the detrimental phase instability arising from the phase modulation process.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
26,654
1810.12195
Efficient convex optimization for optimal PMU placement in large distribution grids
The small amount of measurements in distribution grids makes their monitoring more difficult. Topological observability may not be possible, and thus, pseudo-measurements are needed to perform state estimation, which is required to control elements such as distributed generation or transformers at distribution grids. Therefore, we consider the problem of optimal sensor placement to improve the state estimation accuracy in large-scale, 3-phase coupled, unbalanced distribution grids. This is an NP-hard optimization problem whose optimal solution is impractical to obtain for large networks. Therefore, we develop a computationally efficient convex optimization algorithm to compute a lower bound on the possible value of the optimal solution, and thus check the gap between the bound and heuristic solutions. We test the method on a large test feeder, the standard IEEE 8500-node, to show the effectiveness of the approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
111,715
2204.01134
Control Co-design of a Hydrokinetic Turbine with Open-loop Optimal Control
This paper introduces a control co-design (CCD) framework to simultaneously explore the physical parameters and control spaces for a hydro-kinetic turbine (HKT) rotor optimization. The optimization formulation incorporates a coupled dynamic-hydrodynamic model to maximize the rotor power efficiency for various time-variant flow profiles. The open-loop optimal control is applied for maximum power tracking, and the blade element momentum theory (BEMT) is used to model the hydrodynamics. Case studies with different control constraints are investigated for CCD. Sensitivity analyses were conducted with respect to different flow profiles and initial geometries. Comparisons are made between CCD and the sequential process, with physical design followed by a control design process under the same conditions. The results demonstrate the benefits of CCD and reveal that, with control constraints, CCD leads to increased energy production compared to the design obtained from the sequential design process.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
289,508
0809.0158
Network Tomography Based on Additive Metrics
Inference of the network structure (e.g., routing topology) and dynamics (e.g., link performance) is an essential component in many network design and management tasks. In this paper we propose a new, general framework for analyzing and designing routing topology and link performance inference algorithms using ideas and tools from phylogenetic inference in evolutionary biology. The framework is applicable to a variety of measurement techniques. Based on the framework we introduce and develop several polynomial-time distance-based inference algorithms with provable performance. We provide sufficient conditions for the correctness of the algorithms. We show that the algorithms are consistent (return correct topology and link performance with an increasing sample size) and robust (can tolerate a certain level of measurement errors). In addition, we establish certain optimality properties of the algorithms (i.e., they achieve the optimal $l_\infty$-radius) and demonstrate their effectiveness via model simulation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
2,257
2206.01465
PAC Statistical Model Checking of Mean Payoff in Discrete- and Continuous-Time MDP
Markov decision processes (MDP) and continuous-time MDP (CTMDP) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first algorithm to compute mean payoff probably approximately correctly in unknown MDP; further, we extend it to unknown CTMDP. We do not require any knowledge of the state space, only a lower bound on the minimum transition probability, which has been advocated in literature. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
300,486
2406.18020
MolFusion: Multimodal Fusion Learning for Molecular Representations via Multi-granularity Views
Artificial Intelligence predicts drug properties by encoding drug molecules, aiding in the rapid screening of candidates. Different molecular representations, such as SMILES and molecule graphs, contain complementary information for molecular encoding. Thus, exploiting complementary information from different molecular representations is one of the research priorities in molecular encoding. Most existing methods for combining molecular multi-modalities only use molecular-level information, making it hard to encode intra-molecular alignment information between different modalities. To address this issue, we propose a multi-granularity fusion method named MolFusion. The proposed MolFusion consists of two key components: (1) MolSim, a molecular-level encoding component that achieves molecular-level alignment between different molecular representations; and (2) AtomAlign, an atomic-level encoding component that achieves atomic-level alignment between different molecular representations. Experimental results show that MolFusion effectively utilizes complementary multimodal information, leading to significant improvements in performance across various classification and regression tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
467,835
1903.00812
3D Hand Shape and Pose Estimation from a Single RGB Image
This work addresses a novel and challenging problem of estimating the full 3D hand shape and pose from a single RGB image. Most current methods in 3D hand analysis from monocular RGB images only focus on estimating the 3D locations of hand keypoints, which cannot fully express the 3D shape of hand. In contrast, we propose a Graph Convolutional Neural Network (Graph CNN) based method to reconstruct a full 3D mesh of hand surface that contains richer information of both 3D hand shape and pose. To train networks with full supervision, we create a large-scale synthetic dataset containing both ground truth 3D meshes and 3D poses. When fine-tuning the networks on real-world datasets without 3D ground truth, we propose a weakly-supervised approach by leveraging the depth map as a weak supervision in training. Through extensive evaluations on our proposed new datasets and two public datasets, we show that our proposed method can produce accurate and reasonable 3D hand mesh, and can achieve superior 3D hand pose estimation accuracy when compared with state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
123,106
2105.07553
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing
Due to its powerful capability of representation learning and high-efficiency computation, deep hashing has made significant progress in large-scale image retrieval. However, deep hashing networks are vulnerable to adversarial examples, a practical security problem that is seldom studied in the hashing-based retrieval field. In this paper, we propose a novel prototype-supervised adversarial network (ProS-GAN), which formulates a flexible generative architecture for efficient and effective targeted hashing attack. To the best of our knowledge, this is the first generation-based method to attack deep hashing networks. Generally, our proposed framework consists of three parts, i.e., a PrototypeNet, a generator, and a discriminator. Specifically, the designed PrototypeNet embeds the target label into the semantic representation and learns the prototype code as the category-level representative of the target label. Moreover, the semantic representation and the original image are jointly fed into the generator for a flexible targeted attack. Particularly, the prototype code is adopted to supervise the generator to construct the targeted adversarial example by minimizing the Hamming distance between the hash code of the adversarial example and the prototype code. Furthermore, the generator is trained against the discriminator to simultaneously encourage the adversarial examples to be visually realistic and the semantic representation to be informative. Extensive experiments verify that the proposed framework can efficiently produce adversarial examples with better targeted attack performance and transferability over state-of-the-art targeted attack methods of deep hashing. The code is available at https://github.com/xunguangwang/ProS-GAN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
235,469
2408.07989
IIU: Independent Inference Units for Knowledge-based Visual Question Answering
Knowledge-based visual question answering requires external knowledge beyond visible content to answer the question correctly. One limitation of existing methods is that they focus more on modeling the inter-modal and intra-modal correlations, which entangles complex multimodal clues by implicit embeddings and lacks interpretability and generalization ability. The key challenge to solve the above problem is to separate the information and process it separately at the functional level. By reusing each processing unit, the generalization ability of the model to deal with different data can be increased. In this paper, we propose Independent Inference Units (IIU) for fine-grained multi-modal reasoning to decompose intra-modal information by the functionally independent units. Specifically, IIU processes each semantic-specific intra-modal clue by an independent inference unit, which also collects complementary information by communication from different units. To further reduce the impact of redundant information, we propose a memory update module to maintain semantic-relevant memory along with the reasoning process gradually. In comparison with existing non-pretrained multi-modal reasoning models on standard datasets, our model achieves a new state-of-the-art, enhancing performance by 3%, and surpassing basic pretrained multi-modal models. The experimental results show that our IIU model is effective in disentangling intra-modal clues as well as reasoning units to provide explainable reasoning evidence. Our code is available at https://github.com/Lilidamowang/IIU.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
480,808
2312.17642
Research on the Laws of Multimodal Perception and Cognition from a Cross-cultural Perspective -- Taking Overseas Chinese Gardens as an Example
This study aims to explore the complex relationship between perceptual and cognitive interactions in multimodal data analysis, with a specific emphasis on spatial experience design in overseas Chinese gardens. It is found that evaluation content and images on social media can reflect individuals' concerns and sentiment responses, providing a rich database for cognitive research that contains both sentimental and image-based cognitive information. Leveraging deep learning techniques, we analyze textual and visual data from social media, thereby unveiling the relationship between people's perceptions and sentiment cognition within the context of overseas Chinese gardens. In addition, our study introduces a multi-agent system (MAS) alongside AI agents. Each agent explores the laws of aesthetic cognition through chat scene simulation combined with web search. This study goes beyond the traditional approach of translating perceptions into sentiment scores, allowing for an extension of the research methodology in terms of directly analyzing texts and digging deeper into opinion data. This study provides new perspectives for understanding aesthetic experience and its impact on architecture and landscape design across diverse cultural contexts, which is an essential contribution to the field of cultural communication and aesthetic understanding.
false
false
false
true
true
false
false
false
true
false
false
true
false
false
false
false
false
false
418,815
2407.12501
EmoFace: Audio-driven Emotional 3D Face Animation
Audio-driven emotional 3D face animation aims to generate emotionally expressive talking heads with synchronized lip movements. However, previous research has often overlooked the influence of diverse emotions on facial expressions or proved unsuitable for driving MetaHuman models. In response to this deficiency, we introduce EmoFace, a novel audio-driven methodology for creating facial animations with vivid emotional dynamics. Our approach can generate facial expressions with multiple emotions, and has the ability to generate random yet natural blinks and eye movements, while maintaining accurate lip synchronization. We propose independent speech encoders and emotion encoders to learn the relationship between audio, emotion and corresponding facial controller rigs, and finally map into the sequence of controller values. Additionally, we introduce two post-processing techniques dedicated to enhancing the authenticity of the animation, particularly in blinks and eye movements. Furthermore, recognizing the scarcity of emotional audio-visual data suitable for MetaHuman model manipulation, we contribute an emotional audio-visual dataset and derive control parameters for each frame. Our proposed methodology can be applied in producing dialogue animations of non-playable characters (NPCs) in video games, and driving avatars in virtual reality environments. Our further quantitative and qualitative experiments, as well as a user study comparing with existing methods, show that our approach demonstrates superior results in driving 3D facial models. The code and sample data are available at https://github.com/SJTU-Lucy/EmoFace.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
473,958
1604.03901
Single-Image Depth Perception in the Wild
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
54,575
1911.04787
Effects of data ambiguity and cognitive biases on the interpretability of machine learning models in humanitarian decision making
The effectiveness of machine learning algorithms depends on the quality and amount of data and the operationalization and interpretation by the human analyst. In humanitarian response, data is often lacking or overburdening, thus ambiguous, and the time-scarce, volatile, insecure environments of humanitarian activities are likely to inflict cognitive biases. This paper proposes to research the effects of data ambiguity and cognitive biases on the interpretability of machine learning algorithms in humanitarian decision making.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
153,077
2207.07117
A Novel Implementation of Machine Learning for the Efficient, Explainable Diagnosis of COVID-19 from Chest CT
In a worldwide health crisis as exigent as COVID-19, there has become a pressing need for rapid, reliable diagnostics. Currently, popular testing methods such as reverse transcription polymerase chain reaction (RT-PCR) can have high false negative rates. Consequently, COVID-19 patients are not accurately identified nor treated quickly enough to prevent transmission of the virus. However, the recent rise of medical CT data has presented promising avenues, since CT manifestations contain key characteristics indicative of COVID-19. This study aimed to take a novel approach in the machine learning-based detection of COVID-19 from chest CT scans. First, the dataset utilized in this study was derived from three major sources, comprising a total of 17,698 chest CT slices across 923 patient cases. Image preprocessing algorithms were then developed to reduce noise by excluding irrelevant features. Transfer learning was also implemented with the EfficientNetB7 pre-trained model to provide a backbone architecture and save computational resources. Lastly, several explainability techniques were leveraged to qualitatively validate model performance by localizing infected regions and highlighting fine-grained pixel details. The proposed model attained an overall accuracy of 0.927 and a sensitivity of 0.958. Explainability measures showed that the model correctly distinguished between relevant, critical features pertaining to COVID-19 chest CT images and normal controls. Deep learning frameworks provide efficient, human-interpretable COVID-19 diagnostics that could complement radiologist decisions or serve as an alternative screening tool. Future endeavors may provide insight into infection severity, patient risk stratification, and prognosis.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
308,112
2308.13340
TriGait: Aligning and Fusing Skeleton and Silhouette Gait Data via a Tri-Branch Network
Gait recognition is a promising biometric technology for identification due to its non-invasiveness and long-distance applicability. However, external variations such as clothing changes and viewpoint differences pose significant challenges to gait recognition. Silhouette-based methods preserve body shape but neglect internal structure information, while skeleton-based methods preserve structure information but omit appearance. To fully exploit the complementary nature of the two modalities, a novel triple-branch gait recognition framework, TriGait, is proposed in this paper. It effectively integrates features from the skeleton and silhouette data in a hybrid fusion manner, including a two-stream network to extract static and motion features from appearance, a simple yet effective module named JSA-TC to capture dependencies between all joints, and a third branch for cross-modal learning by aligning and fusing low-level features of the two modalities. Experimental results demonstrate the superiority and effectiveness of TriGait for gait recognition. The proposed method achieves a mean rank-1 accuracy of 96.0% over all conditions on the CASIA-B dataset and 94.3% accuracy for CL, significantly outperforming all the state-of-the-art methods. The source code will be available at https://github.com/feng-xueling/TriGait/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
387,881
2405.14249
Identifying Breakdowns in Conversational Recommender Systems using User Simulation
We present a methodology to systematically test conversational recommender systems with regard to conversational breakdowns. It involves examining conversations generated between the system and simulated users for a set of pre-defined breakdown types, extracting responsible conversational paths, and characterizing them in terms of the underlying dialogue intents. User simulation offers the advantages of simplicity, cost-effectiveness, and time efficiency for obtaining conversations where potential breakdowns can be identified. The proposed methodology can be used as a diagnostic tool as well as a development tool to improve conversational recommender systems. We apply our methodology in a case study with an existing conversational recommender system and user simulator, demonstrating that with just a few iterations, we can make the system more robust to conversational breakdowns.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
456,331
2111.12495
Altering Backward Pass Gradients improves Convergence
In standard neural network training, the gradients in the backward pass are determined by the forward pass. As a result, the two stages are coupled. This is how most neural networks are trained currently. However, gradient modification in the backward pass has seldom been studied in the literature. In this paper we explore decoupled training, where we alter the gradients in the backward pass. We propose a simple yet powerful method called PowerGrad Transform, that alters the gradients before the weight update in the backward pass and significantly enhances the predictive performance of the neural network. PowerGrad Transform trains the network to arrive at a better optimum at convergence. It is computationally extremely efficient, virtually adding no additional cost to either memory or compute, but results in improved final accuracies on both the training and test sets. PowerGrad Transform is easy to integrate into existing training routines, requiring just a few lines of code. PowerGrad Transform accelerates training and makes it possible for the network to better fit the training data. With decoupled training, PowerGrad Transform improves baseline accuracies for ResNet-50 by 0.73%, for SE-ResNet-50 by 0.66% and by more than 1.0% for the non-normalized ResNet-18 network on the ImageNet classification task.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
267,986
2305.15332
Inverse optimal control for averaged cost per stage linear quadratic regulators
Inverse Optimal Control (IOC) is a powerful framework for learning a behaviour from observations of experts. The framework aims to identify the underlying cost function that the observed optimal trajectories (the experts' behaviour) are optimal with respect to. In this work, we considered the case of identifying the cost and the feedback law from observed trajectories generated by an "average cost per stage" linear quadratic regulator. We show that identifying the cost is in general an ill-posed problem, and give necessary and sufficient conditions for non-identifiability. Moreover, despite the fact that the problem is in general ill-posed, we construct an estimator for the cost function and show that the control gain corresponding to this estimator is a statistically consistent estimator for the true underlying control gain. In fact, the constructed estimator is based on convex optimization, and hence the proved statistical consistency is also observed in practice. We illustrate the latter by applying the method on a simulation example from rehabilitation robotics.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
367,570
2202.01046
Towards High-Payload Admittance Control for Manual Guidance with Environmental Contact
Force control enables hands-on teaching and physical collaboration, with the potential to improve ergonomics and flexibility of automation. Established methods for the design of compliance, impedance control, and collision response can achieve free-space stability and acceptable peak contact force on lightweight, lower payload robots. Scaling collaboration to higher payloads can allow new applications, but introduces challenges due to the more significant payload dynamics and the use of higher-payload industrial robots. To achieve high-payload manual guidance with contact, this paper proposes and validates new mechatronic design methods: standard admittance control is extended with damping feedback, compliant structures are integrated to the environment, and a contact response method which allows continuous admittance control is proposed. These methods are compared with respect to free-space stability, contact stability, and peak contact force. The resulting methods are then applied to realize two contact-rich tasks on a 16 kg payload (peg in hole and slot assembly) and free-space co-manipulation of a 50 kg payload.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
278,355
2304.00698
A Post-Training Framework for Improving Heterogeneous Graph Neural Networks
Recent years have witnessed the success of heterogeneous graph neural networks (HGNNs) in modeling heterogeneous information networks (HINs). In this paper, we focus on the benchmark task of HGNNs, i.e., node classification, and empirically find that typical HGNNs are not good at predicting the label of a test node whose receptive field (1) has few training nodes from the same category or (2) has multiple training nodes from different categories. A possible explanation is that their message passing mechanisms may involve noises from different categories, and cannot fully explore task-specific knowledge such as the label dependency between distant nodes. Therefore, instead of introducing a new HGNN model, we propose a general post-training framework that can be applied on any pretrained HGNNs to further inject task-specific knowledge and enhance their prediction performance. Specifically, we first design an auxiliary system that estimates node labels based on (1) a global inference module of multi-channel label propagation and (2) a local inference module of network schema-aware prediction. The mechanism of our auxiliary system can complement the pretrained HGNNs by providing extra task-specific knowledge. During the post-training process, we will strengthen both system-level and module-level consistencies to encourage the cooperation between a pretrained HGNN and our auxiliary system. In this way, both systems can learn from each other for better performance. In experiments, we apply our framework to four typical HGNNs. Experimental results on three benchmark datasets show that compared with pretrained HGNNs, our post-training framework can enhance Micro-F1 by a relative improvement of 3.9% on average. Code, data and appendix are available at https://github.com/GXM1141/HGPF.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
355,779
2307.16788
Congestion Analysis for the DARPA OFFSET CCAST Swarm
The Defense Advanced Research Projects Agency (DARPA) OFFensive Swarm-Enabled Tactics program's goal of launching 250 unmanned aerial and ground vehicles from a limited sized launch zone was a daunting challenge. The swarm's aerial vehicles were primarily multirotor platforms, which can efficiently be launched en masse. Each field exercise expected the deployment of an even larger swarm. While the launch zone's spatial area increased with each field exercise, the relative space for each vehicle was not necessarily increased, considering the increasing size of the swarm and the vehicles' associated GPS error; however, safe mission deployment and execution were expected. At the same time, achieving the mission goals required maximizing efficiency of the swarm's performance by reducing congestion that blocked vehicles from completing tactic assignments. Congestion analysis conducted before the final field exercise focused on adjusting various constraints to optimize the swarm's deployment without reducing safety. During the field exercise, data was collected that permitted analyzing the number and durations of individual vehicle blockages' impact on the resulting congestion. After the field exercise, additional analyses used the mission plan to validate the use of simulation for analyzing congestion.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
382,736
2102.10283
Imitation Learning for Variable Speed Contact Motion for Operation up to Control Bandwidth
The generation of robot motions in the real world is difficult using conventional controllers alone and requires highly intelligent processing. In this regard, learning-based motion generation is currently being investigated. However, the main focus has been on improving adaptability to spatially varying environments, and variation of the operating speed has not been investigated in detail. In contact-rich tasks, it is especially important to be able to adjust the operating speed because a nonlinear relationship occurs between the operating speed and force (e.g., inertial and frictional forces), and it affects the results of the tasks. Therefore, in this study, we propose a method for generating variable operating speeds while adapting to spatial perturbations in the environment. The proposed method can be adapted to nonlinearities by utilizing a small amount of motion data. We experimentally evaluated the proposed method by erasing a line using an eraser fixed to the tip of the robot as an example of a contact-rich task. Furthermore, the proposed method enables a robot to perform a task faster than a human operator and is capable of operating close to the control bandwidth.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
221,035
2411.17598
Agentic AI for Improving Precision in Identifying Contributions to Sustainable Development Goals
As research institutions increasingly commit to supporting the United Nations' Sustainable Development Goals (SDGs), there is a pressing need to accurately assess their research output against these goals. Current approaches, primarily reliant on keyword-based Boolean search queries, conflate incidental keyword matches with genuine contributions, reducing retrieval precision and complicating benchmarking efforts. This study investigates the application of autoregressive Large Language Models (LLMs) as evaluation agents to identify relevant scholarly contributions to SDG targets in scholarly publications. Using a dataset of academic abstracts retrieved via SDG-specific keyword queries, we demonstrate that small, locally-hosted LLMs can differentiate semantically relevant contributions to SDG targets from documents retrieved due to incidental keyword matches, addressing the limitations of traditional methods. By leveraging the contextual understanding of LLMs, this approach provides a scalable framework for improving SDG-related research metrics and informing institutional reporting.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
true
511,511
1805.09957
Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions
Various 3D semantic attributes such as segmentation masks, geometric features, keypoints, and materials can be encoded as per-point probe functions on 3D geometries. Given a collection of related 3D shapes, we consider how to jointly analyze such probe functions over different shapes, and how to discover common latent structures using a neural network --- even in the absence of any correspondence information. Our network is trained on point cloud representations of shape geometry and associated semantic functions on that point cloud. These functions express a shared semantic understanding of the shapes but are not coordinated in any way. For example, in a segmentation task, the functions can be indicator functions of arbitrary sets of shape parts, with the particular combination involved not known to the network. Our network is able to produce a small dictionary of basis functions for each shape, a dictionary whose span includes the semantic functions provided for that shape. Even though our shapes have independent discretizations and no functional correspondences are provided, the network is able to generate latent bases, in a consistent order, that reflect the shared semantic structure among the shapes. We demonstrate the effectiveness of our technique in various segmentation and keypoint selection applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
98,538
2302.13212
Implicit Contact-Rich Manipulation Planning for a Manipulator with Insufficient Payload
This paper uses a mobile manipulator with a collaborative robotic arm to manipulate objects beyond the robot's maximum payload. It proposes a single-shot probabilistic roadmap-based method to plan and optimize manipulation motion with environment support. The method uses an expanded object mesh model to examine contact and randomly explores object motion while keeping contact and securing affordable grasping force. It generates robotic motion trajectories after obtaining object motion using an optimization-based algorithm. With the proposed method's help, we can plan contact-rich manipulation without particularly analyzing an object's contact modes and their transitions. The planner and optimizer determine them automatically. We conducted experiments and analyses using simulations and real-world executions to examine the method's performance. It can successfully find manipulation motion that met contact, force, and kinematic constraints, thus allowing a mobile manipulator to move heavy objects while leveraging supporting forces from environmental obstacles. The method does not need to explicitly analyze contact states and build contact transition graphs, thus providing a new view for robotic grasp-less manipulation, non-prehensile manipulation, manipulation with contact, etc.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
347,860
2203.07973
MOBDrone: a Drone Video Dataset for Man OverBoard Rescue
Modern Unmanned Aerial Vehicles (UAV) equipped with cameras can play an essential role in speeding up the identification and rescue of people who have fallen overboard, i.e., man overboard (MOB). To this end, Artificial Intelligence techniques can be leveraged for the automatic understanding of visual data acquired from drones. However, detecting people at sea in aerial imagery is challenging primarily due to the lack of specialized annotated datasets for training and testing detectors for this task. To fill this gap, we introduce and publicly release the MOBDrone benchmark, a collection of more than 125K drone-view images in a marine environment under several conditions, such as different altitudes, camera shooting angles, and illumination. We manually annotated more than 180K objects, of which about 113K man overboard, precisely localizing them with bounding boxes. Moreover, we conduct a thorough performance analysis of several state-of-the-art object detectors on the MOBDrone data, serving as baselines for further research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
285,622
1507.00448
Cross Modal Distillation for Supervision Transfer
In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at https://github.com/s-gupta/fast-rcnn/tree/distillation
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
44,760
2106.00219
Question-aware Transformer Models for Consumer Health Question Summarization
Searching for health information online is becoming customary for more and more consumers every day, which makes the need for efficient and reliable question answering systems more pressing. An important contributor to the success rates of these systems is their ability to fully understand the consumers' questions. However, these questions are frequently longer than needed and mention peripheral information that is not useful in finding relevant answers. Question summarization is one of the potential solutions to simplifying long and complex consumer questions before attempting to find an answer. In this paper, we study the task of abstractive summarization for real-world consumer health questions. We develop an abstractive question summarization model that leverages the semantic interpretation of a question via recognition of medical entities, which enables the generation of informative summaries. Towards this, we propose multiple Cloze tasks (i.e. the task of filling in missing words in a given context) to identify the key medical entities that enforce the model to have better coverage in question-focus recognition. Additionally, we infuse the decoder inputs with question-type information to generate question-type driven summaries. When evaluated on the MeQSum benchmark corpus, our framework outperformed the state-of-the-art method by 10.2 ROUGE-L points. We also conducted a manual evaluation to assess the correctness of the generated summaries.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
238,038
1404.4884
Causal Interfaces
The interaction of two binary variables, assumed to be empirical observations, has three degrees of freedom when expressed as a matrix of frequencies. Usually, the size of causal influence of one variable on the other is calculated as a single value, as increase in recovery rate for a medical treatment, for example. We examine what is lost in this simplification, and propose using two interface constants to represent positive and negative implications separately. Given certain assumptions about non-causal outcomes, the set of resulting epistemologies is a continuum. We derive a variety of particular measures and contrast them with the one-dimensional index.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
32,441
2106.05227
Understanding Privacy Attitudes and Concerns Towards Remote Communications During the COVID-19 Pandemic
Since December 2019, the COVID-19 pandemic has caused people around the world to exercise social distancing, which has led to an abrupt rise in the adoption of remote communications for working, socializing, and learning from home. As remote communications will outlast the pandemic, it is crucial to protect users' security and respect their privacy in this unprecedented setting, and that requires a thorough understanding of their behaviors, attitudes, and concerns toward various aspects of remote communications. To this end, we conducted an online study with 220 worldwide Prolific participants. We found that privacy and security are among the most frequently mentioned factors impacting participants' attitude and comfort level with conferencing tools and meeting locations. Open-ended responses revealed that most participants lacked autonomy when choosing conferencing tools or using microphone/webcam in their remote meetings, which in several cases contradicted their personal privacy and security preferences. Based on our findings, we distill several recommendations on how employers, educators, and tool developers can inform and empower users to make privacy-protective decisions when engaging in remote communications.
true
false
false
true
false
false
false
false
false
false
false
false
true
true
false
false
false
false
240,016
2309.07662
Guaranteed approximations of arbitrarily quantified reachability problems
We propose an approach to compute inner and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise for instance when specifying important problems in control such as robustness, motion planning or controllers comparison. We propose an interval-based method which allows for tractable but tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
391,858
2006.01414
Enhanced Universal Dependency Parsing with Second-Order Inference and Mixture of Training Data
This paper presents the system used in our submission to the \textit{IWPT 2020 Shared Task}. Our system is a graph-based parser with second-order inference. For the low-resource Tamil corpus, we specially mixed the training data of Tamil with other languages and significantly improved the performance of Tamil. Due to our misunderstanding of the submission requirements, we submitted graphs that are not connected, which makes our system only rank \textbf{6th} over 10 teams. However, after we fixed this problem, our system is 0.6 ELAS higher than the team that ranked \textbf{1st} in the official results.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
179,767
2412.14570
Characterising Simulation-Based Program Equilibria
In Tennenholtz's program equilibrium, players of a game submit programs to play on their behalf. Each program receives the other programs' source code and outputs an action. This can model interactions involving AI agents, mutually transparent institutions, or commitments. Tennenholtz (2004) proves a folk theorem for program games, but the equilibria constructed are very brittle. We therefore consider simulation-based programs -- i.e., programs that work by running opponents' programs. These are relatively robust (in particular, two programs that act the same are treated the same) and are more practical than proof-based approaches. Oesterheld's (2019) $\epsilon$Grounded$\pi$Bot is such an approach. Unfortunately, it is not generally applicable to games of three or more players, and only allows for a limited range of equilibria in two player games. In this paper, we propose a generalisation to Oesterheld's (2019) $\epsilon$Grounded$\pi$Bot. We prove a folk theorem for our programs in a setting with access to a shared source of randomness. We then characterise their equilibria in a setting without shared randomness. Both with and without shared randomness, we achieve a much wider range of equilibria than Oesterheld's (2019) $\epsilon$Grounded$\pi$Bot. Finally, we explore the limits of simulation-based program equilibrium, showing that the Tennenholtz folk theorem cannot be attained by simulation-based programs without access to shared randomness.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
true
518,767
2410.14170
Personalized Image Generation with Large Multimodal Models
Personalized content filtering, such as recommender systems, has become a critical infrastructure to alleviate information overload. However, these systems merely filter existing content and are constrained by its limited diversity, making it difficult to meet users' varied content needs. To address this limitation, personalized content generation has emerged as a promising direction with broad applications. Nevertheless, most existing research focuses on personalized text generation, with relatively little attention given to personalized image generation. The limited work in personalized image generation faces challenges in accurately capturing users' visual preferences and needs from noisy user-interacted images and complex multimodal instructions. Worse still, there is a lack of supervised data for training personalized image generation models. To overcome the challenges, we propose a Personalized Image Generation Framework named Pigeon, which adopts exceptional large multimodal models with three dedicated modules to capture users' visual preferences and needs from noisy user history and multimodal instructions. To alleviate the data scarcity, we introduce a two-stage preference alignment scheme, comprising masked preference reconstruction and pairwise preference alignment, to align Pigeon with the personalized image generation task. We apply Pigeon to personalized sticker and movie poster generation, where extensive quantitative results and human evaluation highlight its superiority over various generative baselines.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
true
499,909
1906.00183
Relay-Aided Channel Estimation for mmWave Systems with Imperfect Antenna Arrays
Compressed Sensing (CS) based channel estimation techniques have recently emerged as an effective way to acquire the channel of millimeter-wave (mmWave) systems with a small number of measurements. These techniques, however, are based on prior knowledge of transmit and receive array manifolds, and assume perfect antenna arrays at both the transmitter and the receiver. In the presence of antenna imperfections, the geometry and response of the arrays are modified. This distorts the CS measurement matrix and results in channel estimation errors. This paper studies the effects of both transmit and receive antenna imperfections on the mmWave channel estimate. A relay-aided solution which corrects for errors caused by faulty transmit arrays is then proposed. Simulation results demonstrate the effectiveness of the proposed solution and show that comparable channel estimates can be obtained when compared to systems with perfect antennas without the need for additional training overhead.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
133,296
1802.08920
Geometric Surface-Based Tracking Control of a Quadrotor UAV
New quadrotor UAV control algorithms are developed, based on nonlinear surfaces composed of tracking errors that evolve directly on the nonlinear configuration manifold, thus inherently including in the control design the nonlinear characteristics of the SE(3) configuration space. In particular, geometric surface-based controllers are developed and are shown, through rigorous stability proofs, to have desirable almost global closed loop properties. For the first time in regards to the geometric literature, a region of attraction independent of the position error is identified and its effects are analyzed. The effectiveness of the proposed "surface based" controllers is illustrated by simulations of aggressive maneuvers in the presence of disturbances and motor saturation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
91,218
2408.15990
A Control Theoretic Approach to Simultaneously Estimate Average Value of Time and Determine Dynamic Price for High-occupancy Toll Lanes
The dynamic pricing problem of a freeway corridor with high-occupancy toll (HOT) lanes was formulated and solved based on a point queue abstraction of the traffic system [Yin and Lou, 2009]. However, existing pricing strategies cannot guarantee that the closed-loop system converges to the optimal state, in which the HOT lanes' capacity is fully utilized but there is no queue on the HOT lanes, and a well-behaved estimation and control method is quite challenging and still elusive. This paper attempts to fill the gap by making three fundamental contributions: (i) to present a simpler formulation of the point queue model based on the new concept of residual capacity, (ii) to propose a simple feedback control theoretic approach to estimate the average value of time and calculate the dynamic price, and (iii) to analytically and numerically prove that the closed-loop system is stable and guaranteed to converge to the optimal state, in either Gaussian or exponential manners.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
484,146
1811.06560
High Granular Operator Spaces, and Less-Contaminated General Rough Mereologies
Granular operator spaces and variants had been introduced and used in theoretical investigations on the foundations of general rough sets by the present author over the last few years. In this research, higher order versions of these are presented uniformly as partial algebraic systems. They are also adapted for practical applications when the data is representable by data table-like structures according to a minimalist schema for avoiding contamination. Issues relating to valuations used in information systems or tables are also addressed. The concept of contamination introduced and studied by the present author across a number of her papers, concerns mixing up of information across semantic domains (or domains of discourse). Rough inclusion functions (\textsf{RIF}s), variants, and numeric functions often have a direct or indirect role in contaminating algorithms. Some solutions that seek to replace or avoid them have been proposed and investigated by the present author in some of her earlier papers. Because multiple kinds of solution are of interest to the contamination problem, granular generalizations of RIFs are proposed, and investigated. Interesting representation results are proved and a core algebraic strategy for generalizing Skowron-Polkowski style of rough mereology (though for a very different purpose) is formulated. A number of examples have been added to illustrate key parts of the proposal in higher order variants of granular operator spaces. Further algorithms grounded in mereological nearness, suited for decision-making in human-machine interaction contexts, are proposed by the present author. Applications of granular \textsf{RIF}s to partial/soft solutions of the inverse problem are also invented in this paper.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
113,550
2409.13649
RainbowSight: A Family of Generalizable, Curved, Camera-Based Tactile Sensors For Shape Reconstruction
Camera-based tactile sensors can provide high resolution positional and local geometry information for robotic manipulation. Curved and rounded fingers are often advantageous, but it can be difficult to derive illumination systems that work well within curved geometries. To address this issue, we introduce RainbowSight, a family of curved, compact, camera-based tactile sensors which use addressable RGB LEDs illuminated in a novel rainbow spectrum pattern. In addition to being able to scale the illumination scheme to different sensor sizes and shapes to fit on a variety of end effector configurations, the sensors can be easily manufactured and require minimal optical tuning to obtain high resolution depth reconstructions of an object deforming the sensor's soft elastomer surface. Additionally, we show the advantages of our new hardware design and improvements in calibration methods for accurate depth map generation when compared to alternative lighting methods commonly implemented in previous camera-based tactile sensors. With these advancements, we make the integration of tactile sensors more accessible to roboticists by allowing them the flexibility to easily customize, fabricate, and calibrate camera-based tactile sensors to best fit the needs of their robotic systems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
490,083
1906.11356
Personalized Student Stress Prediction with Deep Multitask Network
With the growing popularity of wearable devices, the ability to utilize physiological data collected from these devices to predict the wearer's mental state such as mood and stress suggests great clinical applications, yet such a task is extremely challenging. In this paper, we present a general platform for personalized predictive modeling of behavioural states like students' level of stress. Through the use of Auto-encoders and Multitask learning we extend the prediction of stress to both sequences of passive sensor data and high-level covariates. Our model outperforms the state-of-the-art in the prediction of stress level from mobile sensor data, obtaining a 45.6% improvement in F1 score on the StudentLife dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
136,643
2410.06083
Classification of simulation relations for symbolic control
Abstraction-based control design is a promising approach for ensuring safety-critical control of complex cyber-physical systems. A key aspect of this methodology is the relation between the original and abstract systems, which ensures that the abstract controller can be transformed into a valid controller for the original system through a concretization procedure. In this paper, we provide a comprehensive and systematic framework that characterizes various simulation relations, through their associated concretization procedures. We introduce the concept of augmented system, which universally enables a feedback refinement relation with the abstract system. This augmented system encapsulates the specific characteristics of each simulation relation within an interface, enabling a plug-and-play control architecture. Our results demonstrate that the existence of a particular simulation relation between the concrete and abstract systems is equivalent to the implementability of a specific control architecture, which depends on the considered simulation relation. This allows us to introduce new types of relations, and to establish the advantages and drawbacks of different relations, which we exhibit through detailed examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
496,043
2310.07525
ViT-A*: Legged Robot Path Planning using Vision Transformer A*
Legged robots, particularly quadrupeds, offer promising navigation capabilities, especially in scenarios requiring traversal over diverse terrains and obstacle avoidance. This paper addresses the challenge of enabling legged robots to navigate complex environments effectively through the integration of data-driven path-planning methods. We propose an approach that utilizes differentiable planners, allowing the learning of end-to-end global plans via a neural network for commanding quadruped robots. The approach leverages 2D maps and obstacle specifications as inputs to generate a global path. To enhance the functionality of the developed neural network-based path planner, we use Vision Transformers (ViT) for map pre-processing, to enable the effective handling of larger maps. Experimental evaluations on two real robotic quadrupeds (Boston Dynamics Spot and Unitree Go1) demonstrate the effectiveness and versatility of the proposed approach in generating reliable path plans.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
399,009
2006.15273
Data-Driven Topology Optimization with Multiclass Microstructures using Latent Variable Gaussian Process
The data-driven approach is emerging as a promising method for the topological design of multiscale structures with greater efficiency. However, existing data-driven methods mostly focus on a single class of microstructures without considering multiple classes to accommodate spatially varying desired properties. The key challenge is the lack of an inherent ordering or distance measure between different classes of microstructures in meeting a range of properties. To overcome this hurdle, we extend the newly developed latent-variable Gaussian process (LVGP) models to create multi-response LVGP (MR-LVGP) models for the microstructure libraries of metamaterials, taking both qualitative microstructure concepts and quantitative microstructure design variables as mixed-variable inputs. The MR-LVGP model embeds the mixed variables into a continuous design space based on their collective effects on the responses, providing substantial insights into the interplay between different geometrical classes and material parameters of microstructures. With this model, we can easily obtain a continuous and differentiable transition between different microstructure concepts that can render gradient information for multiscale topology optimization. We demonstrate its benefits through multiscale topology optimization with aperiodic microstructures. Design examples reveal that considering multiclass microstructures can lead to improved performance due to the consistent load-transfer paths for micro- and macro-structures.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,456
2010.11644
Theory-based residual neural networks: A synergy of discrete choice models and deep neural networks
Researchers often treat data-driven and theory-driven models as two disparate or even conflicting methods in travel behavior analysis. However, the two methods are highly complementary because data-driven methods are more predictive but less interpretable and robust, while theory-driven methods are more interpretable and robust but less predictive. Using their complementary nature, this study designs a theory-based residual neural network (TB-ResNet) framework, which synergizes discrete choice models (DCMs) and deep neural networks (DNNs) based on their shared utility interpretation. The TB-ResNet framework is simple, as it uses a ($\delta$, 1-$\delta$) weighting to take advantage of DCMs' simplicity and DNNs' richness, and to prevent underfitting from the DCMs and overfitting from the DNNs. This framework is also flexible: three instances of TB-ResNets are designed based on multinomial logit model (MNL-ResNets), prospect theory (PT-ResNets), and hyperbolic discounting (HD-ResNets), which are tested on three data sets. Compared to pure DCMs, the TB-ResNets provide greater prediction accuracy and reveal a richer set of behavioral mechanisms owing to the utility function augmented by the DNN component in the TB-ResNets. Compared to pure DNNs, the TB-ResNets can modestly improve prediction and significantly improve interpretation and robustness, because the DCM component in the TB-ResNets stabilizes the utility functions and input gradients. Overall, this study demonstrates that it is both feasible and desirable to synergize DCMs and DNNs by combining their utility specifications under a TB-ResNet framework. Although some limitations remain, this TB-ResNet framework is an important first step to create mutual benefits between DCMs and DNNs for travel behavior modeling, with joint improvement in prediction, interpretation, and robustness.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
202,346
2105.03296
VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM
In this paper, we propose a tightly-coupled, multi-modal simultaneous localization and mapping (SLAM) framework, integrating an extensive set of sensors: IMU, cameras, multiple lidars, and Ultra-wideband (UWB) range measurements, hence referred to as VIRAL (visual-inertial-ranging-lidar) SLAM. To achieve such a comprehensive sensor fusion system, one has to tackle several challenges such as data synchronization, multi-threading programming, bundle adjustment (BA), and conflicting coordinate frames between UWB and the onboard sensors, so as to ensure real-time localization and smooth updates in the state estimates. To this end, we propose a two-stage approach. In the first stage, lidar, camera, and IMU data on a local sliding window are processed in a core odometry thread. From this local graph, new key frames are evaluated for admission to a global map. Visual feature-based loop closure is also performed to supplement the global factor graph with loop constraints. When the global factor graph satisfies a condition on spatial diversity, the BA process will be triggered to update the coordinate transform between UWB and onboard SLAM systems. The system then seamlessly transitions to the second stage, where all sensors are tightly integrated in the odometry thread. The capability of our system is demonstrated via several experiments on high-fidelity graphical-physical simulation and public datasets.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
234,104
2305.16816
Songs Across Borders: Singable and Controllable Neural Lyric Translation
The development of general-domain neural machine translation (NMT) methods has advanced significantly in recent years, but the lack of naturalness and musical constraints in the outputs makes them unable to produce singable lyric translations. This paper bridges the singability quality gap by formalizing lyric translation into a constrained translation problem, converting theoretical guidance and practical techniques from translatology literature to prompt-driven NMT approaches, exploring better adaptation methods, and instantiating them to an English-Chinese lyric translation system. Our model achieves 99.85%, 99.00%, and 95.52% on length accuracy, rhyme accuracy, and word boundary recall. In our subjective evaluation, our model shows a 75% relative enhancement on overall quality, compared against naive fine-tuning (Code available at https://github.com/Sonata165/ControllableLyricTranslation).
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
368,266
2406.09661
Temporal Planning via Interval Logic Satisfiability for Autonomous Systems
Many automated planning methods and formulations rely on suitably designed abstractions or simplifications of the constrained dynamics associated with agents to attain computational scalability. We consider formulations of temporal planning where intervals are associated with both action and fluent atoms, and relations between these are given as sentences in Allen's Interval Logic. We propose a notion of planning graphs that can account for complex concurrency relations between actions and fluents as a Constraint Programming (CP) model. We test an implementation of our algorithm on a state-of-the-art framework for CP and compare it with PDDL 2.1 planners that capture plans requiring complex concurrent interactions between agents. We demonstrate that our algorithm outperforms existing PDDL 2.1 planners in the case studies. Still, scalability remains challenging when plans must comply with intricate concurrent interactions and the sequencing of actions.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
true
464,028
1709.05231
A Spectral Method for Activity Shaping in Continuous-Time Information Cascades
The Information Cascades Model captures dynamical properties of user activity in a social network. In this work, we develop a novel framework for activity shaping under the Continuous-Time Information Cascades Model, which allows the administrator to take local control actions by allocating targeted resources that can alter the spread of the process. Our framework employs the optimization of the spectral radius of the Hazard matrix, a quantity that has been shown to drive the maximum influence in a network, while enjoying a simple convex relaxation when used to minimize the influence of the cascade. In addition, use-cases such as quarantine and node immunization are discussed to highlight the generality of the proposed activity shaping framework. Finally, we present the NetShape influence minimization method, which compares favorably to baseline and state-of-the-art approaches through simulations on real social networks.
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
false
false
80,811
2003.00380
Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning
According to the World Health Organization (WHO), it is estimated that approximately 1.3 billion people live with some form of vision impairment globally, of whom 36 million are blind. Due to their disability, engaging this minority in society is a challenging problem. The recent rise of smart mobile phones provides a new solution by enabling blind users' convenient access to information and services for understanding the world. Users with vision impairment can adopt the screen reader embedded in the mobile operating systems to read the content of each screen within the app, and use gestures to interact with the phone. However, the prerequisite of using screen readers is that developers have to add natural-language labels to the image-based components when they are developing the app. Unfortunately, more than 77% of apps have issues of missing labels, according to our analysis of 10,408 Android apps. Most of these issues are caused by developers' lack of awareness and knowledge in considering the minority. And even if developers want to add labels to UI components, they may not come up with concise and clear descriptions, as most of them have no visual impairments themselves. To overcome these challenges, we develop a deep-learning based model, called LabelDroid, to automatically predict the labels of image-based buttons by learning from large-scale commercial apps in Google Play. The experimental results show that our model can make accurate predictions and the generated labels are of higher quality than those from real Android developers.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
166,277
1610.05710
Feasibility Based Large Margin Nearest Neighbor Metric Learning
Large margin nearest neighbor (LMNN) is a metric learner which optimizes the performance of the popular $k$NN classifier. However, its resulting metric relies on pre-selected target neighbors. In this paper, we address the feasibility of LMNN's optimization constraints regarding these target points, and introduce a mathematical measure to evaluate the size of the feasible region of the optimization problem. We enhance the optimization framework of LMNN by a weighting scheme which prefers data triplets which yield a larger feasible region. This increases the chances to obtain a good metric as the solution of LMNN's problem. We evaluate the performance of the resulting feasibility-based LMNN algorithm using synthetic and real datasets. The empirical results show an improved accuracy for different types of datasets in comparison to regular LMNN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
62,551
2010.04532
Measuring What Counts: The case of Rumour Stance Classification
Stance classification can be a powerful tool for understanding whether and which users believe in online rumours. The task aims to automatically predict the stance of replies towards a given rumour, namely support, deny, question, or comment. Numerous methods have been proposed and their performance compared in the RumourEval shared tasks in 2017 and 2019. Results demonstrated that this is a challenging problem since naturally occurring rumour stance data is highly imbalanced. This paper specifically questions the evaluation metrics used in these shared tasks. We re-evaluate the systems submitted to the two RumourEval tasks and show that the two widely adopted metrics -- accuracy and macro-F1 -- are not robust for the four-class imbalanced task of rumour stance classification, as they wrongly favour systems with highly skewed accuracy towards the majority class. To overcome this problem, we propose new evaluation metrics for rumour stance detection. These are not only robust to imbalanced data but also score higher systems that are capable of recognising the two most informative minority classes (support and deny).
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
199,775
2107.11637
Group-based Motion Prediction for Navigation in Crowded Environments
We focus on the problem of planning the motion of a robot in a dynamic multiagent environment such as a pedestrian scene. Enabling the robot to navigate safely and in a socially compliant fashion in such scenes requires a representation that accounts for the unfolding multiagent dynamics. Existing approaches to this problem tend to employ microscopic models of motion prediction that reason about the individual behavior of other agents. While such models may achieve high tracking accuracy in trajectory prediction benchmarks, they often lack an understanding of the group structures unfolding in crowded scenes. Inspired by the Gestalt theory from psychology, we build a Model Predictive Control framework (G-MPC) that leverages group-based prediction for robot motion planning. We conduct an extensive simulation study involving a series of challenging navigation tasks in scenes extracted from two real-world pedestrian datasets. We illustrate that G-MPC enables a robot to achieve statistically significantly higher safety and lower number of group intrusions than a series of baselines featuring individual pedestrian motion prediction models. Finally, we show that G-MPC can handle noisy lidar-scan estimates without significant performance losses.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
247,644
2210.00137
Going In Blind: Object Motion Classification using Distributed Tactile Sensing for Safe Reaching in Clutter
Robotic manipulators navigating cluttered shelves or cabinets may find it challenging to avoid contact with obstacles. Indeed, rearranging obstacles may be necessary to access a target. Rather than planning explicit motions that place obstacles into a desired pose, we suggest allowing incidental contacts to rearrange obstacles while monitoring contacts for safety. Bypassing object identification, we present a method for categorizing object motions from tactile data collected from incidental contacts with a capacitive tactile skin on an Allegro Hand. We formalize tactile cues associated with categories of object motion, demonstrating that they can determine with $>90$% accuracy whether an object is movable and whether a contact is causing the object to slide stably (safe contact) or tip (unsafe).
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
320,752
2403.18356
MonoHair: High-Fidelity Hair Modeling from a Monocular Video
Undoubtedly, high-fidelity 3D hair is crucial for achieving realism, artistic expression, and immersion in computer graphics. While existing 3D hair modeling methods have achieved impressive performance, the challenge of achieving high-quality hair reconstruction persists: they either require strict capture conditions, making practical applications difficult, or heavily rely on learned prior data, obscuring fine-grained details in images. To address these challenges, we propose MonoHair, a generic framework to achieve high-fidelity hair reconstruction from a monocular video, without specific requirements for environments. Our approach bifurcates the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference. The exterior is meticulously crafted using our Patch-based Multi-View Optimization (PMVO). This method strategically collects and integrates hair information from multiple views, independent of prior data, to produce a high-fidelity exterior 3D line map. This map not only captures intricate details but also facilitates the inference of the hair's inner structure. For the interior, we employ a data-driven, multi-view 3D hair reconstruction method. This method utilizes 2D structural renderings derived from the reconstructed exterior, mirroring the synthetic 2D inputs used during training. This alignment effectively bridges the domain gap between our training data and real-world data, thereby enhancing the accuracy and reliability of our interior structure inference. Lastly, we generate a strand model and resolve the directional ambiguity by our hair growth algorithm. Our experiments demonstrate that our method exhibits robustness across diverse hairstyles and achieves state-of-the-art performance. For more results, please refer to our project page https://keyuwu-cs.github.io/MonoHair/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
441,904
2407.09935
LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation
Image resampling is a basic technique that is widely employed in daily applications, such as camera photo editing. Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors. Still, these methods are not the perfect substitute for interpolation, due to the drawbacks in efficiency and versatility. In this work, we propose a novel method of Learning Resampling Function (termed LeRF), which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption of interpolation. Specifically, LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the hyper-parameters that determine the shapes of these resampling functions with a neural network. Based on the formulation of LeRF, we develop a family of models, including both efficiency-orientated and performance-orientated ones. To achieve interpolation-level efficiency, we adopt look-up tables (LUTs) to accelerate the inference of the learned neural network. Furthermore, we design a directional ensemble strategy and edge-sensitive indexing patterns to better capture local structures. On the other hand, to obtain DNN-level performance, we propose an extension of LeRF to enable it in cooperation with pre-trained upsampling models for cascaded resampling. Extensive experiments show that the efficiency-orientated version of LeRF runs as fast as interpolation, generalizes well to arbitrary transformations, and outperforms interpolation significantly, e.g., up to 3dB PSNR gain over Bicubic for x2 upsampling on Manga109. Besides, the performance-orientated version of LeRF reaches comparable performance with existing DNNs at much higher efficiency, e.g., less than 25% running time on a desktop GPU.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
472,779
2112.00516
Simultaneous Controller and Lyapunov Function Design for Constrained Nonlinear Systems
This paper presents a method to stabilize state and input constrained nonlinear systems using an offline optimization on variable triangulations of the set of admissible states. For control-affine systems, by choosing a continuous piecewise affine (CPA) controller structure, the non-convex optimization is formulated as iterative semi-definite programming (SDP), which can be solved efficiently using available software. The method has very general assumptions on the system's dynamics and constraints. Unlike similar existing methods, it avoids finding terminal invariant sets, solving non-convex optimizations, and does not rely on knowing a control Lyapunov function (CLF), as it finds a CPA Lyapunov function explicitly. The method enforces a desired upper-bound on the decay rate of the state norm and finds the exact region of attraction. Thus, it can be also viewed as a systematic approach for finding Lipschitz CLFs in state and input constrained control-affine systems. Using the CLF, a minimum norm controller is also formulated by quadratic programming for online application.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
269,160
2006.01671
A generalized linear joint trained framework for semi-supervised learning of sparse features
The elastic-net is among the most widely used types of regularization algorithms, commonly associated with the problem of supervised generalized linear model estimation via penalized maximum likelihood. Its nice properties originate from a combination of $\ell_1$ and $\ell_2$ norms, which endow this method with the ability to select variables taking into account the correlations between them. In the last few years, semi-supervised approaches, which use both labeled and unlabeled data, have become an important component of statistical research. Despite this interest, however, few studies have investigated semi-supervised elastic-net extensions. This paper introduces a novel solution for semi-supervised learning of sparse features in the context of generalized linear model estimation: the generalized semi-supervised elastic-net (s2net), which extends the supervised elastic-net method, with a general mathematical formulation that covers, but is not limited to, both regression and classification problems. We develop a flexible and fast implementation for s2net in R, and its advantages are illustrated using both real and synthetic data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
179,838
2011.01285
Exemplar Guided Active Learning
We consider the problem of wisely using a limited budget to label a small subset of a large unlabeled dataset. We are motivated by the NLP problem of word sense disambiguation. For any word, we have a set of candidate labels from a knowledge base, but the label set is not necessarily representative of what occurs in the data: there may exist labels in the knowledge base that very rarely occur in the corpus because the sense is rare in modern English; and conversely there may exist true labels that do not exist in our knowledge base. Our aim is to obtain a classifier that performs as well as possible on examples of each "common class" that occurs with frequency above a given threshold in the unlabeled set while annotating as few examples as possible from "rare classes" whose labels occur with less than this frequency. The challenge is that we are not informed which labels are common and which are rare, and the true label distribution may exhibit extreme skew. We describe an active learning approach that (1) explicitly searches for rare classes by leveraging the contextual embedding spaces provided by modern language models, and (2) incorporates a stopping rule that ignores classes once we prove that they occur below our target threshold with high probability. We prove that our algorithm only costs logarithmically more than a hypothetical approach that knows all true label frequencies and show experimentally that incorporating automated search can significantly reduce the number of samples needed to reach target accuracy levels.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
204,535
2003.08593
Curriculum DeepSDF
When learning to sketch, beginners start with simple and flexible shapes, and then gradually strive for more complex and accurate ones in the subsequent training sessions. In this paper, we design a "shape curriculum" for learning continuous Signed Distance Function (SDF) on shapes, namely Curriculum DeepSDF. Inspired by how humans learn, Curriculum DeepSDF organizes the learning task in ascending order of difficulty according to the following two criteria: surface accuracy and sample difficulty. The former considers stringency in supervising with ground truth, while the latter regards the weights of hard training samples near complex geometry and fine structure. More specifically, Curriculum DeepSDF learns to reconstruct coarse shapes at first, and then gradually increases the accuracy and focuses more on complex local details. Experimental results show that a carefully-designed curriculum leads to significantly better shape reconstructions with the same training data, training epochs and network architecture as DeepSDF. We believe that the application of shape curricula can benefit the training process of a wide variety of 3D shape representation learning methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
168,785
2311.02461
SPHEAR: Spherical Head Registration for Complete Statistical 3D Modeling
We present \emph{SPHEAR}, an accurate, differentiable parametric statistical 3D human head model, enabled by a novel 3D registration method based on spherical embeddings. We shift the paradigm away from the classical Non-Rigid Registration methods, which operate under various surface priors, increasing reconstruction fidelity and minimizing required human intervention. Additionally, SPHEAR is a \emph{complete} model that allows not only to sample diverse synthetic head shapes and facial expressions, but also gaze directions, high-resolution color textures, surface normal maps, and hair cuts represented in detail, as strands. SPHEAR can be used for automatic realistic visual data generation, semantic annotation, and general reconstruction tasks. Compared to state-of-the-art approaches, our components are fast and memory efficient, and experiments support the validity of our design choices and the accuracy of registration, reconstruction and generation techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
405,453
2409.06187
Bottleneck-based Encoder-decoder ARchitecture (BEAR) for Learning Unbiased Consumer-to-Consumer Image Representations
Unbiased representation learning is still an object of study under specific applications and contexts. Novel architectures are usually crafted to resolve particular problems using mixtures of fundamental pieces. This paper presents different image feature extraction mechanisms that work together with residual connections to encode perceptual image information in an autoencoder configuration. We use image data that aims to support a larger research agenda dealing with issues regarding criminal activity in consumer-to-consumer online platforms. Preliminary results suggest that the proposed architecture can learn rich spaces using our dataset and other image datasets, resolving the important challenges that are identified.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
487,030
2212.08888
Exploiting Rich Textual User-Product Context for Improving Sentiment Analysis
User and product information associated with a review is useful for sentiment polarity prediction. Typical approaches incorporating such information focus on modeling users and products as implicitly learned representation vectors. Most do not exploit the potential of historical reviews, or those that currently do require unnecessary modifications to model architecture or do not make full use of user/product associations. The contribution of this work is twofold: i) a method to explicitly employ historical reviews belonging to the same user/product to initialize representations, and ii) efficient incorporation of textual associations between users and products via a user-product cross-context module. Experiments on IMDb, Yelp-2013 and Yelp-2014 benchmarks show that our approach substantially outperforms previous state-of-the-art. Since we employ BERT-base as the encoder, we additionally provide experiments in which our approach performs well with Span-BERT and Longformer. Furthermore, experiments where the reviews of each user/product in the training data are downsampled demonstrate the effectiveness of our approach under a low-resource setting.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
336,907
2007.07841
Align then Summarize: Automatic Alignment Methods for Summarization Corpus Creation
Summarizing texts is not a straightforward task. Before even considering text summarization, one should determine what kind of summary is expected. How much should the information be compressed? Is it relevant to reformulate, or should the summary stick to the original phrasing? The state-of-the-art in automatic text summarization mostly revolves around news articles. We suggest that considering a wider variety of tasks would lead to an improvement in the field, in terms of generalization and robustness. We explore meeting summarization: generating reports from automatic transcriptions. Our work consists of segmenting and aligning transcriptions with respect to reports, to get a suitable dataset for neural summarization. Using a bootstrapping approach, we provide pre-alignments that are corrected by human annotators, making a validation set against which we evaluate automatic models. This consistently reduces annotators' efforts by providing iteratively better pre-alignments and maximizes the corpus size by using annotations from our automatic alignment models. Evaluation is conducted on \publicmeetings, a novel corpus of aligned public meetings. We report automatic alignment and summarization performances on this corpus and show that automatic alignment is relevant for data annotation, since it leads to a large improvement of almost +4 on all ROUGE scores on the summarization task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
187,450
2104.07932
Interval-censored Hawkes processes
Interval-censored data solely records the aggregated counts of events during specific time intervals - such as the number of patients admitted to the hospital or the volume of vehicles passing traffic loop detectors - and not the exact occurrence time of the events. It is currently not understood how to fit the Hawkes point processes to this kind of data. Its typical loss function (the point process log-likelihood) cannot be computed without exact event times. Furthermore, it does not have the independent increments property to use the Poisson likelihood. This work builds a novel point process, a set of tools, and approximations for fitting Hawkes processes within interval-censored data scenarios. First, we define the Mean Behavior Poisson process (MBPP), a novel Poisson process with a direct parameter correspondence to the popular self-exciting Hawkes process. We fit MBPP in the interval-censored setting using an interval-censored Poisson log-likelihood (IC-LL). We use the parameter equivalence to uncover the parameters of the associated Hawkes process. Second, we introduce two novel exogenous functions to distinguish the exogenous from the endogenous events. We propose the multi-impulse exogenous function - for when the exogenous events are observed as event time - and the latent homogeneous Poisson process exogenous function - for when the exogenous events are presented as interval-censored volumes. Third, we provide several approximation methods to estimate the intensity and compensator function of MBPP when no analytical solution exists. Fourth and finally, we connect the interval-censored loss of MBPP to a broader class of Bregman divergence-based functions. Using the connection, we show that the popularity estimation algorithm Hawkes Intensity Process (HIP) is a particular case of the MBPP. We verify our models through empirical testing on synthetic data and real-world data.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
230,604
2302.09319
MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies
The goal of the present paper is to develop and validate a questionnaire to assess AI literacy. In particular, the questionnaire should be deeply grounded in the existing literature on AI literacy, should be modular (i.e., include different facets that can be used independently of each other) so as to be flexibly applicable in professional life depending on the goals and use cases, and should meet psychological requirements, thus including further psychological competencies in addition to the typical facets of AIL. We derived 60 items to represent different facets of AI literacy according to Ng and colleagues' conceptualisation of AI literacy, and an additional 12 items to represent psychological competencies such as problem solving, learning, and emotion regulation with regard to AI. For this purpose, data were collected online from 300 German-speaking adults. The items were tested for factorial structure in confirmatory factor analyses. The result is a measurement instrument that measures AI literacy with the facets Use & apply AI, Understand AI, Detect AI, and AI Ethics, with the ability to Create AI as a separate construct, along with AI Self-efficacy in learning and problem solving and AI Self-management. This study contributes to the research on AI literacy by providing a measurement instrument relying on profound competency models. In addition, higher-order psychological competencies are included that are particularly important in the context of pervasive change through AI systems.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
346,371
2106.12398
End-to-End Lexically Constrained Machine Translation for Morphologically Rich Languages
Lexically constrained machine translation allows the user to manipulate the output sentence by enforcing the presence or absence of certain words and phrases. Although current approaches can enforce terms to appear in the translation, they often struggle to make the constraint word form agree with the rest of the generated output. Our manual analysis shows that 46% of the errors in the output of a baseline constrained model for English to Czech translation are related to agreement. We investigate mechanisms to allow neural machine translation to infer the correct word inflection given lemmatized constraints. In particular, we focus on methods based on training the model with constraints provided as part of the input sequence. Our experiments on the English-Czech language pair show that this approach improves the translation of constrained terms in both automatic and manual evaluation by reducing errors in agreement. Our approach thus eliminates inflection errors, without introducing new errors or decreasing the overall quality of the translation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
242,717
1508.04186
Distributed Deep Q-Learning
We propose a distributed deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is based on the deep Q-network, a convolutional neural network trained with a variant of Q-learning. Its input is raw pixels and its output is a value function estimating future rewards from taking an action given a system state. To distribute the deep Q-network training, we adapt the DistBelief software framework to the context of efficiently training reinforcement learning agents. As a result, the method is completely asynchronous and scales well with the number of machines. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to achieve reasonable success on a simple game with minimal parameter tuning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
true
46,101
1203.1338
Network Structure, Topology and Dynamics in Generalized Models of Synchronization
We explore the interplay of network structure, topology, and dynamic interactions between nodes using the paradigm of distributed synchronization in a network of coupled oscillators. As the network evolves to a global steady state, interconnected oscillators synchronize in stages, revealing the network's underlying community structure. Traditional models of synchronization assume that interactions between nodes are mediated by a conservative process, such as diffusion. However, social and biological processes are often non-conservative. We propose a new model of synchronization in a network of oscillators coupled via non-conservative processes. We study the dynamics of synchronization of synthetic and real-world networks and show that different synchronization models reveal different structures within the same network.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
14,743
1809.04227
Deep Co-investment Network Learning for Financial Assets
Most recent works model the market structure of the stock market as a correlation network of the stocks. They apply pre-defined patterns to extract correlation information from the time series of stocks. Without considering the influence of the evolving market structure on market index trends, these methods hardly obtain market structure models which are compatible with market principles. Advancements in deep learning have shown incredible modeling capacity on various finance-related tasks. However, the learned inner parameters, which capture the essence of the financial time series, are not further exploited with regard to their representation in the financial domain. In this work, we model the financial market structure as a deep co-investment network and propose a Deep Co-investment Network Learning (DeepCNL) method. DeepCNL automatically learns deep co-investment patterns between any pairwise stocks, where the rise-fall trends of the market index are used for distant supervision. The learned inner parameters of the trained DeepCNL, which encode the temporal dynamics of deep co-investment patterns, are used to build the co-investment network between the stocks as the investment structure of the corresponding market. We verify the effectiveness of DeepCNL on real-world stock data and compare it with existing methods on several financial tasks. The experimental results show that DeepCNL not only better reflects the stock market structure consistent with widely-acknowledged financial principles, but is also more capable of approximating the investment activities which lead to the stock performance reported in real news or research reports than other alternatives.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
107,507
1909.13046
Meta Learning with Differentiable Closed-form Solver for Fast Video Object Segmentation
This paper tackles the problem of video object segmentation. We are specifically concerned with the task of segmenting all pixels of a target object in all frames, given the annotation mask in the first frame. Even when such annotation is available, this remains a challenging problem because of the changing appearance and shape of the object over time. In this paper, we tackle this task by formulating it as a meta-learning problem, where the base learner grasps semantic scene understanding for a general type of object, and the meta learner quickly adapts to the appearance of the target object with a few examples. Our proposed meta-learning method uses a closed-form optimizer, the so-called "ridge regression", which has been shown to be conducive to fast and better training convergence. Moreover, we propose a mechanism, named "block splitting", to further speed up the training process as well as to reduce the number of learning parameters. In comparison with state-of-the-art methods, our proposed framework achieves a significant boost in processing speed, while having very competitive performance compared to the best performing methods on widely used datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
147,309