Dataset schema:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0 to 541k)
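As a quick illustration of this record layout, the boolean flag columns can be folded back into a per-paper category list. This is a hypothetical sketch, assuming the rows are loaded into a pandas DataFrame with exactly the column names listed above; the tiny stand-in frame below uses one row from the sample records that follow.

```python
# Hypothetical sketch: each row pairs an arXiv abstract with one boolean
# column per category; folding the flags back into a label list.
import pandas as pd

# Column order mirrors the schema block above.
CATEGORY_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_categories(row: pd.Series) -> list:
    """Return the category labels whose boolean flag is True for a row."""
    return [c for c in CATEGORY_COLS if row[c]]

# Tiny stand-in for the real dataset (one row from the sample below).
df = pd.DataFrame([{
    "id": "1707.06658",
    "title": "RAIL: Risk-Averse Imitation Learning",
    **{c: c in ("cs.AI", "cs.LG") for c in CATEGORY_COLS},
    "__index_level_0__": 77464,
}])

labels = df.apply(active_categories, axis=1)
```

Because several flags can be true at once (see the multi-label records below), this is a multi-label rather than single-class layout.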
1707.06658
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate GAIL-agents in terms of the expert's cost function and observe that the distribution of trajectory costs is often more heavy-tailed for GAIL-agents than for the expert on a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by GAIL-agents than by the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears to be a potent alternative to GAIL for improved reliability in risk-sensitive applications.
categories: cs.AI, cs.LG
__index_level_0__: 77,464
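The RAIL abstract above quantifies tail risk by the Conditional-Value-at-Risk of trajectory costs. A minimal empirical sketch of that quantity (not the authors' implementation; the quantile-threshold estimator below is just one common convention) could look like:

```python
# Illustrative sketch: empirical CVaR_alpha of a sample of trajectory
# costs, i.e. the mean cost of the tail at or above the alpha-quantile
# (the "worst" high-cost trajectories the abstract calls tail-end events).
import numpy as np

def empirical_cvar(costs, alpha=0.9):
    """Mean of the costs at or above the alpha-quantile (Value-at-Risk)."""
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)   # Value-at-Risk threshold
    tail = costs[costs >= var]        # tail-end (high-cost) events
    return tail.mean()

# Ten trajectory costs: mostly benign, two catastrophic outliers.
costs = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 9.0]
tail_risk = empirical_cvar(costs, alpha=0.9)
```

A heavy-tailed cost distribution shows up here as a CVaR far above the mean cost, which is exactly the gap RAIL aims to shrink.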
1504.04930
Connecting Multiple-unicast and Network Error Correction: Reduction and Unachievability
We show that solving a multiple-unicast network coding problem can be reduced to solving a single-unicast network error correction problem, where an adversary may jam at most a single edge in the network. Specifically, we present an efficient reduction that maps a multiple-unicast network coding instance to a network error correction instance while preserving feasibility. The reduction holds for both the zero probability of error model and the vanishing probability of error model. Previous reductions are restricted to the zero-error case. As an application of the reduction, we present a constructive example showing that the single-unicast network error correction capacity may not be achievable, a result of separate interest.
categories: cs.IT
__index_level_0__: 42,207
2404.07855
Resolve Domain Conflicts for Generalizable Remote Physiological Measurement
Remote photoplethysmography (rPPG) technology has become increasingly popular due to its non-invasive monitoring of various physiological indicators, making it widely applicable in multimedia interaction, healthcare, and emotion analysis. Existing rPPG methods utilize multiple datasets for training to enhance the generalizability of models. However, they often overlook the underlying conflict issues across different datasets, such as (1) label conflict resulting from different phase delays between physiological signal labels and face videos at the instance level, and (2) attribute conflict stemming from distribution shifts caused by head movements, illumination changes, skin types, etc. To address this, we introduce the DOmain-HArmonious framework (DOHA). Specifically, we first propose a harmonious phase strategy to eliminate uncertain phase delays and preserve the temporal variation of physiological signals. Next, we design a harmonious hyperplane optimization that reduces irrelevant attribute shifts and encourages the model's optimization towards a global solution that fits more valid scenarios. Our experiments demonstrate that DOHA significantly improves the performance of existing methods under multiple protocols. Our code is available at https://github.com/SWY666/rPPG-DOHA.
categories: cs.CV
__index_level_0__: 446,002
2406.04809
A Survey of Fragile Model Watermarking
Model fragile watermarking, inspired by both the field of adversarial attacks on neural networks and traditional multimedia fragile watermarking, has gradually emerged as a potent tool for detecting tampering, and has witnessed rapid development in recent years. Unlike robust watermarks, which are widely used for identifying model copyrights, fragile watermarks for models are designed to identify whether models have been subjected to unexpected alterations such as backdoors, poisoning, compression, among others. These alterations can pose unknown risks to model users, such as misidentifying stop signs as speed limit signs in classic autonomous driving scenarios. This paper provides an overview of the relevant work in the field of model fragile watermarking since its inception, categorizing these works and revealing the developmental trajectory of the field, thus offering a comprehensive survey for future endeavors in model fragile watermarking.
categories: cs.AI, cs.CR
__index_level_0__: 461,850
2403.06183
An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling
Understanding the dimension dependency of computational complexity in high-dimensional sampling problems is fundamental from both a practical and a theoretical perspective. Compared with samplers with an unbiased stationary distribution, e.g., the Metropolis-adjusted Langevin algorithm (MALA), biased samplers, e.g., Underdamped Langevin Dynamics (ULD), perform better in low-accuracy cases precisely because of the lower dimension dependency in their complexities. Along this line, Freund et al. (2022) suggest that the modified Langevin algorithm with prior diffusion is able to converge dimension-independently for strongly log-concave target distributions. Nonetheless, it remains open whether such a property holds for more general cases. In this paper, we investigate the prior diffusion technique for target distributions satisfying a log-Sobolev inequality (LSI), which covers a much broader class of distributions than the strongly log-concave ones. In particular, we prove that the modified Langevin algorithm can also obtain dimension-independent convergence of the KL divergence with different step size schedules. The core of our proof technique is a novel construction of an interpolating SDE, which significantly helps to conduct a more accurate characterization of the discrete updates of the overdamped Langevin dynamics. Our theoretical analysis demonstrates the benefits of prior diffusion for a broader class of target distributions and provides new insights into developing faster sampling algorithms.
categories: cs.LG
__index_level_0__: 436,337
2109.03408
GTT-Net: Learned Generalized Trajectory Triangulation
We present GTT-Net, a supervised learning framework for the reconstruction of sparse dynamic 3D geometry. We build on a graph-theoretic formulation of the generalized trajectory triangulation problem, where non-concurrent multi-view imaging geometry is known but global image sequencing is not provided. GTT-Net learns pairwise affinities modeling the spatio-temporal relationships among our input observations and leverages them to determine 3D geometry estimates. Experiments reconstructing 3D motion-capture sequences show GTT-Net outperforms the state of the art in terms of accuracy and robustness. Within the context of articulated motion reconstruction, our proposed architecture is 1) able to learn and enforce semantic 3D motion priors for shared training and test domains, while being 2) able to generalize its performance across different training and test domains. Moreover, GTT-Net provides a computationally streamlined framework for trajectory triangulation with applications to multi-instance reconstruction and event segmentation.
categories: cs.CV
__index_level_0__: 254,054
2012.03611
Intelligent Reflecting Surface Aided Multi-Cell NOMA Networks
This paper proposes a novel framework of resource allocation in intelligent reflecting surface (IRS) aided multi-cell non-orthogonal multiple access (NOMA) networks, where a sum-rate maximization problem is formulated. To address this challenging mixed-integer non-linear problem, we decompose it into an optimization problem (P1) with continuous variables and a matching problem (P2) with integer variables. For the non-convex optimization problem (P1), iterative algorithms are proposed for allocating transmit power, designing reflection matrix, and determining decoding order by invoking relaxation methods such as convex upper bound substitution, successive convex approximation and semidefinite relaxation. For the combinational problem (P2), swap matching-based algorithms are proposed to achieve a two-sided exchange-stable state among users, BSs and subchannels. Numerical results are provided for demonstrating that the sum-rate of the NOMA networks is capable of being enhanced with the aid of the IRS.
categories: cs.IT
__index_level_0__: 210,180
2112.02962
DANets: Deep Abstract Networks for Tabular Data Classification and Regression
Tabular data are ubiquitous in real world applications. Although many commonly-used neural components (e.g., convolution) and extensible neural networks (e.g., ResNet) have been developed by the machine learning community, few of them were effective for tabular data and few designs were adequately tailored for tabular data structures. In this paper, we propose a novel and flexible neural component for tabular data, called Abstract Layer (AbstLay), which learns to explicitly group correlative input features and generate higher-level features for semantics abstraction. Also, we design a structure re-parameterization method to compress the learned AbstLay, thus reducing the computational complexity by a clear margin in the inference phase. A special basic block is built using AbstLays, and we construct a family of Deep Abstract Networks (DANets) for tabular data classification and regression by stacking such blocks. In DANets, a special shortcut path is introduced to fetch information from raw tabular features, assisting feature interactions across different levels. Comprehensive experiments on seven real-world tabular datasets show that our AbstLay and DANets are effective for tabular data classification and regression, and the computational complexity is superior to competitive methods. Besides, we evaluate the performance gains of DANet as it goes deep, verifying the extendibility of our method. Our code is available at https://github.com/WhatAShot/DANet.
categories: cs.AI, cs.LG
__index_level_0__: 270,027
2306.15298
Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Pretrained language models are publicly available and constantly finetuned for various real-life applications. As they become capable of grasping complex contextual information, harmful biases are likely increasingly intertwined with those models. This paper analyses gender bias in BERT models with two main contributions: First, a novel bias measure is introduced, defining biases as the difference in sentiment valuation of female and male sample versions. Second, we comprehensively analyse BERT's biases on the example of a realistic IMDB movie classifier. By systematically varying elements of the training pipeline, we draw conclusions regarding their impact on the final model bias. Seven different public BERT models in nine training conditions, i.e. 63 models in total, are compared. Almost all conditions yield significant gender biases. Results indicate that reflected biases stem from public BERT models rather than task-specific data, emphasising the weight of responsible usage.
categories: cs.AI, cs.LG, cs.CL, cs.CY
__index_level_0__: 375,974
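The bias measure described in the abstract above (bias as the difference in sentiment valuation of female and male versions of the same sample) can be sketched with a toy scorer. Everything named below (`gender_bias`, `score`, the template pairs) is hypothetical illustration, not the paper's code:

```python
# Toy sketch of the bias-measure idea: mean difference in sentiment
# score between female and male versions of paired samples.
def gender_bias(pairs, score):
    """Mean of score(female version) - score(male version) over pairs."""
    diffs = [score(f) - score(m) for f, m in pairs]
    return sum(diffs) / len(diffs)

# Hypothetical sentiment scorer: deliberately penalizes one pronoun so
# the bias is visible; a real setup would use a trained classifier.
def score(text):
    return 0.2 if "she" in text else 0.5

pairs = [("she acted well", "he acted well"),
         ("she directed it", "he directed it")]
bias = gender_bias(pairs, score)  # negative => female versions rated lower
```

A bias near zero would indicate the scorer treats the paired versions alike; the sign shows which gendered version is systematically rated lower.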
2107.06755
DIT4BEARs Smart Roads Internship
The research internship at UiT - The Arctic University of Norway was offered to our team as the winner of the 'Smart Roads - Winter Road Maintenance 2021' Hackathon. The internship commenced on 3 May 2021 and ended on 21 May 2021, with meetings happening twice each week. Despite our different nationalities and educational backgrounds, we two interns collaborated as a team as much as possible. The most alluring part was that working on this project made us realize the critical conditions faced by the Arctic people; it would have been hard to gain such a unique experience from our place of residence. We developed and implemented several deep learning models to classify the road states (dry, moist, wet, icy, snowy, slushy). Based on the best model, the weather forecast app predicts the state, taking Ta, Tsurf, Height, Speed, Water, etc. into consideration. The crucial part was to define a safety metric, which is the product of the accident rate based on friction and the accident rate based on states. We developed a regressor that predicts the safety metric from the state obtained from the classifier and the friction obtained from the sensor data. A pathfinding algorithm has been designed using the sensor data, OpenStreetMap data, and weather data.
categories: cs.LG
__index_level_0__: 246,187
2403.15822
Computational Sentence-level Metrics Predicting Human Sentence Comprehension
The majority of research in computational psycholinguistics has concentrated on the processing of words. This study introduces innovative methods for computing sentence-level metrics using multilingual large language models. The metrics developed, sentence surprisal and sentence relevance, are then tested and compared to validate whether they can predict how humans comprehend sentences as a whole across languages. These metrics offer significant interpretability and achieve high accuracy in predicting human sentence reading speeds. Our results indicate that these computational sentence-level metrics are exceptionally effective at predicting and elucidating the processing difficulties encountered by readers in comprehending sentences as a whole across a variety of languages. Their impressive performance and generalization capabilities provide a promising avenue for future research in integrating LLMs and cognitive science.
categories: cs.CL
__index_level_0__: 440,759
1710.06900
Temporally-Reweighted Chinese Restaurant Process Mixtures for Clustering, Imputing, and Forecasting Multivariate Time Series
This article proposes a Bayesian nonparametric method for forecasting, imputation, and clustering in sparsely observed, multivariate time series data. The method is appropriate for jointly modeling hundreds of time series with widely varying, non-stationary dynamics. Given a collection of $N$ time series, the Bayesian model first partitions them into independent clusters using a Chinese restaurant process prior. Within a cluster, all time series are modeled jointly using a novel "temporally-reweighted" extension of the Chinese restaurant process mixture. Markov chain Monte Carlo techniques are used to obtain samples from the posterior distribution, which are then used to form predictive inferences. We apply the technique to challenging forecasting and imputation tasks using seasonal flu data from the US Center for Disease Control and Prevention, demonstrating superior forecasting accuracy and competitive imputation accuracy as compared to multiple widely used baselines. We further show that the model discovers interpretable clusters in datasets with hundreds of time series, using macroeconomic data from the Gapminder Foundation.
categories: cs.LG
__index_level_0__: 82,843
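The Chinese restaurant process prior that the abstract above builds on can be sketched as follows. This is the plain CRP, without the paper's "temporally-reweighted" extension; `alpha` is the concentration parameter controlling how readily new clusters open:

```python
# Sketch of a plain Chinese restaurant process (CRP) prior over
# partitions: item i joins existing cluster k with probability
# proportional to counts[k], or opens a new cluster with probability
# proportional to alpha.
import random

def crp_partition(n, alpha, rng):
    """Sample a partition of n items under a CRP(alpha) prior."""
    assignments = []
    counts = []  # counts[k] = number of items currently in cluster k
    for _ in range(n):
        weights = counts + [alpha]  # last slot = open a new cluster
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(0)        # new cluster opened
        counts[k] += 1
        assignments.append(k)
    return assignments

rng = random.Random(0)
z = crp_partition(100, alpha=1.0, rng=rng)
```

Clusters are labeled in order of first appearance, so the first item always lands in cluster 0 and the labels form a contiguous range.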
2306.05411
R-MAE: Regions Meet Masked Autoencoders
In this work, we explore regions as a potential visual analogue of words for self-supervised image representation learning. Inspired by Masked Autoencoding (MAE), a generative pre-training baseline, we propose masked region autoencoding to learn from groups of pixels or regions. Specifically, we design an architecture which efficiently addresses the one-to-many mapping between images and regions, while being highly effective especially with high-quality regions. When integrated with MAE, our approach (R-MAE) demonstrates consistent improvements across various pre-training datasets and downstream detection and segmentation benchmarks, with negligible computational overheads. Beyond the quantitative evaluation, our analysis indicates the models pre-trained with masked region autoencoding unlock the potential for interactive segmentation. The code is provided at https://github.com/facebookresearch/r-mae.
categories: cs.CV
__index_level_0__: 372,182
1509.05173
Taming the ReLU with Parallel Dither in a Deep Neural Network
Rectified Linear Units (ReLU) seem to have displaced traditional 'smooth' nonlinearities as activation-function-du-jour in many - but not all - deep neural network (DNN) applications. However, nobody seems to know why. In this article, we argue that ReLU are useful because they are ideal demodulators - this helps them perform fast abstract learning. However, this fast learning comes at the expense of serious nonlinear distortion products - decoy features. We show that Parallel Dither acts to suppress the decoy features, preventing overfitting and leaving the true features cleanly demodulated for rapid, reliable learning.
categories: cs.LG
__index_level_0__: 47,014
2004.10301
Structured Mechanical Models for Robot Learning and Control
Model-based methods are the dominant paradigm for controlling robotic systems, though their efficacy depends heavily on the accuracy of the model used. Deep neural networks have been used to learn models of robot dynamics from data, but they suffer from data-inefficiency and the difficulty to incorporate prior knowledge. We introduce Structured Mechanical Models, a flexible model class for mechanical systems that are data-efficient, easily amenable to prior knowledge, and easily usable with model-based control techniques. The goal of this work is to demonstrate the benefits of using Structured Mechanical Models in lieu of black-box neural networks when modeling robot dynamics. We demonstrate that they generalize better from limited data and yield more reliable model-based controllers on a variety of simulated robotic domains.
categories: cs.AI, cs.LG, cs.RO, cs.SY
__index_level_0__: 173,598
2302.00871
Using In-Context Learning to Improve Dialogue Safety
While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, which often perpetuates social biases or stereotypes. We investigate a retrieval-based method for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. We find our method performs competitively with strong baselines without requiring training. For instance, using automatic evaluation, we find our best fine-tuned baseline only generates safe responses to unsafe dialogue contexts from DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking procedure which can further improve response safeness.
categories: cs.CL
__index_level_0__: 343,381
2409.15128
The Number of Trials Matters in Infinite-Horizon General-Utility Markov Decision Processes
The general-utility Markov decision processes (GUMDPs) framework generalizes the MDPs framework by considering objective functions that depend on the frequency of visitation of state-action pairs induced by a given policy. In this work, we contribute the first analysis of the impact of the number of trials, i.e., the number of randomly sampled trajectories, in infinite-horizon GUMDPs. We show that, as opposed to standard MDPs, the number of trials plays a key role in infinite-horizon GUMDPs and the expected performance of a given policy depends, in general, on the number of trials. We consider both discounted and average GUMDPs, where the objective function depends, respectively, on discounted and average frequencies of visitation of state-action pairs. First, we study policy evaluation under discounted GUMDPs, proving lower and upper bounds on the mismatch between the finite and infinite trials formulations for GUMDPs. Second, we address average GUMDPs, studying how different classes of GUMDPs impact the mismatch between the finite and infinite trials formulations. Third, we provide a set of empirical results to support our claims, highlighting how the number of trajectories and the structure of the underlying GUMDP influence policy evaluation.
categories: cs.LG
__index_level_0__: 490,767
2009.11564
Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases
Equipping machines with comprehensive knowledge of the world's entities and their relationships has been a long-standing goal of AI. Over the last decade, large-scale knowledge bases, also known as knowledge graphs, have been automatically constructed from web contents and text sources, and have become a key asset for search engines. This machine knowledge can be harnessed to semantically interpret textual phrases in news, social media and web tables, and contributes to question answering, natural language processing and data analytics. This article surveys fundamental concepts and practical methods for creating and curating large knowledge bases. It covers models and methods for discovering and canonicalizing entities and their semantic types and organizing them into clean taxonomies. On top of this, the article discusses the automatic extraction of entity-centric properties. To support the long-term life-cycle and the quality assurance of machine knowledge, the article presents methods for constructing open schemas and for knowledge curation. Case studies on academic projects and industrial knowledge graphs complement the survey of concepts and methods.
categories: cs.AI, cs.DB
__index_level_0__: 197,209
2405.10518
Enhancing Perception Quality in Remote Sensing Image Compression via Invertible Neural Network
Decoding remote sensing images to achieve high perceptual quality, particularly at low bitrates, remains a significant challenge. To address this problem, we propose the invertible neural network-based remote sensing image compression (INN-RSIC) method. Specifically, we capture compression distortion from an existing image compression algorithm and encode it as a set of Gaussian-distributed latent variables via INN. This ensures that the compression distortion in the decoded image becomes independent of the ground truth. Therefore, by leveraging the inverse mapping of INN, we can input the decoded image along with a set of randomly resampled Gaussian distributed variables into the inverse network, effectively generating enhanced images with better perception quality. To effectively learn compression distortion, channel expansion, Haar transformation, and invertible blocks are employed to construct the INN. Additionally, we introduce a quantization module (QM) to mitigate the impact of format conversion, thus enhancing the framework's generalization and improving the perceptual quality of enhanced images. Extensive experiments demonstrate that our INN-RSIC significantly outperforms the existing state-of-the-art traditional and deep learning-based image compression methods in terms of perception quality.
categories: cs.CV
__index_level_0__: 454,790
2410.10739
Balancing Continuous Pre-Training and Instruction Fine-Tuning: Optimizing Instruction-Following in LLMs
Large Language Models (LLMs) for public use require continuous pre-training to remain up-to-date with the latest data. The models also need to be fine-tuned with specific instructions to maintain their ability to follow instructions accurately. Typically, LLMs are released in two versions: the Base LLM, pre-trained on diverse data, and the instruction-refined LLM, additionally trained with specific instructions for better instruction following. The question arises as to which model should undergo continuous pre-training to maintain its instruction-following abilities while also staying current with the latest data. In this study, we delve into the intricate relationship between continuous pre-training and instruction fine-tuning of LLMs and investigate the impact of continuous pre-training on the instruction-following abilities of both the base model and its instruction-finetuned counterpart. Further, the instruction fine-tuning process is computationally intense and requires a substantial number of hand-annotated examples for the model to learn effectively. This study aims to find the most compute-efficient strategy to gain up-to-date knowledge and instruction-following capabilities without requiring any instruction data and fine-tuning. We empirically verify our findings on the LLaMa 3, 3.1 and Qwen 2, 2.5 families of base and instruction models, providing a comprehensive exploration of our hypotheses across varying sizes of pre-training data corpus and different LLM settings.
categories: cs.CL
__index_level_0__: 498,208
1612.02589
Multi-source Transfer Learning with Convolutional Neural Networks for Lung Pattern Analysis
Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as their clinical manifestations are similar. In order to assist with the diagnosis, computer-aided diagnosis (CAD) systems have been developed. These commonly rely on a fixed scale classifier that scans CT images, recognizes textural lung patterns and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2% in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem and show that the method used for training a network can be as important as designing its architecture.
categories: cs.CV
__index_level_0__: 65,254
1808.03800
Statistics of the Eigenvalues of a Noisy Multi-Soliton Pulse
For Nonlinear-Frequency Division-Multiplexed (NFDM) systems, the statistics of the received nonlinear spectrum in the presence of additive white Gaussian noise (AWGN) is an open problem. We present a novel method, based on the Fourier collocation algorithm, to compute these statistics.
categories: cs.IT
__index_level_0__: 104,998
2201.05358
A three-field phase-field model for mixed-mode fracture in rock based on experimental determination of the mode II fracture toughness
In this contribution, a novel framework for simulating mixed-mode failure in rock is presented. Based on a hybrid phase-field model for mixed-mode fracture, separate phase-field variables are introduced for tensile (mode I) and shear (mode II) fracture. The resulting three-field problem features separate length scale parameters for mode I and mode II cracks. In contrast to the classic two-field mixed-mode approaches it can thus account for different tensile and shear strength of rock. The two phase-field equations are implicitly coupled through the degradation of the material in the elastic equation, and the three fields are solved using a staggered iteration scheme. For its validation, the three-field model is calibrated for two types of rock, Solnhofen Limestone and Pfraundorfer Dolostone. To this end, double-edge notched Brazilian disk (DNBD) tests are performed to determine the mode II fracture toughness. The numerical results demonstrate that the proposed phase-field model is able to reproduce the different crack patterns observed in the DNBD tests. A final example of a uniaxial compression test on a rare drill core demonstrates, that the proposed model is able to capture complex, 3D mixed-mode crack patterns when calibrated with the correct mode I and mode II fracture toughness.
categories: cs.CE
__index_level_0__: 275,371
2008.07832
Tackling the Unannotated: Scene Graph Generation with Bias-Reduced Models
Predicting a scene graph that captures visual entities and their interactions in an image has been considered a crucial step towards full scene comprehension. Recent scene graph generation (SGG) models have shown their capability of capturing the most frequent relations among visual entities. However, the state-of-the-art results are still far from satisfactory, e.g. models can obtain 31% in overall recall R@100, whereas the likewise important mean class-wise recall mR@100 is only around 8% on Visual Genome (VG). The discrepancy between R and mR results urges a shift in focus from pursuing a high R to pursuing a high mR while keeping R competitive. We suspect that the observed discrepancy stems from both the annotation bias and sparse annotations in VG, in which many visual entity pairs are either not annotated at all or only with a single relation when multiple ones could be valid. To address this particular issue, we propose a novel SGG training scheme that capitalizes on self-learned knowledge. It involves two relation classifiers, one offering a less biased setting for the other to base on. The proposed scheme can be applied to most of the existing SGG models and is straightforward to implement. We observe significant relative improvements in mR (between +6.6% and +20.4%) and competitive or better R (between -2.4% and 0.3%) across all standard SGG tasks.
categories: cs.LG, cs.CV
__index_level_0__: 192,236
2003.00403
Cops-Ref: A new Dataset and Task on Compositional Referring Expression Comprehension
Referring expression comprehension (REF) aims at identifying a particular object in a scene by a natural language expression. It requires joint reasoning over the textual and visual domains to solve the problem. Some popular referring expression datasets, however, fail to provide an ideal test bed for evaluating the reasoning ability of the models, mainly because 1) their expressions typically describe only some simple distinctive properties of the object and 2) their images contain limited distracting information. To bridge the gap, we propose a new dataset for visual reasoning in context of referring expression comprehension with two main features. First, we design a novel expression engine rendering various reasoning logics that can be flexibly combined with rich visual properties to generate expressions with varying compositionality. Second, to better exploit the full reasoning chain embodied in an expression, we propose a new test setting by adding additional distracting images containing objects sharing similar properties with the referent, thus minimising the success rate of reasoning-free cross-domain alignment. We evaluate several state-of-the-art REF models, but find none of them can achieve promising performance. A proposed modular hard mining strategy performs the best but still leaves substantial room for improvement. We hope this new dataset and task can serve as a benchmark for deeper visual reasoning analysis and foster the research on referring expression comprehension.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
166,292
1703.02742
End-to-end Throughput Maximization for Underlay Multi-hop Cognitive Radio Networks with RF Energy Harvesting
This paper studies a green paradigm for the underlay coexistence of primary users (PUs) and secondary users (SUs) in energy harvesting cognitive radio networks (EH-CRNs), wherein battery-free SUs capture both the spectrum and the energy of PUs to enhance spectrum efficiency and green energy utilization. To lower the transmit powers of SUs, we employ multi-hop transmission with time division multiple access, by which SUs first harvest energy from the RF signals of PUs and then transmit data in the allocated time concurrently with PUs, all in the licensed spectrum. In this way, the available transmit energy of each SU mainly depends on the harvested energy before the turn to transmit, namely energy causality. Meanwhile, the transmit powers of SUs must be strictly controlled to protect PUs from harmful interference. Thus, subject to the energy causality constraint and the interference power constraint, we study the end-to-end throughput maximization problem for optimal time and power allocation. To solve this nonconvex problem, we first equivalently transform it into a convex optimization problem and then propose the joint optimal time and power allocation (JOTPA) algorithm that iteratively solves a series of feasibility problems until convergence. Extensive simulations evaluate the performance of EH-CRNs with JOTPA in three typical deployment scenarios and validate the superiority of JOTPA by making comparisons with two other resource allocation algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
69,616
2212.10733
Scalable Hybrid Learning Techniques for Scientific Data Compression
Data compression is becoming critical for storing scientific data because many scientific applications need to store large amounts of data and post process this data for scientific discovery. Unlike image and video compression algorithms that limit errors to primary data, scientists require compression techniques that accurately preserve derived quantities of interest (QoIs). This paper presents a physics-informed compression technique implemented as an end-to-end, scalable, GPU-based pipeline for data compression that addresses this requirement. Our hybrid compression technique combines machine learning techniques and standard compression methods. Specifically, we combine an autoencoder, an error-bounded lossy compressor to provide guarantees on raw data error, and a constraint satisfaction post-processing step to preserve the QoIs within a minimal error (generally less than floating point error). The effectiveness of the data compression pipeline is demonstrated by compressing nuclear fusion simulation data generated by a large-scale fusion code, XGC, which produces hundreds of terabytes of data in a single day. Our approach works within the ADIOS framework and results in compression by a factor of more than 150 while requiring only a few percent of the computational resources necessary for generating the data, making the overall approach highly effective for practical scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
337,581
1506.05860
Variational Gaussian Copula Inference
We utilize copulas to constitute a unified framework for constructing and optimizing variational proposals in hierarchical Bayesian models. For models with continuous and non-Gaussian hidden variables, we propose a semiparametric and automated variational Gaussian copula approach, in which the parametric Gaussian copula family is able to preserve multivariate posterior dependence, and the nonparametric transformations based on Bernstein polynomials provide ample flexibility in characterizing the univariate marginal posteriors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
44,341
1810.00460
TacWhiskers: Biomimetic optical tactile whiskered robots
Here we propose and investigate a novel vibrissal tactile sensor - the TacWhisker array - based on modifying a 3D-printed optical cutaneous (fingertip) tactile sensor - the TacTip. Two versions are considered: a static TacWhisker array analogous to immotile tactile vibrissae (e.g. rodent microvibrissae) and a dynamic TacWhisker array analogous to motile tactile vibrissae (e.g. rodent macrovibrissae). Performance is assessed on an active object localization task. The whisking motion of the dynamic TacWhisker leads to millimetre-scale location perception, whereas perception with the static TacWhisker array is relatively poor when making dabbing contacts. The dynamic sensor output is dominated by a self-generated motion signal, which can be compensated by comparing to a reference signal. Overall, the TacWhisker arrays give a new class of tactile whiskered robots that benefit from being relatively inexpensive and customizable. Furthermore, the biomimetic basis for the TacWhiskers fits well with building an embodied model of the rodent sensory system for investigating animal perception. A video demonstrating this robot can be seen at https://www.youtube.com/watch?v=ksS177ep6yY
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
109,186
2408.15458
PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification
Correctly assessing the malignancy of breast lesions identified during ultrasound examinations is crucial for effective clinical decision-making. However, the current "golden standard" relies on manual BI-RADS scoring by clinicians, often leading to unnecessary biopsies and a significant mental health burden on patients and their families. In this paper, we introduce PersonalizedUS, an interpretable machine learning system that leverages recent advances in conformal prediction to provide precise and personalized risk estimates with local coverage guarantees and sensitivity, specificity, and predictive values above 0.9 across various threshold levels. In particular, we identify meaningful lesion subgroups where distribution-free, model-agnostic conditional coverage holds, with approximately 90% of our prediction sets containing only the ground truth in most lesion subgroups, thus explicitly characterizing for which patients the model is most suitably applied. Moreover, we make available a curated tabular dataset of 1936 biopsied breast lesions from a recent observational multicenter study and benchmark the performance of several state-of-the-art learning algorithms. We also report a successful case study of the deployed system in the same multicenter context. Concrete clinical benefits include up to a 65% reduction in requested biopsies among BI-RADS 4a and 4b lesions, with minimal to no missed cancer cases.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
483,942
2203.02116
In the Service of Online Order: Tackling Cyber-Bullying with Machine Learning and Affect Analysis
One of the burning problems lately in Japan has been cyber-bullying, or slandering and bullying people online. The problem has been especially noticed on unofficial Web sites of Japanese schools. Volunteers consisting of school personnel and PTA (Parent-Teacher Association) members have started Online Patrol to spot malicious contents within Web forums and blogs. In practice, Online Patrol assumes reading through the whole Web contents, which is a task difficult to perform manually. With this paper we introduce a research project intended to help PTA members perform Online Patrol more efficiently. We aim to develop a set of tools that can automatically detect malicious entries and report them to PTA members. First, we collected cyber-bullying data from unofficial school Web sites. Then we performed analysis of this data in two ways. Firstly, we analysed the entries with a multifaceted affect analysis system in order to find distinctive features for cyber-bullying and apply them to a machine learning classifier. Secondly, we applied an SVM based machine learning method to train a classifier for detection of cyber-bullying. The system was able to classify cyber-bullying entries with 88.2% of balanced F-score.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
283,643
2303.10149
CoVIO: Online Continual Learning for Visual-Inertial Odometry
Visual odometry is a fundamental task for many applications on mobile devices and robotic platforms. Since such applications are oftentimes not limited to predefined target domains and learning-based vision systems are known to generalize poorly to unseen environments, methods for continual adaptation during inference time are of significant interest. In this work, we introduce CoVIO for online continual learning of visual-inertial odometry. CoVIO effectively adapts to new domains while mitigating catastrophic forgetting by exploiting experience replay. In particular, we propose a novel sampling strategy to maximize image diversity in a fixed-size replay buffer that targets the limited storage capacity of embedded devices. We further provide an asynchronous version that decouples the odometry estimation from the network weight update step enabling continuous inference in real time. We extensively evaluate CoVIO on various real-world datasets demonstrating that it successfully adapts to new domains while outperforming previous methods. The code of our work is publicly available at http://continual-slam.cs.uni-freiburg.de.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
352,330
1410.3682
Greedy Sparsity-Promoting Algorithms for Distributed Learning
This paper focuses on the development of novel greedy techniques for distributed learning under sparsity constraints. Greedy techniques have widely been used in centralized systems due to their low computational requirements and at the same time their relatively good performance in estimating sparse parameter vectors/signals. The paper reports two new algorithms in the context of sparsity-aware learning. In both cases, the goal is first to identify the support set of the unknown signal and then to estimate the non-zero values restricted to the active support set. First, an iterative greedy multi-step procedure is developed, based on a neighborhood cooperation strategy, using batch processing on the observed data. Next, an extension of the algorithm to the online setting, based on the diffusion LMS rationale for adaptivity, is derived. Theoretical analysis of the algorithms is provided, where it is shown that the batch algorithm converges to the unknown vector if a Restricted Isometry Property (RIP) holds. Moreover, the online version converges in the mean to the solution vector under some general assumptions. Finally, the proposed schemes are tested against recently developed sparsity-promoting algorithms and their enhanced performance is verified via simulation examples.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,723
2101.11744
On the mapping between Hopfield networks and Restricted Boltzmann Machines
Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact mapping between HNs and RBMs has been previously noted for the special case of orthogonal (uncorrelated) encoded patterns. We present here an exact mapping in the case of correlated pattern HNs, which are more broadly applicable to existing datasets. Specifically, we show that any HN with $N$ binary variables and $p<N$ arbitrary binary patterns can be transformed into an RBM with $N$ binary visible variables and $p$ gaussian hidden variables. We outline the conditions under which the reverse mapping exists, and conduct experiments on the MNIST dataset which suggest the mapping provides a useful initialization to the RBM weights. We discuss extensions, the potential importance of this correspondence for the training of RBMs, and for understanding the performance of deep architectures which utilize RBMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
217,373
1206.4619
Inductive Kernel Low-rank Decomposition with Priors: A Generalized Nystrom Method
Low-rank matrix decomposition has gained great popularity recently in scaling up kernel methods to large amounts of data. However, some limitations could prevent them from working effectively in certain domains. For example, many existing approaches are intrinsically unsupervised, which does not incorporate side information (e.g., class labels) to produce task specific decompositions; also, they typically work "transductively", i.e., the factorization does not generalize to new samples, so the complete factorization needs to be recomputed when new samples become available. To solve these problems, in this paper we propose an "inductive"-flavored method for low-rank kernel decomposition with priors. We achieve this by generalizing the Nystr\"om method in a novel way. On the one hand, our approach employs a highly flexible, nonparametric structure that allows us to generalize the low-rank factors to arbitrarily new samples; on the other hand, it has linear time and space complexities, which can be orders of magnitudes faster than existing approaches and renders great efficiency in learning a low-rank kernel decomposition. Empirical results demonstrate the efficacy and efficiency of the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
16,670
2403.19727
New Semantic Task for the French Spoken Language Understanding MEDIA Benchmark
Intent classification and slot-filling are essential tasks of Spoken Language Understanding (SLU). In most SLU systems, those tasks are realized by independent modules. For about fifteen years, models achieving both of them jointly and exploiting their mutual enhancement have been proposed. A multilingual module using a joint model was envisioned to create a touristic dialogue system for a European project, HumanE-AI-Net. A combination of multiple datasets, including the MEDIA dataset, was suggested for training this joint model. The MEDIA SLU dataset is a French dataset distributed since 2005 by ELRA, mainly used by the French research community and free for academic research since 2020. Unfortunately, it is annotated only in slots but not intents. An enhanced version of MEDIA annotated with intents has been built to extend its use to more tasks and use cases. This paper presents the semi-automatic methodology used to obtain this enhanced version. In addition, we present the first results of SLU experiments on this enhanced dataset using joint models for intent classification and slot-filling.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
442,453
2009.13723
A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View
Drone shooting can be applied in dynamic traffic monitoring, object detecting and tracking, and other vision tasks. The variability of the shooting location adds some intractable challenges to these missions, such as varying scale, unstable exposure, and scene migration. In this paper, we strive to tackle the above challenges and automatically understand the crowd from the visual data collected from drones. First, to alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed, which extracts optical flow and frame difference information as an additional branch. Besides, to improve the model's generalization ability at different scales and times, we randomly combine a variety of data transformation methods to simulate some unseen environments. To tackle the crowd density estimation problem under extreme dark environments, we introduce synthetic data generated by the game Grand Theft Auto V (GTAV). Experiment results show the effectiveness of the virtual data. Our method wins the challenge with a mean absolute error (MAE) of 12.70. Moreover, a comprehensive ablation study is conducted to explore each component's contribution.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
197,820
1908.08612
Tiered Graph Autoencoders with PyTorch Geometric for Molecular Graphs
Tiered latent representations and latent spaces for molecular graphs provide a simple but effective way to explicitly represent and utilize groups (e.g., functional groups), which consist of the atom (node) tier, the group tier and the molecule (graph) tier. They can be learned using the tiered graph autoencoder architecture. In this paper we discuss adapting tiered graph autoencoders for use with PyTorch Geometric, for both the deterministic tiered graph autoencoder model and the probabilistic tiered variational graph autoencoder model. We also discuss molecular structure information sources that can be accessed to extract training data for molecular graphs. To support transfer learning, a critical consideration is that the information must utilize standard unique molecule and constituent atom identifiers. As a result of using tiered graph autoencoders for deep learning, each molecular graph possesses tiered latent representations. At each tier, the latent representation consists of: node features, edge indices, edge features, membership matrix, and node embeddings. This enables the utilization and exploration of tiered molecular latent spaces, either individually (the node tier, the group tier, or the graph tier) or jointly, as well as navigation across the tiers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
142,600
1102.3867
Controllability properties for the one-dimensional Heat equation under multiplicative or nonnegative additive controls with local mobile support
We discuss several new results on nonnegative approximate controllability for the one-dimensional Heat equation governed by either multiplicative or nonnegative additive control, acting within a proper subset of the space domain at every moment of time. Our methods allow us to link these two types of controls to some extent. The main results include approximate controllability properties both for the static and mobile control supports.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
9,276
2501.15157
Median of Forests for Robust Density Estimation
Robust density estimation refers to the consistent estimation of the density function even when the data is contaminated by outliers. We find that existing forest density estimation at a certain point is inherently resistant to the outliers outside the cells containing the point, which we call \textit{non-local outliers}, but not resistant to the rest \textit{local outliers}. To achieve robustness against all outliers, we propose an ensemble learning algorithm called \textit{medians of forests for robust density estimation} (\textit{MFRDE}), which adopts a pointwise median operation on forest density estimators fitted on subsampled datasets. Compared to existing robust kernel-based methods, MFRDE enables us to choose larger subsampling sizes, sacrificing less accuracy for density estimation while achieving robustness. On the theoretical side, we introduce the local outlier exponent to quantify the number of local outliers. Under this exponent, we show that even if the number of outliers reaches a certain polynomial order in the sample size, MFRDE is able to achieve almost the same convergence rate as the same algorithm on uncontaminated data, whereas robust kernel-based methods fail. On the practical side, real data experiments show that MFRDE outperforms existing robust kernel-based methods. Moreover, we apply MFRDE to anomaly detection to showcase a further application.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
527,430
2410.15801
Improve Dense Passage Retrieval with Entailment Tuning
A retrieval module can be plugged into many downstream NLP tasks to improve their performance, such as open-domain question answering and retrieval-augmented generation. The key to a retrieval system is to calculate relevance scores for query and passage pairs. However, the definition of relevance is often ambiguous. We observed that a major class of relevance aligns with the concept of entailment in NLI tasks. Based on this observation, we designed a method called entailment tuning to improve the embedding of dense retrievers. Specifically, we unify the form of retrieval data and NLI data using existence claims as a bridge. Then, we train retrievers to predict the claims entailed in a passage with a variant task of masked prediction. Our method can be efficiently plugged into current dense retrieval methods, and experiments show the effectiveness of our method.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
500,733
1008.4565
On the Transmission-Computation-Energy Tradeoff in Wireless and Fixed Networks
In this paper, a framework for the analysis of the transmission-computation-energy tradeoff in wireless and fixed networks is introduced. The analysis of this tradeoff considers both the transmission energy as well as the energy consumed at the receiver to process the received signal. While previous work considers linear decoder complexity, which is only achieved by uncoded transmission, this paper claims that the average processing (or computation) energy per symbol depends exponentially on the information rate of the source message. The introduced framework is parametrized in a way that it reflects properties of fixed and wireless networks alike. The analysis of this paper shows that exponential complexity and therefore stronger codes are preferable at low data rates while linear complexity and therefore uncoded transmission becomes preferable at high data rates. The more the computation energy is emphasized (such as in fixed networks), the fewer hops are optimal and the lower the benefit of multi-hopping. On the other hand, the higher the information rate of the single-hop network, the higher the benefits of multi-hopping. Both conclusions are underlined by analytical results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,383
2411.01696
Conformal Risk Minimization with Variance Reduction
Conformal prediction (CP) is a distribution-free framework for achieving probabilistic guarantees on black-box models. CP is generally applied to a model post-training. Recent research efforts, on the other hand, have focused on optimizing CP efficiency during training. We formalize this concept as the problem of conformal risk minimization (CRM). In this direction, conformal training (ConfTr) by Stutz et al.(2022) is a technique that seeks to minimize the expected prediction set size of a model by simulating CP in-between training updates. Despite its potential, we identify a strong source of sample inefficiency in ConfTr that leads to overly noisy estimated gradients, introducing training instability and limiting practical use. To address this challenge, we propose variance-reduced conformal training (VR-ConfTr), a CRM method that incorporates a variance reduction technique in the gradient estimation of the ConfTr objective function. Through extensive experiments on various benchmark datasets, we demonstrate that VR-ConfTr consistently achieves faster convergence and smaller prediction sets compared to baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
505,183
2205.02942
Evaluating the principle of relatedness: Estimation, drivers and implications for policy
A growing body of research documents that the size and growth of an industry in a place depends on how much related activity is found there. This fact is commonly referred to as the "principle of relatedness". However, there is no consensus on why we observe the principle of relatedness, how best to determine which industries are related or how this empirical regularity can help inform local industrial policy. We perform a structured search over tens of thousands of specifications to identify robust -- in terms of out-of-sample predictions -- ways to determine how well industries fit the local economies of US cities. To do so, we use data that allow us to derive relatedness from observing which industries co-occur in the portfolios of establishments, firms, cities and countries. Different portfolios yield different relatedness matrices, each of which help predict the size and growth of local industries. However, our specification search not only identifies ways to improve the performance of such predictions, but also reveals new facts about the principle of relatedness and important trade-offs between predictive performance and interpretability of relatedness patterns. We use these insights to deepen our theoretical understanding of what underlies path-dependent development in cities and expand existing policy frameworks that rely on inter-industry relatedness analysis.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
295,118
2310.05021
Toward Intelligent Emergency Control for Large-scale Power Systems: Convergence of Learning, Physics, Computing and Control
This paper has delved into the pressing need for intelligent emergency control in large-scale power systems, which are experiencing significant transformations and are operating closer to their limits with more uncertainties. Learning-based control methods are promising and have shown effectiveness for intelligent power system control. However, when they are applied to large-scale power systems, there are multifaceted challenges such as scalability, adaptiveness, and security posed by the complex power system landscape, which demand comprehensive solutions. The paper first proposes and instantiates a convergence framework for integrating power systems physics, machine learning, advanced computing, and grid control to realize intelligent grid control at a large scale. Our developed methods and platform based on the convergence framework have been applied to a large (more than 3000 buses) Texas power system, and tested with 56000 scenarios. Our work achieved a 26% reduction in load shedding on average and outperformed existing rule-based control in 99.7% of the test scenarios. The results demonstrated the potential of the proposed convergence framework and DRL-based intelligent control for the future grid.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
397,941
1807.02874
Bounds and Constructions for Multi-Symbol Duplication Error Correcting Codes
In this paper, we study codes correcting $t$ duplications of $\ell$ consecutive symbols. These errors are known as tandem duplication errors, where a sequence of symbols is repeated and inserted directly after its original occurrence. Using sphere packing arguments, we derive non-asymptotic upper bounds on the cardinality of codes that correct such errors for any choice of parameters. Based on the fact that a code correcting insertions of $t$ zero-blocks can be used to correct $t$ tandem duplications, we construct codes for tandem duplication errors. We compare the cardinalities of these codes with their sphere packing upper bounds. Finally, we discuss the asymptotic behavior of the derived codes and bounds, which yields insights about the tandem duplication channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
102,366
1602.08461
A One-Hop Information Based Geographic Routing Protocol for Delay Tolerant MANETs
Delay and Disruption Tolerant Networks (DTNs) may lack continuous network connectivity. Routing in DTNs is thus a challenge since it must handle network partitioning, long delays, and dynamic topology. Meanwhile, routing protocols of the traditional Mobile Ad hoc NETworks (MANETs) cannot work well due to the failure of its assumption that most network connections are available. In this article, a geographic routing protocol is proposed for MANETs in delay tolerant situations, by using no more than one-hop information. A utility function is designed for implementing the under-controlled replication strategy. To reduce the overheads caused by message flooding, we employ a criterion so as to evaluate the degree of message redundancy. Consequently a message redundancy coping mechanism is added to our routing protocol. Extensive simulations have been conducted and the results show that when node moving speed is relatively low, our routing protocol outperforms the other schemes such as Epidemic, Spray and Wait, FirstContact in delivery ratio and average hop count, while introducing an acceptable overhead ratio into the network.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
52,648
2108.07707
On Incorrectness Logic and Kleene Algebra with Top and Tests
Kleene algebra with tests (KAT) is a foundational equational framework for reasoning about programs, which has found applications in program transformations, networking and compiler optimizations, among many other areas. In his seminal work, Kozen proved that KAT subsumes propositional Hoare logic, showing that one can reason about the (partial) correctness of while programs by means of the equational theory of KAT. In this work, we investigate the support that KAT provides for reasoning about incorrectness, instead, as embodied by O'Hearn's recently proposed incorrectness logic. We show that KAT cannot directly express incorrectness logic. The main reason for this limitation can be traced to the fact that KAT cannot express explicitly the notion of codomain, which is essential to express incorrectness triples. To address this issue, we study Kleene Algebra with Top and Tests (TopKAT), an extension of KAT with a top element. We show that TopKAT is powerful enough to express a codomain operation, to express incorrectness triples, and to prove all the rules of incorrectness logic sound. This shows that one can reason about the incorrectness of while-like programs by means of the equational theory of TopKAT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
250,996
2210.12214
Optimizing Bilingual Neural Transducer with Synthetic Code-switching Text Generation
Code-switching describes the practice of using more than one language in the same sentence. In this study, we investigate how to optimize a neural transducer based bilingual automatic speech recognition (ASR) model for code-switching speech. Focusing on the scenario where the ASR model is trained without supervised code-switching data, we found that semi-supervised training and synthetic code-switched data can improve the bilingual ASR system on code-switching speech. We analyze how each of the neural transducer's encoders contributes towards code-switching performance by measuring encoder-specific recall values, and evaluate our English/Mandarin system on the ASCEND data set. Our final system achieves 25% mixed error rate (MER) on the ASCEND English/Mandarin code-switching test set -- reducing the MER by 2.1% absolute compared to the previous literature -- while maintaining good accuracy on the monolingual test sets.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
325,633
1110.1514
Blackwell Approachability and Minimax Theory
This manuscript investigates the relationship between Blackwell Approachability, a stochastic vector-valued repeated game, and minimax theory, a single-play scalar-valued scenario. First, it is established in a general setting --- one not permitting invocation of minimax theory --- that Blackwell's Approachability Theorem and its generalization due to Hou are still valid. Second, minimax structure grants a result in the spirit of Blackwell's weak-approachability conjecture, later resolved by Vieille, that any set is either approachable by one player, or avoidable by the opponent. This analysis also reveals a strategy for the opponent.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
12,534
2310.00337
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware
In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays offer a promising avenue for efficient storage and manipulation of neural network weights. However, the transition from trained floating-point models to hardware-constrained analog architectures remains a challenge. In this work, we combine a quantization technique specifically designed for such architectures with a novel self-correcting mechanism. By utilizing dual crossbar connections to represent both the positive and negative parts of a single weight, we develop an algorithm to approximate a set of multiplicative weights. These weights, along with their differences, aim to represent the original network's weights with minimal loss in performance. We implement the models using IBM's aihwkit and evaluate their efficacy over time. Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
395,934
1309.2853
General Purpose Textual Sentiment Analysis and Emotion Detection Tools
Textual sentiment analysis and emotion detection consist in retrieving the sentiment or emotion carried by a text or document. This task can be useful in many domains: opinion mining, prediction, feedback, etc. However, building a general purpose tool for sentiment analysis and emotion detection raises a number of issues, both theoretical, like the dependence on the domain or the language, and practical, like the emotion representation for interoperability. In this paper we present our sentiment/emotion analysis tools, the way we propose to circumvent the difficulties, and the applications they are used for.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
26,983
2106.10962
Segmentation of cell-level anomalies in electroluminescence images of photovoltaic modules
In the operation & maintenance (O&M) of photovoltaic (PV) plants, the early identification of failures has become crucial to maintain productivity and prolong components' life. Of all defects, cell-level anomalies can lead to serious failures and may affect surrounding PV modules in the long run. These fine defects are usually captured with high spatial resolution electroluminescence (EL) imaging. The difficulty of acquiring such images has limited the availability of data. For this work, multiple data resources and augmentation techniques have been used to surpass this limitation. Current state-of-the-art detection methods extract only low-level information from individual PV cell images, and their performance is conditioned by the available training data. In this article, we propose an end-to-end deep learning pipeline that detects, locates and segments cell-level anomalies from entire photovoltaic modules via EL images. The proposed modular pipeline combines three deep learning techniques: 1. object detection (modified Faster R-CNN), 2. image classification (EfficientNet) and 3. weakly supervised segmentation (autoencoder). The modular nature of the pipeline allows the deep learning models to be upgraded as the state of the art improves, and the pipeline to be extended with new functionalities.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
242,229
2011.00422
Future-Aware Diverse Trends Framework for Recommendation
In recommender systems, modeling user-item behaviors is essential for user representation learning. Existing sequential recommenders consider the sequential correlations between historically interacted items for capturing users' historical preferences. However, since users' preferences are by nature time-evolving and diversified, solely modeling the historical preference (without being aware of the time-evolving trends of preferences) can be inferior for recommending complementary or fresh items and thus hurt the effectiveness of recommender systems. In this paper, we bridge the gap between the past preference and potential future preference by proposing the future-aware diverse trends (FAT) framework. By future-aware, for each inspected user, we construct the future sequences from other similar users, which comprise behaviors that happen after the last behavior of the inspected user, based on a proposed neighbor behavior extractor. By diverse trends, supposing the future preferences can be diversified, we propose the diverse trends extractor and the time-aware mechanism to represent the possible trends of preferences for a given user with multiple vectors. We leverage both the representations of historical preference and possible future trends to obtain the final recommendation. The quantitative and qualitative results from relatively extensive experiments on real-world datasets demonstrate that the proposed framework not only outperforms the state-of-the-art sequential recommendation methods across various metrics, but also makes complementary and fresh recommendations.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
204,209
2210.02200
Machine learning in bioprocess development: From promise to practice
Fostered by novel analytical techniques, digitalization and automation, modern bioprocess development provides high amounts of heterogeneous experimental data, containing valuable process information. In this context, data-driven methods like machine learning (ML) approaches have a high potential to rationally explore large design spaces while exploiting experimental facilities most efficiently. The aim of this review is to demonstrate how ML methods have been applied so far in bioprocess development, especially in strain engineering and selection, bioprocess optimization, scale-up, monitoring and control of bioprocesses. For each topic, we will highlight successful application cases, current challenges and point out domains that can potentially benefit from technology transfer and further progress in the field of ML.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,564
1806.03568
Explainable Recommendation via Multi-Task Learning in Opinionated Text Data
Explaining automatically generated recommendations allows users to make more informed and accurate decisions about which results to utilize, and therefore improves their satisfaction. In this work, we develop a multi-task learning solution for explainable recommendation. Two companion learning tasks, user preference modeling for recommendation and opinionated content modeling for explanation, are integrated via a joint tensor factorization. As a result, the algorithm predicts not only a user's preference over a list of items, i.e., recommendation, but also how the user would appreciate a particular item at the feature level, i.e., opinionated textual explanation. Extensive experiments on two large collections of Amazon and Yelp reviews confirmed the effectiveness of our solution in both recommendation and explanation tasks, compared with several existing recommendation algorithms. Our extensive user study clearly demonstrates the practical value of the explainable recommendations generated by our algorithm.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
100,028
1311.4723
Zero-Delay and Causal Secure Source Coding
We investigate the combination of causal/zero-delay source coding and information-theoretic secrecy. Two source coding models with secrecy constraints are considered. We start by considering zero-delay perfectly secret lossless transmission of a memoryless source. We derive bounds on the key rate and coding rate needed for perfect zero-delay secrecy. In this setting, we consider two models which differ by the ability of the eavesdropper to parse the bit-stream passing from the encoder to the legitimate decoder into separate messages. We also consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. Unlike the zero-delay setting where variable-length coding is traditionally used but might leak information on the source through the length of the codewords, in this setting, since delay is allowed, block coding is possible. We show that in this setting, separation of encryption and causal source coding is optimal.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
28,518
2210.00155
A Novel Power-Band based Data Segmentation Method for Enhancing Meter Phase and Transformer-Meter Pairing Identification
This paper presents a novel power-band-based data segmentation (PBDS) method to enhance the identification of meter phase and meter-transformer pairing. Meters that share the same transformer or are on the same phase typically exhibit strongly correlated voltage profiles. However, under high power consumption, there can be significant voltage drops along the line connecting a customer to the distribution transformer. These voltage drops significantly decrease the correlations among meters on the same phase or supplied by the same transformer, resulting in high misidentification rates. To address this issue, we propose using power bands to select highly correlated voltage segments for computing correlations, rather than relying solely on correlations computed from the entire voltage waveforms. The algorithm's performance is assessed by conducting tests using data gathered from 13 utility feeders. To ensure the credibility of the identification results, utility engineers conduct field verification for all 13 feeders. The verification results unequivocally demonstrate that the proposed algorithm surpasses existing methods in both accuracy and robustness.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
320,757
2012.13184
Simulation of Vision-based Tactile Sensors using Physics based Rendering
Tactile sensing has seen rapid adoption with the advent of vision-based tactile sensors. Vision-based tactile sensors provide high resolution, compact and inexpensive data for performing precise in-hand manipulation and human-robot interaction. However, the simulation of tactile sensors is still a challenge. In this paper, we built the first fully general optical tactile simulation system for a GelSight sensor using physics-based rendering techniques. We propose physically accurate light models and show in-depth analysis of individual components of our simulation pipeline. Our system outperforms previous simulation techniques qualitatively and quantitatively on image similarity metrics. Our code and experimental data are open-sourced at https://labs.ri.cmu.edu/robotouch/tactile-optical-simulation/
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
213,142
2006.06822
Coronavirus Contact Tracing: Evaluating The Potential Of Using Bluetooth Received Signal Strength For Proximity Detection
We report on measurements of Bluetooth Low Energy (LE) received signal strength taken on mobile handsets in a variety of common, real-world settings. We note that a key difficulty is obtaining the ground truth as to when people are in close proximity to one another. Knowledge of this ground truth is important for accurately evaluating the accuracy with which contact events are detected by Bluetooth LE. We approach this by adopting a scenario-based approach. In summary, we find that the Bluetooth LE received signal strength can vary substantially depending on the relative orientation of handsets, on absorption by the human body, and on reflection/absorption of radio signals in buildings and trains. Indeed we observe that the received signal strength need not decrease with increasing distance. This suggests that the development of accurate methods for proximity detection based on Bluetooth LE received signal strength is likely to be challenging. Our measurements also suggest that combining use of Bluetooth LE contact tracing apps with adoption of new social protocols may yield benefits, but this requires further investigation. For example, placing phones on the table during meetings is likely to simplify proximity detection using received signal strength, as is carrying handbags with phones placed close to the outside surface. In locations where the complexity of signal propagation makes proximity detection using received signal strength problematic, entry/exit from the location might instead be logged in an app by e.g. scanning a time-varying QR code or the like.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
181,564
1511.00054
Gaussian Process Random Fields
Gaussian processes have been successful in both supervised and unsupervised machine learning tasks, but their computational complexity has constrained practical applications. We introduce a new approximation for large-scale Gaussian processes, the Gaussian Process Random Field (GPRF), in which local GPs are coupled via pairwise potentials. The GPRF likelihood is a simple, tractable, and parallelizeable approximation to the full GP marginal likelihood, enabling latent variable modeling and hyperparameter selection on large datasets. We demonstrate its effectiveness on synthetic spatial data as well as a real-world application to seismic event location.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
48,369
1810.04119
Positional Cartesian Genetic Programming
Cartesian Genetic Programming (CGP) has many modifications across a variety of implementations, such as recursive connections and node weights. Alternative genetic operators have also been proposed for CGP, but have not been fully studied. In this work, we present a new form of CGP based on a floating point representation, called Positional CGP, in which node positions are evolved. This enables the evaluation of many different genetic operators while retaining previous CGP improvements such as recurrency. Using nine benchmark problems from three different classes, we evaluate the optimal parameters for CGP and PCGP, including the novel genetic operators.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
109,978
2007.08162
A Product Channel Attack to Wireless Physical Layer Security
We propose a novel attack that compromises the physical layer security of downlink (DL) communications in wireless systems. This technique is based on the transmission of a slowly-varying random symbol by the eavesdropper during its uplink transmission, so that the equivalent fading channel observed at the base station (BS) has a larger variance. Then, the BS designs the secure DL transmission under the assumption that the eavesdropper's channel experiences a larger fading severity than in reality. We show that this approach can lead the BS to transmit to Bob at a rate larger than the secrecy capacity, thus compromising the system's secure operation. Our analytical results, corroborated by simulations, show that the use of multiple antennas at the BS may partially alleviate but not immunize against this type of attack.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
187,547
1503.00288
Success factors for Crowdfunding founders and funders
Crowdfunding has been used as one of the effective ways for entrepreneurs to raise funding, especially in creative industries. Individuals as well as organizations are paying more attention to the emergence of new crowdfunding platforms. In the Netherlands, the government is also trying to help artists access financial resources through crowdfunding platforms. This research aims at discovering the success factors for crowdfunding projects through crowdfunding platforms from both the founders' and funders' perspectives. We designed our own website for founders and funders to observe crowdfunding behaviors. Our research will contribute to crowdfunding success factors related to issues of trust and decision making and provide practical recommendations for practitioners and researchers.
true
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
40,686
1908.00164
Supervised Learning of the Global Risk Network Activation from Media Event Reports
The World Economic Forum (WEF) publishes annual reports on global risks which have a high impact on the world's economy. Currently, many researchers analyze the modeling and evolution of risks. However, few studies focus on validation of the global risk networks published by the WEF. In this paper, we first create a risk knowledge graph from the annotated risk events crawled from Wikipedia. Then, we compare the relational dependencies of risks in the WEF and Wikipedia networks, and find that they share over 50% of their edges. Moreover, the edges unique to each network signify the different perspectives of the experts and the public on global risks. To reduce the cost of manual annotation of events triggering risk activation, we build an auto-detection tool which filters out over 80% of media-reported events unrelated to the global risks. In the process of filtering, our tool also continuously learns keywords relevant to global risks from the event sentences. Using locations of events extracted from the risk knowledge graph, we find characteristics of the geographical distributions of the categories of global risks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
140,440
1404.0046
Approximation Schemes for Many-Objective Query Optimization
The goal of multi-objective query optimization (MOQO) is to find query plans that realize a good compromise between conflicting objectives such as minimizing execution time and minimizing monetary fees in a Cloud scenario. A previously proposed exhaustive MOQO algorithm needs hours to optimize even simple TPC-H queries. This is why we propose several approximation schemes for MOQO that generate guaranteed near-optimal plans in seconds where exhaustive optimization takes hours. We integrated all MOQO algorithms into the Postgres optimizer and present experimental results for TPC-H queries; we extended the Postgres cost model and optimize for up to nine conflicting objectives in our experiments. The proposed algorithms are based on a formal analysis of typical cost functions that occur in the context of MOQO. We identify properties that hold for a broad range of objectives and can be exploited for the design of future MOQO algorithms.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
31,973
1903.11576
An Alternating Manifold Proximal Gradient Method for Sparse PCA and Sparse CCA
Sparse principal component analysis (PCA) and sparse canonical correlation analysis (CCA) are two essential techniques from high-dimensional statistics and machine learning for analyzing large-scale data. Both problems can be formulated as an optimization problem with nonsmooth objective and nonconvex constraints. Since non-smoothness and nonconvexity bring numerical difficulties, most algorithms suggested in the literature either solve some relaxations or are heuristic and lack convergence guarantees. In this paper, we propose a new alternating manifold proximal gradient method to solve these two high-dimensional problems and provide a unified convergence analysis. Numerical experiment results are reported to demonstrate the advantages of our algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
125,541
2312.08951
Multi-Scene Generalized Trajectory Global Graph Solver with Composite Nodes for Multiple Object Tracking
The global multi-object tracking (MOT) system can consider interaction, occlusion, and other ``visual blur'' scenarios to ensure effective object tracking in long videos. Among them, graph-based tracking-by-detection paradigms achieve surprising performance. However, their fully-connected nature poses storage space requirements that challenge algorithms handling long videos. Currently, commonly used methods still generate trajectories by building forward associations across frames. Such matches, produced under the guidance of first-order similarity information, may not be optimal from a longer-time perspective. Moreover, they often lack an end-to-end scheme for correcting mismatches. This paper proposes the Composite Node Message Passing Network (CoNo-Link), a multi-scene generalized framework for modeling ultra-long frame information for association. CoNo-Link's solution is a low-storage-overhead method for building constrained connected graphs. In addition to the previous method of treating objects as nodes, the network innovatively treats object trajectories as nodes for information interaction, improving the graph neural network's feature representation capability. Specifically, we formulate the graph-building problem as a top-k selection task for some reliable objects or trajectories. Our model can learn better predictions on longer-time scales by adding composite nodes. As a result, our method outperforms the state-of-the-art on several commonly used datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
415,531
1904.02948
Center and Scale Prediction: Anchor-free Approach for Pedestrian and Face Detection
Object detection generally requires sliding-window classifiers in tradition or anchor box based predictions in modern deep learning approaches. However, either of these approaches requires tedious configurations in boxes. In this paper, we provide a new perspective where detecting objects is motivated as a high-level semantic feature detection task. Like edges, corners, blobs and other feature detectors, the proposed detector scans for feature points all over the image, for which the convolution is naturally suited. However, unlike these traditional low-level features, the proposed detector goes for a higher-level abstraction, that is, we are looking for central points where there are objects, and modern deep models are already capable of such a high-level semantic abstraction. Besides, like blob detection, we also predict the scales of the central points, which is also a straightforward convolution. Therefore, in this paper, pedestrian and face detection is simplified as a straightforward center and scale prediction task through convolutions. This way, the proposed method enjoys a box-free setting. Though structurally simple, it presents competitive accuracy on several challenging benchmarks, including pedestrian detection and face detection. Furthermore, a cross-dataset evaluation is performed, demonstrating a superior generalization ability of the proposed method. Code and models can be accessed at (https://github.com/liuwei16/CSP and https://github.com/hasanirtiza/Pedestron).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,574
1907.00068
On Reducing Negative Jacobian Determinant of the Deformation Predicted by Deep Registration Networks
Image registration is a fundamental step in medical image analysis. Ideally, the transformation that registers one image to another should be a diffeomorphism that is both invertible and smooth. Traditional methods like geodesic shooting approach the problem via differential geometry, with theoretical guarantees that the resulting transformation will be smooth and invertible. Most previous research using unsupervised deep neural networks for registration have used a local smoothness constraint (typically, a spatial variation loss) to address the smoothness issue. These networks usually produce non-invertible transformations with ``folding'' in multiple voxel locations, indicated by a negative determinant of the Jacobian matrix of the transformation. While using a loss function that specifically penalizes the folding is a straightforward solution, this usually requires carefully tuning the regularization strength, especially when there are also other losses. In this paper we address this problem from a different angle, by investigating possible training mechanisms that will help the network avoid negative Jacobians and produce smoother deformations. We contribute two independent ideas in this direction. Both ideas greatly reduce the number of folding locations in the predicted deformation, without making changes to the hyperparameters or the architecture used in the existing baseline registration network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
136,929
1611.08303
Deep Watershed Transform for Instance Segmentation
Most contemporary approaches to instance segmentation use complex pipelines involving conditional random fields, recurrent neural networks, object proposals, or template matching schemes. In our paper, we present a simple yet powerful end-to-end convolutional neural network to tackle this task. Our approach combines intuitions from the classical watershed transform and modern deep learning to produce an energy map of the image where object instances are unambiguously represented as basins in the energy map. We then perform a cut at a single energy level to directly yield connected components corresponding to object instances. Our model more than doubles the performance of the state-of-the-art on the challenging Cityscapes Instance Level Segmentation task.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,480
2307.03401
Metropolitan Scale and Longitudinal Dataset of Anonymized Human Mobility Trajectories
Modeling and predicting human mobility trajectories in urban areas is an essential task for various applications. The recent availability of large-scale human movement data collected from mobile devices have enabled the development of complex human mobility prediction models. However, human mobility prediction methods are often trained and tested on different datasets, due to the lack of open-source large-scale human mobility datasets amid privacy concerns, posing a challenge towards conducting fair performance comparisons between methods. To this end, we created an open-source, anonymized, metropolitan scale, and longitudinal (90 days) dataset of 100,000 individuals' human mobility trajectories, using mobile phone location data. The location pings are spatially and temporally discretized, and the metropolitan area is undisclosed to protect users' privacy. The 90-day period is composed of 75 days of business-as-usual and 15 days during an emergency. To promote the use of the dataset, we will host a human mobility prediction data challenge (`HuMob Challenge 2023') using the human mobility dataset, which will be held in conjunction with ACM SIGSPATIAL 2023.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
378,028
2411.14356
Convex Approximation of Probabilistic Reachable Sets from Small Samples Using Self-supervised Neural Networks
Probabilistic Reachable Set (PRS) plays a crucial role in many fields of autonomous systems, yet efficiently generating PRS remains a significant challenge. This paper presents a learning approach to generating 2-dimensional PRS for states in a dynamic system. Traditional methods such as Hamilton-Jacobi reachability analysis, Monte Carlo, and Gaussian process classification face significant computational challenges or require detailed dynamics information, limiting their applicability in realistic situations. Existing data-driven methods may lack accuracy. To overcome these limitations, we propose leveraging neural networks, commonly used in imitation learning and computer vision, to imitate expert methods to generate PRS approximations. We trained the neural networks using a multi-label, self-supervised learning approach. We selected the fine-tuned convex approximation method as the expert to create expert PRS. Additionally, we continued sampling from the distribution to obtain a diverse array of sample sets. Given a small sample set, the trained neural networks can replicate the PRS approximation generated by the expert method, while the generation speed is much faster.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
510,124
2205.14140
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
The increasing size and complexity of modern ML systems has improved their predictive capabilities but made their behavior harder to explain. Many techniques for model explanation have been developed in response, but we lack clear criteria for assessing these techniques. In this paper, we cast model explanation as the causal inference problem of estimating causal effects of real-world concepts on the output behavior of ML models given actual input data. We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP). CEBaB consists of short restaurant reviews with human-generated counterfactual reviews in which an aspect (food, noise, ambiance, service) of the dining experience was modified. Original and counterfactual reviews are annotated with multiply-validated sentiment ratings at the aspect-level and review-level. The rich structure of CEBaB allows us to go beyond input features to study the effects of abstract, real-world concepts on model behavior. We use CEBaB to compare the quality of a range of concept-based explanation methods covering different assumptions and conceptions of the problem, and we seek to establish natural metrics for comparative assessments of these methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
299,235
2011.08492
A Time-Frequency based Suspicious Activity Detection for Anti-Money Laundering
Money laundering is the crucial mechanism utilized by criminals to inject proceeds of crime into the financial system. The primary responsibility for the detection of suspicious activity related to money laundering lies with the financial institutions. Most of the current systems in these institutions are rule-based and ineffective. The available data science-based anti-money laundering (AML) models intended to replace the existing rule-based systems work on customer relationship management (CRM) features and time characteristics of transaction behaviour. However, challenges remain around accuracy and feature engineering due to thousands of possible features. Aiming to improve the detection performance of suspicious transaction monitoring systems for AML systems, in this article, we introduce a novel feature set based on time-frequency analysis, which makes use of 2-D representations of financial transactions. Random forest is utilized as the machine learning method, and simulated annealing is adopted for hyperparameter tuning. The designed algorithm is tested on real banking data, proving the efficacy of the results in practically relevant environments. It is shown that the time-frequency characteristics of suspicious and non-suspicious entities differ significantly, which would substantially improve the precision of data science-based transaction monitoring systems looking at only time-series transaction and CRM features.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
206,893
1901.05588
Adaptive backstepping control for FOS with nonsmooth nonlinearities
This paper proposes an original solution to input saturation and dead zone in fractional order systems. To overcome these nonsmooth nonlinearities, the control input is decomposed into two independent parts by introducing an intermediate variable, and thus the problem of dead zone and saturation transforms into a problem of disturbance and saturation. Within the procedure of fractional order adaptive backstepping controller design, the bound of the disturbance is estimated, and saturation is compensated by the virtual signal of an auxiliary system. Despite the existence of nonsmooth nonlinearities, the output is guaranteed to track the reference signal asymptotically on the basis of the proposed method. Finally, simulation studies are carried out to demonstrate the effectiveness of the method.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
118,819
2005.03409
AutoSOS: Towards Multi-UAV Systems Supporting Maritime Search and Rescue with Lightweight AI and Edge Computing
Rescue vessels are the main actors in maritime safety and rescue operations. At the same time, aerial drones bring a significant advantage into this scenario. This paper presents the research directions of the AutoSOS project, where we work on the development of an autonomous multi-robot search and rescue assistance platform capable of sensor fusion and object detection in embedded devices using novel lightweight AI models. The platform is meant to perform reconnaissance missions for initial assessment of the environment using novel adaptive deep learning algorithms that efficiently use the available sensors and computational resources on the drones and the rescue vessel. When drones find potential objects, they will send their sensor data to the vessel to verify the findings with increased accuracy. The actual rescue and treatment operations are left as the responsibility of the rescue personnel. The drones will autonomously reconfigure their spatial distribution to enable multi-hop communication when a direct connection between a drone transmitting information and the vessel is unavailable.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
176,156
1308.2516
Fluctuation in e-mail sizes weakens power-law correlations in e-mail flow
Power-law correlations have been observed in packet flow over the Internet. The possible origin of these correlations includes demand for Internet services. We observe the demand for e-mail services in an organization, and analyze correlations in the flow and the sequence of send requests using a Detrended Fluctuation Analysis (DFA). The correlation in the flow is found to be weaker than that in the send requests. Four types of artificial flow are constructed to investigate the effects of fluctuations in e-mail sizes. As a result, we find that the correlation in the flow originates from that in the sequence of send requests. The strength of the power-law correlation decreases as a function of the ratio of the standard deviation of e-mail sizes to their average.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
26,391
1906.01334
Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG
Neural natural language generation (NNLG) from structured meaning representations has become increasingly popular in recent years. While we have seen progress with generating syntactically correct utterances that preserve semantics, various shortcomings of NNLG systems are clear: new tasks require new training data which is not available or straightforward to acquire, and model outputs are simple and may be dull and repetitive. This paper addresses these two critical challenges in NNLG by: (1) scalably (and at no cost) creating training datasets of parallel meaning representations and reference texts with rich style markup by using data from freely available and naturally descriptive user reviews, and (2) systematically exploring how the style markup enables joint control of semantic and stylistic aspects of neural model output. We present YelpNLG, a corpus of 300,000 rich, parallel meaning representations and highly stylistically varied reference texts spanning different restaurant attributes, and describe a novel methodology that can be scalably reused to generate NLG datasets for other domains. The experiments show that the models control important aspects, including lexical choice of adjectives, output length, and sentiment, allowing the models to successfully hit multiple style targets without sacrificing semantics.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
133,677
2112.04182
Unimodal Face Classification with Multimodal Training
Face recognition is a crucial task in various multimedia applications such as security check, credential access and motion sensing games. However, the task is challenging when an input face is noisy (e.g. poor-condition RGB image) or lacks certain information (e.g. 3D face without color). In this work, we propose a Multimodal Training Unimodal Test (MTUT) framework for robust face classification, which exploits the cross-modality relationship during training and applies it as a complement to the imperfect single-modality input during testing. Technically, during training, the framework (1) builds both intra-modality and cross-modality autoencoders with the aid of facial attributes to learn latent embeddings as multimodal descriptors, and (2) proposes a novel multimodal embedding divergence loss to align the heterogeneous features from different modalities, which also adaptively prevents the useless modality (if any) from confusing the model. This way, the learned autoencoders can generate robust embeddings in single-modality face classification at the test stage. We evaluate our framework on two face classification datasets and two kinds of testing input: (1) poor-condition image and (2) point cloud or 3D face mesh, when both 2D and 3D modalities are available for training. We experimentally show that our MTUT framework consistently outperforms ten baselines on 2D and 3D settings of both datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,447
2107.04652
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders
Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often require simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be, in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps, for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, which much of the related literature assumes and which isn't satisfied by many architectures used in practice (e.g. convolution and pooling based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
245,526
1803.03288
TicTac: Accelerating Distributed Deep Learning with Communication Scheduling
State-of-the-art deep learning systems rely on iterative distributed training to tackle the increasing complexity of models and input data. The iteration time in these communication-heavy systems depends on the computation time, communication time and the extent of overlap of computation and communication. In this work, we identify a shortcoming in systems with graph representation for computation, such as TensorFlow and PyTorch, that results in high variance in iteration time --- the random order of received parameters across workers. We develop a system, TicTac, to improve the iteration time by fixing this issue in distributed deep learning with Parameter Servers while guaranteeing near-optimal overlap of communication and computation. TicTac identifies and enforces an order of network transfers which improves the iteration time using prioritization. Our system is implemented over TensorFlow and requires no changes to the model or developer inputs. TicTac improves the throughput by up to $37.7\%$ in inference and $19.2\%$ in training, while also reducing straggler effect by up to $2.3\times$. Our code is publicly available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
92,218
2304.04935
Sentence-Level Relation Extraction via Contrastive Learning with Descriptive Relation Prompts
Sentence-level relation extraction aims to identify the relation between two entities for a given sentence. The existing works mostly focus on obtaining a better entity representation and adopting a multi-label classifier for relation extraction. A major limitation of these works is that they ignore background relational knowledge and the interrelation between entity types and candidate relations. In this work, we propose a new paradigm, Contrastive Learning with Descriptive Relation Prompts (CTL-DRP), to jointly consider entity information, relational knowledge and entity type restrictions. In particular, we introduce an improved entity marker and descriptive relation prompts when generating contextual embedding, and utilize contrastive learning to rank the restricted candidate relations. The CTL-DRP obtains a competitive F1-score of 76.7% on TACRED. Furthermore, the new presented paradigm achieves F1-scores of 85.8% and 91.6% on TACREV and Re-TACRED respectively, which are both the state-of-the-art performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
357,420
2003.10038
Robust Hypergraph Clustering via Convex Relaxation of Truncated MLE
We study hypergraph clustering in the weighted $d$-uniform hypergraph stochastic block model ($d$\textsf{-WHSBM}), where each edge consisting of $d$ nodes from the same community has higher expected weight than the edges consisting of nodes from different communities. We propose a new hypergraph clustering algorithm, called \textsf{CRTMLE}, and provide its performance guarantee under the $d$\textsf{-WHSBM} for general parameter regimes. We show that the proposed method achieves the order-wise optimal or the best existing results for approximately balanced community sizes. Moreover, our results settle the first recovery guarantees for a growing number of clusters of unbalanced sizes. Through theoretical analysis and empirical results, we demonstrate the robustness of our algorithm against the unbalancedness of community sizes or the presence of outlier nodes.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
169,222
2111.14836
Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers
The high memory consumption and computational costs of Recurrent neural network language models (RNNLMs) limit their wider application on resource constrained devices. In recent years, neural network quantization techniques that are capable of producing extremely low-bit compression, for example, binarized RNNLMs, are gaining increasing research interest. Direct training of quantized neural networks is difficult. By formulating quantized RNNLMs training as an optimization problem, this paper presents a novel method to train quantized RNNLMs from scratch using alternating direction methods of multipliers (ADMM). This method can also flexibly adjust the trade-off between the compression rate and model performance using tied low-bit quantization tables. Experiments on two tasks: Penn Treebank (PTB), and Switchboard (SWBD) suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs. Faster convergence of 5 times in model training over the baseline binarized RNNLM quantization was also obtained. Index Terms: Language models, Recurrent neural networks, Quantization, Alternating direction methods of multipliers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
268,728
1505.05365
Towards Ideal Semantics for Analyzing Stream Reasoning
The rise of smart applications has drawn interest to logical reasoning over data streams. Recently, different query languages and stream processing/reasoning engines were proposed in different communities. However, due to a lack of theoretical foundations, the expressivity and semantics of these diverse approaches are given only informally. Towards clear specifications and means for analytic study, a formal framework is needed to define their semantics in precise terms. To this end, we present a first step towards an ideal semantics that allows for exact descriptions and comparisons of stream reasoning systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
43,294
2112.03916
BT-Unet: A self-supervised learning framework for biomedical image segmentation using Barlow Twins with U-Net models
Deep learning has brought the most profound contribution towards biomedical image segmentation to automate the process of delineation in medical imaging. To accomplish such a task, the models are required to be trained using a huge amount of annotated or labelled data that highlights the region of interest with a binary mask. However, efficient generation of the annotations for such huge data requires expert biomedical analysts and extensive manual effort. It is a tedious and expensive task, while also being vulnerable to human error. To address this problem, a self-supervised learning framework, BT-Unet, is proposed that uses the Barlow Twins approach to pre-train the encoder of a U-Net model via redundancy reduction in an unsupervised manner to learn data representation. Later, the complete network is fine-tuned to perform actual segmentation. The BT-Unet framework can be trained with a limited number of annotated samples while having a high number of unannotated samples, which is mostly the case in real-world problems. This framework is validated over multiple U-Net models over diverse datasets by generating scenarios of a limited number of labelled samples using standard evaluation metrics. Through exhaustive experiment trials, it is observed that the BT-Unet framework enhances the performance of the U-Net models by a significant margin under such circumstances.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,380
2109.06424
Statistical Inference: The Missing Piece of RecSys Experiment Reliability Discourse
This paper calls attention to the missing component of the recommender system evaluation process: statistical inference. There is active research in several components of the recommender system evaluation process: selecting baselines, standardizing benchmarks, and target item sampling. However, there has not yet been significant work on the role and use of statistical inference for analyzing recommender system evaluation results. In this paper, we argue that the use of statistical inference is a key component of the evaluation process that has not been given sufficient attention. We support this argument with a systematic review of recent RecSys papers to understand how statistical inference is currently being used, along with a brief survey of studies that have been done on the use of statistical inference in the information retrieval community. We present several challenges that exist for inference in recommendation experiments, which buttress the need for empirical studies to aid in appropriately selecting and applying statistical inference techniques.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
255,143
1908.07433
Pix2Pose: Pixel-Wise Coordinate Regression of Objects for 6D Pose Estimation
Estimating the 6D pose of objects using only RGB images remains challenging because of problems such as occlusion and symmetries. It is also difficult to construct 3D models with precise texture without expert knowledge or specialized scanning devices. To address these problems, we propose a novel pose estimation method, Pix2Pose, that predicts the 3D coordinates of each object pixel without textured models. An auto-encoder architecture is designed to estimate the 3D coordinates and expected errors per pixel. These pixel-wise predictions are then used in multiple stages to form 2D-3D correspondences to directly compute poses with the PnP algorithm with RANSAC iterations. Our method is robust to occlusion by leveraging recent achievements in generative adversarial training to precisely recover occluded parts. Furthermore, a novel loss function, the transformer loss, is proposed to handle symmetric objects by guiding predictions to the closest symmetric pose. Evaluations on three different benchmark datasets containing symmetric and occluded objects show our method outperforms the state of the art using only RGB images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
142,294
1701.06599
Unsupervised Joint Mining of Deep Features and Image Labels for Large-scale Radiology Image Categorization and Scene Recognition
The recent rapid and tremendous success of deep convolutional neural networks (CNN) on many challenging computer vision tasks largely derives from the accessibility of the well-annotated ImageNet and PASCAL VOC datasets. Nevertheless, unsupervised image categorization (i.e., without the ground-truth labeling) is much less investigated, yet critically important and difficult when annotations are extremely hard to obtain in the conventional way of "Google Search" and crowd sourcing. We address this problem by presenting a looped deep pseudo-task optimization (LDPO) framework for joint mining of deep CNN features and image labels. Our method is conceptually simple and rests upon the hypothesized "convergence" of better labels leading to better trained CNN models which in turn feed more discriminative image representations to facilitate more meaningful clusters/labels. Our proposed method is validated in tackling two important applications: 1) Large-scale medical image annotation has always been a prohibitively expensive and easily-biased task even for well-trained radiologists. Significantly better image categorization results are achieved via our proposed approach compared to the previous state-of-the-art method. 2) Unsupervised scene recognition on representative and publicly available datasets with our proposed technique is examined. The LDPO achieves excellent quantitative scene classification results. On the MIT indoor scene dataset, it attains a clustering accuracy of 75.3%, compared to the state-of-the-art supervised classification accuracy of 81.0% (when both are based on the VGG-VD model).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
67,161
2108.10533
Entropy-Aware Model Initialization for Effective Exploration in Deep Reinforcement Learning
Encouraging exploration is a critical issue in deep reinforcement learning. We investigate the effect of initial entropy that significantly influences the exploration, especially at the earlier stage. Our main observations are as follows: 1) low initial entropy increases the probability of learning failure, and 2) this initial entropy is biased towards a low value that inhibits exploration. Inspired by the investigations, we devise entropy-aware model initialization, a simple yet powerful learning strategy for effective exploration. We show that the devised learning strategy significantly reduces learning failures and enhances performance, stability, and learning speed through experiments.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
251,923
1908.05857
Edge Computing-Enabled Cell-Free Massive MIMO Systems
Mobile edge computing (MEC) has been introduced to provide additional computing capabilities at network edges in order to improve the performance of latency-critical applications. In this paper, we consider the cell-free (CF) massive MIMO framework with MEC functionalities implemented. We consider multiple types of users with different average time requirements for computing/processing the tasks, and consider access points (APs) with MEC servers and a central server (CS) with cloud computing capability. After deriving successful communication and computing probabilities using stochastic geometry and queueing theory, we present the successful edge computing probability (SECP) for a target computation latency. Through numerical results, we also analyze the impact of the AP coverage and the offloading probability to the CS on the SECP. It is observed that the optimal probability of offloading to the CS in terms of the SECP decreases with the AP coverage. Finally, we numerically characterize the minimum required energy consumption for guaranteeing a desired level of SECP. It is observed that for any desired level of SECP, it is more energy efficient to have a larger number of APs than to have more antennas at each AP with a smaller AP density.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
141,836
1803.10470
What deep learning can tell us about higher cognitive functions like mindreading?
Can deep learning (DL) guide our understanding of computations happening in the biological brain? We will first briefly consider how DL has contributed to the research on visual object recognition. In the main part we will assess whether DL could also help us to clarify the computations underlying higher cognitive functions such as Theory of Mind. In addition, we will compare the objectives and learning signals of brains and machines, leading us to conclude that simply scaling up the current DL algorithms will most likely not lead to human-level Theory of Mind.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
93,710
2112.09908
Anomaly Discovery in Semantic Segmentation via Distillation Comparison Networks
This paper aims to address the problem of anomaly discovery in semantic segmentation. Our key observation is that semantic classification plays a critical role in existing approaches, while incorrectly classified pixels are easily regarded as anomalies. Such a phenomenon frequently appears and is rarely discussed, which significantly reduces the performance of anomaly discovery. To this end, we propose a novel Distillation Comparison Network (DiCNet). It comprises a teacher branch, which is a semantic segmentation network with the semantic classification head removed, and a student branch that is distilled from the teacher branch through a distribution distillation. We show that the distillation guarantees that the semantic features of the two branches remain consistent in the known classes, while reflecting inconsistency in the unknown class. Therefore, we leverage the semantic feature discrepancy between the two branches to discover the anomalies. DiCNet abandons the semantic classification head in the inference process, and hence significantly alleviates the issue caused by incorrect semantic classification. Extensive experiments on the StreetHazards dataset and BDD-Anomaly dataset are conducted to verify the superior performance of DiCNet. In particular, DiCNet obtains a 6.3% improvement in AUPR and a 5.2% improvement in FPR95 on the StreetHazards dataset, and achieves a 4.2% improvement in AUPR and a 6.8% improvement in FPR95 on the BDD-Anomaly dataset. Codes are available at https://github.com/zhouhuan-hust/DiCNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
272,293
2404.12081
MaskCD: A Remote Sensing Change Detection Network Based on Mask Classification
Change detection (CD) from remote sensing (RS) images using deep learning has been widely investigated in the literature. It is typically regarded as a pixel-wise labeling task that aims to classify each pixel as changed or unchanged. Although per-pixel classification networks in encoder-decoder structures have shown dominance, they still suffer from imprecise boundaries and incomplete object delineation at various scenes. For high-resolution RS images, partly or totally changed objects are more worthy of attention than single pixels. Therefore, we revisit the CD task from the mask prediction and classification perspective and propose MaskCD to detect changed areas by adaptively generating categorized masks from input image pairs. Specifically, it utilizes a cross-level change representation perceiver (CLCRP) to learn multiscale change-aware representations and capture spatiotemporal relations from encoded features by exploiting deformable multihead self-attention (DeformMHSA). Subsequently, a masked-attention-based detection transformer (MA-DETR) decoder is developed to accurately locate and identify changed objects based on masked attention and self-attention mechanisms. It reconstructs the desired changed objects by decoding the pixel-wise representations into learnable mask proposals and making final predictions from these candidates. Experimental results on five benchmark datasets demonstrate that the proposed approach outperforms other state-of-the-art models. Codes and pretrained models are available online (https://github.com/EricYu97/MaskCD).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
447,732
1907.05014
Conditional Analysis for Key-Value Data with Local Differential Privacy
Local differential privacy (LDP) has been deemed the de facto measure for privacy-preserving distributed data collection and analysis. Recently, researchers have extended LDP to the basic data type in NoSQL systems, key-value data, and shown its feasibility in mean estimation and frequency estimation. In this paper, we develop a set of new perturbation mechanisms for key-value data collection and analysis under the strong model of local differential privacy. Since many modern machine learning tasks rely on the availability of conditional probability or the marginal statistics, we then propose the conditional frequency estimation method for key analysis and the conditional mean estimation for value analysis in key-value data. The released statistics with conditions can further be used in learning tasks. Extensive experiments of frequency and mean estimation on both synthetic and real-world datasets validate the effectiveness and accuracy of the proposed key-value perturbation mechanisms against state-of-the-art competitors.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
138,257
2302.07252
Toward a Millimeter-Scale Tendon-Driven Continuum Wrist with Integrated Gripper for Microsurgical Applications
Microsurgery is a particularly impactful yet challenging form of surgery. Robot assisted microsurgery has the potential to improve surgical dexterity and enable precise operation on such small scales in ways not previously possible. Intraocular microsurgery is a particularly challenging domain in part due to the lack of dexterity that is achievable with rigid instruments inserted through the eye. In this work, we present a new design for a millimeter-scale, dexterous wrist intended for microsurgery applications. The wrist is created via a state-of-the-art two-photon-polymerization (2PP) microfabrication technique, enabling the wrist to be constructed of flexible material with complex internal geometries and critical features at the micron-scale. The wrist features a square cross section with side length of 1.25 mm and total length of 3.75 mm. The wrist has three tendons routed down its length which, when actuated by small-scale linear actuators, enable bending in any plane. We present an integrated gripper actuated by a fourth tendon routed down the center of the robot. We evaluate the wrist and gripper by characterizing its bend-angle. We achieve more than 90 degrees bending in both axes. We demonstrate out of plane bending as well as the robot's ability to grip while actuated. Our integrated gripper/tendon-driven continuum robot design and meso-scale assembly techniques have the potential to enable small-scale wrists with more dexterity than has been previously demonstrated. Such a wrist could improve surgeon capabilities during teleoperation with the potential to improve patient outcomes in a variety of surgical applications, including intraocular surgery.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
345,671
2502.12743
"I know myself better, but not really greatly": Using LLMs to Detect and Explain LLM-Generated Texts
Large language models (LLMs) have demonstrated impressive capabilities in generating human-like texts, but the potential misuse of such LLM-generated texts raises the need to distinguish between human-generated and LLM-generated content. This paper explores the detection and explanation capabilities of LLM-based detectors of LLM-generated texts, in the context of a binary classification task (human-generated texts vs LLM-generated texts) and a ternary classification task (human-generated texts, LLM-generated texts, and undecided). By evaluating on six close/open-source LLMs with different sizes, our findings reveal that while self-detection consistently outperforms cross-detection, i.e., LLMs can detect texts generated by themselves more accurately than those generated by other LLMs, the performance of self-detection is still far from ideal, indicating that further improvements are needed. We also show that extending the binary to the ternary classification task with a new class "Undecided" can enhance both detection accuracy and explanation quality, with improvements being statistically significant and consistent across all LLMs. We finally conducted comprehensive qualitative and quantitative analyses on the explanation errors, which are categorized into three types: reliance on inaccurate features (the most frequent error), hallucinations, and incorrect reasoning. These findings with our human-annotated dataset emphasize the need for further research into improving both self-detection and self-explanation, particularly to address overfitting issues that may hinder generalization.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
535,030
2209.13739
Emulating Human Kinematic Behavior on Lower-Limb Prostheses via Multi-Contact Models and Force-Based Nonlinear Control
Ankle push-off contributes substantially to limb energy generation in human walking, leading to smoother and more efficient locomotion. Providing this net positive work to an amputee requires an active prosthesis, but has the potential to enable more natural assisted locomotion. To this end, this paper uses multi-contact models of locomotion together with force-based nonlinear optimization-based controllers to achieve human-like kinematic behavior, including ankle push-off, on a powered transfemoral prosthesis for two subjects. In particular, we leverage model-based control approaches for dynamic bipedal robotic walking to develop a systematic method to realize human-like walking on a powered prosthesis that does not require subject-specific tuning. We begin by synthesizing an optimization problem that yields gaits that resemble human joint trajectories at a kinematic level, and realize these gaits on a prosthesis through a control Lyapunov function based nonlinear controller that responds to real-time ground reaction forces and interaction forces with the human. The proposed controller is implemented on a prosthesis for two subjects without tuning between subjects, emulating subject-specific human kinematic trends on the prosthesis joints. These experimental results demonstrate that our force-based nonlinear control approach achieves better tracking of human kinematic trajectories than traditional methods.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
320,005
2409.01809
Hardware-Based Microgrid Coupled to Real-Time Simulated Power Grids for Evaluating New Control Strategies in Future Energy Systems
The design of new control strategies for future energy systems can neither be directly tested in real power grids nor be evaluated based only on current grid situations. In this regard, extensive tests are required in laboratory settings using real power system equipment. However, since it is impossible to replicate the entire grid section of interest, even in large-scale experiments, hardware setups must be supplemented by detailed simulations to fully reproduce the system under study. This paper presents a unique test environment in which a hardware-based microgrid environment is physically coupled with a large-scale real-time simulation framework. The setup combines the advantages of developing new solutions using hardware-based experiments and evaluating the impact on large-scale power systems using real-time simulations. In this paper, the interface between the microgrid-under-test environment and the real-time simulations is evaluated in terms of accuracy and communication delays. Furthermore, a test case is presented showing the approach's ability to test microgrid control strategies for supporting the grid. It is observed that the communication delays via the physical interface depend on the simulation sampling time and do not significantly affect the accuracy of the interaction between the hardware and the simulated grid.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
485,474