Schema:
id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
Category labels (bool, 2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
__index_level_0__: int64 (range 0 to 541k)
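As an illustrative sketch of working with this flattened schema (the record literal below is a made-up example; only the column names come from the schema above), a multi-label row can be reduced to its active category labels like so:

```python
# Illustrative sketch: reduce one row of the schema above to its active
# category labels. The example record is hypothetical; the column names
# are the ones listed in the schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row):
    """Return the category names whose boolean flag is set in this row."""
    return [c for c in LABEL_COLUMNS if row.get(c, False)]

row = {"id": "1004.2626", "cs.AI": True, "cs.LG": False}
print(active_labels(row))  # ['cs.AI']
```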
1004.2626
Propagating Conjunctions of AllDifferent Constraints
We study propagation algorithms for the conjunction of two AllDifferent constraints. Solutions of an AllDifferent constraint can be seen as perfect matchings on the variable/value bipartite graph. Therefore, we investigate the problem of finding simultaneous bipartite matchings. We present an extension of the famous Hall theorem which characterizes when simultaneous bipartite matchings exist. Unfortunately, finding such matchings is NP-hard in general. However, we prove the surprising result that finding a simultaneous matching on a convex bipartite graph takes just polynomial time. Based on this theoretical result, we provide the first polynomial-time bound consistency algorithm for the conjunction of two AllDifferent constraints. We identify a pathological problem on which this propagator is exponentially faster than existing propagators. Our experiments show that this new propagator can offer significant benefits over existing methods.
Labels: cs.AI
__index_level_0__: 6,177
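The variable/value matching view described in this abstract can be illustrated with a minimal sketch: AllDifferent is satisfiable exactly when a matching covers every variable, which can be checked with augmenting paths (Kuhn's algorithm). The function name and domains are my own illustration, not the paper's propagator:

```python
def alldifferent_satisfiable(domains):
    """Satisfiability of AllDifferent == existence of a matching covering all
    variables in the variable/value bipartite graph (Kuhn's augmenting paths)."""
    match = {}  # value -> variable currently assigned to it

    def augment(var, seen):
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            # Take a free value, or re-route the variable currently holding it.
            if val not in match or augment(match[val], seen):
                match[val] = var
                return True
        return False

    return all(augment(var, set()) for var in domains)

print(alldifferent_satisfiable({"x": {1, 2}, "y": {2, 3}, "z": {1, 3}}))  # True
print(alldifferent_satisfiable({"x": {1, 2}, "y": {2}, "z": {1, 2}}))     # False
```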
1807.02548
Delay-Aware Coded Caching for Mobile Users
In this work, we study the trade-off between the cache capacity and the user delay for a cooperative Small Base Station (SBS) coded caching system with mobile users. First, we introduce a delay-aware coded caching policy, which takes into account the popularity of the files and the maximum re-buffering delay in order to minimize the average re-buffering delay of a mobile user under a given cache capacity constraint. Subsequently, we address a scenario where some files are served by the macro-cell base station (MBS) when the cache capacity of the SBSs is not sufficient to store all the files in the library. For this scenario, we develop a coded caching policy that minimizes the average amount of data served by the MBS under an average re-buffering delay constraint.
Labels: cs.IT, Other
__index_level_0__: 102,287
1605.09509
Fixed-time consensus of multiple double-integrator systems under directed topologies: A motion-planning approach
This paper investigates the fixed-time consensus problem under directed topologies. Using a motion-planning approach, a class of distributed fixed-time algorithms is developed for a multi-agent system with double-integrator dynamics. In the context of fixed-time consensus, we focus on both directed fixed and switching topologies. Under a directed fixed topology, a novel class of distributed algorithms is designed, which guarantees consensus of the multi-agent system with a fixed settling time if the topology has a directed spanning tree. Under directed periodically switching topologies, fixed-time consensus is achieved via the proposed algorithms if the topologies jointly have a directed spanning tree. In particular, the fixed settling time can be pre-assigned offline according to task requirements. To the best of our knowledge, this is the first work to solve the fixed-time consensus problem for double-integrator systems under directed topologies. Finally, a numerical example is given to illustrate the effectiveness of the analytical results.
Labels: cs.SY, cs.MA
__index_level_0__: 56,575
2208.01787
Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing different SLAM strategies and results across six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to approach for virtually all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with the current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems, that are likely to require further research to break through. Finally, we provide a list of open-source SLAM implementations and datasets that have been produced during the SubT challenge and related efforts, and constitute a useful resource for researchers and practitioners.
Labels: cs.RO
__index_level_0__: 311,265
2202.11287
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
Although 3D point cloud classification has recently been widely deployed in different application scenarios, it is still very vulnerable to adversarial attacks. This increases the importance of robust training of 3D models in the face of adversarial attacks. Based on our analysis of the performance of existing adversarial attacks, more adversarial perturbations are found in the mid- and high-frequency components of the input data. Therefore, by suppressing the high-frequency content in the training phase, the models' robustness against adversarial examples is improved. Experiments showed that the proposed defense method decreases the success rate of six attacks on PointNet, PointNet++, and DGCNN models. In particular, improvements are achieved with an average increase in classification accuracy of 3.8% on the drop100 attack and 4.26% on the drop200 attack compared to state-of-the-art methods. The method also improves the models' accuracy on the original dataset compared to other available methods.
Labels: cs.LG, cs.CV, cs.CR
__index_level_0__: 281,837
1206.6428
A Binary Classification Framework for Two-Stage Multiple Kernel Learning
With the advent of kernel methods, automating the task of specifying a suitable kernel has become increasingly important. In this context, the Multiple Kernel Learning (MKL) problem of finding a combination of pre-specified base kernels that is suitable for the task at hand has received significant attention from researchers. In this paper we show that Multiple Kernel Learning can be framed as a standard binary classification problem with additional constraints that ensure the positive definiteness of the learned kernel. Framing MKL in this way has the distinct advantage that it makes it easy to leverage the extensive research in binary classification to develop better performing and more scalable MKL algorithms that are conceptually simpler, and, arguably, more accessible to practitioners. Experiments on nine data sets from different domains show that, despite its simplicity, the proposed technique compares favorably with current leading MKL approaches.
Labels: cs.LG
__index_level_0__: 16,963
2405.07175
On-Demand Model and Client Deployment in Federated Learning with Deep Reinforcement Learning
In Federated Learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, challenges arise in dynamic and mobile environments where certain devices may become inaccessible as FL clients, impacting data availability and client selection methods. To address this, we propose an On-Demand solution that deploys new clients using Docker containers on the fly. Our On-Demand solution, employing Deep Reinforcement Learning (DRL), targets client availability and selection while considering data shifts and container deployment complexities. It provides an autonomous end-to-end solution for handling model deployment and client selection. The DRL strategy uses a Markov Decision Process (MDP) framework, with a Master Learner and a Joiner Learner. The designed cost functions represent the complexity of dynamic client deployment and selection. Simulated tests show that our architecture can easily adjust to changes in the environment and respond to On-Demand requests. This underscores its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and tabular reinforcement learning solutions.
Labels: cs.LG
__index_level_0__: 453,610
2112.00412
The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification
The problem with class-imbalanced data is that the generalization performance of the classifier deteriorates due to the lack of data from minority classes. In this paper, we propose a novel minority oversampling method to augment diversified minority samples by leveraging the rich context of the majority classes as background images. To diversify the minority samples, our key idea is to paste an image from a minority class onto rich-context images from a majority class, using them as background images. Our method is simple and can easily be combined with existing long-tailed recognition methods. We empirically prove the effectiveness of the proposed oversampling method through extensive experiments and ablation studies. Without any architectural changes or complex algorithms, our method achieves state-of-the-art performance on various long-tailed classification benchmarks. Our code is made available at https://github.com/naver-ai/cmo.
Labels: cs.AI, cs.CV
__index_level_0__: 269,131
2104.05599
Deep Reinforcement Learning Based Controller for Active Heave Compensation
Heave compensation is an essential part of various offshore operations. It is used in a variety of applications, including on-loading and off-loading systems, offshore drilling, landing helicopters on oscillating structures, and deploying and retrieving manned submersibles. In this paper, a reinforcement learning (RL) based controller is proposed for active heave compensation using a deep deterministic policy gradient (DDPG) algorithm. DDPG, a model-free, online reinforcement learning method, is adopted to capture the experience of the agent during the training trials. The simulation results demonstrate up to 10% better heave compensation performance of the RL controller compared to a tuned proportional-derivative (PD) controller. The performance of the proposed method is compared with respect to heave compensation, offset tracking, disturbance rejection, and noise attenuation.
Labels: cs.SY
__index_level_0__: 229,782
1311.3023
Asynchronous Distributed Downlink Beamforming and Power Control in Multi-cell Networks
In this paper, we consider a multi-cell network where every base station (BS) serves multiple users with an antenna array. Each user is associated with only one BS and has a single antenna. Assume that only long-term channel state information (CSI) is available in the system. The objective is to minimize the network downlink transmission power needed to meet the users' signal-to-interference-plus-noise ratio (SINR) requirements. For this objective, we propose an asynchronous distributed beamforming and power control algorithm which provides the same optimal solution as given by centralized algorithms. To design the algorithm, the power minimization problem is formulated mathematically as a non-convex problem. For distributed implementation, the non-convex problem is cast into the dual decomposition framework. Resorting to matrix pencil theory, a novel asynchronous iterative method is proposed for solving the dual of the non-convex problem. The methods for beamforming and power control are obtained by investigating the primal problem. Finally, simulation results are provided to demonstrate the convergence and performance of the algorithm.
Labels: cs.IT
__index_level_0__: 28,377
2312.13228
Benchmarks for Retrospective Automated Driving System Crash Rate Analysis Using Police-Reported Crash Data
With fully automated driving systems (ADS; SAE level 4) ride-hailing services expanding in the US, we are now approaching an inflection point, where the process of retrospectively evaluating ADS safety impact can start to yield statistically credible conclusions. An ADS safety impact measurement requires a comparison to a "benchmark" crash rate. This study aims to address, update, and extend the existing literature by leveraging police-reported crashes to generate human crash rates for multiple geographic areas with current ADS deployments. All of the data leveraged is publicly accessible, and the benchmark determination methodology is intended to be repeatable and transparent. Generating a benchmark that is comparable to ADS crash data is associated with certain challenges, including data selection, handling underreporting and reporting thresholds, identifying the population of drivers and vehicles to compare against, choosing an appropriate severity level to assess, and matching crash and mileage exposure data. Consequently, we identify essential steps when generating benchmarks, and present our analyses amongst a backdrop of existing ADS benchmark literature. One analysis presented is the usage of established underreporting correction methodology to publicly available human driver police-reported data to improve comparability to publicly available ADS crash data. We also identify important dependencies in controlling for geographic region, road type, and vehicle type, and show how failing to control for these features can bias results. This body of work aims to contribute to the ability of the community - researchers, regulators, industry, and experts - to reach consensus on how to estimate accurate benchmarks.
Labels: cs.RO
__index_level_0__: 417,244
2311.02805
Tailoring Self-Rationalizers with Multi-Reward Distillation
Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant scales (e.g., the 175B-parameter GPT-3); and 2) focuses largely on downstream performance, ignoring the semantics of the rationales themselves (e.g., are they faithful, true, and helpful for humans?). In this work, we enable small-scale LMs (approx. 200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, assessed both by automatic and human evaluation. Our method, MaRio (Multi-rewArd RatIOnalization), is a multi-reward conditioned self-rationalization algorithm that optimizes multiple distinct properties like plausibility, diversity and consistency. Results on five difficult question-answering datasets (StrategyQA, QuaRel, OpenBookQA, NumerSense, and QASC) show that not only does MaRio improve task accuracy, but it also improves the self-rationalization quality of small LMs across the aforementioned axes better than a supervised fine-tuning (SFT) baseline. Extensive human evaluations confirm that MaRio rationales are preferred over SFT rationales, and show qualitative improvements in plausibility and consistency.
Labels: cs.CL
__index_level_0__: 405,591
1910.02130
Online Active Perception for Partially Observable Markov Decision Processes with Limited Budget
Active perception strategies enable an agent to selectively gather information in a way to improve its performance. In applications in which the agent does not have prior knowledge about the available information sources, it is crucial to synthesize active perception strategies at runtime. We consider a setting in which at runtime an agent is capable of gathering information under a limited budget. We pose the problem in the context of partially observable Markov decision processes. We propose a generalized greedy strategy that selects a subset of information sources with near-optimality guarantees on uncertainty reduction. Our theoretical analysis establishes that the proposed active perception strategy achieves near-optimal performance in terms of expected cumulative reward. We demonstrate the resulting strategies in simulations on a robotic navigation problem.
Labels: cs.AI, cs.SY
__index_level_0__: 148,144
1410.0709
Efficient classification of billions of points into complex geographic regions using hierarchical triangular mesh
We present a case study about the spatial indexing and regional classification of billions of geographic coordinates from geo-tagged social network data using Hierarchical Triangular Mesh (HTM) implemented for Microsoft SQL Server. Due to the lack of certain features of the HTM library, we use it in conjunction with the GIS functions of SQL Server to significantly increase the efficiency of pre-filtering of spatial filter and join queries. For example, we implemented a new algorithm to compute the HTM tessellation of complex geographic regions and precomputed the intersections of HTM triangles and geographic regions for faster false-positive filtering. With full control over the index structure, HTM-based pre-filtering of simple containment searches outperforms SQL Server spatial indices by a factor of ten and HTM-based spatial joins run about a hundred times faster.
Labels: cs.DB
__index_level_0__: 36,493
2205.14549
Asymmetric Local Information Privacy and the Watchdog Mechanism
This paper proposes a novel watchdog privatization scheme by generalizing local information privacy (LIP) to enhance data utility. To protect the sensitive features $S$ correlated with some useful data $X$, LIP restricts the lift, the ratio of the posterior belief to the prior on $S$ after and before accessing $X$. For each $x$, both maximum and minimum lift over sensitive features are measures of the privacy risk of publishing this symbol and should be restricted for the privacy-preserving purpose. Previous works enforce the same bound for both max-lift and min-lift. However, empirical observations show that the min-lift is usually much smaller than the max-lift. In this work, we generalize the LIP definition to consider the unequal values of max and min lift, i.e., considering different bounds for max-lift and min-lift. This new definition is applied to the watchdog privacy mechanism. We demonstrate that the utility is enhanced under a given privacy constraint on local differential privacy. At the same time, the resulting max-lift is lower and, therefore, tightly restricts other privacy leakages, e.g., mutual information, maximal leakage, and $\alpha$-leakage.
Labels: cs.IT
__index_level_0__: 299,402
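The lift quantity this abstract restricts can be computed directly from a joint pmf. The following toy sketch (the pmf and function name are illustrative assumptions, not the paper's watchdog mechanism) computes the per-symbol min- and max-lift the abstract bounds:

```python
def lift_range(joint):
    """For a joint pmf joint[(s, x)] = P(S=s, X=x), return per-x
    (min-lift, max-lift), where the lift is l(s, x) = P(s | x) / P(s)."""
    p_s, p_x = {}, {}
    for (s, x), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p  # marginal of the sensitive feature S
        p_x[x] = p_x.get(x, 0.0) + p  # marginal of the useful data X
    out = {}
    for x in p_x:
        lifts = [joint.get((s, x), 0.0) / p_x[x] / p_s[s] for s in p_s]
        out[x] = (min(lifts), max(lifts))
    return out

# Independent S and X: every lift is exactly 1 (no leakage).
uniform = {(s, x): 0.25 for s in (0, 1) for x in (0, 1)}
print(lift_range(uniform))  # {0: (1.0, 1.0), 1: (1.0, 1.0)}
```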
2304.04567
ADS_UNet: A Nested UNet for Histopathology Image Segmentation
The UNet model consists of fully convolutional network (FCN) layers arranged as contracting encoder and upsampling decoder maps. Nested arrangements of these encoder and decoder maps give rise to extensions of the UNet model, such as UNet^e and UNet++. Other refinements include constraining the outputs of the convolutional layers to discriminate between segment labels when trained end to end, a property called deep supervision. This reduces feature diversity in these nested UNet models despite their large parameter space. Furthermore, for texture segmentation, pixel correlations at multiple scales contribute to the classification task; hence, explicit deep supervision of shallower layers is likely to enhance performance. In this paper, we propose ADS_UNet, a stage-wise additive training algorithm that incorporates resource-efficient deep supervision in shallower layers and takes performance-weighted combinations of the sub-UNets to create the segmentation model. We provide empirical evidence on three histopathology datasets to support the claim that the proposed ADS_UNet reduces correlations between constituent features and improves performance while being more resource efficient. We demonstrate that ADS_UNet outperforms state-of-the-art Transformer-based models by 1.08 and 0.6 points on the CRAG and BCSS datasets, while requiring only 37% of the GPU consumption and 34% of the training time of Transformers.
Labels: cs.CV
__index_level_0__: 357,269
2406.10501
Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition
Recently, there have been efforts to improve the performance of sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency from two distinct perspectives and learn instance-discriminative representations for sign language recognition. On the one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and coarse-grained trunks, we utilize information at both granularities and encode it into latent spaces. The consistency between hand and trunk features is constrained to encourage learning consistent representations of instance samples. On the other hand, inspired by the complementary property of the motion and joint modalities, we first introduce first-order motion information into sign language modeling. Additionally, we further bridge the interaction between the embedding spaces of both modalities, facilitating bidirectional knowledge transfer to enhance sign language representation. Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin. The source code is publicly available at https://github.com/sakura/Code.
Labels: cs.CV
__index_level_0__: 464,429
1904.00615
Discontinuous Constituency Parsing with a Stack-Free Transition System and a Dynamic Oracle
We introduce a novel transition system for discontinuous constituency parsing. Instead of storing subtrees in a stack --i.e. a data structure with linear-time sequential access-- the proposed system uses a set of parsing items, with constant-time random access. This change makes it possible to construct any discontinuous constituency tree in exactly $4n - 2$ transitions for a sentence of length $n$. At each parsing step, the parser considers every item in the set to be combined with a focus item and to construct a new constituent in a bottom-up fashion. The parsing strategy is based on the assumption that most syntactic structures can be parsed incrementally and that the set --the memory of the parser-- remains reasonably small on average. Moreover, we introduce a provably correct dynamic oracle for the new transition system, and present the first experiments in discontinuous constituency parsing using a dynamic oracle. Our parser obtains state-of-the-art results on three English and German discontinuous treebanks.
Labels: cs.CL
__index_level_0__: 125,913
2211.01987
Exact calculation of quantizer constants for arbitrary lattices
We present an algorithm for the exact computer-aided construction of the Voronoi cells of lattices with known symmetry group. Our algorithm scales better than linearly with the total number of faces and is applicable to dimensions beyond 12, which previous methods could not achieve. The new algorithm is applied to the Coxeter-Todd lattice $K_{12}$ as well as to a family of lattices obtained from laminating $K_{12}$. By optimizing this family, we obtain a new best 13-dimensional lattice quantizer (among the lattices with published exact quantizer constants).
Labels: cs.IT
__index_level_0__: 328,436
1505.06999
Some Open Problems in Optimal AdaBoost and Decision Stumps
The significance of the study of the theoretical and practical properties of AdaBoost is unquestionable, given its simplicity, wide practical use, and effectiveness on real-world datasets. Here we present a few open problems regarding the behavior of "Optimal AdaBoost," a term coined by Rudin, Daubechies, and Schapire in 2004 to label the simple version of the standard AdaBoost algorithm in which the weak learner that AdaBoost uses always outputs the weak classifier with lowest weighted error among the respective hypothesis class of weak classifiers implicit in the weak learner. We concentrate on the standard, "vanilla" version of Optimal AdaBoost for binary classification that results from using an exponential-loss upper bound on the misclassification training error. We present two types of open problems. One deals with general weak hypotheses. The other deals with the particular case of decision stumps, as commonly used in practice. Answers to the open problems can have immediate significant impact on (1) cementing previously established results on asymptotic convergence properties of Optimal AdaBoost, for finite datasets, which in turn can be the start to any convergence-rate analysis; (2) understanding the weak-hypotheses class of effective decision stumps generated from data, which we have empirically observed to be significantly smaller than the typically obtained class, as well as the effect on the weak learner's running time and previously established improved bounds on the generalization performance of Optimal AdaBoost classifiers; and (3) shedding some light on the "self control" that AdaBoost tends to exhibit in practice.
Labels: cs.LG
__index_level_0__: 43,505
2208.04921
TSRFormer: Table Structure Recognition with Transformers
We present a new table structure recognition (TSR) approach, called TSRFormer, for robustly recognizing the structures of complex tables with geometrical distortions from various table images. Unlike previous methods, we formulate table separation line prediction as a line regression problem instead of an image segmentation problem and propose a new two-stage DETR based separator prediction approach, dubbed Separator REgression TRansformer (SepRETR), to predict separation lines from table images directly. To make the two-stage DETR framework work efficiently and effectively for the separation line prediction task, we propose two improvements: 1) a prior-enhanced matching strategy to solve the slow convergence issue of DETR; 2) a new cross-attention module that samples features from a high-resolution convolutional feature map directly, so that high localization accuracy is achieved with low computational cost. After separation line prediction, a simple relation-network based cell merging module is used to recover spanning cells. With these new techniques, our TSRFormer achieves state-of-the-art performance on several benchmark datasets, including SciTSR, PubTabNet and WTW. Furthermore, we have validated the robustness of our approach to tables with complex structures, borderless cells, large blank spaces, empty or spanning cells, as well as distorted or even curved shapes, on a more challenging real-world in-house dataset.
Labels: cs.CV
__index_level_0__: 312,261
2411.09037
A Transformer-Based Visual Piano Transcription Algorithm
Automatic music transcription (AMT) of musical performances is a long-standing problem in the field of Music Information Retrieval (MIR). Visual piano transcription (VPT) is a multimodal subproblem of AMT which focuses on extracting a symbolic representation of a piano performance from visual information only (e.g., from a top-down video of the piano keyboard). Inspired by the success of Transformers for audio-based AMT, as well as their recent successes in other computer vision tasks, in this paper we present a Transformer-based architecture for VPT. The proposed VPT system combines a piano bounding box detection model with an onset and pitch detection model, allowing our system to perform well in more naturalistic conditions, such as imperfect image crops around the piano and slightly tilted images.
Labels: cs.CV
__index_level_0__: 508,106
2409.00717
Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques
We initiate the study of Preference-Based Multi-Agent Reinforcement Learning (PbMARL), exploring both theoretical foundations and empirical validations. We define the task as identifying the Nash equilibrium from a preference-only offline dataset in general-sum games, a problem marked by the challenge of sparse feedback signals. Our theory establishes the upper complexity bounds for Nash Equilibrium in effective PbMARL, demonstrating that single-policy coverage is inadequate and highlighting the importance of unilateral dataset coverage. These theoretical insights are verified through comprehensive experiments. To enhance the practical performance, we further introduce two algorithmic techniques. (1) We propose a Mean Squared Error (MSE) regularization along the time axis to achieve a more uniform reward distribution and improve reward learning outcomes. (2) We propose an additional penalty based on the distribution of the dataset to incorporate pessimism, improving stability and effectiveness during training. Our findings underscore the multifaceted approach required for PbMARL, paving the way for effective preference-based multi-agent systems.
Labels: cs.AI, cs.LG, cs.MA, Other
__index_level_0__: 485,042
2007.12829
Joint Featurewise Weighting and Local Structure Learning for Multi-view Subspace Clustering
Multi-view clustering integrates multiple feature sets, which reveal distinct aspects of the data and provide complementary information to each other, to improve the clustering performance. It remains challenging to effectively exploit complementary information across multiple views since the original data often contain noise and are highly redundant. Moreover, most existing multi-view clustering methods only aim to explore the consistency of all views while ignoring the local structure of each view. However, it is necessary to take the local structure of each view into consideration, because different views can present different geometric structures while admitting the same cluster structure. To address these issues, we propose a novel multi-view subspace clustering method that simultaneously assigns weights to different features and captures local information of the data in view-specific self-representation feature spaces. In particular, a common cluster structure regularization is adopted to guarantee consistency among different views. An efficient algorithm based on an augmented Lagrangian multiplier is also developed to solve the associated optimization problem. Experiments conducted on several benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance. We provide the Matlab code on https://github.com/Ekin102003/JFLMSC.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
188,934
1907.04641
One Shot Learning for Deformable Medical Image Registration and Periodic Motion Tracking
Deformable image registration is a very important field of research in medical imaging. Recently multiple deep learning approaches were published in this area showing promising results. However, drawbacks of deep learning methods are the need for a large amount of training datasets and their inability to register unseen images different from the training datasets. One shot learning comes without the need of large training datasets and has already been proven to be applicable to 3D data. In this work we present a one shot registration approach for periodic motion tracking in 3D and 4D datasets. When applied to a 3D dataset, the algorithm calculates the inverse of a registration vector field simultaneously. For registration we employed a U-Net combined with a coarse to fine approach and a differential spatial transformer module. The algorithm was thoroughly tested with multiple 4D and 3D datasets publicly available. The results show that the presented approach is able to track periodic motion and to yield a competitive registration accuracy. Possible applications are the use as a stand-alone algorithm for 3D and 4D motion tracking or at the beginning of studies until enough datasets for a separate training phase are available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
138,158
1910.05815
An Efficient Beam and Channel Acquisition via Sparsity Map and Joint Angle-Delay Power Profile Estimation for Wideband Massive MIMO Systems
In this paper, an efficient beam and channel acquisition scheme together with joint angle-delay power profile (JADPP) construction are proposed for single-carrier mm-wave wideband sparse massive multiple-input multiple-output (MIMO) channels when hybrid beamforming architecture is utilized. We consider two different modes of operation, namely slow-time beam acquisition and fast-time instantaneous channel estimation, for the training stage of time division duplex based systems. In the first mode, where pre-structured hybrid beams are formed to scan intended angular sectors, the joint angle-delay sparsity map together with power intensities of each user channel are obtained by using a novel constant false alarm rate thresholding algorithm inspired by adaptive radar detection theory. The proposed thresholding algorithm employs a spatio-temporal adaptive matched filter type estimator, taking the strong interference due to simultaneously active multipath components of different user channels into account, in order to estimate the JADPP of each user. After applying the proposed thresholding algorithm on the estimated power profile, the angle-delay sparsity map of the massive MIMO channel is constructed, based on which the channel covariance matrices (CCMs) are formed with a significantly reduced amount of training snapshots. Then, by using the estimated CCMs, the analog beamformer is reconstructed by means of a virtual sectorization while taking the inter-group and inter-symbol interference into account. Finally, for the second mode of operation, two novel reduced-rank instantaneous channel estimators, operating in a proper beamspace formed by the hybrid structure, are proposed. The proposed beam and channel acquisition techniques attain the channel estimation accuracy of the minimum mean square error filter with true knowledge of the CCMs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
149,180
2404.00816
HeteroMILE: a Multi-Level Graph Representation Learning Framework for Heterogeneous Graphs
Heterogeneous graphs are ubiquitous in real-world applications because they can represent various relationships between different types of entities. Therefore, learning embeddings in such graphs is a critical problem in graph machine learning. However, existing solutions for this problem fail to scale to large heterogeneous graphs due to their high computational complexity. To address this issue, we propose a Multi-Level Embedding framework of nodes on a heterogeneous graph (HeteroMILE) - a generic methodology that allows contemporary graph embedding methods to scale to large graphs. HeteroMILE repeatedly coarsens the large sized graph into a smaller size while preserving the backbone structure of the graph before embedding it, effectively reducing the computational cost by avoiding time-consuming processing operations. It then refines the coarsened embedding to the original graph using a heterogeneous graph convolution neural network. We evaluate our approach using several popular heterogeneous graph datasets. The experimental results show that HeteroMILE can substantially reduce computational time (approximately 20x speedup) and generate an embedding of better quality for link prediction and node classification.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
443,115
2412.01299
Cross-Modal Visual Relocalization in Prior LiDAR Maps Utilizing Intensity Textures
Cross-modal localization has drawn increasing attention in recent years, while the visual relocalization in prior LiDAR maps is less studied. Related methods usually suffer from inconsistency between the 2D texture and 3D geometry, neglecting the intensity features in the LiDAR point cloud. In this paper, we propose a cross-modal visual relocalization system in prior LiDAR maps utilizing intensity textures, which consists of three main modules: map projection, coarse retrieval, and fine relocalization. In the map projection module, we construct the database of intensity channel map images leveraging the dense characteristic of panoramic projection. The coarse retrieval module retrieves the top-K most similar map images to the query image from the database, and retains the top-K' results by covisibility clustering. The fine relocalization module applies a two-stage 2D-3D association and a covisibility inlier selection method to obtain robust correspondences for 6DoF pose estimation. The experimental results on our self-collected datasets demonstrate the effectiveness in both place recognition and pose estimation tasks.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
513,059
2403.14441
Quantifying Semantic Query Similarity for Automated Linear SQL Grading: A Graph-based Approach
Quantifying the semantic similarity between database queries is a critical challenge with broad applications, ranging from query log analysis to automated educational assessment of SQL skills. Traditional methods often rely solely on syntactic comparisons or are limited to checking for semantic equivalence. This paper introduces a novel graph-based approach to measure the semantic dissimilarity between SQL queries. Queries are represented as nodes in an implicit graph, while the transitions between nodes are called edits, which are weighted by semantic dissimilarity. We employ shortest path algorithms to identify the lowest-cost edit sequence between two given queries, thereby defining a quantifiable measure of semantic distance. A prototype implementation of this technique has been evaluated through an empirical study, which strongly suggests that our method provides more accurate and comprehensible grading compared to existing techniques. Moreover, the results indicate that our approach comes close to the quality of manual grading, making it a robust tool for diverse database query comparison tasks.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
440,080
2310.09388
CORN: Co-Trained Full- And No-Reference Speech Quality Assessment
Perceptual evaluation constitutes a crucial aspect of various audio-processing tasks. Full reference (FR) or similarity-based metrics rely on high-quality reference recordings, to which lower-quality or corrupted versions of the recording may be compared for evaluation. In contrast, no-reference (NR) metrics evaluate a recording without relying on a reference. Both the FR and NR approaches exhibit advantages and drawbacks relative to each other. In this paper, we present a novel framework called CORN that amalgamates these dual approaches, concurrently training both FR and NR models together. After training, the models can be applied independently. We evaluate CORN by predicting several common objective metrics and across two different architectures. The NR model trained using CORN has access to a reference recording during training, and thus, as one would expect, it consistently outperforms baseline NR models trained independently. Perhaps even more remarkable is that the CORN FR model also outperforms its baseline counterpart, even though it relies on the same training data and the same model architecture. Thus, a single training regime produces two independently useful models, each outperforming independently trained models.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,751
1609.09167
On private information retrieval array codes
Given a database, the private information retrieval (PIR) protocol allows a user to make queries to several servers and retrieve a certain item of the database via the feedbacks, without revealing the privacy of the specific item to any single server. Classical models of PIR protocols require that each server stores a whole copy of the database. Recently new PIR models are proposed with coding techniques arising from distributed storage system. In these new models each server only stores a fraction $1/s$ of the whole database, where $s>1$ is a given rational number. PIR array codes are recently proposed by Fazeli, Vardy and Yaakobi to characterize the new models. Consider a PIR array code with $m$ servers and the $k$-PIR property (which indicates that these $m$ servers may emulate any efficient $k$-PIR protocol). The central problem is to design PIR array codes with optimal rate $k/m$. Our contribution to this problem is three-fold. First, for the case $1<s\le 2$, although PIR array codes with optimal rate have been constructed recently by Blackburn and Etzion, the number of servers in their construction is impractically large. We determine the minimum number of servers admitting the existence of a PIR array code with optimal rate for a certain range of parameters. Second, for the case $s>2$, we derive a new upper bound on the rate of a PIR array code. Finally, for the case $s>2$, we analyze a new construction by Blackburn and Etzion and show that its rate is better than all the other existing constructions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
61,680
2306.08419
Mediated Multi-Agent Reinforcement Learning
The majority of Multi-Agent Reinforcement Learning (MARL) literature equates the cooperation of self-interested agents in mixed environments to the problem of social welfare maximization, allowing agents to arbitrarily share rewards and private information. This results in agents that forgo their individual goals in favour of social good, which can potentially be exploited by selfish defectors. We argue that cooperation also requires agents' identities and boundaries to be respected by making sure that the emergent behaviour is an equilibrium, i.e., a convention that no agent can deviate from and receive higher individual payoffs. Inspired by advances in mechanism design, we propose to solve the problem of cooperation, defined as finding socially beneficial equilibrium, by using mediators. A mediator is a benevolent entity that may act on behalf of agents, but only for the agents that agree to it. We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare subject to constraints that encourage agents to cooperate through the mediator. Our experiments in matrix and iterative games highlight the potential power of applying mediators in MARL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
true
373,408
2101.08238
AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modality representations induced by the semantic information present for a person and ignore background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits the multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to manipulate long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. The AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44\% Rank@1 on the CUHK-PEDES test set. It also outperforms its competitors by $>$10\% in cross-viewpoint text-to-image Re-ID scenarios on CrossRe-ID and CUHK-SYSU datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
216,267
2012.14563
Random Planted Forest: a directly interpretable tree ensemble
We introduce a novel interpretable tree based algorithm for prediction in a regression setting. Our motivation is to estimate the unknown regression function from a functional decomposition perspective in which the functional components correspond to lower order interaction terms. The idea is to modify the random forest algorithm by keeping certain leaves after they are split instead of deleting them. This leads to non-binary trees which we refer to as planted trees. An extension to a forest leads to our random planted forest algorithm. Additionally, the maximum number of covariates which can interact within a leaf can be bounded. If we set this interaction bound to one, the resulting estimator is a sum of one-dimensional functions. In the other extreme case, if we do not set a limit, the resulting estimator and corresponding model place no restrictions on the form of the regression function. In a simulation study we find encouraging prediction and visualisation properties of our random planted forest method. We also develop theory for an idealized version of random planted forests in cases where the interaction bound is low. We show that if it is smaller than three, the idealized version achieves asymptotically optimal convergence rates up to a logarithmic factor. Code is available on GitHub https://github.com/PlantedML/randomPlantedForest.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
213,541
2003.14127
Peri-Diagnostic Decision Support Through Cost-Efficient Feature Acquisition at Test-Time
Computer-aided diagnosis (CADx) algorithms in medicine provide patient-specific decision support for physicians. These algorithms are usually applied after full acquisition of high-dimensional multimodal examination data, and often assume feature-completeness. This, however, is rarely the case due to examination costs, invasiveness, or a lack of indication. A sub-problem in CADx, which to our knowledge has received very little attention among the CADx community so far, is to guide the physician during the entire peri-diagnostic workflow, including the acquisition stage. We model the following question, asked from a physician's perspective: "Given the evidence collected so far, which examination should I perform next, in order to achieve the most accurate and efficient diagnostic prediction?". In this work, we propose a novel approach which is enticingly simple: use dropout at the input layer, and integrated gradients of the trained network at test-time to attribute feature importance dynamically. We validate and explain the effectiveness of our proposed approach using two public medical and two synthetic datasets. Results show that our proposed approach is more cost- and feature-efficient than prior approaches and achieves a higher overall accuracy. This directly translates to less unnecessary examinations for patients, and a quicker, less costly and more accurate decision support for the physician.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
170,421
2403.16365
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Modern neural networks are often trained on massive datasets that are web scraped with minimal human inspection. As a result of this insecure curation pipeline, an adversary can poison or backdoor the resulting model by uploading malicious data to the internet and waiting for a victim to scrape and train on it. Existing approaches for creating poisons and backdoors start with randomly sampled clean data, called base samples, and then modify those samples to craft poisons. However, some base samples may be significantly more amenable to poisoning than others. As a result, we may be able to craft more potent poisons by carefully choosing the base samples. In this work, we use guided diffusion to synthesize base samples from scratch that lead to significantly more potent poisons and backdoors than previous state-of-the-art attacks. Our Guided Diffusion Poisoning (GDP) base samples can be combined with any downstream poisoning or backdoor attack to boost its effectiveness. Our implementation code is publicly available at: https://github.com/hsouri/GDP .
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
440,995
1707.05479
PunFields at SemEval-2017 Task 7: Employing Roget's Thesaurus in Automatic Pun Recognition and Interpretation
The article describes a model of automatic interpretation of English puns, based on Roget's Thesaurus, and its implementation, PunFields. In a pun, the algorithm discovers two groups of words that belong to two main semantic fields. The fields become a semantic vector based on which an SVM classifier learns to recognize puns. A rule-based model is then applied for recognition of intentionally ambiguous (target) words and their definitions. In SemEval Task 7 PunFields shows a considerably good result in pun classification, but requires improvement in searching for the target word and its definition.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
77,242
1401.3632
Bayesian Conditional Density Filtering
We propose a Conditional Density Filtering (C-DF) algorithm for efficient online Bayesian inference. C-DF adapts MCMC sampling to the online setting, sampling from approximations to conditional posterior distributions obtained by propagating surrogate conditional sufficient statistics (a function of data and parameter estimates) as new data arrive. These quantities eliminate the need to store or process the entire dataset simultaneously and offer a number of desirable features. Often, these include a reduction in memory requirements and runtime and improved mixing, along with state-of-the-art parameter inference and prediction. These improvements are demonstrated through several illustrative examples including an application to high dimensional compressed regression. Finally, we show that C-DF samples converge to the target posterior distribution asymptotically as sampling proceeds and more data arrives.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
29,923
2305.08594
Improving Customer Experience in Call Centers with Intelligent Customer-Agent Pairing
Customer experience plays a critical role for a profitable organisation or company. A satisfied customer for a company corresponds to higher rates of customer retention, and better representation in the market. One way to improve customer experience is to optimize the functionality of its call center. In this work, we have collaborated with the largest provider of telecommunications and Internet access in the country, and we formulate the customer-agent pairing problem as a machine learning problem. The proposed learning-based method causes a significant improvement in performance of about $215\%$ compared to a rule-based method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
364,338
2012.08149
Dilated-Scale-Aware Attention ConvNet For Multi-Class Object Counting
Object counting aims to estimate the number of objects in images. The leading counting approaches focus on the single category counting task and achieve impressive performance. Note that there are multiple categories of objects in real scenes. Multi-class object counting expands the scope of application of object counting task. The multi-target detection task can achieve multi-class object counting in some scenarios. However, it requires the dataset annotated with bounding boxes. Compared with the point annotations in mainstream object counting issues, the coordinate box-level annotations are more difficult to obtain. In this paper, we propose a simple yet efficient counting network based on point-level annotations. Specifically, we first change the traditional output channel from one to the number of categories to achieve multiclass counting. Since all categories of objects use the same feature extractor in our proposed framework, their features will interfere mutually in the shared feature space. We further design a multi-mask structure to suppress harmful interaction among objects. Extensive experiments on the challenging benchmarks illustrate that the proposed method achieves state-of-the-art counting performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
211,678
1402.1151
Image Acquisition in an Underwater Vision System with NIR and VIS Illumination
The paper describes an image acquisition system able to capture images in two separate bands of light, used for underwater autonomous navigation. The channels are the visible light spectrum and the near infrared spectrum. The characteristics of the natural underwater environment are also described, together with the process of underwater image creation. The results of an experiment comparing selected images acquired in these channels are discussed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
30,648
2206.05149
Referring Image Matting
Different from conventional image matting, which either requires user-defined scribbles/trimap to extract a specific foreground object or directly extracts all the foreground objects in the image indiscriminately, we introduce a new task named Referring Image Matting (RIM) in this paper, which aims to extract the meticulous alpha matte of the specific object that best matches the given natural language description, thus enabling a more natural and simpler instruction for image matting. First, we establish a large-scale challenging dataset RefMatte by designing a comprehensive image composition and expression generation engine to automatically produce high-quality images along with diverse text attributes based on public datasets. RefMatte consists of 230 object categories, 47,500 images, 118,749 expression-region entities, and 474,996 expressions. Additionally, we construct a real-world test set with 100 high-resolution natural images and manually annotate complex phrases to evaluate the out-of-domain generalization abilities of RIM methods. Furthermore, we present a novel baseline method CLIPMat for RIM, including a context-embedded prompt, a text-driven semantic pop-up, and a multi-level details extractor. Extensive experiments on RefMatte in both keyword and expression settings validate the superiority of CLIPMat over representative methods. We hope this work could provide novel insights into image matting and encourage more follow-up studies. The dataset, code and models are available at https://github.com/JizhiziLi/RIM.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
301,907
2305.02801
A numerically efficient output-only system-identification framework for stochastically forced self-sustained oscillators
Self-sustained oscillations are ubiquitous in nature and engineering. In this paper, we propose a novel output-only system-identification framework for identifying the system parameters of a self-sustained oscillator affected by Gaussian white noise. A Langevin model that characterizes the self-sustained oscillator is postulated, and the corresponding Fokker--Planck equation is derived from stochastic averaging. From the drift and diffusion terms of the Fokker--Planck equation, unknown parameters of the system are identified. We develop a numerically efficient algorithm for enhancing the accuracy of parameter identification. In particular, a modified Levenberg--Marquardt optimization algorithm tailored to output-only system identification is introduced. The proposed framework is demonstrated on both numerical and experimental oscillators with varying system parameters that develop into self-sustained oscillations. The results show that the computational cost required for performing the system identification is dramatically reduced by using the proposed framework. Also, system parameters that were difficult to be extracted with the existing method could be efficiently computed with the system identification method developed in this study. Pertaining to the robustness and computational efficiency of the presented framework, this study can contribute to an accurate and fast diagnosis of dynamical systems under stochastic forcing.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
362,178
1904.13221
Eigen Values Features for the Classification of Brain Signals corresponding to 2D and 3D Educational Contents
In this paper, we have proposed a brain signal classification method, which uses eigenvalues of the covariance matrix as features to classify images (topomaps) created from the brain signals. The signals are recorded while subjects answer 2D and 3D questions. The system is used to classify the correct and incorrect answers for both 2D and 3D questions. Using the classification technique, the impacts of 2D and 3D multimedia educational contents on learning, memory retention and recall are compared. The subjects learn similar 2D and 3D educational contents. Afterwards, subjects are asked 20 multiple-choice questions (MCQs) associated with the contents after thirty minutes (Short-Term Memory) and two months (Long-Term Memory). Eigenvalue features extracted from topomap images are given to K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers, in order to identify the states of the brain related to incorrect and correct answers. Excellent accuracies are obtained by both classifiers, and statistical analysis of the results indicates no significant difference between 2D and 3D multimedia educational contents on learning, memory retention and recall in both STM and LTM.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
129,329
2109.11058
Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Prior work has shown that structural supervision helps English language models learn generalizations about syntactic phenomena such as subject-verb agreement. However, it remains unclear if such an inductive bias would also improve language models' ability to learn grammatical dependencies in typologically different languages. Here we investigate this question in Mandarin Chinese, which has a logographic, largely syllable-based writing system; different word order; and sparser morphology than English. We train LSTMs, Recurrent Neural Network Grammars, Transformer language models, and Transformer-parameterized generative parsing models on two Mandarin Chinese datasets of different sizes. We evaluate the models' ability to learn different aspects of Mandarin grammar that assess syntactic and semantic relationships. We find suggestive evidence that structural supervision helps with representing syntactic state across intervening content and improves performance in low-data settings, suggesting that the benefits of hierarchical inductive biases in acquiring dependency relationships may extend beyond English.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
256,827
2206.04672
Overcoming the Spectral Bias of Neural Value Approximation
Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm. While multi-layer perceptron networks are universal function approximators, recent works in neural kernel regression suggest the presence of a spectral bias, where fitting high-frequency components of the value function requires exponentially more gradient update steps than the low-frequency ones. In this work, we re-examine off-policy reinforcement learning through the lens of kernel regression and propose to overcome such bias via a composite neural tangent kernel. With just a single line-change, our approach, the Fourier feature networks (FFN) produce state-of-the-art performance on challenging continuous control domains with only a fraction of the compute. Faster convergence and better off-policy stability also make it possible to remove the target network without suffering catastrophic divergences, which further reduces TD(0)'s estimation bias on a few tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
301,715
cs/0004008
How to Evaluate your Question Answering System Every Day and Still Get Real Work Done
In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer correct if it exceeds a given recall threshold. We determined that the answer correctness predicted by Qaviar agreed with the human 93% to 95% of the time. 41 question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
537,089
1807.08585
A refined mean field approximation of synchronous discrete-time population models
Mean field approximation is a popular method to study the behaviour of stochastic models composed of a large number of interacting objects. When the objects are asynchronous, the mean field approximation of a population model can be expressed as an ordinary differential equation. When the objects are (clock-) synchronous the mean field approximation is a discrete time dynamical system. We focus on the latter. We study the accuracy of mean field approximation when this approximation is a discrete-time dynamical system. We extend a result that was shown for the continuous time case and we prove that expected performance indicators estimated by mean field approximation are $O(1/N)$-accurate. We provide simple expressions to effectively compute the asymptotic error of mean field approximation, for finite time-horizon and steady-state, and we use this computed error to propose what we call a \emph{refined} mean field approximation. We show, by using a few numerical examples, that this technique improves the quality of approximation compared to the classical mean field approximation, especially for relatively small population sizes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
103,572
1407.1514
A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm
Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput. The length-$N$ input sequence is partitioned into $B$ blocks. Processing each block independently of the other blocks can accelerate the computation by a factor of $B$, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) context tree source underlying the entire input, and then encode each of the $B$ blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its computational complexity is $O(N/B)$. Its redundancy is approximately $B\log(N/B)$ bits above Rissanen's lower bound on universal compression performance, with respect to any context tree source whose maximal depth is at most $\log(N/B)$. We improve the compression by using different quantizers for states of the context tree based on the number of symbols corresponding to those states. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
34,445
2207.03557
Flow Synthesis Based Visual Servoing Frameworks for Monocular Obstacle Avoidance Amidst High-Rises
We propose a novel flow synthesis based visual servoing framework enabling long-range obstacle avoidance for Micro Air Vehicles (MAV) flying amongst tall skyscrapers. Recent deep learning based frameworks use optical flow to do high-precision visual servoing. In this paper, we explore the question: can we design a surrogate flow for these high-precision visual-servoing methods, which leads to obstacle avoidance? We revisit the concept of saliency for identifying high-rise structures in/close to the line of attack amongst other competing skyscrapers and buildings as a collision obstacle. A synthesised flow is used to displace the salient object segmentation mask. This flow is so computed that the visual servoing controller maneuvers the MAV safely around the obstacle. In this approach, we use a multi-step Cross-Entropy Method (CEM) based servo control to achieve flow convergence, resulting in obstacle avoidance. We use this novel pipeline to successfully and persistently maneuver around high-rises and reach the goal in simulated and photo-realistic real-world scenes. We conduct extensive experimentation and compare our approach with optical flow and short-range depth-based obstacle avoidance methods to demonstrate the proposed framework's merit. Additional visualisations can be found at https://sites.google.com/view/monocular-obstacle/home
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
306,884
2206.05490
Discovery and density estimation of latent confounders in Bayesian networks with evidence lower bound
Discovering and parameterising latent confounders represent important and challenging problems in causal structure learning and density estimation respectively. In this paper, we focus on both discovering and learning the distribution of latent confounders. This task requires solutions that come from different areas of statistics and machine learning. We combine elements of variational Bayesian methods, expectation-maximisation, hill-climbing search, and structure learning under the assumption of causal insufficiency. We propose two learning strategies: one that maximises model selection accuracy, and another that improves computational efficiency in exchange for minor reductions in accuracy. The former strategy is suitable for small networks and the latter for moderate size networks. Both learning strategies perform well relative to existing solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
302,023
2409.18836
Constructing Confidence Intervals for 'the' Generalization Error -- a Comprehensive Benchmark Study
When assessing the quality of prediction models in machine learning, confidence intervals (CIs) for the generalization error, which measures predictive performance, are a crucial tool. Luckily, there exist many methods for computing such CIs and new promising approaches are continuously being proposed. Typically, these methods combine various resampling procedures, most popular among them cross-validation and bootstrapping, with different variance estimation techniques. Unfortunately, however, there is currently no consensus on when any of these combinations may be most reliably employed and how they generally compare. In this work, we conduct a large-scale study comparing CIs for the generalization error, the first one of such size, where we empirically evaluate 13 different CI methods on a total of 19 tabular regression and classification problems, using seven different inducers and a total of eight loss functions. We give an overview of the methodological foundations and inherent challenges of constructing CIs for the generalization error and provide a concise review of all 13 methods in a unified framework. Finally, the CI methods are evaluated in terms of their relative coverage frequency, width, and runtime. Based on these findings, we can identify a subset of methods that we would recommend. We also publish the datasets as a benchmarking suite on OpenML and our code on GitHub to serve as a basis for further studies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
492,425
0808.4156
Rate-Distortion via Markov Chain Monte Carlo
We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a `heat bath algorithm': Starting from an initial candidate reconstruction (say the original source sequence), at every iteration, an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance in the limits of large number of iterations, and sequence length, when employed on any stationary ergodic source. Experimentation shows promising initial results. Employing our lossy compressors on noisy data, with appropriately chosen distortion measure and level, followed by a simple de-randomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes, and with the Discrete Universal Denoiser (DUDE).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,246
2310.19174
Predicting recovery following stroke: deep learning, multimodal data and feature selection using explainable AI
Machine learning offers great potential for automated prediction of post-stroke symptoms and their response to rehabilitation. Major challenges for this endeavour include the very high dimensionality of neuroimaging data, the relatively small size of the datasets available for learning, and how to effectively combine neuroimaging and tabular data (e.g. demographic information and clinical characteristics). This paper evaluates several solutions based on two strategies. The first is to use 2D images that summarise MRI scans. The second is to select key features that improve classification accuracy. Additionally, we introduce the novel approach of training a convolutional neural network (CNN) on images that combine regions-of-interest extracted from MRIs, with symbolic representations of tabular data. We evaluate a series of CNN architectures (both 2D and 3D) that are trained on different representations of MRI and tabular data, to predict whether a composite measure of post-stroke spoken picture description ability is in the aphasic or non-aphasic range. MRI and tabular data were acquired from 758 English speaking stroke survivors who participated in the PLORAS study. The classification accuracy for a baseline logistic regression was 0.678 for lesion size alone, rising to 0.757 and 0.813 when initial symptom severity and recovery time were successively added. The highest classification accuracy 0.854 was observed when 8 regions-of-interest were extracted from each MRI scan and combined with lesion size, initial severity and recovery time in a 2D Residual Neural Network. Our findings demonstrate how imaging and tabular data can be combined for high post-stroke classification accuracy, even when the dataset is small in machine learning terms. We conclude by proposing how the current models could be improved to achieve even higher levels of accuracy using images from hospital scanners.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
403,880
2006.10820
A New One-Point Residual-Feedback Oracle For Black-Box Learning and Control
Zeroth-order optimization (ZO) algorithms have been recently used to solve black-box or simulation-based learning and control problems, where the gradient of the objective function cannot be easily computed but can be approximated using the objective function values. Many existing ZO algorithms adopt two-point feedback schemes due to their fast convergence rate compared to one-point feedback schemes. However, two-point schemes require two evaluations of the objective function at each iteration, which can be impractical in applications where the data are not all available a priori, e.g., in online optimization. In this paper, we propose a novel one-point feedback scheme that queries the function value once at each iteration and estimates the gradient using the residual between two consecutive points. When optimizing a deterministic Lipschitz function, we show that the query complexity of ZO with the proposed one-point residual feedback matches that of ZO with the existing two-point schemes. Moreover, the query complexity of the proposed algorithm can be improved when the objective function has Lipschitz gradient. Then, for stochastic bandit optimization problems where only noisy objective function values are given, we show that ZO with one-point residual feedback achieves the same convergence rate as that of two-point scheme with uncontrollable data samples. We demonstrate the effectiveness of the proposed one-point residual feedback via extensive numerical experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,008
2501.07819
3UR-LLM: An End-to-End Multimodal Large Language Model for 3D Scene Understanding
Multi-modal Large Language Models (MLLMs) exhibit impressive capabilities in 2D tasks, yet encounter challenges in discerning the spatial positions, interrelations, and causal logic in scenes when transitioning from 2D to 3D representations. We find that the limitations mainly lie in: i) the high annotation cost restricting the scale-up of volumes of 3D scene data, and ii) the lack of a straightforward and effective way to perceive 3D information which results in prolonged training durations and complicates the streamlined framework. To this end, we develop a pipeline based on open-source 2D MLLMs and LLMs to generate high-quality 3D-text pairs and construct 3DS-160K, to enhance the pre-training process. Leveraging this high-quality pre-training data, we introduce the 3UR-LLM model, an end-to-end 3D MLLM designed for precise interpretation of 3D scenes, showcasing exceptional capability in navigating the complexities of the physical world. 3UR-LLM directly receives a 3D point cloud as input and projects 3D features fused with text instructions into a manageable set of tokens. Considering the computation burden derived from these hybrid tokens, we design a 3D compressor module to cohesively compress the 3D spatial cues and textual narrative. 3UR-LLM achieves promising performance with respect to the previous SOTAs, for instance, 3UR-LLM exceeds its counterparts by 7.1\% CIDEr on ScanQA, while utilizing fewer training resources. The code and model weights for 3UR-LLM and the 3DS-160K benchmark are available at 3UR-LLM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
524,524
2402.04498
Pathspace Kalman Filters with Dynamic Process Uncertainty for Analyzing Time-course Data
Kalman Filter (KF) is an optimal linear state prediction algorithm, with applications in fields as diverse as engineering, economics, robotics, and space exploration. Here, we develop an extension of the KF, called a Pathspace Kalman Filter (PKF), which allows us to a) dynamically track the uncertainties associated with the underlying data and prior knowledge, and b) take as input an entire trajectory and an underlying mechanistic model, and using a Bayesian methodology quantify the different sources of uncertainty. An application of this algorithm is to automatically detect temporal windows where the internal mechanistic model deviates from the data in a time-dependent manner. First, we provide theorems characterizing the convergence of the PKF algorithm. Then, we numerically demonstrate that the PKF outperforms conventional KF methods on a synthetic dataset, lowering the mean-squared-error by several orders of magnitude. Finally, we apply this method to a biological time-course dataset involving over 1.8 million gene expression measurements.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
427,476
2206.00244
Fair Comparison between Efficient Attentions
Transformers have been successfully used in various fields and are becoming the standard tools in computer vision. However, self-attention, a core component of transformers, has a quadratic complexity problem, which limits the use of transformers in various vision tasks that require dense prediction. Many studies aiming at solving this problem have been proposed. However, no comparative study of these methods using the same scale has been reported due to different model configurations, training schemes, and new methods. In our paper, we validate these efficient attention models on the ImageNet1K classification task by changing only the attention operation and examining which efficient attention is better.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
300,046
1904.01697
Using spatial partitioning to reduce the bit error rate of diffusion-based molecular communications
This work builds on our earlier work on designing demodulators for diffusion-based molecular communications using a Markovian approach. The demodulation filters take the form of an ordinary differential equation (ODE) which computes the log-posteriori probability of observing a transmission symbol given the continuous history of receptor activities. A limitation of our earlier work is that the receiver is assumed to be a small cubic volume called a voxel. In this work, we extend the maximum a-posteriori demodulation to the case where the receiver may consist of multiple voxels and derive the ODE for log-posteriori probability calculation. This extension allows us to study receiver behaviour of different volumes and shapes. In particular, it also allows us to consider spatially partitioned receivers where the chemicals in the receiver are not allowed to mix. The key result of this paper is that spatial partitioning can be used to reduce bit-error rate in diffusion-based molecular communications.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
126,213
2309.06550
Synthetic Text Generation using Hypergraph Representations
Generating synthetic variants of a document is often posed as text-to-text transformation. We propose an alternate LLM based method that first decomposes a document into semantic frames and then generates text using this interim sparse format. The frames are modeled using a hypergraph, which allows perturbing the frame contents in a principled manner. Specifically, new hyperedges are mined through topological analysis and complex polyadic relationships including hierarchy and temporal dynamics are accommodated. We show that our solution generates documents that are diverse, coherent and vary in style, sentiment, format, composition and facts.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
391,450
2110.10812
REAL-M: Towards Speech Separation on Real Mixtures
In recent years, deep learning based source separation has achieved impressive results. Most studies, however, still evaluate separation models on synthetic datasets, while the performance of state-of-the-art techniques on in-the-wild speech data remains an open question. This paper contributes to fill this gap in two ways. First, we release the REAL-M dataset, a crowd-sourced corpus of real-life mixtures. Secondly, we address the problem of performance evaluation of real-life mixtures, where the ground truth is not available. We bypass this issue by carefully designing a blind Scale-Invariant Signal-to-Noise Ratio (SI-SNR) neural estimator. Through a user study, we show that our estimator reliably evaluates the separation performance on real mixtures. The performance predictions of the SI-SNR estimator indeed correlate well with human opinions. Moreover, we observe that the performance trends predicted by our estimator on the REAL-M dataset closely follow those achieved on synthetic benchmarks when evaluating popular speech separation models.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
262,265
2105.04443
Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction
Grammatical Error Correction (GEC) aims to correct writing errors and help language learners improve their writing skills. However, existing GEC models tend to produce spurious corrections or fail to detect lots of errors. The quality estimation model is necessary to ensure learners get accurate GEC results and avoid misleading from poorly corrected sentences. Well-trained GEC models can generate several high-quality hypotheses through decoding, such as beam search, which provide valuable GEC evidence and can be used to evaluate GEC quality. However, existing models neglect the possible GEC evidence from different hypotheses. This paper presents the Neural Verification Network (VERNet) for GEC quality estimation with multiple hypotheses. VERNet establishes interactions among hypotheses with a reasoning graph and conducts two kinds of attention mechanisms to propagate GEC evidence to verify the quality of generated hypotheses. Our experiments on four GEC datasets show that VERNet achieves state-of-the-art grammatical error detection performance, achieves the best quality estimation results, and significantly improves GEC performance by reranking hypotheses. All data and source codes are available at https://github.com/thunlp/VERNet.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
234,506
1606.06127
Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation
The size of nuclei in histological preparations from excised breast tumors is predictive of patient outcome (large nuclei indicate poor outcome). Pathologists take into account nuclear size when performing breast cancer grading. In addition, the mean nuclear area (MNA) has been shown to have independent prognostic value. The straightforward approach to measuring nuclear size is by performing nuclei segmentation. We hypothesize that given an image of a tumor region with known nuclei locations, the area of the individual nuclei and region statistics such as the MNA can be reliably computed directly from the image data by employing a machine learning model, without the intermediate step of nuclei segmentation. Towards this goal, we train a deep convolutional neural network model that is applied locally at each nucleus location, and can reliably measure the area of the individual nuclei and the MNA. Furthermore, we show how such an approach can be extended to perform combined nuclei detection and measurement, which is reminiscent of granulometry.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
57,531
2010.12702
A global-local neighborhood search algorithm and tabu search for flexible job shop scheduling problem
The Flexible Job Shop Scheduling Problem (FJSP) is a combinatorial problem that continues to be studied extensively due to its practical implications in manufacturing systems and emerging new variants, in order to model and optimize more complex situations that reflect the current needs of the industry better. This work presents a new meta-heuristic algorithm called GLNSA (Global-local neighborhood search algorithm), in which the neighborhood concepts of a cellular automaton are used, so that a set of leading solutions called "smart_cells" generates and shares information that helps to optimize instances of FJSP. The GLNSA algorithm is complemented with a tabu search that implements a simplified version of the Nopt1 neighborhood defined in [1] to complement the optimization task. The experiments carried out show a satisfactory performance of the proposed algorithm, compared with other results published in recent algorithms and widely cited in the specialized bibliography, using 86 test problems, improving the optimal result reported in previous works in two of them.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
202,809
2008.08701
Hidden Footprints: Learning Contextual Walkability from 3D Human Trails
Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis. Yet learning a computational model for this purpose is challenging due to semantic ambiguity and a lack of labeled data: current datasets only tell you where people are, not where they could be. We tackle this problem by leveraging information from existing datasets, without additional labeling. We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints. However, this augmented data is still sparse. We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss. Using this strategy, we demonstrate a model that learns to predict a walkability map from a single image. We evaluate our model on the Waymo and Cityscapes datasets, demonstrating superior performance compared to baselines and state-of-the-art models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
192,481
2010.10177
Sparse Gaussian Process Variational Autoencoders
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data are Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points, which are essential for the computational efficiency of GPs, nor do they handle missing data -- a natural occurrence in many spatio-temporal datasets -- in a principled manner. We address these shortcomings with the development of the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations. Leveraging the benefits of amortised variational inference, the SGP-VAE enables inference in multi-output sparse GPs on previously unobserved data with no additional training. The SGP-VAE is evaluated in a variety of experiments where it outperforms alternative approaches including multi-output GPs and structured VAEs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
201,807
1601.07804
Joint Sensing Matrix and Sparsifying Dictionary Optimization for Tensor Compressive Sensing
Tensor Compressive Sensing (TCS) is a multidimensional framework of Compressive Sensing (CS), and it is advantageous in terms of reducing the amount of storage, easing hardware implementations and preserving multidimensional structures of signals in comparison to a conventional CS system. In a TCS system, instead of using a random sensing matrix and a predefined dictionary, the average-case performance can be further improved by employing an optimized multidimensional sensing matrix and a learned multilinear sparsifying dictionary. In this paper, we propose a joint optimization approach of the sensing matrix and dictionary for a TCS system. For the sensing matrix design in TCS, an extended separable approach with a closed form solution and a novel iterative non-separable method are proposed when the multilinear dictionary is fixed. In addition, a multidimensional dictionary learning method that takes advantages of the multidimensional structure is derived, and the influence of sensing matrices is taken into account in the learning process. A joint optimization is achieved via alternately iterating the optimization of the sensing matrix and dictionary. Numerical experiments using both synthetic data and real images demonstrate the superiority of the proposed approaches.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
51,468
1911.05861
Federated and Differentially Private Learning for Electronic Health Records
The use of collaborative and decentralized machine learning techniques such as federated learning have the potential to enable the development and deployment of clinical risk predictions models in low-resource settings without requiring sensitive data be shared or stored in a central repository. This process necessitates communication of model weights or updates between collaborating entities, but it is unclear to what extent patient privacy is compromised as a result. To gain insight into this question, we study the efficacy of centralized versus federated learning in both private and non-private settings. The clinical prediction tasks we consider are the prediction of prolonged length of stay and in-hospital mortality across thirty one hospitals in the eICU Collaborative Research Database. We find that while it is straightforward to apply differentially private stochastic gradient descent to achieve strong privacy bounds when training in a centralized setting, it is considerably more difficult to do so in the federated setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
153,383
1901.11033
Metric Gaussian Variational Inference
Solving Bayesian inference problems approximately with variational approaches can provide fast and accurate results. Capturing correlation within the approximation requires an explicit parametrization. This intrinsically limits this approach to either moderately dimensional problems, or requiring the strongly simplifying mean-field approach. We propose Metric Gaussian Variational Inference (MGVI) as a method that goes beyond mean-field. Here correlations between all model parameters are taken into account, while still scaling linearly in computational time and memory. With this method we achieve higher accuracy and in many cases a significant speedup compared to traditional methods. MGVI is an iterative method that performs a series of Gaussian approximations to the posterior. We alternate between approximating the covariance with the inverse Fisher information metric evaluated at an intermediate mean estimate and optimizing the KL-divergence for the given covariance with respect to the mean. This procedure is iterated until the uncertainty estimate is self-consistent with the mean parameter. We achieve linear scaling by avoiding storing the covariance explicitly at any time. Instead we draw samples from the approximating distribution relying on an implicit representation and numerical schemes to approximately solve linear equations. Those samples are used to approximate the KL-divergence and its gradient. The usage of natural gradient descent allows for rapid convergence. Formulating the Bayesian model in standardized coordinates makes MGVI applicable to any inference problem with continuous parameters. We demonstrate the high accuracy of MGVI by comparing it to HMC and its fast convergence relative to other established methods in several examples. We investigate real-data applications, as well as synthetic examples of varying size and complexity and up to a million model parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,153
2206.14437
MaNi: Maximizing Mutual Information for Nuclei Cross-Domain Unsupervised Segmentation
In this work, we propose a mutual information (MI) based unsupervised domain adaptation (UDA) method for the cross-domain nuclei segmentation. Nuclei vary substantially in structure and appearances across different cancer types, leading to a drop in performance of deep learning models when trained on one cancer type and tested on another. This domain shift becomes even more critical as accurate segmentation and quantification of nuclei is an essential histopathology task for the diagnosis/prognosis of patients and annotating nuclei at the pixel level for new cancer types demands extensive effort by medical experts. To address this problem, we maximize the MI between labeled source cancer type data and unlabeled target cancer type data for transferring nuclei segmentation knowledge across domains. We use the Jensen-Shanon divergence bound, requiring only one negative pair per positive pair for MI maximization. We evaluate our set-up for multiple modeling frameworks and on different datasets comprising of over 20 cancer-type domain shifts and demonstrate competitive performance. All the recently proposed approaches consist of multiple components for improving the domain adaptation, whereas our proposed module is light and can be easily incorporated into other methods (Implementation: https://github.com/YashSharma/MaNi ).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
305,287
2312.11714
Time-Transformer: Integrating Local and Global Features for Better Time Series Generation
Generating time series data is a promising approach to address data deficiency problems. However, it is also challenging due to the complex temporal properties of time series data, including local correlations as well as global dependencies. Most existing generative models have failed to effectively learn both the local and global properties of time series data. To address this open problem, we propose a novel time series generative model named 'Time-Transformer AAE', which consists of an adversarial autoencoder (AAE) and a newly designed architecture named 'Time-Transformer' within the decoder. The Time-Transformer first simultaneously learns local and global features in a layer-wise parallel design, combining the abilities of Temporal Convolutional Networks and Transformer in extracting local features and global dependencies respectively. Second, a bidirectional cross attention is proposed to provide complementary guidance across the two branches and achieve proper fusion between local and global features. Experimental results demonstrate that our model can outperform existing state-of-the-art models in 5 out of 6 datasets, specifically on those with data containing both global and local properties. Furthermore, we highlight our model's advantage on handling this kind of data via an artificial dataset. Finally, we show our model's ability to address a real-world problem: data augmentation to support learning with small datasets and imbalanced datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
416,686
1910.12217
Spatially Coupled LDPC Codes with Sub-Block Locality
A new type of spatially coupled low-density parity-check (SC-LDPC) codes motivated by practical storage applications is presented. SC-LDPCL codes (suffix 'L' stands for locality) can be decoded locally at the level of sub-blocks that are much smaller than the full code block, thus offering flexible access to the coded information alongside the strong reliability of the global full-block decoding. Toward that, we propose constructions of SC-LDPCL codes that allow controlling the trade-off between local and global correction performance. In addition to local and global decoding, the paper develops a density-evolution analysis for a decoding mode we call semi-global decoding, in which the decoder has access to the requested sub-block plus a prescribed number of sub-blocks around it. SC-LDPCL codes are also studied under a channel model with variability across sub-blocks, for which decoding-performance lower bounds are derived.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
151,008
2308.10874
Analyzing Transformer Dynamics as Movement through Embedding Space
Transformer based language models exhibit intelligent behaviors such as understanding natural language, recognizing patterns, acquiring knowledge, reasoning, planning, reflecting and using tools. This paper explores how their underlying mechanics give rise to intelligent behaviors. Towards that end, we propose framing Transformer dynamics as movement through embedding space. Examining Transformers through this perspective reveals key insights, establishing a Theory of Transformers: 1) Intelligent behaviours map to paths in Embedding Space which the Transformer random-walks through during inference. 2) LM training learns a probability distribution over all possible paths. `Intelligence' is learnt by assigning higher probabilities to paths representing intelligent behaviors. No learning can take place in-context; context only narrows the subset of paths sampled during decoding. 5) The Transformer is a self-mapping composition function, folding a context sequence into a context-vector such that its proximity to a token-vector reflects its co-occurrence and conditioned probability. Thus, the physical arrangement of vectors in Embedding Space determines path probabilities. 6) Context vectors are composed by aggregating features of the sequence's tokens via a process we call the encoding walk. Attention contributes a - potentially redundant - association-bias to this process. 7) This process is comprised of two principal operation types: filtering (data independent) and aggregation (data dependent). This generalization unifies Transformers with other sequence models. Building upon this foundation, we formalize a popular semantic interpretation of embeddings into a ``concept-space theory'' and find some evidence of its validity.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
true
false
false
386,919
1710.08557
On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
83,096
2402.02164
Hierarchical Structure Enhances the Convergence and Generalizability of Linear Molecular Representation
Language models demonstrate fundamental abilities in syntax, semantics, and reasoning, though their performance often depends significantly on the inputs they process. This study introduces TSIS (Simplified TSID) and its variants: TSISD (TSIS with Depth-First Search), TSISO (TSIS in Order), and TSISR (TSIS in Random), as integral components of the t-SMILES framework. These additions complete the framework's design, providing diverse approaches to molecular representation. Through comprehensive analysis and experiments employing deep generative models, including GPT, diffusion models, and reinforcement learning, the findings reveal that the hierarchical structure of t-SMILES is more straightforward to parse than initially anticipated. Furthermore, t-SMILES consistently outperforms other linear representations such as SMILES, SELFIES, and SAFE, demonstrating superior convergence speed and enhanced generalization capabilities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
426,414
2212.08046
Silhouette: Toward Performance-Conscious and Transferable CPU Embeddings
Learned embeddings are widely used to obtain concise data representation and enable transfer learning between different data sets and tasks. In this paper, we present Silhouette, our approach that leverages publicly-available performance data sets to learn CPU embeddings. We show how these embeddings enable transfer learning between data sets of different types and sizes. Each of these transfer scenarios leads to an improvement in accuracy for the target data set.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
336,605
1004.2372
Learning Deterministic Regular Expressions for the Inference of Schemas from XML Data
Inferring an appropriate DTD or XML Schema Definition (XSD) for a given collection of XML documents essentially reduces to learning deterministic regular expressions from sets of positive example words. Unfortunately, there is no algorithm capable of learning the complete class of deterministic regular expressions from positive examples only, as we will show. The regular expressions occurring in practical DTDs and XSDs, however, are such that every alphabet symbol occurs only a small number of times. As such, in practice it suffices to learn the subclass of deterministic regular expressions in which each alphabet symbol occurs at most k times, for some small k. We refer to such expressions as k-occurrence regular expressions (k-OREs for short). Motivated by this observation, we provide a probabilistic algorithm that learns k-OREs for increasing values of k, and selects the deterministic one that best describes the sample based on a Minimum Description Length argument. The effectiveness of the method is empirically validated both on real world and synthetic data. Furthermore, the method is shown to be conservative over the simpler classes of expressions considered in previous work.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
6,166
2102.07804
Scaling Up Exact Neural Network Compression by ReLU Stability
We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determine the stability of neurons with Rectified Linear Unit (ReLU) activations require solving or finding a good approximation to multiple discrete optimization problems. In this work, we introduce an algorithm based on solving a single optimization problem to identify all stable neurons. Our approach is a median of 183 times faster than the state-of-the-art method on CIFAR-10, which allows us to explore exact compression on deeper (5 x 100) and wider (2 x 800) networks within minutes. For classifiers trained under an amount of L1 regularization that does not worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10 dataset. The code is available at the following link, https://github.com/yuxwind/ExactCompression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
220,225
2005.10017
Differential Mapping Spiking Neural Network for Sensor-Based Robot Control
In this work, a spiking neural network (SNN) is proposed for approximating differential sensorimotor maps of robotic systems. The computed model is used as a local Jacobian-like projection that relates changes in sensor space to changes in motor space. The SNN consists of an input (sensory) layer and an output (motor) layer connected through plastic synapses, with inter-inhibitory connections at the output layer. Spiking neurons are modeled as Izhikevich neurons with a synaptic learning rule based on spike-timing-dependent plasticity. Feedback data from proprioceptive and exteroceptive sensors are encoded and fed into the input layer through a motor babbling process. As the main challenge to building an efficient SNN is to tune its parameters, we present an intuitive tuning method that considerably reduces the number of neurons and the amount of data required for training. Our proposed architecture represents a biologically plausible neural controller that is capable of handling noisy sensor readings to guide robot movements in real-time. Experimental results are presented to validate the control methodology with a vision-guided robot.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
178,064
1602.08836
Full-Duplex Cloud-RAN with Uplink/Downlink Remote Radio Head Association
This paper considers a cloud radio access network (C-RAN) where spatially distributed remote radio heads (RRHs) communicate with a full-duplex user. In order to reflect a realistic scenario, the uplink (UL) and downlink (DL) RRHs are assumed to be equipped with multiple antennas and distributed according to a Poisson point process. We consider all-participate and nearest RRH association schemes with distributed beamforming in the form of maximum ratio combining/maximal ratio transmission (MRC/MRT) and zero-forcing/MRT (ZF/MRT) processing. We derive analytical expressions useful to compare the average sum rate among association schemes as a function of the number of RRH antennas and the density of the UL and DL RRHs. Numerical results show that significant performance improvements can be achieved by using the full-duplex mode as compared to the half-duplex mode, while the choice of the beamforming design as well as the RRH association scheme plays a critical role in determining the full-duplex gains.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
52,698
1708.05943
Neural Machine Translation with Extended Context
We investigate the use of extended context in attention-based neural machine translation. We base our experiments on translated movie subtitles and discuss the effect of increasing the segments beyond single translation units. We study the use of extended source language context as well as bilingual context extensions. The models learn to distinguish between information from different segments and are surprisingly robust with respect to translation quality. In this pilot study, we observe interesting cross-sentential attention patterns that improve textual coherence in translation at least in some selected cases.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
79,234
2202.00248
Entanglement-Assisted Quantum Error-Correcting Codes over Local Frobenius Rings
In this paper, we provide a framework for constructing entanglement-assisted quantum error-correcting codes (EAQECCs) from classical additive codes over a finite commutative local Frobenius ring $\mathcal{R}$. At the heart of the framework, and this is one of the main technical contributions of our paper, is a procedure to construct, for an additive code $\mathcal{C}$ over $\mathcal{R}$, a generating set for $\mathcal{C}$ that is in standard form, meaning that it consists purely of isotropic generators and hyperbolic pairs. Moreover, when $\mathcal{R}$ is a Galois ring, we give an exact expression for the minimum number of pairs of maximally entangled qudits required to construct an EAQECC from an additive code over $\mathcal{R}$, which significantly extends known results for EAQECCs over finite fields. We also demonstrate how adding extra coordinates to an additive code can give us a certain degree of flexibility in determining the parameters of the EAQECCs that result from our construction.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
278,079
1905.10805
Usage of multiple RTL features for Earthquake prediction
We construct a classification model that predicts whether an earthquake with a magnitude above a threshold will take place at a given location within a time range of 30-180 days from a given moment of time. A common approach is to use expert forecasts based on features like Region-Time-Length (RTL) characteristics. The proposed approach uses machine learning on top of multiple RTL features to take into account effects at various scales and to improve prediction accuracy. For historical data on earthquakes in Japan (1992-2005) and predictions at the locations given in this database, the best model achieves precision up to ~0.95 and recall up to ~0.98.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,200
1807.07760
Improving Image Clustering With Multiple Pretrained CNN Feature Extractors
For many image clustering problems, replacing raw image data with features extracted by a pretrained convolutional neural network (CNN), leads to better clustering performance. However, the specific features extracted, and, by extension, the selected CNN architecture, can have a major impact on the clustering results. In practice, this crucial design choice is often decided arbitrarily due to the impossibility of using cross-validation with unsupervised learning problems. However, information contained in the different pretrained CNN architectures may be complementary, even when pretrained on the same data. To improve clustering performance, we rephrase the image clustering problem as a multi-view clustering (MVC) problem that considers multiple different pretrained feature extractors as different "views" of the same data. We then propose a multi-input neural network architecture that is trained end-to-end to solve the MVC problem effectively. Our experimental results, conducted on three different natural image datasets, show that: 1. using multiple pretrained CNNs jointly as feature extractors improves image clustering; 2. using an end-to-end approach improves MVC; and 3. combining both produces state-of-the-art results for the problem of image clustering.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
103,375
2406.01460
MLIP: Efficient Multi-Perspective Language-Image Pretraining with Exhaustive Data Utilization
Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success, leading to rapid advancements in multimodal studies. However, CLIP faces a notable challenge in terms of inefficient data utilization. It relies on a single contrastive supervision for each image-text pair during representation learning, disregarding a substantial amount of valuable information that could offer richer supervision. Additionally, the retention of non-informative tokens leads to increased computational demands and time costs, particularly in CLIP's ViT image encoder. To address these issues, we propose Multi-Perspective Language-Image Pretraining (MLIP). In MLIP, we leverage the frequency transform's sensitivity to both high and low-frequency variations, which complements the spatial domain's sensitivity limited to low-frequency variations only. By incorporating frequency transforms and token-level alignment, we expand CLIP's single supervision into multi-domain and multi-level supervision, enabling a more thorough exploration of informative image features. Additionally, we introduce a token merging method guided by comprehensive semantics from the frequency and spatial domains. This allows us to merge tokens into multi-granularity tokens with a controllable compression rate to accelerate CLIP. Extensive experiments validate the effectiveness of our design.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
460,327
1810.10221
Cross-Resolution Person Re-identification with Deep Antithetical Learning
Images with different resolutions are ubiquitous in public person re-identification (ReID) datasets and real-world scenes; it is thus crucial for a person ReID model to handle image resolution variations in order to improve its generalization ability. However, most existing person ReID methods pay little attention to this resolution discrepancy problem. One paradigm to deal with this problem is to use some complicated methods for mapping all images into an artificial image space, which however will disrupt the natural image distribution and requires heavy image preprocessing. In this paper, we analyze the deficiencies of several widely-used objective functions handling image resolution discrepancies and propose a new framework called deep antithetical learning that directly learns from the natural image space rather than creating an arbitrary one. We first quantify and categorize original training images according to their resolutions. Then we create an antithetical training set and make sure that original training images have counterparts with antithetical resolutions in this new set. At last, a novel Contrastive Center Loss (CCL) is proposed to learn from images with different resolutions without being interfered by their resolution discrepancies. Extensive experimental analyses and evaluations indicate that the proposed framework, even using a vanilla deep ReID network, exhibits remarkable performance improvements. Without bells and whistles, our approach outperforms previous state-of-the-art methods by a large margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
111,233
1703.07218
DG-Embedded Radial Distribution System Planning Using Binary-Selective PSO
With the increasing rate of power consumption, many new distribution systems need to be constructed to connect new consumers to the power grid. On the other hand, the increasing penetration of renewable distributed generation (DG) resources into distribution systems, and the necessity of optimally placing them in the network, can dramatically change the problem of distribution system planning and design. In this paper, the problem of optimal distribution system planning, including conductor sizing and DG placement, along with the placement and sizing of shunt capacitors, is studied. A new Binary-Selective Particle Swarm Optimization (PSO) approach, capable of handling all types of continuous, binary and selective variables simultaneously, is proposed to solve the optimization problem of distribution system planning. The objective of the problem is to minimize the system costs. Load growth rate, cost of energy, cost of power, and inflation rate are all taken into account. The efficacy of the proposed method is tested on a 26-bus distribution system.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
70,356
2306.03810
X-Align++: cross-modal cross-view alignment for Bird's-eye-view segmentation
Bird's-eye-view (BEV) grid is a typical representation of the perception of road components, e.g., drivable area, in autonomous driving. Most existing approaches rely on cameras only to perform segmentation in BEV space, which is fundamentally constrained by the absence of reliable depth information. The latest works leverage both camera and LiDAR modalities but suboptimally fuse their features using simple, concatenation-based mechanisms. In this paper, we address these problems by enhancing the alignment of the unimodal features in order to aid feature fusion, as well as enhancing the alignment between the cameras' perspective view (PV) and BEV representations. We propose X-Align, a novel end-to-end cross-modal and cross-view learning framework for BEV segmentation consisting of the following components: (i) a novel Cross-Modal Feature Alignment (X-FA) loss, (ii) an attention-based Cross-Modal Feature Fusion (X-FF) module to align multi-modal BEV features implicitly, and (iii) an auxiliary PV segmentation branch with Cross-View Segmentation Alignment (X-SA) losses to improve the PV-to-BEV transformation. We evaluate our proposed method across two commonly used benchmark datasets, i.e., nuScenes and KITTI-360. Notably, X-Align significantly outperforms the state-of-the-art by 3 absolute mIoU points on nuScenes. We also provide extensive ablation studies to demonstrate the effectiveness of the individual components.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
371,487
2105.11926
ConQuer-92 -- The revised report on the conceptual query language LISA-D
In this report the conceptual query language ConQuer-92 is introduced. This query language serves as the backbone of InfoAssistant's query facilities. Furthermore, this language can also be used for the specification of derivation rules (e.g. subtype defining rules) and textual constraints in InfoModeler. This report is solely concerned with a formal definition, and the explanation thereof, of ConQuer-92. The implementation of ConQuer-92 in SQL-92 will be treated in a separate report.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
236,867
1210.2440
Group Model Selection Using Marginal Correlations: The Good, the Bad and the Ugly
Group model selection is the problem of determining a small subset of groups of predictors (e.g., the expression data of genes) that are responsible for majority of the variation in a response variable (e.g., the malignancy of a tumor). This paper focuses on group model selection in high-dimensional linear models, in which the number of predictors far exceeds the number of samples of the response variable. Existing works on high-dimensional group model selection either require the number of samples of the response variable to be significantly larger than the total number of predictors contributing to the response or impose restrictive statistical priors on the predictors and/or nonzero regression coefficients. This paper provides comprehensive understanding of a low-complexity approach to group model selection that avoids some of these limitations. The proposed approach, termed Group Thresholding (GroTh), is based on thresholding of marginal correlations of groups of predictors with the response variable and is reminiscent of existing thresholding-based approaches in the literature. The most important contribution of the paper in this regard is relating the performance of GroTh to a polynomial-time verifiable property of the predictors for the general case of arbitrary (random or deterministic) predictors and arbitrary nonzero regression coefficients.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
19,018
2311.05041
Active Transfer Learning for Efficient Video-Specific Human Pose Estimation
Human Pose (HP) estimation is actively researched because of its wide range of applications. However, even estimators pre-trained on large datasets may not perform satisfactorily due to a domain gap between the training and test data. To address this issue, we present our approach combining Active Learning (AL) and Transfer Learning (TL) to adapt HP estimators to individual video domains efficiently. For efficient learning, our approach quantifies (i) the estimation uncertainty based on the temporal changes in the estimated heatmaps and (ii) the unnaturalness in the estimated full-body HPs. These quantified criteria are then effectively combined with the state-of-the-art representativeness criterion to select uncertain and diverse samples for efficient HP estimator learning. Furthermore, we reconsider the existing Active Transfer Learning (ATL) method to introduce novel ideas related to the retraining methods and Stopping Criteria (SC). Experimental results demonstrate that our method enhances learning efficiency and outperforms comparative methods. Our code is publicly available at: https://github.com/ImIntheMiddle/VATL4Pose-WACV2024
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
406,445
2402.11325
ChatEarthNet: A Global-Scale Image-Text Dataset Empowering Vision-Language Geo-Foundation Models
An in-depth comprehension of global land cover is essential in Earth observation, forming the foundation for a multitude of applications. Although remote sensing technology has advanced rapidly, leading to a proliferation of satellite imagery, the inherent complexity of these images often makes them difficult for non-expert users to understand. Natural language, as a carrier of human knowledge, can be a bridge between common users and complicated satellite imagery. In this context, we introduce a global-scale, high-quality image-text dataset for remote sensing, providing natural language descriptions for Sentinel-2 data to facilitate the understanding of satellite imagery for common users. Specifically, we utilize Sentinel-2 data for its global coverage as the foundational image source, employing semantic segmentation labels from the European Space Agency's (ESA) WorldCover project to enrich the descriptions of land covers. By conducting in-depth semantic analysis, we formulate detailed prompts to elicit rich descriptions from ChatGPT. To enhance the dataset's quality, we introduce a manual verification process. This step involves manual inspection and correction to refine the dataset, thus significantly improving its accuracy and quality. Finally, we offer the community ChatEarthNet, a large-scale image-text dataset characterized by global coverage, high quality, wide-ranging diversity, and detailed descriptions. ChatEarthNet consists of 163,488 image-text pairs with captions generated by ChatGPT-3.5 and an additional 10,000 image-text pairs with captions generated by ChatGPT-4V(ision). This dataset has significant potential for training vision-language geo-foundation models and evaluating large vision-language models for remote sensing. The dataset will be made publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
430,339
2205.08340
A unified framework for dataset shift diagnostics
Supervised learning techniques typically assume training data originates from the target population. Yet, in reality, dataset shift frequently arises, which, if not adequately taken into account, may decrease the performance of their predictors. In this work, we propose a novel and flexible framework called DetectShift that quantifies and tests for multiple dataset shifts, encompassing shifts in the distributions of $(X, Y)$, $X$, $Y$, $X|Y$, and $Y|X$. DetectShift equips practitioners with insights into data shifts, facilitating the adaptation or retraining of predictors using both source and target data. This proves extremely valuable when labeled samples in the target domain are limited. The framework utilizes test statistics with the same nature to quantify the magnitude of the various shifts, making results more interpretable. It is versatile, suitable for regression and classification tasks, and accommodates diverse data forms - tabular, text, or image. Experimental results demonstrate the effectiveness of DetectShift in detecting dataset shifts even in higher dimensions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
296,897
1302.6808
Learning Gaussian Networks
We describe algorithms for learning Bayesian networks from a combination of user knowledge and statistical data. The algorithms have two components: a scoring metric and a search procedure. The scoring metric takes a network structure, statistical data, and a user's prior knowledge, and returns a score proportional to the posterior probability of the network structure given the data. The search procedure generates networks for evaluation by the scoring metric. Previous work has concentrated on metrics for domains containing only discrete variables, under the assumption that data represents a multinomial sample. In this paper, we extend this work, developing scoring metrics for domains containing all continuous variables or a mixture of discrete and continuous variables, under the assumption that continuous data is sampled from a multivariate normal distribution. Our work extends traditional statistical approaches for identifying vanishing regression coefficients in that we identify two important assumptions, called event equivalence and parameter modularity, that when combined allow the construction of prior distributions for multivariate normal parameters from a single prior Bayesian network specified by a user.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
22,438
2002.06914
On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment. In the current experimental setting, multiple different scores are employed to assess different aspects of model performance. We analyze the informativeness of these evaluation measures and identify several shortcomings. In particular, we demonstrate that all existing scores can hardly be used to compare results across different datasets. Moreover, we demonstrate that varying the size of the test set automatically has an impact on the performance of the same model based on commonly used metrics for the Entity Alignment task. We show that this leads to various problems in the interpretation of results, which may support misleading conclusions. Therefore, we propose adjustments to the evaluation and demonstrate empirically how this supports a fair, comparable, and interpretable assessment of model performance. Our code is available at https://github.com/mberr/rank-based-evaluation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
164,339
2409.19487
HealthQ: Unveiling Questioning Capabilities of LLM Chains in Healthcare Conversations
In digital healthcare, large language models (LLMs) have primarily been utilized to enhance question-answering capabilities and improve patient interactions. However, effective patient care necessitates LLM chains that can actively gather information by posing relevant questions. This paper presents HealthQ, a novel framework designed to evaluate the questioning capabilities of LLM healthcare chains. We implemented several LLM chains, including Retrieval-Augmented Generation (RAG), Chain of Thought (CoT), and reflective chains, and introduced an LLM judge to assess the relevance and informativeness of the generated questions. To validate HealthQ, we employed traditional Natural Language Processing (NLP) metrics such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Named Entity Recognition (NER)-based set comparison, and constructed two custom datasets from public medical note datasets, ChatDoctor and MTS-Dialog. Our contributions are threefold: we provide the first comprehensive study on the questioning capabilities of LLMs in healthcare conversations, develop a novel dataset generation pipeline, and propose a detailed evaluation methodology.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
492,703
1503.01524
Genetic optimization of the Hyperloop route through the Grapevine
We demonstrate a genetic algorithm that employs a versatile fitness function to optimize route selection for the Hyperloop, a proposed high speed passenger transportation system.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
40,838
1204.1240
Optimal Save-Then-Transmit Protocol for Energy Harvesting Wireless Transmitters
In this paper, the design of a wireless communication device relying exclusively on energy harvesting is considered. Due to the inability of rechargeable energy sources to charge and discharge at the same time, a constraint we term the energy half-duplex constraint, two rechargeable energy storage devices (ESDs) are assumed so that at any given time, there is always one ESD being recharged. The energy harvesting rate is assumed to be a random variable that is constant over the time interval of interest. A save-then-transmit (ST) protocol is introduced, in which a fraction of time {\rho} (dubbed the save-ratio) is devoted exclusively to energy harvesting, with the remaining fraction 1 - {\rho} used for data transmission. The ratio of the energy obtainable from an ESD to the energy harvested is termed the energy storage efficiency, {\eta}. We address the practical case of the secondary ESD being a battery with {\eta} < 1, and the main ESD being a super-capacitor with {\eta} = 1. The optimal save-ratio that minimizes outage probability is derived, from which some useful design guidelines are drawn. In addition, we compare the outage performance of random power supply to that of constant power supply over the Rayleigh fading channel. The diversity order with random power is shown to be the same as that of constant power, but the performance gap can be large. Furthermore, we extend the proposed ST protocol to wireless networks with multiple transmitters. It is shown that the system-level outage performance is critically dependent on the relationship between the number of transmitters and the optimal save-ratio for single-channel outage minimization. Numerical results are provided to validate our proposed study.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
15,300
2405.20821
Pursuing Overall Welfare in Federated Learning through Sequential Decision Making
In traditional federated learning, a single global model cannot perform equally well for all clients. Therefore, the need to achieve client-level fairness in federated systems has been emphasized, which can be realized by modifying the static aggregation scheme for updating the global model to an adaptive one, in response to the local signals of the participating clients. Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework, in other words, a central server's sequential decision making process. To enhance the decision making capability, we propose simple and intuitive improvements for suboptimal designs within existing methods, presenting AAggFF. Considering practical requirements, we further subdivide our method tailored for the cross-device and the cross-silo settings, respectively. Theoretical analyses guarantee sublinear regret upper bounds for both settings: $\mathcal{O}(\sqrt{T \log{K}})$ for the cross-device setting, and $\mathcal{O}(K \log{T})$ for the cross-silo setting, with $K$ clients and $T$ federation rounds. Extensive experiments demonstrate that the federated system equipped with AAggFF achieves better degree of client-level fairness than existing methods in both practical settings. Code is available at https://github.com/vaseline555/AAggFF
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
459,535
1803.01690
New Ideas for Brain Modelling 5
This paper describes a process for combining patterns and features, to guide a search process and make predictions. It is based on the functionality that a human brain might have, which is a highly distributed network of simple neuronal components that can apply some level of matching and cross-referencing over retrieved patterns. The process uses memory in a dynamic way and it is directed through the pattern matching. The paper firstly describes the mechanisms for neuronal search, memory and prediction. The paper then presents a formal language for defining cognitive processes, that is, pattern-based sequences and transitions. The language can define an outer framework for concept sets that are linked to perform the cognitive act. The language also has a mathematical basis, allowing for the rule construction to be consistent. Now, both static memory and dynamic process hierarchies can be built as tree structures. The new information can also be used to further integrate the cognitive model and the ensemble-hierarchy structure becomes an essential part. A theory about linking can suggest that nodes in different regions link together when generally they represent the same thing.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
91,929