Dataset schema (column: dtype, value range):

- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0 to 541k
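A minimal sketch of how one row of this schema might be represented in code. The `PaperRecord` class and its `labels()` helper are illustrative names, not part of the dataset; only the column names and the first record's values below come from the data itself.

```python
from dataclasses import dataclass

# Category columns, in the order they appear in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

@dataclass
class PaperRecord:
    id: str        # arXiv identifier, e.g. "2309.15204"
    title: str
    abstract: str
    flags: dict    # column name -> bool, one entry per LABEL_COLUMNS
    index: int     # __index_level_0__

    def labels(self):
        """Return the category names whose one-hot flag is set."""
        return [c for c in LABEL_COLUMNS if self.flags.get(c, False)]

# Example: the first record in this chunk (cs.CV is its only true flag).
rec = PaperRecord(
    id="2309.15204",
    title="CLRmatchNet: Enhancing Curved Lane Detection with Deep Matching Process",
    abstract="...",  # full abstract elided
    flags={c: (c == "cs.CV") for c in LABEL_COLUMNS},
    index=394879,
)
print(rec.labels())  # ['cs.CV']
```

Collapsing the 18 booleans to a list of true column names, as the records below do, loses nothing: given the fixed column order in the schema, the full one-hot vector is recoverable.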
id: 2309.15204
title: CLRmatchNet: Enhancing Curved Lane Detection with Deep Matching Process
abstract: Lane detection plays a crucial role in autonomous driving by providing vital data to ensure safe navigation. Modern algorithms rely on anchor-based detectors, followed by a label-assignment process that categorizes training detections as positive or negative instances based on learned geometric attributes. Accurate label assignment has a great impact on model performance; it usually relies on a predefined classical cost function that evaluates GT-prediction alignment. However, classical label assignment methods face limitations due to their reliance on predefined cost functions derived from low-dimensional models, potentially impacting their optimality. Our research introduces MatchNet, a deep learning submodule-based approach aimed at improving the label assignment process. Integrated into a state-of-the-art lane detection network such as the Cross Layer Refinement Network for Lane Detection (CLRNet), MatchNet replaces the conventional label assignment process with a submodule network. The integrated model, CLRmatchNet, surpasses CLRNet, showing substantial improvements in scenarios involving curved lanes, with remarkable gains across all backbones: +2.8% for ResNet34, +2.3% for ResNet101, and +2.96% for DLA34. In addition, it maintains comparable or even improved results in the other scenario categories. Our method boosts the confidence level in lane detection, allowing an increase in the confidence threshold. Our code is available at: https://github.com/sapirkontente/CLRmatchNet.git
labels: cs.CV
__index_level_0__: 394,879

id: 1806.04935
title: Convolutional sparse coding for capturing high speed video content
abstract: Video capture is limited by the trade-off between spatial and temporal resolution: when capturing videos of high temporal resolution, the spatial resolution decreases due to bandwidth limitations in the capture system. Achieving both high spatial and temporal resolution is only possible with highly specialized and very expensive hardware, and even then the same basic trade-off remains. The recent introduction of compressive sensing and sparse reconstruction techniques allows for the capture of single-shot high-speed video, by coding the temporal information in a single frame, and then reconstructing the full video sequence from this single coded image and a trained dictionary of image patches. In this paper, we first analyze this approach, and find insights that help improve the quality of the reconstructed videos. We then introduce a novel technique, based on convolutional sparse coding (CSC), and show how it outperforms the state-of-the-art, patch-based approach in terms of flexibility and efficiency, due to the convolutional nature of its filter banks. The key idea for CSC high-speed video acquisition is extending the basic formulation by imposing an additional constraint in the temporal dimension, which enforces sparsity of the first-order derivatives over time.
labels: cs.CV, Other
__index_level_0__: 100,353

id: 2408.13373
title: Learning Unknowns from Unknowns: Diversified Negative Prototypes Generator for Few-Shot Open-Set Recognition
abstract: Few-shot open-set recognition (FSOR) is a challenging task that requires a model to recognize known classes and identify unknown classes with limited labeled data. Existing approaches, particularly Negative-Prototype-Based methods, generate negative prototypes based solely on known class data. However, as the unknown space is infinite while the known space is limited, these methods suffer from limited representation capability. To address this limitation, we propose a novel approach, termed Diversified Negative Prototypes Generator (DNPG), which adopts the principle of "learning unknowns from unknowns." Our method leverages the unknown space information learned from base classes to generate more representative negative prototypes for novel classes. During the pre-training phase, we learn the unknown space representation of the base classes. This representation, along with inter-class relationships, is then utilized in the meta-learning process to construct negative prototypes for novel classes. To prevent prototype collapse and ensure adaptability to varying data compositions, we introduce the Swap Alignment (SA) module. Our DNPG model, by learning from the unknown space, generates negative prototypes that cover a broader unknown space, thereby achieving state-of-the-art performance on three standard FSOR datasets.
labels: cs.CV
__index_level_0__: 483,121

id: 2206.06807
title: The Causal Structure of Semantic Ambiguities
abstract: Ambiguity is a natural language phenomenon occurring at different levels of syntax, semantics, and pragmatics. It is widely studied; in Psycholinguistics, for instance, we have a variety of competing studies for the human disambiguation processes. These studies are empirical and based on eye-tracking measurements. Here we take first steps towards formalizing these processes for semantic ambiguities, where we identified the presence of two features: (1) joint plausibility degrees of different possible interpretations, (2) causal structures according to which certain words play a more substantial role in the processes. The novel sheaf-theoretic model of definite causality developed by Gogioso and Pinzani in QPL 2021 offers tools to model and reason about these features. We applied this theory to a dataset of ambiguous phrases extracted from the Psycholinguistics literature and their human plausibility judgements collected by us using the Amazon Mechanical Turk engine. We measured the causal fractions of different disambiguation orders within the phrases and discovered two prominent orders: from subject to verb in subject-verb phrases and from object to verb in verb-object phrases. We also found evidence for delay in the disambiguation of polysemous vs homonymous verbs, again compatible with Psycholinguistic findings.
labels: cs.AI, cs.CL
__index_level_0__: 302,508

id: 2310.13011
title: Compositional preference models for aligning LMs
abstract: As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitting the preference dataset. We propose Compositional Preference Models (CPMs), a novel PM framework that decomposes one global preference assessment into several interpretable features, obtains scalar scores for these features from a prompted LM, and aggregates these scores using a logistic regression classifier. Through these simple steps, CPMs allow control over which properties of the preference data are used to train the preference model, and allow it to be built on features that are believed to underlie the human preference judgment. Our experiments show that CPMs not only improve generalization and are more robust to overoptimization than standard PMs, but also that best-of-n samples obtained using CPMs tend to be preferred over samples obtained using conventional PMs. Overall, our approach demonstrates the benefits of endowing PMs with priors about which features determine human preferences while relying on LM capabilities to extract those features in a scalable and robust way.
labels: cs.LG, cs.CL
__index_level_0__: 401,252

id: 2306.09215
title: On the Effects and Optimal Design of Redundant Sensors in Collaborative State Estimation
abstract: The existence of redundant sensors in collaborative state estimation is a common occurrence, yet their true significance remains elusive. This paper comprehensively investigates the effects and optimal design of redundant sensors in sensor networks that use Kalman filtering to estimate the state of a random process collaboratively. The paper presents two main results: a theoretical analysis of the effects of redundant sensors and an engineering-oriented optimal design of redundant sensors. In the theoretical analysis, the paper leverages Riccati equations and Symplectic matrix theory to unveil the explicit role of redundant sensors in cooperative state estimation. The results unequivocally demonstrate that the addition of redundant sensors enhances the estimation performance of the sensor network, aligning with the principle of "more is better". Moreover, the paper establishes a precise sufficient and necessary condition to assess whether the inclusion of redundant sensors improves the overall estimation performance. Moving towards engineering-oriented design optimization, the paper proposes a novel algorithm to tackle the optimal design problem of redundant sensors, and the convergence of the proposed algorithm is guaranteed. Numerical simulations are provided to demonstrate the results.
labels: cs.SY
__index_level_0__: 373,722

id: 2311.00300
title: Entity Alignment Method of Science and Technology Patent based on Graph Convolution Network and Information Fusion
abstract: The entity alignment of science and technology patents aims to link the equivalent entities in the knowledge graphs of different science and technology patent data sources. Most entity alignment methods only use a graph neural network to obtain an embedding of the graph structure, or use the attribute text description to obtain a semantic representation, ignoring the process of multi-information fusion in science and technology patents. In order to make use of the graph structure and auxiliary information such as the name, description, and attributes of the patent entity, this paper proposes an entity alignment method based on a graph convolution network for science and technology patent information fusion. Through the graph convolution network and the BERT model, the structure information and entity attribute information of the science and technology patent knowledge graph are embedded and represented to achieve multi-information fusion, thus improving the performance of entity alignment. Experiments on three benchmark datasets show that the proposed method outperforms existing methods on the Hit@K evaluation metrics.
labels: cs.CL
__index_level_0__: 404,593

id: 2501.15893
title: Benchmarking Quantum Reinforcement Learning
abstract: Benchmarking and establishing proper statistical validation metrics for reinforcement learning (RL) remain ongoing challenges, where no consensus has been established yet. The emergence of quantum computing and its potential applications in quantum reinforcement learning (QRL) further complicate benchmarking efforts. To enable valid performance comparisons and to streamline current research in this area, we propose a novel benchmarking methodology, which is based on a statistical estimator for sample complexity and a definition of statistical outperformance. Furthermore, considering QRL, our methodology casts doubt on some previous claims regarding its superiority. We conducted experiments on a novel benchmarking environment with flexible levels of complexity. While we still identify possible advantages, our findings are more nuanced overall. We discuss the potential limitations of these results and explore their implications for empirical research on quantum advantage in QRL.
labels: cs.LG
__index_level_0__: 527,751

id: 2212.04354
title: Device identification using optimized digital footprints
abstract: The rapidly increasing number of internet of things (IoT) and non-IoT devices has imposed new security challenges on network administrators. Accurate device identification in increasingly complex network structures is necessary. In this paper, a device fingerprinting (DFP) method has been proposed for device identification, based on the digital footprints that devices use for communication over a network. A subset of nine features has been selected from the network and transport layers of a single transmission control protocol/internet protocol packet, based on attribute evaluators in Weka, to generate device-specific signatures. The method has been evaluated on two online datasets and an experimental dataset, using different supervised machine learning (ML) algorithms. Results have shown that the method is able to distinguish device type with up to 100% precision using the random forest (RF) classifier, and classify individual devices with up to 95.7% precision. These results demonstrate the applicability of the proposed DFP method for device identification, in order to provide a more secure and robust network.
labels: cs.LG, cs.CR, Other
__index_level_0__: 335,415

id: 1609.05815
title: Generalized Fano and non-Fano networks
abstract: It is known that the Fano network has a vector linear solution if and only if the characteristic of the finite field is $2$; and the non-Fano network has a vector linear solution if and only if the characteristic of the finite field is not $2$. Using these properties of Fano and non-Fano networks it has been shown that linear network coding is insufficient. In this paper we generalize the properties of Fano and non-Fano networks. Specifically, by adding more nodes and edges to the Fano network, we construct a network which has a vector linear solution for any vector dimension if and only if the characteristic of the finite field belongs to an arbitrary given set of primes $\{p_1,p_2,\ldots,p_l\}$. Similarly, by adding more nodes and edges to the non-Fano network, we construct a network which has a vector linear solution for any vector dimension if and only if the characteristic of the finite field does not belong to an arbitrary given set of primes $\{p_1,p_2,\ldots,p_l\}$.
labels: cs.IT
__index_level_0__: 61,201

id: 2203.09936
title: Fake News Detection Using Majority Voting Technique
abstract: Due to the evolution of the Web and social network platforms, it has become very easy to disseminate information. People are creating and sharing more information than ever before, some of which may be misleading, misinformation, or fake. Fake news detection is a crucial and challenging task due to the unstructured nature of the available information. In recent years, researchers have provided significant solutions to tackle the problem of fake news detection, but due to its nature there are still many open issues. In this paper, we propose a majority voting approach to detect fake news articles, using different textual properties of fake and real news. We use a publicly available fake news dataset comprising 20,800 news articles, of which 10,387 are real and 10,413 are fake, labeled as binary 0 and 1. For the evaluation of our approach, we use commonly used machine learning classifiers: Decision Tree, Logistic Regression, XGBoost, Random Forest, Extra Trees, AdaBoost, SVM, SGD, and Naive Bayes. Using the aforementioned classifiers, we built a multi-model fake news detection system with the Majority Voting technique to achieve more accurate results. The experimental results show that our proposed approach achieved accuracy of 96.38%, precision of 96%, recall of 96%, and F1-measure of 96%. The evaluation confirms that the Majority Voting technique achieved better results compared to the individual learning techniques.
labels: cs.CL
__index_level_0__: 286,342

id: cs/0511065
title: Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh Environments
abstract: We consider multiple-input multiple-output (MIMO) transmit beamforming systems with maximum ratio combining (MRC) receivers. The operating environment is Rayleigh-fading with both transmit and receive spatial correlation. We present exact expressions for the probability density function (p.d.f.) of the output signal-to-noise ratio (SNR), as well as the system outage probability. The results are based on explicit closed-form expressions which we derive for the p.d.f. and c.d.f. of the maximum eigenvalue of double-correlated complex Wishart matrices. For systems with two antennas at either the transmitter or the receiver, we also derive exact closed-form expressions for the symbol error rate (SER). The new expressions are used to prove that MIMO-MRC achieves the maximum available spatial diversity order, and to demonstrate the effect of spatial correlation. The analysis is validated through comparison with Monte-Carlo simulations.
labels: cs.IT
__index_level_0__: 539,087

id: 1804.02047
title: Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond
abstract: State-of-the-art pedestrian detection models have achieved great success in many benchmarks. However, these models require a large amount of annotation information, and the labeling process usually takes much time and effort. In this paper, we propose a method to generate labeled pedestrian data and adapt it to support the training of pedestrian detectors. The proposed framework is built on the Generative Adversarial Network (GAN) with multiple discriminators, trying to synthesize realistic pedestrians and learn the background context simultaneously. To handle pedestrians of different sizes, we adopt the Spatial Pyramid Pooling (SPP) layer in the discriminator. We conduct experiments on two benchmarks. The results show that our framework can smoothly synthesize pedestrians on background images with variations and different levels of detail. To quantitatively evaluate our approach, we add the generated samples into the training data of the baseline pedestrian detectors and show that the synthetic images are able to improve the detectors' performance.
labels: cs.AI, cs.CV
__index_level_0__: 94,328

id: 2211.10748
title: Delay-aware Backpressure Routing Using Graph Neural Networks
abstract: We propose a throughput-optimal biased backpressure (BP) algorithm for routing, where the bias is learned through a graph neural network that seeks to minimize end-to-end delay. Classical BP routing provides a simple yet powerful distributed solution for resource allocation in wireless multi-hop networks but has poor delay performance. A low-cost approach to improve this delay performance is to favor shorter paths by incorporating pre-defined biases in the BP computation, such as a bias based on the shortest path (hop) distance to the destination. In this work, we improve upon the widely-used metric of hop distance (and its variants) for the shortest path bias by introducing a bias based on the link duty cycle, which we predict using a graph convolutional neural network. Numerical results show that our approach can improve the delay performance compared to classical BP and existing BP alternatives based on pre-defined bias while being adaptive to interference density. In terms of complexity, our distributed implementation only introduces a one-time overhead (linear in the number of devices in the network) compared to classical BP, and a constant overhead compared to the lowest-complexity existing bias-based BP algorithms.
labels: cs.LG
__index_level_0__: 331,422

id: 2405.08038
title: Feature Expansion and enhanced Compression for Class Incremental Learning
abstract: Class incremental learning consists in training discriminative models to classify an increasing number of classes over time. However, doing so using only the newly added class data leads to the known problem of catastrophic forgetting of the previous classes. Recently, dynamic deep learning architectures have been shown to exhibit a better stability-plasticity trade-off by dynamically adding new feature extractors to the model in order to learn new classes, followed by a compression step to scale the model back to its original size, thus avoiding a growing number of parameters. In this context, we propose a new algorithm that enhances the compression of previous class knowledge by cutting and mixing patches of previous class samples with the new images during compression using our Rehearsal-CutMix method. We show that this new data augmentation reduces catastrophic forgetting by specifically targeting past class information and improving its compression. Extensive experiments performed on the CIFAR and ImageNet datasets under diverse incremental learning evaluation protocols demonstrate that our approach consistently outperforms the state-of-the-art. The code will be made available upon publication of our work.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 453,968

id: 2310.08210
title: CLExtract: Recovering Highly Corrupted DVB/GSE Satellite Stream with Contrastive Learning
abstract: Since satellite systems are playing an increasingly important role in our civilization, their security and privacy weaknesses have drawn growing concern. For example, prior work demonstrates that the communication channel between maritime VSAT and the ground segment can be eavesdropped on using consumer-grade equipment. The stream decoder GSExtract developed in this prior work performs well for most packets but fails on corrupted streams. We discovered that such stream corruption commonly exists not only in Europe and North Atlantic areas but also in Asian areas. In our experiment, using GSExtract, we were only able to decode 2.1% of the satellite streams we eavesdropped on in Asia. Therefore, in this work, we propose to use a contrastive learning technique with data augmentation to decode and recover such highly corrupted streams. Rather than relying on critical information in corrupted streams to search for headers and perform decoding, contrastive learning directly learns the features of packet headers at different protocol layers and identifies them in a stream sequence. By filtering them out, we can extract the innermost data payload for further analysis. Our evaluation shows that this new approach can successfully recover 71-99% of eavesdropped data at hundreds of times the speed of GSExtract. Moreover, the effectiveness of our approach is not substantially degraded when stream corruption becomes more severe.
labels: cs.SY
__index_level_0__: 399,308

id: 2004.05125
title: Rapidly Deploying a Neural Search Engine for the COVID-19 Open Research Dataset: Preliminary Thoughts and Lessons Learned
abstract: We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI. This web application exists as part of a suite of tools that we have developed over the past few weeks to help domain experts tackle the ongoing global pandemic. We hope that improved information access capabilities to the scientific literature can inform evidence-based decision making and insight generation. This paper describes our initial efforts and offers a few thoughts about lessons we have learned along the way.
labels: cs.IR, cs.CL, Other
__index_level_0__: 172,097

id: 2203.02635
title: Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes
abstract: Deep neural networks are increasingly deployed for scene analytics, including to evaluate the attention and reaction of people exposed to out-of-home advertisements. However, the features extracted by a deep neural network that was trained to predict a specific, consensual attribute (e.g. emotion) may also encode and thus reveal information about private, protected attributes (e.g. age or gender). In this work, we focus on such leakage of private information at inference time. We consider an adversary with access to the features extracted by the layers of a deployed neural network and use these features to predict private attributes. To prevent the success of such an attack, we modify the training of the network using a confusion loss that encourages the extraction of features that make it difficult for the adversary to accurately predict private attributes. We validate this training approach on image-based tasks using a publicly available dataset. Results show that, compared to the original network, the proposed PrivateNet can reduce the leakage of private information of a state-of-the-art emotion recognition classifier by 2.88% for gender and by 13.06% for age group, with a minimal effect on task accuracy.
labels: cs.LG, cs.CV, cs.CR
__index_level_0__: 283,805

id: 2012.06981
title: Fine-Grained Lineage for Safer Notebook Interactions
abstract: Computational notebooks have emerged as the platform of choice for data science and analytical workflows, enabling rapid iteration and exploration. By keeping intermediate program state in memory and segmenting units of execution into so-called "cells", notebooks allow users to execute their workflows interactively and enjoy particularly tight feedback. However, as cells are added, removed, reordered, and rerun, this hidden intermediate state accumulates in a way that is not necessarily correlated with the notebook's visible code, making execution behavior difficult to reason about, and leading to errors and lack of reproducibility. We present NBSafety, a custom Jupyter kernel that uses runtime tracing and static analysis to automatically manage lineage associated with cell execution and global notebook state. NBSafety detects and prevents errors that users make during unaided notebook interactions, all while preserving the flexibility of existing notebook semantics. We evaluate NBSafety's ability to prevent erroneous interactions by replaying and analyzing 666 real notebook sessions. Of these, NBSafety identified 117 sessions with potential safety errors, and in the remaining 549 sessions, the cells that NBSafety identified as resolving safety issues were more than $7\times$ more likely to be selected by users for re-execution compared to a random baseline, even though the users were not using NBSafety and were therefore not influenced by its suggestions.
labels: cs.HC, cs.DB, Other
__index_level_0__: 211,299

id: 2103.11282
title: Tracking error learning control for precise mobile robot path tracking in outdoor environment
abstract: This paper presents a Tracking-Error Learning Control (TELC) algorithm for precise mobile robot path tracking in off-road terrain. In traditional tracking error-based control approaches, feedback and feedforward controllers are designed based on the nominal model, which cannot capture the uncertainties, disturbances, and changing working conditions, so they cannot ensure precise path tracking performance in the outdoor environment. In the TELC algorithm, the feedforward control actions are updated by using the tracking error dynamics, and the plant-model mismatch problem is thus discarded. Therefore, the feedforward controller gradually eliminates the feedback controller from the control of the system once the mobile robot has been on-track. In addition to the proof of stability, it is proven that the cost functions do not have local minima, so the coefficients in the TELC algorithm guarantee that the global minimum is reached. The experimental results show that the TELC algorithm results in better path tracking performance than the traditional tracking error-based control method. The mobile robot controlled by the TELC algorithm can track a target path precisely, with less than $10$ cm error in off-road terrain.
labels: cs.RO, cs.SY
__index_level_0__: 225,743

id: 2210.01745
title: Making Decisions under Outcome Performativity
abstract: Decision-makers often act in response to data-driven predictions, with the goal of achieving favorable outcomes. In such settings, predictions don't passively forecast the future; instead, predictions actively shape the distribution of outcomes they are meant to predict. This performative prediction setting raises new challenges for learning "optimal" decision rules. In particular, existing solution concepts do not address the apparent tension between the goals of forecasting outcomes accurately and steering individuals to achieve desirable outcomes. To contend with this concern, we introduce a new optimality concept -- performative omniprediction -- adapted from the supervised (non-performative) learning setting. A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly-competing objectives. Our main result demonstrates that efficient performative omnipredictors exist, under a natural restriction of performative prediction, which we call outcome performativity. On a technical level, our results follow by carefully generalizing the notion of outcome indistinguishability to the outcome performative setting. From an appropriate notion of Performative OI, we recover many consequences known to hold in the supervised setting, such as omniprediction and universal adaptability.
labels: cs.LG, cs.CY
__index_level_0__: 321,375

id: 2410.10322
title: Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks
abstract: In this work, we investigate a particular implicit bias in the gradient descent training process, which we term "Feature Averaging", and argue that it is one of the principal factors contributing to non-robustness of deep neural networks. Despite the existence of multiple discriminative features capable of classifying data, neural networks trained by gradient descent exhibit a tendency to learn the average (or a certain combination) of these features, rather than distinguishing and leveraging each feature individually. In particular, we provide a detailed theoretical analysis of the training dynamics of gradient descent in a two-layer ReLU network for a binary classification task, where the data distribution consists of multiple clusters with orthogonal cluster center vectors. We rigorously prove that gradient descent converges to the regime of feature averaging, wherein the weights associated with each hidden-layer neuron represent an average of the cluster centers (each center corresponding to a distinct feature). This leads the network classifier to be non-robust to an attack that aligns with the negative direction of the averaged features. Furthermore, we prove that, with the provision of more granular supervised information, a two-layer multi-class neural network is capable of learning individual features, from which one can derive a binary classifier with the optimal robustness under our setting. Besides, we also conduct extensive experiments using synthetic datasets, MNIST and CIFAR-10 to substantiate the phenomenon of feature averaging and its role in adversarial robustness of neural networks. We hope the theoretical and empirical insights can provide a deeper understanding of the impact of gradient descent training on the feature learning process, which in turn influences the robustness of the network, and of how more detailed supervision may enhance model robustness.
labels: cs.LG
__index_level_0__: 498,017

2007.08553
Smooth Deformation Field-based Mismatch Removal in Real-time
This paper studies the mismatch removal problem, which may serve as the subsequent step of feature matching. Non-rigid deformation makes it difficult to remove mismatches because no parametric transformation can be found. To solve this problem, we first propose an algorithm based on the re-weighting and 1-point RANSAC strategy (R1P-RNSC), which is a parametric method under a reasonable assumption that the non-rigid deformation can be approximately represented by multiple locally rigid transformations. R1P-RNSC is fast but suffers from a drawback that the local smoothing information cannot be taken into account. Then, we propose a non-parametric algorithm based on the expectation maximization algorithm and dual quaternion (EMDQ) representation to generate the smooth deformation field. The two algorithms compensate for the drawbacks of each other. Specifically, EMDQ needs good initial values provided by R1P-RNSC, and R1P-RNSC needs EMDQ for refinement. Experimental results with real-world data demonstrate that the combination of the two algorithms has the best accuracy compared to other state-of-the-art methods, which can handle up to 85% of outliers in real-time. The ability to generate dense deformation field from sparse matches with outliers in real-time makes the proposed algorithms have many potential applications, such as non-rigid registration and SLAM.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
187,660
1505.04560
On the Formation of Circles in Co-authorship Networks
The availability of an overwhelmingly large amount of bibliographic information including citation and co-authorship data makes it imperative to have a systematic approach that will enable an author to organize her own personal academic network profitably. An effective method could be to have one's co-authorship network arranged into a set of "circles", which has been a recent practice for organizing relationships (e.g., friendship) in many online social networks. In this paper, we propose an unsupervised approach to automatically detect circles in an ego network such that each circle represents a densely knit community of researchers. Our model is an unsupervised method which combines a variety of node features and node similarity measures. The model is built from a rich co-authorship network data of more than 8 hundred thousand authors. In the first level of evaluation, our model achieves 13.33% improvement in terms of overlapping modularity compared to the best among four state-of-the-art community detection methods. Further, we conduct a task-based evaluation -- two basic frameworks for collaboration prediction are considered with the circle information (obtained from our model) included in the feature set. Experimental results show that including the circle information detected by our model improves the prediction performance by 9.87% and 15.25% on average in terms of AUC (Area under the ROC) and Prec@20 (Precision at Top 20) respectively compared to the case, where the circle information is not present.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
43,202
2105.05566
Structural risk minimization for quantum linear classifiers
Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing's near-term ``killer application''. However, the understanding of the empirical and generalization performance of these models is still in its infancy. In this paper we study how to balance between training accuracy and generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havl\'{i}\v{c}ek et al. (Nature, 2019), and Schuld and Killoran (PRL, 2019). Firstly, using relationships to well understood classical models, we prove that two model parameters -- i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model -- closely control the models' complexity and therefore its generalization performance. Secondly, using ideas inspired by process tomography, we prove that these model parameters also closely control the models' ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
234,855
1608.00698
Covert Communication in the Presence of an Uninformed Jammer
Recent work has established that when transmitter Alice wishes to communicate reliably to recipient Bob without detection by warden Willie, with additive white Gaussian noise (AWGN) channels between all parties, communication is limited to $\mathcal{O}(\sqrt{n})$ bits in $n$ channel uses. However, this assumes Willie has an accurate statistical characterization of the channel. When Willie has uncertainty about such and his receiver is limited to a threshold test on the received power, Alice can transmit covertly with a power that does not decrease with $n$, thus conveying $\mathcal{O}(n)$ bits covertly and reliably in $n$ uses of an AWGN channel. Here, we consider covert communication of $\mathcal{O}(n)$ bits in $n$ channel uses while generalizing the environment and removing any restrictions on Willie's receiver. We assume an uninformed "jammer" is present to help Alice, and we consider AWGN and block fading channels. In some scenarios, Willie's optimal detector is a threshold test on the received power. When the channel between the jammer and Willie has multiple fading blocks per codeword, a threshold test on the received power is not optimal. However, we establish that Alice can remain covert with a transmit power that does not decrease with $n$ even when Willie employs an optimal detector.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
59,321
2008.05932
Kullback-Leibler divergence between quantum distributions, and its upper-bound
This work presents an upper-bound to the value that the Kullback-Leibler (KL) divergence can reach for a class of probability distributions called quantum distributions (QD). The aim is to find a distribution $U$ which maximizes the KL divergence from a given distribution $P$ under the assumption that $P$ and $U$ have been generated by distributing a given discrete quantity, a quantum. Quantum distributions naturally represent a wide range of probability distributions that are used in practical applications. Moreover, such a class of distributions can be obtained as an approximation of any probability distribution. The retrieval of an upper-bound for the entropic divergence is here shown to be possible under the condition that the compared distributions are quantum distributions over the same quantum value, thus they become comparable. Thus, entropic divergence acquires a more powerful meaning when it is applied to comparable distributions. This aspect should be taken into account in future developments of divergences. The theoretical findings are used for proposing a notion of normalized KL divergence that is empirically shown to behave differently from already known measures.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
191,659
2107.04618
Optimal Triangulation Method is Not Really Optimal
Triangulation refers to the problem of finding a 3D point from its 2D projections on multiple camera images. For solving this problem, it is the common practice to use the so-called optimal triangulation method, which we call the L2 method in this paper. But, the method can be optimal only if we assume no uncertainty in the camera parameters. Through extensive comparison on synthetic and real data, we observed that the L2 method is actually not the best choice when there is uncertainty in the camera parameters. Interestingly, it can be observed that the simple mid-point method outperforms other methods. Apart from its high performance, the mid-point method has a simple closed-form solution for multiple camera images while the L2 method is hard to use for more than two camera images. Therefore, in contrast to the common practice, we argue that the simple mid-point method should be used in structure-from-motion applications where there is uncertainty in camera parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
245,512
2207.03807
Beyond Transfer Learning: Co-finetuning for Action Localisation
Transfer learning is the predominant paradigm for training deep networks on small target datasets. Models are typically pretrained on large ``upstream'' datasets for classification, as such labels are easy to collect, and then finetuned on ``downstream'' tasks such as action localisation, which are smaller due to their finer-grained annotations. In this paper, we question this approach, and propose co-finetuning -- simultaneously training a single model on multiple ``upstream'' and ``downstream'' tasks. We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data, and also show how we can easily extend our approach to multiple ``upstream'' datasets to further improve performance. In particular, co-finetuning significantly improves the performance on rare classes in our downstream task, as it has a regularising effect, and enables the network to learn feature representations that transfer between different datasets. Finally, we observe that by co-finetuning with public video classification datasets, we are able to achieve state-of-the-art results for spatio-temporal action localisation on the challenging AVA and AVA-Kinetics datasets, outperforming recent works which develop intricate models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
306,984
1805.08365
Learning Markov Clustering Networks for Scene Text Detection
A novel framework named Markov Clustering Network (MCN) is proposed for fast and robust scene text detection. MCN predicts instance-level bounding boxes by firstly converting an image into a Stochastic Flow Graph (SFG) and then performing Markov Clustering on this graph. Our method can detect text objects with arbitrary size and orientation without prior knowledge of object size. The stochastic flow graph encodes objects' local correlation and semantic information. An object is modeled as strongly connected nodes, which allows flexible bottom-up detection for scale-varying and rotated objects. MCN generates bounding boxes without using Non-Maximum Suppression, and it can be fully parallelized on GPUs. The evaluation on public benchmarks shows that our method outperforms the existing methods by a large margin in detecting multioriented text objects. MCN achieves new state-of-the-art performance on the challenging MSRA-TD500 dataset with precision of 0.88, recall of 0.79 and F-score of 0.83. Also, MCN achieves real-time inference at a frame rate of 34 FPS, which is a $1.5\times$ speedup when compared with the fastest scene text detection algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
98,120
2309.17313
Few-Shot Domain Adaptation for Charge Prediction on Unprofessional Descriptions
Recent works considering professional legal-linguistic style (PLLS) texts have shown promising results on the charge prediction task. However, unprofessional users also show an increasing demand on such a prediction service. There is a clear domain discrepancy between PLLS texts and non-PLLS texts expressed by those laypersons, which degrades the current SOTA models' performance on non-PLLS texts. A key challenge is the scarcity of non-PLLS data for most charge classes. This paper proposes a novel few-shot domain adaptation (FSDA) method named Disentangled Legal Content for Charge Prediction (DLCCP). Compared with existing FSDA works, which solely perform instance-level alignment without considering the negative impact of text style information existing in latent features, DLCCP (1) disentangles the content and style representations for better domain-invariant legal content learning with carefully designed optimization goals for content and style spaces and, (2) employs the constitutive elements knowledge of charges to extract and align element-level and instance-level content representations simultaneously. We contribute the first publicly available non-PLLS dataset named NCCP for developing layperson-friendly charge prediction models. Experiments on NCCP show the superiority of our methods over competitive baselines.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
395,722
2407.18421
Self-Directed Synthetic Dialogues and Revisions Technical Report
Synthetic data has become an important tool in the fine-tuning of language models to follow instructions and solve complex problems. Nevertheless, the majority of open data to date is often lacking multi-turn data and collected on closed models, limiting progress on advancing open fine-tuning methods. We introduce Self Directed Synthetic Dialogues (SDSD), an experimental dataset consisting of guided conversations of language models talking to themselves. The dataset consists of multi-turn conversations generated with DBRX, Llama 2 70B, and Mistral Large, all instructed to follow a conversation plan generated prior to the conversation. We also explore including principles from Constitutional AI and other related works to create synthetic preference data via revisions to the final conversation turn. We hope this work encourages further exploration in multi-turn data and the use of open models for expanding the impact of synthetic data.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
476,356
2405.06705
LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought
Self-correction is emerging as a promising approach to mitigate the issue of hallucination in Large Language Models (LLMs). To facilitate effective self-correction, recent research has proposed mistake detection as its initial step. However, current literature suggests that LLMs often struggle with reliably identifying reasoning mistakes when using simplistic prompting strategies. To address this challenge, we introduce a unique prompting strategy, termed the Pedagogical Chain-of-Thought (PedCoT), which is specifically designed to guide the identification of reasoning mistakes, particularly mathematical reasoning mistakes. PedCoT consists of pedagogical principles for prompts (PPP) design, two-stage interaction process (TIP) and grounded PedCoT prompts, all inspired by the educational theory of the Bloom Cognitive Model (BCM). We evaluate our approach on two public datasets featuring math problems of varying difficulty levels. The experiments demonstrate that our zero-shot prompting strategy significantly outperforms strong baselines. The proposed method can achieve the goal of reliable mathematical mistake identification and provide a foundation for automatic math answer grading. The results underscore the significance of educational theory, serving as domain knowledge, in guiding prompting strategy design for addressing challenging tasks with LLMs effectively.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
453,416
2203.10107
SiMCa: Sinkhorn Matrix Factorization with Capacity Constraints
For a very broad range of problems, recommendation algorithms have been increasingly used over the past decade. In most of these algorithms, the predictions are built upon user-item affinity scores which are obtained from high-dimensional embeddings of items and users. In more complex scenarios, with geometrical or capacity constraints, prediction based on embeddings may not be sufficient and some additional features should be considered in the design of the algorithm. In this work, we study the recommendation problem in the setting where affinities between users and items are based both on their embeddings in a latent space and on their geographical distance in their underlying Euclidean space (e.g., $\mathbb{R}^2$), together with item capacity constraints. This framework is motivated by some real-world applications, for instance in healthcare: the task is to recommend hospitals to patients based on their location, pathology, and hospital capacities. In these applications, there is somewhat of an asymmetry between users and items: items are viewed as static points, their embeddings, capacities and locations constraining the allocation. Upon the observation of an optimal allocation, user embeddings, items capacities, and their positions in their underlying Euclidean space, our aim is to recover item embeddings in the latent space; doing so, we are then able to use this estimate e.g. in order to predict future allocations. We propose an algorithm (SiMCa) based on matrix factorization enhanced with optimal transport steps to model user-item affinities and learn item embeddings from observed data. We then illustrate and discuss the results of such an approach for hospital recommendation on synthetic data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
286,400
2309.08737
Experimental Assessment of a Forward-Collision Warning System Fusing Deep Learning and Decentralized Radio Sensing
This paper presents the idea of an automatic forward-collision warning system based on a decentralized radio sensing (RS) approach. In this framework, a vehicle in receiving mode employs a continuous waveform (CW) transmitted by a second vehicle as a probe signal to detect oncoming vehicles and warn the driver of a potential forward collision. Such a CW can easily be incorporated as a pilot signal within the data frame of current multicarrier vehicular communication systems. Detection of oncoming vehicles is performed by a deep learning (DL) module that analyzes the features of the Doppler signature imprinted on the CW probe signal by a rapidly approaching vehicle. This decentralized CW RS approach was assessed experimentally using data collected by a series of field trials conducted in a two-lanes high-speed highway. Detection performance was evaluated for two different DL models: a long short-term memory network and a convolutional neural network. The obtained results demonstrate the feasibility of the envisioned forward-collision warning system based on the fusion of DL and decentralized CW RS.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
392,297
1904.10597
Autonomous Voltage Control for Grid Operation Using Deep Reinforcement Learning
Modern power grids are experiencing grand challenges caused by the stochastic and dynamic nature of growing renewable energy and demand response. Traditional theoretical assumptions and operational rules may be violated, which are difficult to be adapted by existing control systems due to the lack of computational power and accurate grid models for use in real time, leading to growing concerns in the secure and economic operation of the power grid. Existing operational control actions are typically determined offline, which are less optimized. This paper presents a novel paradigm, Grid Mind, for autonomous grid operational controls using deep reinforcement learning. The proposed AI agent for voltage control can learn its control policy through interactions with massive offline simulations, and adapts its behavior to new changes including not only load/generation variations but also topological changes. A properly trained agent is tested on the IEEE 14-bus system with tens of thousands of scenarios, and promising performance is demonstrated in applying autonomous voltage controls for secure grid operation.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
128,658
2201.03028
Development of a hybrid machine-learning and optimization tool for performance-based solar shading design
Solar shading design should be done for the desired Indoor Environmental Quality (IEQ) in the early design stages. This field can be very challenging and time-consuming, and also requires experts, sophisticated software, and a large amount of money. The primary purpose of this research is to design a simple tool to study various models of solar shadings and make decisions easier and faster in the early stages. Database generation methods, artificial intelligence, and optimization have been used to achieve this goal. This tool includes two main parts of 1. predicting the performance of the user-selected model along with proposing effective parameters and 2. proposing optimal pre-prepared models to the user. In this regard, initially, a side-lit shoebox model with variable parameters was modeled parametrically, and five common solar shading models with their variables were applied to the space. For each solar shading and the state without shading, metrics related to daylight and glare, view, and initial costs were simulated. The database generated in this research includes 87912 alternatives and six calculated metrics introduced to optimized machine learning models, including neural network, random forest, support vector regression, and k-nearest neighbor. According to the results, the most accurate and fastest estimation model was random forest, with an r2_score of 0.967 to 1. Then, sensitivity analysis was performed to identify the most influential parameters for each shading model and the state without it. This analysis distinguished the most effective parameters, including window orientation, WWR, room width, length, and shading depth. Finally, by optimizing the estimation function of machine learning models with the NSGA II algorithm, about 7300 optimal models were identified. The developed tool can evaluate various design alternatives in less than a few seconds for each.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
274,732
2211.15782
Towards Preserving Semantic Structure in Argumentative Multi-Agent via Abstract Interpretation
Over the past twenty years, argumentation has received considerable attention in the fields of knowledge representation, reasoning, and multi-agent systems. However, argumentation in dynamic multi-agent systems encounters the problem of significant arguments generated by agents, which comes at the expense of representational complexity and computational cost. In this work, we aim to investigate the notion of abstraction from the model-checking perspective, where several arguments are trying to defend the same position from various points of view, thereby reducing the size of the argumentation framework whilst preserving the semantic flow structure in the system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
333,390
2308.09952
Finding emergence in data by maximizing effective information
Quantifying emergence and modeling emergent dynamics in a data-driven manner for complex dynamical systems is challenging due to the lack of direct observations at the micro-level. Thus, it's crucial to develop a framework to identify emergent phenomena and capture emergent dynamics at the macro-level using available data. Inspired by the theory of causal emergence (CE), this paper introduces a machine learning framework to learn macro-dynamics in an emergent latent space and quantify the degree of CE. The framework maximizes effective information, resulting in a macro-dynamics model with enhanced causal effects. Experimental results on simulated and real data demonstrate the effectiveness of the proposed framework. It quantifies degrees of CE effectively under various conditions and reveals distinct influences of different noise types. It can learn a one-dimensional coarse-grained macro-state from fMRI data, to represent complex neural activities during movie clip viewing. Furthermore, improved generalization to different test environments is observed across all simulation data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
386,503
1606.00094
Boda-RTC: Productive Generation of Portable, Efficient Code for Convolutional Neural Networks on Mobile Computing Platforms
The popularity of neural networks (NNs) spans academia, industry, and popular culture. In particular, convolutional neural networks (CNNs) have been applied to many image based machine learning tasks and have yielded strong results. The availability of hardware/software systems for efficient training and deployment of large and/or deep CNN models has been, and continues to be, an important consideration for the field. Early systems for NN computation focused on leveraging existing dense linear algebra techniques and libraries. Current approaches use low-level machine specific programming and/or closed-source, purpose-built vendor libraries. In this work, we present an open source system that, compared to existing approaches, achieves competitive computational speed while achieving higher portability. We achieve this by targeting the vendor-neutral OpenCL platform using a code-generation approach. We argue that our approach allows for both: (1) the rapid development of new computational kernels for existing hardware targets, and (2) the rapid tuning of existing computational kernels for new hardware targets. Results are presented for a case study of targeting the Qualcomm Snapdragon 820 mobile computing platform for CNN deployment.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
56,625
2311.00136
Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis. Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive spatiotemporal generation problem. Neuroformer is a multimodal, multitask generative pretrained transformer (GPT) model that is specifically designed to handle the intricacies of data in systems neuroscience. It scales linearly with feature size, can process an arbitrary number of modalities, and is adaptable to downstream tasks, such as predicting behavior. We first trained Neuroformer on simulated datasets, and found that it both accurately predicted simulated neuronal circuit activity, and also intrinsically inferred the underlying neural circuit connectivity, including direction. When pretrained to decode neural responses, the model predicted the behavior of a mouse with only few-shot fine-tuning, suggesting that the model begins learning how to do so directly from the neural representations themselves, without any explicit supervision. We used an ablation study to show that joint training on neuronal responses and behavior boosted performance, highlighting the model's ability to associate behavioral and neural representations in an unsupervised manner. These findings show that Neuroformer can analyze neural datasets and their emergent properties, informing the development of models and hypotheses associated with the brain.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
404,520
2002.09703
Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation
Conventional data augmentation realized by performing simple pre-processing operations (e.g., rotation, crop, etc.) has been validated for its advantage in enhancing the performance for medical image segmentation. However, the data generated by these conventional augmentation methods are random and sometimes harmful to the subsequent segmentation. In this paper, we developed a novel automatic learning-based data augmentation method for medical image segmentation which models the augmentation task as a trial-and-error procedure using deep reinforcement learning (DRL). In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss. Specifically, the best sequential combination of different basic operations is automatically learned by directly maximizing the performance improvement (i.e., Dice ratio) on the available validation set. We extensively evaluated our method on CT kidney tumor segmentation which validated the promising results of our method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
165,154
1706.05723
Detecting Large Concept Extensions for Conceptual Analysis
When performing a conceptual analysis of a concept, philosophers are interested in all forms of expression of a concept in a text---be it direct or indirect, explicit or implicit. In this paper, we experiment with topic-based methods of automating the detection of concept expressions in order to facilitate philosophical conceptual analysis. We propose six methods based on LDA, and evaluate them on a new corpus of court decisions that we had annotated by experts and non-experts. Our results indicate that these methods can yield important improvements over the keyword heuristic, which is often used as a concept detection heuristic in many contexts. While more work remains to be done, this indicates that detecting concepts through topics can serve as a general-purpose method for at least some forms of concept expression that are not captured using naive keyword approaches.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
75,561
2408.11316
Probabilistic Medical Predictions of Large Language Models
Large Language Models (LLMs) have shown promise in clinical applications through prompt engineering, allowing flexible clinical predictions. However, they struggle to produce reliable prediction probabilities, which are crucial for transparency and decision-making. While explicit prompts can lead LLMs to generate probability estimates, their numerical reasoning limitations raise concerns about reliability. We compared explicit probabilities from text generation to implicit probabilities derived from the likelihood of predicting the correct label token. Across six advanced open-source LLMs and five medical datasets, explicit probabilities consistently underperformed implicit probabilities in discrimination, precision, and recall. This discrepancy is more pronounced with smaller LLMs and imbalanced datasets, highlighting the need for cautious interpretation, improved probability estimation methods, and further research for clinical use of LLMs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
482,233
2409.14495
Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension
Logical reading comprehension is a challenging task that entails grasping the underlying semantics of text and applying reasoning to deduce the correct answer. Prior research has primarily focused on enhancing logical reasoning capabilities through Chain-of-Thought (CoT) or data augmentation. However, previous work constructing chain-of-thought rationales concentrates solely on analyzing correct options, neglecting the incorrect alternatives. Additionally, earlier efforts on data augmentation by altering contexts rely on rule-based methods, which result in generated contexts that lack diversity and coherence. To address these issues, we propose a Premise-Oriented Data Augmentation (PODA) framework. This framework can generate CoT rationales including analyses for both correct and incorrect options, while constructing diverse and high-quality counterfactual contexts from incorrect candidate options. We integrate summarizing premises and identifying premises for each option into rationales. Subsequently, we employ multi-step prompts with identified premises to construct counterfactual contexts. To facilitate the model's capabilities to better differentiate the reasoning process associated with each option, we introduce a novel thought-path contrastive learning method that compares reasoning paths between the original and counterfactual samples. Experimental results on three representative LLMs demonstrate that our method can improve the baselines substantially across two challenging logical reasoning benchmarks (ReClor and LogiQA 2.0). The data and code are released at https://github.com/lalalamdbf/TPReasoner.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
490,492
2501.10122
Integrating Mediumband with Emerging Technologies: Unified Vision for 6G and Beyond Physical Layer
In this paper, we present a vision for the physical layer of 6G and beyond, where emerging physical layer technologies integrate to drive wireless links toward mediumband operation, addressing a major challenge: deep fading, a prevalent, and perhaps the most consequential, obstacle in wireless communication link performance. By leveraging recent insights into wireless channel fundamentals and advancements in computing, multi-modal sensing, and AI, we articulate how reflecting surfaces (RS), sensing, digital twins (DTs), ray-tracing, and AI can work synergistically to lift the burden of deep fading in future wireless communication networks. This refreshingly new approach promises transformative improvements in reliability, spectral efficiency, energy efficiency, and network resilience, positioning 6G for truly superior performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
525,400
2003.04614
Label-Driven Reconstruction for Domain Adaptation in Semantic Segmentation
Unsupervised domain adaptation alleviates the need for pixel-wise annotation in semantic segmentation. One of the most common strategies is to translate images from the source domain to the target domain and then align their marginal distributions in the feature space using adversarial learning. However, source-to-target translation enlarges the bias in translated images and introduces extra computations, owing to the dominant data size of the source domain. Furthermore, consistency of the joint distribution in source and target domains cannot be guaranteed through global feature alignment. Here, we present an innovative framework, designed to mitigate the image translation bias and align cross-domain features with the same category. This is achieved by 1) performing the target-to-source translation and 2) reconstructing both source and target images from their predicted labels. Extensive experiments on adapting from synthetic to real urban scene understanding demonstrate that our framework competes favorably against existing state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
167,599
2004.06307
Sense and Sensibility: Characterizing Social Media Users Regarding the Use of Controversial Terms for COVID-19
With the worldwide spread of the 2019 novel coronavirus, although WHO has officially named the disease COVID-19, one controversial term - "Chinese Virus" - is still being used by a great number of people. In the meantime, global online media coverage about COVID-19-related racial attacks increases steadily, most of which are anti-Chinese or anti-Asian. As this pandemic becomes increasingly severe, more people start to talk about it on social media platforms such as Twitter. When they refer to COVID-19, there are mainly two ways: using controversial terms like "Chinese Virus" or "Wuhan Virus", or using non-controversial terms like "Coronavirus". In this study, we attempt to characterize the Twitter users who use controversial terms and those who use non-controversial terms. We use the Tweepy API to retrieve 17 million related tweets and the information of their authors. We find significant differences between these two groups of Twitter users across their demographics, user-level features like the number of followers, political following status, as well as their geo-locations. Moreover, we apply classification models to predict Twitter users who are more likely to use controversial terms. To the best of our knowledge, this is the first large-scale social media-based study to characterize users with respect to their usage of controversial terms during a major crisis.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
172,472
2112.07839
LoSAC: An Efficient Local Stochastic Average Control Method for Federated Optimization
Federated optimization (FedOpt), which aims at collaboratively training a learning model across a large number of distributed clients, is vital for federated learning. The primary concerns in FedOpt can be attributed to model divergence and communication efficiency, which significantly affect performance. In this paper, we propose a new method, i.e., LoSAC, to learn from heterogeneous distributed data more efficiently. Its key algorithmic insight is to locally update the estimate for the global full gradient after each regular local model update. Thus, LoSAC can keep clients' information refreshed in a more compact way. In particular, we have studied the convergence result for LoSAC. Besides, a bonus of LoSAC is its ability to defend against information leakage from the recent technique Deep Leakage from Gradients (DLG). Finally, experiments have verified the superiority of LoSAC compared with state-of-the-art FedOpt algorithms. Specifically, LoSAC significantly improves communication efficiency by more than $100\%$ on average, mitigates the model divergence problem, and provides a defense against DLG.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
271,602
1903.07107
Adaptive Genomic Evolution of Neural Network Topologies (AGENT) for State-to-Action Mapping in Autonomous Agents
Neuroevolution is a process of training neural networks (NN) through an evolutionary algorithm, usually to serve as a state-to-action mapping model in control or reinforcement learning-type problems. This paper builds on the Neuro Evolution of Augmented Topologies (NEAT) formalism that allows designing topology and weight evolving NNs. Fundamental advancements are made to the neuroevolution process to address premature stagnation and convergence issues, central among which is the incorporation of automated mechanisms to control the population diversity and average fitness improvement within the neuroevolution process. Insights into the performance and efficiency of the new algorithm are obtained by evaluating it on three benchmark problems from the OpenAI platform and an Unmanned Aerial Vehicle (UAV) collision avoidance problem.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
false
false
124,535
1901.02033
The Effect of Introducing Redundancy in a Probabilistic Forwarding Protocol
This paper is concerned with the problem of broadcasting information from a source node to every node in an ad-hoc network. Flooding, as a broadcast mechanism, involves each node forwarding any packet it receives to all its neighbours. This results in excessive transmissions and thus a high energy expenditure overall. Probabilistic forwarding or gossiping involves each node forwarding a received packet to all its neighbours only with a certain probability $p$. In this paper, we study the effect of introducing redundancy, in the form of coded packets, into a probabilistic forwarding protocol. Specifically, we assume that the source node has $k$ data packets to broadcast, which are encoded into $n \ge k$ coded packets, such that any $k$ of these coded packets are sufficient to recover the original $k$ data packets. Our interest is in determining the minimum forwarding probability $p$ for a "successful broadcast", which we take to be the event that the expected fraction of network nodes that receive at least $k$ of the $n$ coded packets is close to 1. We examine, via simulations and analysis of a number of different network topologies (e.g., trees, grids, random geometric graphs), how this minimum forwarding probability, and correspondingly, the expected total number of packet transmissions varies with the amount of redundancy added. Our simulation results indicate that over network topologies that are highly connected, the introduction of redundancy into the probabilistic forwarding protocol is useful, as it can significantly reduce the expected total number of transmissions needed for a successful broadcast. On the other hand, for trees, our analysis shows that the expected total number of transmissions needed increases with redundancy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
118,105
2310.01846
Benchmarking and Improving Generator-Validator Consistency of Language Models
As of September 2023, ChatGPT correctly answers "what is 7+8" with 15, but when asked "7+8=15, True or False" it responds with "False". This inconsistency between generating and validating an answer is prevalent in language models (LMs) and erodes trust. In this paper, we propose a framework for measuring the consistency between generation and validation (which we call generator-validator consistency, or GV-consistency), finding that even GPT-4, a state-of-the-art LM, is GV-consistent only 76% of the time. To improve the consistency of LMs, we propose to finetune on the filtered generator and validator responses that are GV-consistent, and call this approach consistency fine-tuning. We find that this approach improves GV-consistency of Alpaca-30B from 60% to 93%, and the improvement extrapolates to unseen tasks and domains (e.g., GV-consistency for positive style transfers extrapolates to unseen styles like humor). In addition to improving consistency, consistency fine-tuning improves both generator quality and validator accuracy without using any labeled data. Evaluated across 6 tasks, including math questions, knowledge-intensive QA, and instruction following, our method improves the generator quality by 16% and the validator accuracy by 6.3% across all tasks.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
396,605
2409.10986
Control-flow Reconstruction Attacks on Business Process Models
Process models may be automatically generated from event logs that contain as-is data of a business process. While such models generalize over the control-flow of specific, recorded process executions, they are often also annotated with behavioural statistics, such as execution frequencies. Based thereon, once a model is published, certain insights about the original process executions may be reconstructed, so that an external party may extract confidential information about the business process. This work is the first to empirically investigate such reconstruction attempts based on process models. To this end, we propose different play-out strategies that reconstruct the control-flow from process trees, potentially exploiting frequency annotations. To assess the potential success of such reconstruction attacks on process models, and hence the risks imposed by publishing them, we compare the reconstructed process executions with those of the original log for several real-world datasets.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
true
488,963
2305.06595
BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews
The analysis of consumer sentiment, as expressed through reviews, can provide a wealth of insight regarding the quality of a product. While the study of sentiment analysis has been widely explored in many popular languages, relatively less attention has been given to the Bangla language, mostly due to a lack of relevant data and cross-domain adaptability. To address this limitation, we present BanglaBook, a large-scale dataset of Bangla book reviews consisting of 158,065 samples classified into three broad categories: positive, negative, and neutral. We provide a detailed statistical analysis of the dataset and employ a range of machine learning models to establish baselines including SVM, LSTM, and Bangla-BERT. Our findings demonstrate a substantial performance advantage of pre-trained models over models that rely on manually crafted features, emphasizing the necessity for additional training resources in this domain. Additionally, we conduct an in-depth error analysis by examining sentiment unigrams, which may provide insight into common classification errors in under-resourced languages like Bangla. Our codes and data are publicly available at https://github.com/mohsinulkabir14/BanglaBook.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
363,604
1910.01909
On some distributed scheduling algorithms for wireless networks with hypergraph interference models
It is shown that the performance of the maximal scheduling algorithm in wireless ad hoc networks under the hypergraph interference model can be further away from optimal than previously known. The exact worst-case performance of this distributed, greedy scheduling algorithm is analyzed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
148,079
2409.15934
Automated test generation to evaluate tool-augmented LLMs as conversational AI agents
Tool-augmented LLMs are a promising approach to create AI agents that can have realistic conversations, follow procedures, and call appropriate functions. However, evaluating them is challenging due to the diversity of possible conversations, and existing datasets focus only on single interactions and function-calling. We present a test generation pipeline to evaluate LLMs as conversational AI agents. Our framework uses LLMs to generate diverse tests grounded on user-defined procedures. For that, we use intermediate graphs to limit the LLM test generator's tendency to hallucinate content that is not grounded on input procedures, and to enforce high coverage of the possible conversations. Additionally, we put forward ALMITA, a manually curated dataset for evaluating AI agents in customer support, and use it to evaluate existing LLMs. Our results show that while tool-augmented LLMs perform well in single interactions, they often struggle to handle complete conversations. While our focus is on customer support, our method is general and capable of evaluating AI agents across different domains.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
491,135
2311.13732
A Propagation Perspective on Recursive Forward Dynamics for Systems with Kinematic Loops
We revisit the concept of constraint embedding as a means for dealing with kinematic loop constraints during dynamics computations for rigid-body systems. Specifically, we consider the local loop constraints emerging from common actuation sub-mechanisms in modern robotics systems (e.g., geared motors, differential drives, and four-bar mechanisms). However, rather than develop the concept of constraint embedding from the perspective of graphical analysis, we present a novel analysis of constraint embedding that generalizes the traditional concepts of joint models and motion/force subspaces between individual rigid bodies to generalized joint models and motion/force subspaces between groups of rigid bodies subject to loop constraints. The generalized concepts are used in a self-contained, articulated-body-based derivation of the constraint-embedding-based recursive algorithm for forward dynamics. The derivation represents the first assembly method to demonstrate the recursivity of articulated inertia computation in the presence of loop constraints. We demonstrate the broad applicability of the generalized joint concepts by showing how they also lead to the constraint-embedding-based recursive algorithm for inverse dynamics. Lastly, we benchmark our open-source implementation in C++ for the forward dynamics algorithm against a state-of-the-art, non-recursive algorithm. Our benchmarking validates that constraint embedding outperforms the non-recursive alternative in the case of local kinematic loops.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
409,852
2407.11087
Restore-RWKV: Efficient and Effective Medical Image Restoration with RWKV
Transformers have revolutionized medical image restoration, but the quadratic complexity still poses limitations for their application to high-resolution medical images. The recent advent of the Receptance Weighted Key Value (RWKV) model in the natural language processing field has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications for modeling spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as a basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI image super-resolution, and all-in-one medical image restoration. Code is available at: https://github.com/Yaziwel/Restore-RWKV.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,300
1901.09541
On Random Subsampling of Gaussian Process Regression: A Graphon-Based Analysis
In this paper, we study random subsampling of Gaussian process regression, one of the simplest approximation baselines, from a theoretical perspective. Although subsampling discards a large part of the training data, we show provable guarantees on the accuracy of the predictive mean/variance and its generalization ability. For the analysis, we consider embedding kernel matrices into graphons, which encapsulate the difference in sample size and enable us to evaluate the approximation and generalization errors in a unified manner. The experimental results show that the subsampling approximation achieves a better trade-off regarding accuracy and runtime than the Nystr\"{o}m and random Fourier expansion methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
119,779
1611.04211
On Location Hiding in Distributed Systems
We consider the following problem - a group of mobile agents performs some task on a terrain modeled as a graph. At a given moment in time an adversary gets access to the graph and the positions of the agents. Shortly before the adversary's observation the mobile agents have a chance to relocate themselves in order to hide their initial configuration. We assume that the initial configuration may possibly reveal to the adversary some information about the task they performed. Clearly, the agents have to change their location in a possibly short time using minimal energy. In our paper we introduce a definition of a \emph{well hiding} algorithm in which the starting and final configurations of the agents have small mutual information. Then we discuss the influence of various features of the model on the running time of the optimal well-hiding algorithm. We show that if the topology of the graph is known to the agents, then a number of steps proportional to the diameter of the graph is sufficient and necessary. In the unknown topology scenario we only consider the single agent case. We first show that the task is impossible in the deterministic case if the agent has no memory. Then we present a polynomial randomized algorithm. Finally, in the model with memory, we show that a number of steps proportional to the number of edges of the graph is sufficient and necessary. In some sense we investigate how complex the problem of "losing" information about location (both physical and logical) is for different settings.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
63,805
1707.03997
A Web-Based Tool for Analysing Normative Documents in English
Our goal is to use formal methods to analyse normative documents written in English, such as privacy policies and service-level agreements. This requires the combination of a number of different elements, including information extraction from natural language, formal languages for model representation, and an interface for property specification and verification. We have worked on a collection of components for this task: a natural language extraction tool, a suitable formalism for representing such documents, an interface for building models in this formalism, and methods for answering queries asked of a given model. In this work, each of these concerns is brought together in a web-based tool, providing a single interface for analysing normative texts in English. Through the use of a running example, we describe each component and demonstrate the workflow established by our tool.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
76,970
1711.01526
On Identification of Distribution Grids
Large-scale integration of distributed energy resources into residential distribution feeders necessitates careful control of their operation through power flow analysis. While the knowledge of the distribution system model is crucial for this type of analysis, it is often unavailable or outdated. The recent introduction of synchrophasor technology in low-voltage distribution grids has created an unprecedented opportunity to learn this model from high-precision, time-synchronized measurements of voltage and current phasors at various locations. This paper focuses on joint estimation of model parameters (admittance values) and operational structure of a poly-phase distribution network from the available telemetry data via the lasso, a method for regression shrinkage and selection. We propose tractable convex programs capable of tackling the low rank structure of the distribution system and develop an online algorithm for early detection and localization of critical events that induce a change in the admittance matrix. The efficacy of these techniques is corroborated through power flow studies on four three-phase radial distribution systems serving real household demands.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
83,905
1610.08885
Optimal actuator placement for minimizing the worst-case control energy
We consider the actuator placement problem for linear systems. Specifically, we aim to identify an actuator which requires the least amount of control energy to drive the system from an arbitrary initial condition to the origin in the worst case. Said otherwise, we investigate the minimax problem of minimizing the control energy over the worst possible initial conditions. Recall that the least amount of control energy needed to drive a linear controllable system from any initial condition on the unit sphere to the origin is upper-bounded by the inverse of the smallest eigenvalue of the associated controllability Gramian, and moreover, the upper-bound is sharp. The minimax problem can be thus viewed as the optimization problem of minimizing the upper-bound via the placement of an actuator. In spite of its simple and natural formulation, this problem is difficult to solve. In fact, properties such as the stability of the system matrix, which are not related to controllability, now play important roles. We focus in this paper on the special case where the system matrix is positive definite. Under this assumption, we are able to provide a complete solution to the optimal actuator placement problem and highlight the difficulty in solving the general problem.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
62,979
2206.00667
How Biased are Your Features?: Computing Fairness Influence Functions with Global Sensitivity Analysis
Fairness in machine learning has attained significant focus due to the widespread application in high-stake decision-making tasks. Unregulated machine learning classifiers can exhibit bias towards certain demographic groups in data, thus the quantification and mitigation of classifier bias is a central concern in fairness in machine learning. In this paper, we aim to quantify the influence of different features in a dataset on the bias of a classifier. To do this, we introduce the Fairness Influence Function (FIF). This function breaks down bias into its components among individual features and the intersection of multiple features. The key idea is to represent existing group fairness metrics as the difference of the scaled conditional variances in the classifier's prediction and apply a decomposition of variance according to global sensitivity analysis. To estimate FIFs, we instantiate an algorithm FairXplainer that applies variance decomposition of classifier's prediction following local regression. Experiments demonstrate that FairXplainer captures FIFs of individual feature and intersectional features, provides a better approximation of bias based on FIFs, demonstrates higher correlation of FIFs with fairness interventions, and detects changes in bias due to fairness affirmative/punitive actions in the classifier. The code is available at https://github.com/ReAILe/bias-explainer.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
300,206
1112.2491
Permutation Excess Entropy and Mutual Information between the Past and Future
We address the excess entropy, which is a measure of complexity for stationary time series, from the ordinal point of view. We show that the permutation excess entropy is equal to the mutual information between two adjacent semi-infinite blocks in the space of orderings for finite-state stationary ergodic Markov processes. This result may shed new light on the relationship between complexity and anticipation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
13,423
2103.11623
Multi-Transmitter Coded Caching Networks with Transmitter-side Knowledge of File Popularity
This work presents a new way of exploiting non-uniform file popularity in coded caching networks. Focusing on a fully-connected fully-interfering wireless setting with multiple cache-enabled transmitters and receivers, we show how non-uniform file popularity can be used very efficiently to accelerate the impact of transmitter-side data redundancy on receiver-side coded caching. This approach is motivated by the recent discovery that, under any realistic file-size constraint, having content appear in multiple transmitters can in fact dramatically boost the speed-up factor attributed to coded caching. We formulate an optimization problem that exploits file popularity to optimize the placement of files at the transmitters. We then provide a proof that reduces significantly the variable search space, and propose a new search algorithm that solves the problem at hand. We also prove an analytical performance upper bound, which is in fact met by our algorithm in the regime of many receivers. Our work reflects the benefits of allocating higher cache redundancy to more popular files, but also reflects a law of diminishing returns where for example very popular files may in fact benefit from minimum redundancy. In the end, this work reveals that in the context of coded caching, employing multiple transmitters can be a catalyst in fully exploiting file popularity, as it avoids various asymmetry complications that appear when file popularity is used to alter the receiver-side cache placement.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
225,877
1703.10661
BanglaLekha-Isolated: A Comprehensive Bangla Handwritten Character Dataset
Bangla handwriting recognition is becoming a very important issue nowadays. It is a potentially very important task, especially for the Bangla-speaking population of Bangladesh and West Bengal. Keeping that in mind, we introduce a comprehensive Bangla handwritten character dataset named BanglaLekha-Isolated. This dataset contains Bangla handwritten numerals, basic characters and compound characters. The dataset was collected from multiple geographical locations within Bangladesh and includes samples collected from a variety of age groups. This dataset can also be used for other classification problems, e.g., gender, age, district. This is the largest dataset on Bangla handwritten characters yet.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
70,949
2205.06468
Monocular Human Digitization via Implicit Re-projection Networks
We present an approach to generating 3D human models from images. The key to our framework is that we predict double-sided orthographic depth maps and color images from a single perspective projected image. Our framework consists of three networks. The first network predicts normal maps to recover geometric details such as wrinkles in the clothes and facial regions. The second network predicts shade-removed images for the front and back views by utilizing the predicted normal maps. The last multi-headed network takes both normal maps and shade-free images and predicts depth maps while selectively fusing photometric and geometric information through multi-headed attention gates. Experimental results demonstrate that our method shows visually plausible results and competitive performance in terms of various evaluation metrics over state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,247
1910.08945
Online Bagging for Anytime Transfer Learning
Transfer learning techniques have been widely used in settings where it is difficult to obtain sufficient labeled data in the target domain, but a large amount of auxiliary data can be obtained in a relevant source domain. However, most existing methods are based on offline data. In practical applications, it is often necessary to face online learning problems in which the data samples arrive sequentially. In this paper, we are committed to applying the ensemble approach to the problem of online transfer learning so that it can be used in an anytime setting. More specifically, we propose a novel online transfer learning framework, which applies the idea of online bagging methods to anytime transfer learning problems, and constructs strong classifiers through online iterations over multiple weak classifiers weighted by their usefulness. Further, our algorithm also provides two extension schemes to reduce the impact of negative transfer. Experiments on three real data sets show the effectiveness of our proposed algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
150,025
2207.05983
Data-Driven Identification of Dynamic Quality Models in Drinking Water Networks
Traditional control and monitoring of water quality in drinking water distribution networks (WDN) rely on mostly model- or toolbox-driven approaches, where the network topology and parameters are assumed to be known. In contrast, system identification (SysID) algorithms for generic dynamic system models seek to approximate such models using only input-output data without relying on network parameters. The objective of this paper is to investigate SysID algorithms for water quality model approximation. This research problem is challenging due to (i) complex water quality and reaction dynamics and (ii) the mismatch between the requirements of SysID algorithms and the properties of water quality dynamics. In this paper, we present the first attempt to identify water quality models in WDNs using only input-output experimental data and classical SysID methods without knowing any WDN parameters. Properties of water quality models are introduced, the ensuing challenges caused by these properties when identifying water quality models are discussed, and remedial solutions are given. Through case studies, we demonstrate the applicability of SysID algorithms, show the corresponding performance in terms of accuracy and computational time, and explore the possible factors impacting water quality model identification.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
307,732
2007.13055
Optimizing Block-Sparse Matrix Multiplications on CUDA with TVM
We implemented and optimized matrix multiplications between dense and block-sparse matrices on CUDA. We leveraged TVM, a deep learning compiler, to explore the schedule space of the operation and generate efficient CUDA code. With the automatic parameter tuning in TVM, our cross-thread reduction based implementation achieved competitive or better performance compared with other state-of-the-art frameworks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
189,007
2102.02485
Image Restoration by Deep Projected GSURE
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution. In recent years, solutions that are based on deep Convolutional Neural Networks (CNNs) have shown great promise. Yet, most of these techniques, which train CNNs using external data, are restricted to the observation models that have been used in the training phase. A recent alternative that does not have this drawback relies on learning the target image using internal learning. One such prominent example is the Deep Image Prior (DIP) technique that trains a network directly on the input image with a least-squares loss. In this paper, we propose a new image restoration framework that is based on minimizing a loss function that includes a "projected-version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN. We demonstrate two ways to use our framework. In the first one, where no explicit prior is used, we show that the proposed approach outperforms other internal learning methods, such as DIP. In the second one, we show that our GSURE-based loss leads to improved performance when used within a plug-and-play priors scheme.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
218,422
1812.11718
Over- and Under-Approximating Reachable Sets for Perturbed Delay Differential Equations
This note explores reach set computations for perturbed delay differential equations (DDEs). The perturbed DDEs of interest in this note are a class of DDEs whose dynamics are subject to perturbations, and their solutions feature the local homeomorphism property with respect to initial states. Membership in this class of perturbed DDEs is determined by conducting sensitivity analysis of solution mappings with respect to initial states to impose a bound constraint on the time-lag term. The homeomorphism property of solutions to this class of perturbed DDEs enables us to construct over- and under-approximations of reach sets by performing reachability analysis on just the boundaries of their permitted initial sets, thereby permitting an extension of reach set computation methods for ordinary differential equations to perturbed DDEs. Three examples demonstrate the performance of our approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
117,616
1106.0895
Computable Bounds for Rate Distortion with Feed-Forward for Stationary and Ergodic Sources
In this paper we consider the rate distortion problem of discrete-time, ergodic, and stationary sources with feed forward at the receiver. We derive a sequence of achievable and computable rates that converge to the feed-forward rate distortion. We show that, for ergodic and stationary sources, the rate $R_n(D)=\frac{1}{n}\min I(\hat{X}^n\rightarrow X^n)$ is achievable for any $n$, where the minimization is taken over the transition conditioning probability $p(\hat{x}^n|x^n)$ such that $\mathbb{E}[d(X^n,\hat{X}^n)]\leq D$. The limit of $R_n(D)$ exists and is the feed-forward rate distortion. We follow Gallager's proof where there is no feed-forward and, with appropriate modification, obtain our result. We provide an algorithm for calculating $R_n(D)$ using the alternating minimization procedure, and present several numerical examples. We also present a dual form for the optimization of $R_n(D)$, and transform it into a geometric programming problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,728
2411.18268
Information geometry of bosonic Gaussian thermal states
Bosonic Gaussian thermal states form a fundamental class of states in quantum information science. This paper explores the information geometry of these states, focusing on characterizing the distance between two nearby states and the geometry induced by a parameterization in terms of their mean vectors and Hamiltonian matrices. In particular, for the family of bosonic Gaussian thermal states, we derive expressions for their Fisher-Bures and Kubo-Mori information matrices with respect to their mean vectors and Hamiltonian matrices. An important application of our formulas consists of fundamental limits on how well one can estimate these parameters. We additionally establish formulas for the derivatives and the symmetric logarithmic derivatives of bosonic Gaussian thermal states. The former could have applications in gradient descent algorithms for quantum machine learning when using bosonic Gaussian thermal states as an ansatz, and the latter in formulating optimal strategies for single parameter estimation of bosonic Gaussian thermal states. Finally, the expressions for the aforementioned information matrices could have additional applications in natural gradient descent algorithms when using bosonic Gaussian thermal states as an ansatz.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
511,801
2409.00815
Serialized Speech Information Guidance with Overlapped Encoding Separation for Multi-Speaker Automatic Speech Recognition
Serialized output training (SOT) attracts increasing attention due to its convenience and flexibility for multi-speaker automatic speech recognition (ASR). However, it is not easy to train with attention loss only. In this paper, we propose the overlapped encoding separation (EncSep) to fully utilize the benefits of the connectionist temporal classification (CTC) and attention hybrid loss. This additional separator is inserted after the encoder to extract the multi-speaker information with CTC losses. Furthermore, we propose the serialized speech information guidance SOT (GEncSep) to further utilize the separated encodings. The separated streams are concatenated to provide single-speaker information to guide attention during decoding. The experimental results on LibriMix show that the single-speaker encoding can be separated from the overlapped encoding. The CTC loss helps to improve the encoder representation under complex scenarios. GEncSep further improved performance.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
485,081
2302.01999
Online and Offline Learning of Player Objectives from Partial Observations in Dynamic Games
Robots deployed to the real world must be able to interact with other agents in their environment. Dynamic game theory provides a powerful mathematical framework for modeling scenarios in which agents have individual objectives and interactions evolve over time. However, a key limitation of such techniques is that they require a-priori knowledge of all players' objectives. In this work, we address this issue by proposing a novel method for learning players' objectives in continuous dynamic games from noise-corrupted, partial state observations. Our approach learns objectives by coupling the estimation of unknown cost parameters of each player with inference of unobserved states and inputs through Nash equilibrium constraints. By coupling past state estimates with future state predictions, our approach is amenable to simultaneous online learning and prediction in receding horizon fashion. We demonstrate our method in several simulated traffic scenarios in which we recover players' preferences for, e.g., desired travel speed and collision-avoidance behavior. Results show that our method reliably estimates game-theoretic models from noise-corrupted data that closely matches ground-truth objectives, consistently outperforming state-of-the-art approaches.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
343,810
2408.06698
High-order projection-based upwind method for simulation of transitional turbulent flows
We present a scalable, high-order implicit large-eddy simulation (ILES) approach for incompressible transitional flows. This method employs the mass-conserving mixed stress (MCS) method for discretizing the Navier-Stokes equations. The MCS method's low dissipation characteristics, combined with the introduced operator-splitting solution technique, result in a high-order solver optimized for efficient and parallel computation of under-resolved turbulent flows. We further enhance the inherent capabilities of the ILES model by incorporating high-order upwind fluxes and examine its approximation behaviour in transitional aerodynamic flow problems. In this study, we use flows over the Eppler 387 airfoil at Reynolds numbers up to $3 \cdot 10^5$ as benchmarks for our simulations.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
480,306
2007.15506
SimPose: Effectively Learning DensePose and Surface Normals of People from Simulated Data
With a proliferation of generic domain-adaptation approaches, we report a simple yet effective technique for learning difficult per-pixel 2.5D and 3D regression representations of articulated people. We obtained strong sim-to-real domain generalization for the 2.5D DensePose estimation task and the 3D human surface normal estimation task. On the multi-person DensePose MSCOCO benchmark, our approach outperforms the state-of-the-art methods which are trained on real images that are densely labelled. This is an important result since obtaining human manifold's intrinsic uv coordinates on real images is time consuming and prone to labeling noise. Additionally, we present our model's 3D surface normal predictions on the MSCOCO dataset that lacks any real 3D surface normal labels. The key to our approach is to mitigate the "Inter-domain Covariate Shift" with a carefully selected training batch from a mixture of domain samples, a deep batch-normalized residual network, and a modified multi-task learning objective. Our approach is complementary to existing domain-adaptation techniques and can be applied to other dense per-pixel pose estimation problems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
189,681
2009.00100
Online Multi-Object Tracking and Segmentation with GMPHD Filter and Mask-based Affinity Fusion
In this paper, we propose a highly practical fully online multi-object tracking and segmentation (MOTS) method that uses instance segmentation results as an input. The proposed method is based on the Gaussian mixture probability hypothesis density (GMPHD) filter, a hierarchical data association (HDA), and a mask-based affinity fusion (MAF) model to achieve high-performance online tracking. The HDA consists of two associations: segment-to-track and track-to-track associations. One affinity, for position and motion, is computed by using the GMPHD filter, and the other affinity, for appearance, is computed by using the responses from a single object tracker such as a kernelized correlation filter. These two affinities are simply fused by using a score-level fusion method such as min-max normalization referred to as MAF. In addition, to reduce the number of false positive segments, we adopt mask IoU-based merging (mask merging). The proposed MOTS framework with the key modules: HDA, MAF, and mask merging, is easily extensible to simultaneously track multiple types of objects with CPU-only execution in parallel processing. In addition, the developed framework only requires simple parameter tuning unlike many existing MOTS methods that need intensive hyperparameter optimization. In the experiments on the two popular MOTS datasets, the key modules show some improvements. For instance, ID-switch decreases by more than half compared to a baseline method in the training sets. In conclusion, our tracker achieves state-of-the-art MOTS performance in the test sets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
193,953
1905.10427
DIVA: Domain Invariant Variational Autoencoders
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the domain, one for the class, and one for any residual variations. We highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. To the best of our knowledge this has not been done before in a domain generalization setting. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabelled data can boost the performance even further.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,053
1505.00511
Iterative Detection and Decoding Algorithms using LDPC Codes for MIMO Systems in Block-Fading Channels
We propose iterative detection and decoding (IDD) algorithms with Low-Density Parity-Check (LDPC) codes for Multiple Input Multiple Output (MIMO) systems operating in block-fading and fast Rayleigh fading channels. Soft-input soft-output minimum mean-square error receivers with successive interference cancellation are considered. In particular, we devise a novel strategy to improve the bit error rate (BER) performance of IDD schemes, which takes into account the soft \textit{a posteriori} output of the decoder in a block-fading channel when Root-Check LDPC codes are used. A MIMO IDD receiver with soft information processing that exploits the code structure and the behavior of the log likelihood ratios is also developed. Moreover, we present a scheduling algorithm for decoding LDPC codes in block-fading channels. Simulations show that the proposed techniques result in significant gains in terms of BER for both block-fading and fast-fading channels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
42,738
2403.03483
A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation
Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite the great academic success of GNNs, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications. One reason for such an academic-industry gap is the neighborhood-fetching latency incurred by data dependency in GNNs. To bridge this gap, Graph Knowledge Distillation (GKD) is proposed, usually based on a standard teacher-student architecture, to distill knowledge from a large teacher GNN into a lightweight student GNN or MLP. However, we found in this paper that neither teachers nor GNNs are necessary for graph knowledge distillation. We propose a Teacher-Free Graph Self-Distillation (TGS) framework that does not require any teacher model or GNNs during both training and inference. More importantly, the proposed TGS framework is purely based on MLPs, where structural information is only implicitly used to guide dual knowledge self-distillation between the target node and its neighborhood. As a result, TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference. Extensive experiments have shown that the performance of vanilla MLPs can be greatly improved with dual self-distillation, e.g., TGS improves over vanilla MLPs by 15.54% on average and outperforms state-of-the-art GKD algorithms on six real-world datasets. In terms of inference speed, TGS infers 75X-89X faster than existing GNNs and 16X-25X faster than classical inference acceleration methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
435,213
1810.10535
Meta-modeling game for deriving theoretical-consistent, micro-structural-based traction-separation laws via deep reinforcement learning
This paper presents a new meta-modeling framework to employ deep reinforcement learning (DRL) to generate mechanical constitutive models for interfaces. The constitutive models are conceptualized as information flow in directed graphs. The process of writing constitutive models is simplified as a sequence of forming graph edges with the goal of maximizing the model score (a function of accuracy, robustness and forward prediction quality). Thus meta-modeling can be formulated as a Markov decision process with well-defined states, actions, rules, objective functions, and rewards. By using neural networks to estimate policies and state values, the computer agent is able to efficiently self-improve the constitutive model it generated through self-playing, in the same way AlphaGo Zero (the algorithm that outplayed the world champion in the game of Go) improves its gameplay. Our numerical examples show that this automated meta-modeling framework not only produces models which outperform existing cohesive models on benchmark traction-separation data but is also capable of detecting hidden mechanisms among micro-structural features and incorporating them in constitutive models to improve the forward prediction accuracy, which are difficult tasks to do manually.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,315
1906.07125
Replacing the do-calculus with Bayes rule
The concept of causality has a controversial history. The question of whether it is possible to represent and address causal problems with probability theory, or if fundamentally new mathematics such as the do calculus is required has been hotly debated, e.g. Pearl (2001) states "the building blocks of our scientific and everyday knowledge are elementary facts such as "mud does not cause rain" and "symptoms do not cause disease" and those facts, strangely enough, cannot be expressed in the vocabulary of probability calculus". This has led to a dichotomy between advocates of causal graphical modeling and the do calculus, and researchers applying Bayesian methods. In this paper we demonstrate that, while it is critical to explicitly model our assumptions on the impact of intervening in a system, provided we do so, estimating causal effects can be done entirely within the standard Bayesian paradigm. The invariance assumptions underlying causal graphical models can be encoded in ordinary probabilistic graphical models, allowing causal estimation with Bayesian statistics, equivalent to the do calculus. Elucidating the connections between these approaches is a key step toward enabling the insights provided by each to be combined to solve real problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
135,513
1711.05561
A Stochastic Resource-Sharing Network for Electric Vehicle Charging
We consider a distribution grid used to charge electric vehicles such that voltage drops stay bounded. We model this as a class of resource-sharing networks, known as bandwidth-sharing networks in the communication network literature. We focus on resource-sharing networks that are driven by a class of greedy control rules that can be implemented in a decentralized fashion. For a large number of such control rules, we can characterize the performance of the system by a fluid approximation. This leads to a set of dynamic equations that take into account the stochastic behavior of EVs. We show that the invariant point of these equations is unique and can be computed by solving a specific ACOPF problem, which admits an exact convex relaxation. We illustrate our findings with a case study using the SCE 47-bus network and several special cases that allow for explicit computations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
84,600
2311.05221
Let's Get the FACS Straight -- Reconstructing Obstructed Facial Features
The human face is one of the most crucial parts in interhuman communication. Even when parts of the face are hidden or obstructed the underlying facial movements can be understood. Machine learning approaches often fail in that regard due to the complexity of the facial structures. To alleviate this problem a common approach is to fine-tune a model for such a specific application. However, this is computationally intensive and might have to be repeated for each desired analysis task. In this paper, we propose to reconstruct obstructed facial parts to avoid the task of repeated fine-tuning. As a result, existing facial analysis methods can be used without further changes with respect to the data. In our approach, the restoration of facial features is interpreted as a style transfer task between different recording setups. By using the CycleGAN architecture the requirement of matched pairs, which is often hard to fulfill, can be eliminated. To prove the viability of our approach, we compare our reconstructions with real unobstructed recordings. We created a novel data set in which 36 test subjects were recorded both with and without 62 surface electromyography sensors attached to their faces. In our evaluation, we feature typical facial analysis tasks, like the computation of Facial Action Units and the detection of emotions. To further assess the quality of the restoration, we also compare perceptional distances. We can show that scores similar to the videos without obstructing sensors can be achieved.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
406,516
2202.10613
Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces
Bayesian learning using Gaussian processes provides a foundational framework for making decisions in a manner that balances what is known with what could be learned by gathering data. In this dissertation, we develop techniques for broadening the applicability of Gaussian processes. This is done in two ways. Firstly, we develop pathwise conditioning techniques for Gaussian processes, which allow one to express posterior random functions as prior random functions plus a dependent update term. We introduce a wide class of efficient approximations built from this viewpoint, which can be randomly sampled once in advance, and evaluated at arbitrary locations without any subsequent stochasticity. This key property improves efficiency and makes it simpler to deploy Gaussian process models in decision-making settings. Secondly, we develop a collection of Gaussian process models over non-Euclidean spaces, including Riemannian manifolds and graphs. We derive fully constructive expressions for the covariance kernels of scalar-valued Gaussian processes on Riemannian manifolds and graphs. Building on these ideas, we describe a formalism for defining vector-valued Gaussian processes on Riemannian manifolds. The introduced techniques allow all of these models to be trained using standard computational methods. In total, these contributions make Gaussian processes easier to work with and allow them to be used within a wider class of domains in an effective and principled manner. This, in turn, makes it possible to potentially apply Gaussian processes to novel decision-making settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,594
1512.02548
A Stabilised Nodal Spectral Element Method for Fully Nonlinear Water Waves
We present an arbitrary-order spectral element method for general-purpose simulation of non-overturning water waves, described by fully nonlinear potential theory. The method can be viewed as a high-order extension of the classical finite element method proposed by Cai et al (1998) \cite{CaiEtAl1998}, although the numerical implementation differs greatly. Features of the proposed spectral element method include: nodal Lagrange basis functions, a general quadrature-free approach and gradient recovery using global $L^2$ projections. The quartic nonlinear terms present in the Zakharov form of the free surface conditions can cause severe aliasing problems and consequently numerical instability for marginally resolved or very steep waves. We show how the scheme can be stabilised through a combination of over-integration of the Galerkin projections and a mild spectral filtering on a per element basis. This effectively removes any aliasing driven instabilities while retaining the high-order accuracy of the numerical scheme. The additional computational cost of the over-integration is found insignificant compared to the cost of solving the Laplace problem. The model is applied to several benchmark cases in two dimensions. The results confirm the high order accuracy of the model (exponential convergence), and demonstrate the potential for accuracy and speedup. The results of numerical experiments are in excellent agreement with both analytical and experimental results for strongly nonlinear and irregular dispersive wave propagation. The benefit of using a high-order -- possibly adapted -- spatial discretization for accurate water wave propagation over long times and distances is particularly attractive for marine hydrodynamics applications.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
49,948
2404.13649
Distributional Principal Autoencoders
Dimension reduction techniques usually lose information in the sense that reconstructed data are not identical to the original data. However, we argue that it is possible to have reconstructed data identically distributed as the original data, irrespective of the retained dimension or the specific mapping. This can be achieved by learning a distributional model that matches the conditional distribution of data given its low-dimensional latent variables. Motivated by this, we propose Distributional Principal Autoencoder (DPA) that consists of an encoder that maps high-dimensional data to low-dimensional latent variables and a decoder that maps the latent variables back to the data space. For reducing the dimension, the DPA encoder aims to minimise the unexplained variability of the data with an adaptive choice of the latent dimension. For reconstructing data, the DPA decoder aims to match the conditional distribution of all data that are mapped to a certain latent value, thus ensuring that the reconstructed data retains the original data distribution. Our numerical results on climate data, single-cell data, and image benchmarks demonstrate the practical feasibility and success of the approach in reconstructing the original distribution of the data. DPA embeddings are shown to preserve meaningful structures of data such as the seasonal cycle for precipitations and cell types for gene expression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
448,381
2407.09732
Speech Slytherin: Examining the Performance and Efficiency of Mamba for Speech Separation, Recognition, and Synthesis
It is too early to conclude that Mamba is a better alternative to transformers for speech before comparing Mamba with transformers in terms of both performance and efficiency in multiple speech-related tasks. To reach this conclusion, we propose and evaluate three models for three tasks: Mamba-TasNet for speech separation, ConMamba for speech recognition, and VALL-M for speech synthesis. We compare them with transformers of similar sizes in performance, memory, and speed. Our Mamba or Mamba-transformer hybrid models show comparable or higher performance than their transformer counterparts: Sepformer, Conformer, and VALL-E. They are more efficient than transformers in memory and speed for speech longer than a threshold duration, inversely related to the resolution of a speech token. Mamba for separation is the most efficient, and Mamba for recognition is the least. Further, we show that Mamba is not more efficient than transformer for speech shorter than the threshold duration and performs worse in models that require joint modeling of text and speech, such as cross or masked attention of two inputs. Therefore, we argue that the superiority of Mamba or transformer depends on particular problems and models. Code available at https://github.com/xi-j/Mamba-TasNet and https://github.com/xi-j/Mamba-ASR.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
472,699
1107.1128
AISMOTIF-An Artificial Immune System for DNA Motif Discovery
Discovery of transcription factor binding sites is a much-explored and still-active area of research in functional genomics. Many computational tools have been developed for finding motifs, and each has its own advantages as well as disadvantages. Most of these algorithms need prior knowledge about the data to construct background models. However, there is not a single technique that can be considered best for finding regulatory motifs. This paper proposes an artificial immune system based algorithm for finding the transcription factor binding sites or motifs and two new weighted scores for motif evaluation. The algorithm is enumerative, but sufficient pruning of the pattern search space has been incorporated using immune system concepts. The performance of AISMOTIF has been evaluated by comparing it with eight state-of-the-art composite motif discovery algorithms, and we found that AISMOTIF predicts known motifs as well as new motifs from the benchmark dataset without any prior knowledge about the data.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
11,171
2311.06379
DeMuX: Data-efficient Multilingual Learning
We consider the task of optimally fine-tuning pre-trained multilingual models, given small amounts of unlabelled target data and an annotation budget. In this paper, we introduce DeMuX, a framework that prescribes the exact data-points to label from vast amounts of unlabelled multilingual data, having unknown degrees of overlap with the target set. Unlike most prior works, our end-to-end framework is language-agnostic, accounts for model representations, and supports multilingual target configurations. Our active learning strategies rely upon distance and uncertainty measures to select task-specific neighbors that are most informative to label, given a model. DeMuX outperforms strong baselines in 84% of the test cases, in the zero-shot setting of disjoint source and target language sets (including multilingual target pools), across three models and four tasks. Notably, in low-budget settings (5-100 examples), we observe gains of up to 8-11 F1 points for token-level tasks, and 2-5 F1 for complex tasks. Our code is released here: https://github.com/simran-khanuja/demux.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
406,914
1904.07837
Predicting GNSS satellite visibility from dense point clouds
To help future mobile agents plan their movement in harsh environments, a predictive model has been designed to determine what areas would be favorable for Global Navigation Satellite System (GNSS) positioning. The model is able to predict the number of viable satellites for a GNSS receiver, based on a 3D point cloud map and a satellite constellation. Both occlusion and absorption effects of the environment are considered. A rugged mobile platform was designed to collect data in order to generate the point cloud maps. It was deployed during the Canadian winter, known for large amounts of snow and extremely low temperatures. The test environments include a highly dense boreal forest and a university campus with high buildings. The experimental results indicate that the model performs well in both structured and unstructured environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
127,897
2307.09944
ProtoCaps: A Fast and Non-Iterative Capsule Network Routing Method
Capsule Networks have emerged as a powerful class of deep learning architectures, known for robust performance with relatively few parameters compared to Convolutional Neural Networks (CNNs). However, their inherent efficiency is often overshadowed by their slow, iterative routing mechanisms which establish connections between Capsule layers, posing computational challenges that result in an inability to scale. In this paper, we introduce a novel, non-iterative routing mechanism, inspired by trainable prototype clustering. This innovative approach aims to mitigate computational complexity, while retaining, if not enhancing, performance efficacy. Furthermore, we harness a shared Capsule subspace, negating the need to project each lower-level Capsule to each higher-level Capsule, thereby significantly reducing memory requisites during training. Our approach demonstrates superior results compared to the current best non-iterative Capsule Network, including on the Imagewoof dataset, which is too computationally demanding for iterative approaches to handle efficiently. Our findings underscore the potential of our proposed methodology in enhancing the operational efficiency and performance of Capsule Networks, paving the way for their application in increasingly complex computational scenarios. Code is available at https://github.com/mileseverett/ProtoCaps.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
380,360
1901.00190
End-to-End Performance Optimization in Hybrid Molecular and Electromagnetic Communications
Telemedicine refers to the use of information and communication technology to assist with medical information and services. In health care applications, highly reliable communication links between the health care provider and the desired destination in the human body play a central role in designing an end-to-end (E2E) telemedicine system. In advanced health care applications, e.g., drug delivery, molecular communication becomes a major building block in bio-nano-medical applications. In this paper, an E2E communication link consisting of an electromagnetic and a molecular link is investigated. This paradigm is crucial when the body is a part of the communication system. Based on the quality of service (QoS) metrics, we present a closed-form expression for the E2E BER of the combination of molecular and wireless electromagnetic communications. Next, we formulate an optimization problem with the aim of minimizing the E2E BER of the system, to obtain the optimal symbol durations for EC and DMC subject to the delivery-time constraint imposed by telemedicine services. The proposed problem is solved by an iterative algorithm based on the bisection method. Also, we study the impact of system parameters, including the drift velocity and the detection threshold at the receiver in molecular communication, on the performance of the system. Numerical results show that the proposed method attains the minimum E2E bit error probability by selecting appropriate symbol durations for the electromagnetic and molecular communications.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
117,713
1502.04617
Deep Transform: Error Correction via Probabilistic Re-Synthesis
Errors in data are usually unwelcome and so some means to correct them is useful. However, it is difficult to define, detect or correct errors in an unsupervised way. Here, we train a deep neural network to re-synthesize its inputs at its output layer for a given class of data. We then exploit the fact that this abstract transformation, which we call a deep transform (DT), inherently rejects information (errors) existing outside of the abstract feature space. Using the DT to perform probabilistic re-synthesis, we demonstrate the recovery of data that has been subject to extreme degradation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
40,285
2401.10590
Adversarial Robustness of Link Sign Prediction in Signed Graphs
Signed graphs serve as fundamental data structures for representing positive and negative relationships in social networks, with signed graph neural networks (SGNNs) emerging as the primary tool for their analysis. Our investigation reveals that balance theory, while essential for modeling signed relationships in SGNNs, inadvertently introduces exploitable vulnerabilities to black-box attacks. To demonstrate this vulnerability, we propose balance-attack, a novel adversarial strategy specifically designed to compromise graph balance degree, and develop an efficient heuristic algorithm to solve the associated NP-hard optimization problem. While existing approaches attempt to restore attacked graphs through balance learning techniques, they face a critical challenge we term "Irreversibility of Balance-related Information," where restored edges fail to align with original attack targets. To address this limitation, we introduce Balance Augmented-Signed Graph Contrastive Learning (BA-SGCL), an innovative framework that combines contrastive learning with balance augmentation techniques to achieve robust graph representations. By maintaining high balance degree in the latent space, BA-SGCL effectively circumvents the irreversibility challenge and enhances model resilience. Extensive experiments across multiple SGNN architectures and real-world datasets demonstrate both the effectiveness of our proposed balance-attack and the superior robustness of BA-SGCL, advancing the security and reliability of signed graph analysis in social networks. Datasets and codes of the proposed framework are at the github repository https://anonymous.4open.science/r/BA-SGCL-submit-DF41/.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
422,701
2010.03170
Modeling and Prediction of Rigid Body Motion with Planar Non-Convex Contact
We present a principled method for motion prediction via dynamic simulation for rigid bodies in intermittent contact with each other, where the contact region is a planar non-convex contact patch. Such methods are useful in planning and control for robotic manipulation. The planar non-convex contact patch can be either a topologically connected set or a disconnected set. Most work in rigid body dynamic simulation assumes that the contact between objects is a point contact, which may not be valid in many applications. In this paper, by using the convex hull of the contact patch, we build on our recent work on simulating rigid bodies with convex contact patches to simulate the motion of objects with planar non-convex contact patches. We formulate a discrete-time mixed complementarity problem in which we solve the contact detection and the integration of the equations of motion simultaneously. We solve for the equivalent contact point (ECP) and contact impulse of each contact patch simultaneously along with the state, i.e., the configuration and velocity of the objects. We prove that although we represent a patch contact by an equivalent point, our model for enforcing non-penetration constraints ensures that there is no artificial penetration between the contacting rigid bodies. We provide empirical evidence to show that our method can seamlessly capture transitions among different contact modes, such as patch contact and multiple or single point contact.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
199,307
1504.01101
Private Data Transfer over a Broadcast Channel
We study the following private data transfer problem: Alice has a database of files. Bob and Cathy want to access a file each from this database (which may or may not be the same file), but each of them wants to ensure that their choices of file do not get revealed even if Alice colludes with the other user. Alice, on the other hand, wants to make sure that each of Bob and Cathy does not learn any more information from the database than the files they demand (the identities of which will be unknown to her). Moreover, they should not learn any information about the other files even if they collude. It turns out that it is impossible to accomplish this if Alice, Bob, and Cathy have access only to private randomness and noiseless communication links. We consider this problem when a binary erasure broadcast channel with independent erasures is available from Alice to Bob and Cathy in addition to a noiseless public discussion channel. We study the file-length-per-broadcast-channel-use rate in the honest-but-curious model. We focus on the case when the database consists of two files, and obtain the optimal rate. We then extend to the case of larger databases, and give upper and lower bounds on the optimal rate.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
41,770