Dataset schema:
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0–541k)
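Each row below carries one boolean column per category, so the active labels are recovered by checking the flags in schema order. A minimal sketch in plain Python (the record dict and its values are hypothetical, mirroring the first row below):

```python
# Order of the boolean category columns as listed in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_categories(record):
    """Return the category labels whose boolean flag is set in a record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Hypothetical record mirroring the first row below (2111.03516).
record = {
    "id": "2111.03516",
    "cs.AI": True,
    "cs.LG": True,
    **{c: False for c in CATEGORY_COLUMNS if c not in ("cs.AI", "cs.LG")},
}
print(active_categories(record))  # ['cs.AI', 'cs.LG']
```

The labels come back in schema order regardless of the dict's key order, since the lookup iterates over CATEGORY_COLUMNS.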
2111.03516
Solving the Class Imbalance Problem Using a Counterfactual Method for Data Augmentation
Learning from class imbalanced datasets poses challenges for many machine learning algorithms. Many real-world domains are, by definition, class imbalanced by virtue of having a majority class that naturally has many more instances than its minority class (e.g. genuine bank transactions occur much more often than fraudulent ones). Many methods have been proposed to solve the class imbalance problem, among the most popular being oversampling techniques (such as SMOTE). These methods generate synthetic instances in the minority class, to balance the dataset, performing data augmentations that improve the performance of predictive machine learning (ML) models. In this paper we advance a novel data augmentation method (adapted from eXplainable AI), that generates synthetic, counterfactual instances in the minority class. Unlike other oversampling techniques, this method adaptively combines existing instances from the dataset, using actual feature-values rather than interpolating values between instances. Several experiments using four different classifiers and 25 datasets are reported, which show that this Counterfactual Augmentation method (CFA) generates useful synthetic data points in the minority class. The experiments also show that CFA is competitive with many other oversampling methods, many of which are variants of SMOTE. The basis for CFA's performance is discussed, along with the conditions under which it is likely to perform better or worse in future tests.
categories: cs.AI, cs.LG
__index_level_0__: 265,198
2009.14068
Graph convolutional regression of cardiac depolarization from sparse endocardial maps
Electroanatomic mapping as routinely acquired in ablation therapy of ventricular tachycardia is the gold standard method to identify the arrhythmogenic substrate. To reduce the acquisition time and still provide maps with high spatial resolution, we propose a novel deep learning method based on graph convolutional neural networks to estimate the depolarization time in the myocardium, given sparse catheter data on the left ventricular endocardium, ECG, and magnetic resonance images. The training set consists of data produced by a computational model of cardiac electrophysiology on a large cohort of synthetically generated geometries of ischemic hearts. The predicted depolarization pattern has good agreement with activation times computed by the cardiac electrophysiology model in a validation set of five swine heart geometries with complex scar and border zone morphologies. The resulting mean absolute error is 8 ms on the entire myocardium when providing 50\% of the endocardial ground truth in over 500 computed depolarization patterns. Furthermore, when considering a complete animal data set with high-density electroanatomic mapping data as reference, the neural network can accurately reproduce the endocardial depolarization pattern, even when a small percentage of measurements are provided as input features (mean absolute error of 7 ms with 50\% of input samples). The results show that the proposed method, trained on synthetically generated data, may generalize to real data.
categories: cs.LG
__index_level_0__: 197,925
2103.13447
DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation
In this paper, we present DRANet, a network architecture that disentangles image representations and transfers the visual attributes in a latent space for unsupervised cross-domain adaptation. Unlike the existing domain adaptation methods that learn associated features sharing a domain, DRANet preserves the distinctiveness of each domain's characteristics. Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images. Then, it adapts the domain by incorporating the transferred style factor into the content factor along with learnable weights specified for each domain. This learning framework allows bi-/multi-directional domain adaptation with a single encoder-decoder network and aligns their domain shift. Additionally, we propose a content-adaptive domain transfer module that helps retain scene structure while transferring style. Extensive experiments show our model successfully separates content-style factors and synthesizes visually pleasing domain-transferred images. The proposed method demonstrates state-of-the-art performance on standard digit classification tasks as well as semantic segmentation tasks.
categories: cs.CV
__index_level_0__: 226,486
2102.07834
One Line To Rule Them All: Generating LO-Shot Soft-Label Prototypes
Increasingly large datasets are rapidly driving up the computational costs of machine learning. Prototype generation methods aim to create a small set of synthetic observations that accurately represent a training dataset but greatly reduce the computational cost of learning from it. Assigning soft labels to prototypes can allow increasingly small sets of prototypes to accurately represent the original training dataset. Although foundational work on `less than one'-shot learning has proven the theoretical plausibility of learning with fewer than one observation per class, developing practical algorithms for generating such prototypes remains an unexplored territory. We propose a novel, modular method for generating soft-label prototypical lines that still maintains representational accuracy even when there are fewer prototypes than the number of classes in the data. In addition, we propose the Hierarchical Soft-Label Prototype k-Nearest Neighbor classification algorithm based on these prototypical lines. We show that our method maintains high classification accuracy while greatly reducing the number of prototypes required to represent a dataset, even when working with severely imbalanced and difficult data. Our code is available at https://github.com/ilia10000/SLkNN.
categories: cs.LG
__index_level_0__: 220,240
2309.10506
Enhancing Open-Domain Table Question Answering via Syntax- and Structure-aware Dense Retrieval
Open-domain table question answering aims to provide answers to a question by retrieving and extracting information from a large collection of tables. Existing studies of open-domain table QA either directly adopt text retrieval methods or consider the table structure only in the encoding layer for table retrieval, which may cause syntactical and structural information loss during table scoring. To address this issue, we propose a syntax- and structure-aware retrieval method for the open-domain table QA task. It provides syntactical representations for the question and uses the structural header and value representations for the tables to avoid the loss of fine-grained syntactical and structural information. Then, a syntactical-to-structural aggregator is used to obtain the matching score between the question and a candidate table by mimicking the human retrieval process. Experimental results show that our method achieves state-of-the-art results on the NQ-tables dataset and substantially outperforms strong baselines on a newly curated open-domain Text-to-SQL dataset.
categories: cs.CL
__index_level_0__: 393,037
1202.0119
Opportunistic Scheduling in Heterogeneous Networks: Distributed Algorithms and System Capacity
In this work, we design and analyze novel distributed scheduling algorithms for multi-user MIMO systems. In particular, we consider algorithms which do not require sending channel state information to a central processing unit, nor do they require communication between the users themselves; yet, we prove their performance closely approximates that of a centrally-controlled system, which is able to schedule the strongest user in each time-slot. Our analysis is based on a novel application of the Point-Process approximation. This technique allows us to examine non-homogeneous cases, such as non-identically distributed users, or handling various QoS considerations, and give exact expressions for the capacity of the system under these schemes, solving analytically problems which to date had been open. Possible applications include, but are not limited to, modern 4G networks such as 3GPP LTE, or random access protocols.
categories: cs.IT, Other
__index_level_0__: 14,044
2311.02438
On the stable Cholesky factorization-based method for the maximum correntropy criterion Kalman filtering
This paper continues the research devoted to the design of numerically stable square-root implementations for the maximum correntropy criterion Kalman filtering (MCC-KF). In contrast to the previously obtained results, here we reveal the first robust (with respect to round-off errors) method within the Cholesky factorization-based approach. The method is formulated in terms of square-root factors of the {\it covariance} matrices, i.e. it belongs to the covariance-type filtering methodology. Additionally, a numerically stable orthogonal transformation is utilized at each iterate of the algorithm for accurate propagation of the Cholesky factors involved. The results of numerical experiments illustrate a superior performance of the novel MCC-KF implementation compared to both the conventional algorithm and its previously published Cholesky-based variant.
categories: cs.SY
__index_level_0__: 405,444
1211.0361
Sketched SVD: Recovering Spectral Features from Compressive Measurements
We consider a streaming data model in which n sensors observe individual streams of data, presented in a turnstile model. Our goal is to analyze the singular value decomposition (SVD) of the matrix of data defined implicitly by the stream of updates. Each column i of the data matrix is given by the stream of updates seen at sensor i. Our approach is to sketch each column of the matrix, forming a "sketch matrix" Y, and then to compute the SVD of the sketch matrix. We show that the singular values and right singular vectors of Y are close to those of X, with small relative error. We also believe that this bound is of independent interest in non-streaming and non-distributed data collection settings. Assuming that the data matrix X is of size Nxn, then with m linear measurements of each column of X, we obtain a smaller matrix Y with dimensions mxn. If m = O(k \epsilon^{-2} (log(1/\epsilon) + log(1/\delta))), where k denotes the rank of X, then with probability at least 1-\delta, the singular values \sigma'_j of Y satisfy the following relative error result (1-\epsilon)^(1/2)<= \sigma'_j/\sigma_j <= (1 + \epsilon)^(1/2) as compared to the singular values \sigma_j of the original matrix X. Furthermore, the right singular vectors v'_j of Y satisfy ||v_j-v_j'||_2 <= min(sqrt{2}, (\epsilon\sqrt{1+\epsilon})/(\sqrt{1-\epsilon}) max_{i\neq j} (\sqrt{2}\sigma_i\sigma_j)/(min_{c\in[-1,1]}(|\sigma^2_i-\sigma^2_j(1+c\epsilon)|))) as compared to the right singular vectors v_j of X. We apply this result to obtain a streaming graph algorithm to approximate the eigenvalues and eigenvectors of the graph Laplacian in the case where the graph has low rank (many connected components).
categories: cs.IT, Other
__index_level_0__: 19,521
2302.00905
4D topology optimization: Integrated optimization of the structure and self-actuation of soft bodies for dynamic motions
Topology optimization is a powerful tool utilized in various fields for structural design. However, its application has primarily been restricted to static or passively moving objects, mainly focusing on hard materials with limited deformations and contact capabilities. Designing soft and actively moving objects, such as soft robots equipped with actuators, poses challenges due to simulating dynamics problems involving large deformations and intricate contact interactions. Moreover, the optimal structure depends on the object's motion, necessitating a simultaneous design approach. To address these challenges, we propose "4D topology optimization," an extension of density-based topology optimization that incorporates the time dimension. This enables the simultaneous optimization of both the structure and self-actuation of soft bodies for specific dynamic tasks. Our method utilizes multi-indexed and hierarchized density variables distributed over the spatiotemporal design domain, representing the material layout, actuator layout, and time-varying actuation. These variables are efficiently optimized using gradient-based methods. Forward and backward simulations of soft bodies are done using the material point method, a Lagrangian-Eulerian hybrid approach, implemented on a recent automatic differentiation framework. We present several numerical examples of self-actuating soft body designs aimed at achieving locomotion, posture control, and rotation tasks. The results demonstrate the effectiveness of our method in successfully designing soft bodies with complex structures and biomimetic movements, benefiting from its high degree of design freedom.
categories: cs.CE
__index_level_0__: 343,400
2401.12564
Graph Contrastive Invariant Learning from the Causal Perspective
Graph contrastive learning (GCL), learning the node representation by contrasting two augmented graphs in a self-supervised way, has attracted considerable attention. GCL is usually believed to learn the invariant representation. However, does this understanding always hold in practice? In this paper, we first study GCL from the perspective of causality. By analyzing GCL with the structural causal model (SCM), we discover that traditional GCL may not well learn the invariant representations due to the non-causal information contained in the graph. How can we fix it and encourage the current GCL to learn better invariant representations? The SCM offers two requirements and motivates us to propose a novel GCL method. Particularly, we introduce the spectral graph augmentation to simulate the intervention upon non-causal factors. Then we design the invariance objective and independence objective to better capture the causal factors. Specifically, (i) the invariance objective encourages the encoder to capture the invariant information contained in causal variables, and (ii) the independence objective aims to reduce the influence of confounders on the causal variables. Experimental results demonstrate the effectiveness of our approach on node classification tasks.
categories: cs.SI, cs.LG
__index_level_0__: 423,428
1611.04967
Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models
Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through our iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.
categories: cs.LG
__index_level_0__: 63,929
2406.13145
Constructing and Evaluating Digital Twins: An Intelligent Framework for DT Development
The development of Digital Twins (DTs) represents a transformative advance for simulating and optimizing complex systems in a controlled digital space. Despite their potential, the challenge of constructing DTs that accurately replicate and predict the dynamics of real-world systems remains substantial. This paper introduces an intelligent framework for the construction and evaluation of DTs, specifically designed to enhance the accuracy and utility of DTs in testing algorithmic performance. We propose a novel construction methodology that integrates deep learning-based policy gradient techniques to dynamically tune the DT parameters, ensuring high fidelity in the digital replication of physical systems. Moreover, the Mean STate Error (MSTE) is proposed as a robust metric for evaluating the performance of algorithms within this digital space. The efficacy of our framework is demonstrated through extensive simulations that show our DT not only accurately mirrors the physical reality but also provides a reliable platform for algorithm evaluation. This work lays a foundation for future research into DT technologies, highlighting pathways for both theoretical enhancements and practical implementations in various industries.
categories: cs.LG, cs.SY
__index_level_0__: 465,714
cmp-lg/9707009
Recognizing Referential Links: An Information Extraction Perspective
We present an efficient and robust reference resolution algorithm in an end-to-end state-of-the-art information extraction system, which must work with a considerably impoverished syntactic analysis of the input sentences. Considering this disadvantage, the basic setup to collect, filter, then order by salience does remarkably well with third-person pronouns, but needs more semantic and discourse information to improve the treatments of other expression types.
categories: cs.CL
__index_level_0__: 536,776
2305.03710
Data Encoding For Healthcare Data Democratisation and Information Leakage Prevention
The lack of data democratization and information leakage from trained models hinder the development and acceptance of robust deep learning-based healthcare solutions. This paper argues that irreversible data encoding can provide an effective solution to achieve data democratization without violating the privacy constraints imposed on healthcare data and clinical models. An ideal encoding framework transforms the data into a new space where it is imperceptible to a manual or computational inspection. However, encoded data should preserve the semantics of the original data such that deep learning models can be trained effectively. This paper hypothesizes the characteristics of the desired encoding framework and then exploits random projections and random quantum encoding to realize this framework for dense and longitudinal or time-series data. Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence, exhibit lesser information leakage from trained models.
categories: cs.LG, cs.CR
__index_level_0__: 362,481
2404.07898
Anomaly Detection in Power Grids via Context-Agnostic Learning
An important tool grid operators use to safeguard against failures, whether naturally occurring or malicious, involves detecting anomalies in the power system SCADA data. In this paper, we aim to solve a real-time anomaly detection problem. Given time-series measurement values coming from a fixed set of sensors on the grid, can we identify anomalies in the network topology or measurement data? Existing methods, primarily optimization-based, mostly use only a single snapshot of the measurement values and do not scale well with the network size. Recent data-driven ML techniques have shown promise by using a combination of current and historical data for anomaly detection but generally do not consider physical attributes like the impact of topology or load/generation changes on sensor measurements and thus cannot accommodate regular context-variability in the historical data. To address this gap, we propose a novel context-aware anomaly detection algorithm, GridCAL, that considers the effect of regular topology and load/generation changes. This algorithm converts the real-time power flow measurements to context-agnostic values, which allows us to analyze measurements coming from different grid contexts in an aggregate fashion, enabling us to derive a unified statistical model that becomes the basis of anomaly detection. Through numerical simulations on networks up to 2383 nodes, we show that our approach is accurate, outperforming state-of-the-art approaches, and is computationally efficient.
categories: cs.LG
__index_level_0__: 446,012
1906.01062
A Curated Image Parameter Dataset from Solar Dynamics Observatory Mission
We provide a large image parameter dataset extracted from the Solar Dynamics Observatory (SDO) mission's AIA instrument, for the period of January 2011 through the current date, with the cadence of six minutes, for nine wavelength channels. The volume of the dataset for each year is just short of 1 TiB. Towards achieving better results in the region classification of active regions and coronal holes, we improve upon the performance of a set of ten image parameters, through an in-depth evaluation of various assumptions that are necessary for calculation of these image parameters. Then, where possible, a method for finding appropriate settings for the parameter calculations was devised, as well as a validation task to show our improved results. In addition, we include comparisons of JP2 and FITS image formats using supervised classification models, by tuning the parameters specific to the format of the images from which they are extracted, and specific to each wavelength. The results of these comparisons show that utilizing JP2 images, which are significantly smaller files, is not detrimental to the region classification task that these parameters were originally intended for. Finally, we compute the tuned parameters on the AIA images and provide a public API (http://dmlab.cs.gsu.edu/dmlabapi) to access the dataset. This dataset can be used in a range of studies on AIA images, such as content-based image retrieval or tracking of solar events, where dimensionality reduction on the images is necessary for feasibility of the tasks.
categories: cs.CV
__index_level_0__: 133,579
2210.15377
Retrieving Users' Opinions on Social Media with Multimodal Aspect-Based Sentiment Analysis
People post their opinions and experiences on social media, yielding rich databases of end-users' sentiments. This paper shows to what extent machine learning can analyze and structure these databases. An automated data analysis pipeline is deployed to provide insights into user-generated content for researchers in other domains. First, the domain expert can select an image and a term of interest. Then, the pipeline uses image retrieval to find all images showing similar content and applies aspect-based sentiment analysis to outline users' opinions about the selected term. As part of an interdisciplinary project between architecture and computer science researchers, an empirical study of Hamburg's Elbphilharmonie was conducted. To this end, we selected 300 thousand posts with the hashtag \enquote{\texttt{hamburg}} from the platform Flickr. Image retrieval methods generated a subset of slightly more than 1.5 thousand images displaying the Elbphilharmonie. We found that these posts mainly convey a neutral or positive sentiment towards it. With this pipeline, we suggest a new semantic computing method that offers novel insights into end-users' opinions, e.g., for architecture domain experts.
categories: cs.AI, cs.IR, cs.LG, cs.CL, cs.CV
__index_level_0__: 326,926
2405.03614
Repairing with Zero Skip Cost
To measure repair latency at helper nodes, we introduce a new metric called skip cost that quantifies the number of contiguous sections accessed on a disk. We provide explicit constructions of zigzag codes and fractional repetition codes that incur zero skip cost.
categories: cs.IT
__index_level_0__: 452,242
1211.1799
Algorithm for Missing Values Imputation in Categorical Data with Use of Association Rules
This paper presents an algorithm for missing-value imputation in categorical data. The algorithm is based on association rules and is presented in three variants. Experiments show better accuracy of missing-value imputation using the algorithm than using the most common attribute value.
categories: cs.LG
__index_level_0__: 19,631
2011.12706
Quantized Neural Networks for Radar Interference Mitigation
Radar sensors are crucial for environment perception of driver assistance systems as well as autonomous vehicles. Key performance factors are weather resistance and the possibility to directly measure velocity. With a rising number of radar sensors and the so far unregulated automotive radar frequency band, mutual interference is inevitable and must be dealt with. Algorithms and models operating on radar data in early processing stages are required to run directly on specialized hardware, i.e. the radar sensor. This specialized hardware typically has strict resource-constraints, i.e. a low memory capacity and low computational power. Convolutional Neural Network (CNN)-based approaches for denoising and interference mitigation yield promising results for radar processing in terms of performance. However, these models typically contain millions of parameters, stored in hundreds of megabytes of memory, and require additional memory during execution. In this paper we investigate quantization techniques for CNN-based denoising and interference mitigation of radar signals. We analyze the quantization potential of different CNN-based model architectures and sizes by considering (i) quantized weights and (ii) piecewise constant activation functions, which results in reduced memory requirements for model storage and during the inference step respectively.
categories: cs.LG
__index_level_0__: 208,247
1504.04970
Information-Theoretic Limits of Matrix Completion
We propose an information-theoretic framework for matrix completion. The theory goes beyond the low-rank structure and applies to general matrices of "low description complexity". Specifically, we consider $m\times n$ random matrices $\mathbf{X}$ of arbitrary distribution (continuous, discrete, discrete-continuous mixture, or even singular). With $\mathcal{S}$ an $\varepsilon$-support set of $\mathbf{X}$, i.e., $\mathrm{P}[\mathbf{X}\in\mathcal{S}]\geq 1-\varepsilon$, and $\underline{\mathrm{dim}}_\mathrm{B}(\mathcal{S})$ denoting the lower Minkowski dimension of $\mathcal{S}$, we show that $k> \underline{\mathrm{dim}}_\mathrm{B}(\mathcal{S})$ trace inner product measurements with measurement matrices $A_i$, suffice to recover $\mathbf{X}$ with probability of error at most $\varepsilon$. The result holds for Lebesgue a.a. $A_i$ and does not need incoherence between the $A_i$ and the unknown matrix $\mathbf{X}$. We furthermore show that $k> \underline{\mathrm{dim}}_\mathrm{B}(\mathcal{S})$ measurements also suffice to recover the unknown matrix $\mathbf{X}$ from measurements taken with rank-one $A_i$, again this applies to a.a. rank-one $A_i$. Rank-one measurement matrices are attractive as they require less storage space than general measurement matrices and can be applied faster. Particularizing our results to the recovery of low-rank matrices, we find that $k>(m+n-r)r$ measurements are sufficient to recover matrices of rank at most $r$. Finally, we construct a class of rank-$r$ matrices that can be recovered with arbitrarily small probability of error from $k<(m+n-r)r$ measurements.
categories: cs.IT
__index_level_0__: 42,214
1405.1573
Evolutionary dynamics of cooperation on interdependent networks with Prisoner's Dilemma and Snowdrift Game
The world in which we are living is a huge network of networks and should be described by interdependent networks. The interdependence between networks significantly affects the evolutionary dynamics of cooperation on them. Meanwhile, due to the diversity and complexity of social and biological systems, players on different networks may not interact with each other in the same way, which should be described by multiple models in evolutionary game theory, such as the Prisoner's Dilemma and Snowdrift Game. We therefore study the evolutionary dynamics of cooperation on two interdependent networks playing different games respectively. We clearly evidence that, with the increment of network interdependence, the evolution of cooperation is dramatically promoted on the network playing Prisoner's Dilemma. The cooperation level of the network playing Snowdrift Game decreases correspondingly, although the reduction is almost invisible. In particular, there exists an optimal intermediate region of network interdependence maximizing the growth rate of the evolution of cooperation on the network playing Prisoner's Dilemma. Remarkably, players in contact with the other network have an advantage in the evolution of cooperation over the others on the same network.
categories: cs.SI
__index_level_0__: 32,896
2304.00377
A Survey on Personalized Affective Computing in Human-Machine Interaction
In computing, the aim of personalization is to train a model that caters to a specific individual or group of people by optimizing one or more performance metrics and adhering to specific constraints. In this paper, we discuss the need for personalization in affective and personality computing (hereinafter referred to as affective computing). We present a survey of state-of-the-art approaches for personalization in affective computing. Our review spans training techniques and objectives towards the personalization of affective computing models. We group existing approaches into seven categories: (1) Target-specific Models, (2) Group-specific Models, (3) Weighting-based Approaches, (4) Fine-tuning Approaches, (5) Multitask Learning, (6) Generative-based Models, and (7) Feature Augmentation. Additionally, we provide a statistical meta-analysis of the surveyed literature, analyzing the prevalence of different affective computing tasks, interaction modes, interaction contexts, and the level of personalization among the surveyed works. Based on that, we provide a road-map for those who are interested in exploring this direction.
categories: cs.HC, cs.AI, cs.LG, cs.CV
__index_level_0__: 355,659
2102.12777
IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation
This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we carefully design several simple and effective approaches adapted to the backbone model (RoBERTa). Specifically, we formalize the subtasks into the multiple-choice question answering format and add special tokens to abstract concepts; the final question-answering prediction is then taken as the result of each subtask. Additionally, we employ several fine-tuning tricks to improve the performance. Experimental results show that our approaches achieve significant performance improvements over the baseline systems. Our approaches rank eighth on subtask-1 and tenth on subtask-2.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
221,845
2006.09142
Cogradient Descent for Bilinear Optimization
Conventional learning methods simplify the bilinear model by regarding two intrinsically coupled factors independently, which degrades the optimization procedure. One reason lies in the insufficient training due to the asynchronous gradient descent, which results in vanishing gradients for the coupled variables. In this paper, we introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem, based on a theoretical framework to coordinate the gradient of hidden variables via a projection function. We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent to facilitate the optimization procedure. Our algorithm is applied to solve problems with one variable under the sparsity constraint, which is widely used in the learning paradigm. We validate our CoGD considering an extensive set of applications including image reconstruction, inpainting, and network pruning. Experiments show that it improves the state-of-the-art by a significant margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
182,445
1712.10095
Blind Identification of Fully Observed Linear Time-Varying Systems via Sparse Recovery
Discrete-time linear time-varying (LTV) systems form a powerful class of models to approximate complex dynamical systems with nonlinear dynamics for the purpose of analysis, design and control. Motivated by inference of spatio-temporal dynamics in breast cancer research, we propose a method to efficiently solve an identification problem for a specific class of discrete-time LTV systems, in which the states are fully observed and there is no access to system inputs. In addition, it is assumed that we do not know on which states the inputs act, which can change between time steps, and that the total number of inputs is sparse over all states and over time. The problem is formulated as a compressive sensing problem, which incorporates the effect of measurement noise and which has a solution with a partially sparse support. We derive sufficient conditions for the unique recovery of the system model and input values, which lead to practical conditions on the number of experiments and rank conditions on system outputs. Synthetic experiments analyze the method's sensitivity to noise for randomly generated models.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
87,450
2411.00904
Similarity and Dissimilarity Guided Co-association Matrix Construction for Ensemble Clustering
Ensemble clustering aggregates multiple weak clusterings to achieve a more accurate and robust consensus result. The Co-Association matrix (CA matrix) based method is the mainstream ensemble clustering approach that constructs the similarity relationships between sample pairs according to the weak clustering partitions to generate the final clustering result. However, the existing methods neglect that the quality of a cluster is related to its size, i.e., a cluster with smaller size tends to have higher accuracy. Moreover, they also do not consider the valuable dissimilarity information in the base clusterings, which can reflect the varying importance of sample pairs that are completely disconnected. To this end, we propose the Similarity and Dissimilarity Guided Co-association matrix (SDGCA) to achieve ensemble clustering. First, we introduce normalized ensemble entropy to estimate the quality of each cluster, and construct a similarity matrix based on this estimation. Then, we employ the random walk to explore high-order proximity of base clusterings to construct a dissimilarity matrix. Finally, the adversarial relationship between the similarity matrix and the dissimilarity matrix is utilized to construct a promoted CA matrix for ensemble clustering. We compared our method with 13 state-of-the-art methods across 12 datasets, and the results demonstrated the superior clustering ability and robustness of the proposed approach. The code is available at https://github.com/xuz2019/SDGCA.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
504,829
2108.02167
Acyclic and Cyclic Reversing Computations in Petri Nets
Reversible computations constitute an unconventional form of computing where any sequence of performed operations can be undone by executing them in reverse order at any point during a computation. It has been attracting increasing attention as it provides opportunities for low-power computation, while being essential or desirable in various applications. In recent work, we have proposed a structural way of translating Reversing Petri Nets (RPNs) - a type of Petri nets that embeds reversible computation - to bounded Coloured Petri Nets (CPNs) - an extension of traditional Petri Nets, where tokens carry data values. Three reversing semantics are possible in RPNs: backtracking (reversing of the most recently executed action), causal reversing (an action can be reversed only when all its effects have been undone) and out-of-causal-order reversing (any previously performed action can be reversed). In this paper, we extend the RPN to CPN translation with formal proofs of correctness. Moreover, the possibility of introducing cycles into RPNs is discussed. We analyze which types of cycles could be allowed in RPNs to ensure consistency with the current semantics. It emerged that the most interesting case related to cycles in RPNs occurs in causal semantics, where various interpretations of dependency result in different net behaviour during reversing. Three definitions of dependence are presented and discussed.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
249,239
2308.11854
Finding the Perfect Fit: Applying Regression Models to ClimateBench v1.0
Climate projection using data-driven machine learning models acting as emulators is one of the prevailing areas of research, enabling policy makers to make informed decisions. The use of machine learning emulators as surrogates for computationally heavy GCM simulators reduces time and carbon footprints. In this direction, ClimateBench [1] is a recently curated benchmarking dataset for evaluating the performance of machine learning emulators designed for climate data. Recent studies have reported that, despite being considered fundamental, regression models offer several advantages pertaining to climate emulation. In particular, by leveraging the kernel trick, regression models can capture complex relationships and improve their predictive capabilities. This study focuses on evaluating non-linear regression models using the aforementioned dataset. Specifically, we compare the emulation capabilities of three non-linear regression models. Among them, the Gaussian Process Regressor demonstrates best-in-class performance against standard evaluation metrics used for climate field emulation studies. However, Gaussian Process Regression suffers from being computationally resource-hungry in terms of space and time complexity. Alternatively, Support Vector and Kernel Ridge models also deliver competitive results, but there are certain trade-offs to be addressed. Additionally, we are actively investigating the performance of composite kernels and techniques such as variational inference to further enhance the performance of the regression models and effectively model complex non-linear patterns, including phenomena like precipitation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
387,303
1501.04131
Structure Learning and Statistical Estimation in Distribution Networks - Part I
Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two part paper, inspired by proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient -- polynomial in time -- which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
39,326
1701.00939
Dense Associative Memory is Robust to Adversarial Inputs
Deep neural networks (DNN) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible for human vision, perturbation, so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNN and humans classify patterns, and raise a question of designing learning algorithms that more accurately mimic human perception compared to the existing methods. Our paper examines these questions within the framework of Dense Associative Memory (DAM) models. These models are defined by the energy function, with higher order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes that are separated by that decision boundary. Third, adversarial images constructed by models with small power of the interaction vertex, which are equivalent to DNN with rectified linear units (ReLU), fail to transfer to and fool the models with higher order interactions. This opens up a possibility to use higher order models for detecting and stopping malicious adversarial attacks. The presented results suggest that DAM with higher order energy functions are closer to human visual perception than DNN with ReLUs.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
66,336
2403.06908
FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization
3D Gaussian splatting has achieved very impressive performance in real-time novel view synthesis. However, it often suffers from over-reconstruction during Gaussian densification where high-variance image regions are covered by a few large Gaussians only, leading to blur and artifacts in the rendered images. We design a progressive frequency regularization (FreGS) technique to tackle the over-reconstruction issue within the frequency space. Specifically, FreGS performs coarse-to-fine Gaussian densification by exploiting low-to-high frequency components that can be easily extracted with low-pass and high-pass filters in the Fourier space. By minimizing the discrepancy between the frequency spectrum of the rendered image and the corresponding ground truth, it achieves high-quality Gaussian densification and alleviates the over-reconstruction of Gaussian splatting effectively. Experiments over multiple widely adopted benchmarks (e.g., Mip-NeRF360, Tanks-and-Temples and Deep Blending) show that FreGS achieves superior novel view synthesis and outperforms the state-of-the-art consistently.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
436,661
2103.03704
Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features
Intensive research has been conducted on the verification and validation of deep neural networks (DNNs), aiming to understand if, and how, DNNs can be applied to safety critical applications. However, existing verification and validation techniques are limited by their scalability, over both the size of the DNN and the size of the dataset. In this paper, we propose a novel abstraction method which abstracts a DNN and a dataset into a Bayesian network (BN). We make use of dimensionality reduction techniques to identify hidden features that have been learned by hidden layers of the DNN, and associate each hidden feature with a node of the BN. On this BN, we can conduct probabilistic inference to understand the behaviours of the DNN processing data. More importantly, we can derive a runtime monitoring approach to detect in operational time rare inputs and covariate shift of the input data. We can also adapt existing structural coverage-guided testing techniques (i.e., based on low-level elements of the DNN such as neurons), in order to generate test cases that better exercise hidden features. We implement and evaluate the BN abstraction technique using our DeepConcolic tool available at https://github.com/TrustAI/DeepConcolic.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
223,381
2110.11199
Asynchronous Decentralized Distributed Training of Acoustic Models
Large-scale distributed training of deep acoustic models plays an important role in today's high-performance automatic speech recognition (ASR). In this paper we investigate a variety of asynchronous decentralized distributed training strategies based on data parallel stochastic gradient descent (SGD) to show their superior performance over the commonly-used synchronous distributed training via allreduce, especially when dealing with large batch sizes. Specifically, we study three variants of asynchronous decentralized parallel SGD (ADPSGD), namely, fixed and randomized communication patterns on a ring as well as a delay-by-one scheme. We introduce a mathematical model of ADPSGD, give its theoretical convergence rate, and compare the empirical convergence behavior and straggler resilience properties of the three variants. Experiments are carried out on an IBM supercomputer for training deep long short-term memory (LSTM) acoustic models on the 2000-hour Switchboard dataset. Recognition and speedup performance of the proposed strategies are evaluated under various training configurations. We show that ADPSGD with fixed and randomized communication patterns cope well with slow learners. When learners are equally fast, ADPSGD with the delay-by-one strategy has the fastest convergence with large batches. In particular, using the delay-by-one strategy, we can train the acoustic model in less than 2 hours using 128 V100 GPUs with competitive word error rates.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
262,391
2109.01401
CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models
We propose CX-ToM, short for counterfactual explanations with theory-of-mind, a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN). In contrast to the current methods in XAI that generate explanations as a single-shot response, we pose explanation as an iterative communication process, i.e. dialog, between the machine and human user. More concretely, our CX-ToM framework generates a sequence of explanations in a dialog by mediating the differences between the minds of machine and human user. To do this, we use Theory of Mind (ToM), which helps us in explicitly modeling the human's intention, the machine's mind as inferred by the human, as well as the human's mind as inferred by the machine. Moreover, most state-of-the-art XAI frameworks provide attention (or heat map) based explanations. In our work, we show that these attention based explanations are not sufficient for increasing human trust in the underlying CNN model. In CX-ToM, we instead use counterfactual explanations called fault-lines which we define as follows: given an input image I for which a CNN classification model M predicts class c_pred, a fault-line identifies the minimal semantic-level features (e.g., stripes on zebra, pointed ears of dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the iterative, conceptual and counterfactual nature of CX-ToM explanations, our framework is practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, demonstrating that our CX-ToM significantly outperforms the state-of-the-art explainable AI models.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
253,427
2312.15949
HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork
Fast and accurate predictions for complex physical dynamics are a significant challenge across various applications. Real-time prediction on resource-constrained hardware is even more crucial in real-world problems. The deep operator network (DeepONet) has recently been proposed as a framework for learning nonlinear mappings between function spaces. However, the DeepONet requires many parameters and has a high computational cost when learning operators, particularly those with complex (discontinuous or non-smooth) target functions. This study proposes HyperDeepONet, which uses the expressive power of the hypernetwork to enable the learning of a complex operator with a smaller set of parameters. The DeepONet and its variant models can be thought of as a method of injecting the input function information into the target function. From this perspective, these models can be viewed as a particular case of HyperDeepONet. We analyze the complexity of DeepONet and conclude that HyperDeepONet needs relatively lower complexity to obtain the desired accuracy for operator learning. HyperDeepONet successfully learned various operators with fewer computational resources compared to other benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
418,206
1607.06783
Can DMD obtain a Scene Background in Color?
A background model describes a scene without any foreground objects and has a number of applications, ranging from video surveillance to computational photography. Recent studies have introduced the method of Dynamic Mode Decomposition (DMD) for robustly separating video frames into a background model and foreground components. While the method introduced operates by converting color images to grayscale, we in this study propose a technique to obtain the background model in the color domain. The effectiveness of our technique is demonstrated using a publicly available Scene Background Initialisation (SBI) dataset. Our results both qualitatively and quantitatively show that DMD can successfully obtain a colored background model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,923
2008.01950
Area-wide traffic signal control based on a deep graph Q-Network (DGQN) trained in an asynchronous manner
Reinforcement learning (RL) algorithms have been widely applied in traffic signal studies. There are, however, several problems in jointly controlling traffic lights for a large transportation network. First, the action space exponentially explodes as the number of intersections to be jointly controlled increases. Although a multi-agent RL algorithm has been used to solve the curse of dimensionality, this neither guaranteed a global optimum, nor could it break the ties between joint actions. The problem was circumvented by revising the output structure of a deep Q-network (DQN) within the framework of a single-agent RL algorithm. Second, when mapping traffic states into an action value, it is difficult to consider spatio-temporal correlations over a large transportation network. A deep graph Q-network (DGQN) was devised to efficiently accommodate spatio-temporal dependencies on a large scale. Finally, training a RL model to jointly control traffic lights in a large transportation network requires much time to converge. An asynchronous update methodology was devised for a DGQN to quickly reach an optimal policy. Using these three remedies, a DGQN succeeded in jointly controlling the traffic lights in a large transportation network in Seoul. This approach outperformed other state-of-the-art RL algorithms as well as an actual fixed-signal operation.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
190,479
2407.09327
Sina at FigNews 2024: Multilingual Datasets Annotated with Bias and Propaganda
The proliferation of bias and propaganda on social media is an increasingly significant concern, leading to the development of techniques for automatic detection. This article presents a multilingual corpus of 12,000 Facebook posts fully annotated for bias and propaganda. The corpus was created as part of the FigNews 2024 Shared Task on News Media Narratives for framing the Israeli War on Gaza. It covers various events during the War from October 7, 2023 to January 31, 2024. The corpus comprises 12,000 posts in five languages (Arabic, Hebrew, English, French, and Hindi), with 2,400 posts for each language. The annotation process involved 10 graduate students specializing in Law. The Inter-Annotator Agreement (IAA) was used to evaluate the annotations of the corpus, with an average IAA of 80.8% for bias and 70.15% for propaganda annotations. Our team was ranked among the best-performing teams in both Bias and Propaganda subtasks. The corpus is open-source and available at https://sina.birzeit.edu/fada
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
472,530
1611.09803
InterpoNet, A brain inspired neural network for optical flow dense interpolation
Sparse-to-dense interpolation for optical flow is a fundamental phase in the pipeline of most of the leading optical flow estimation algorithms. The current state-of-the-art method for interpolation, EpicFlow, is a local average method based on an edge aware geodesic distance. We propose a new data-driven sparse-to-dense interpolation algorithm based on a fully convolutional network. We draw inspiration from the filling-in process in the visual cortex and introduce lateral dependencies between neurons and multi-layer supervision into our learning process. We also show the importance of the image contour to the learning process. Our method is robust and outperforms EpicFlow on competitive optical flow benchmarks with several underlying matching algorithms. This leads to state-of-the-art performance on the Sintel and KITTI 2012 benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,718
2410.11165
Toward Efficient Kernel-Based Solvers for Nonlinear PDEs
This paper introduces a novel kernel learning framework toward efficiently solving nonlinear partial differential equations (PDEs). In contrast to the state-of-the-art kernel solver that embeds differential operators within kernels, posing challenges with a large number of collocation points, our approach eliminates these operators from the kernel. We model the solution using a standard kernel interpolation form and differentiate the interpolant to compute the derivatives. Our framework obviates the need for complex Gram matrix construction between solutions and their derivatives, allowing for a straightforward implementation and scalable computation. As an instance, we allocate the collocation points on a grid and adopt a product kernel, which yields a Kronecker product structure in the interpolation. This structure enables us to avoid computing the full Gram matrix, reducing costs and scaling efficiently to a large number of collocation points. We provide a proof of the convergence and rate analysis of our method under appropriate regularity assumptions. In numerical experiments, we demonstrate the advantages of our method in solving several benchmark PDEs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
498,418
2305.00262
Hierarchical Dialogue Understanding with Special Tokens and Turn-level Attention
Compared with standard text, understanding dialogue is more challenging for machines due to the dynamic and unexpected semantic changes in each turn. To model such inconsistent semantics, we propose a simple but effective Hierarchical Dialogue Understanding model, HiDialog. Specifically, we first insert multiple special tokens into a dialogue and propose turn-level attention to learn turn embeddings hierarchically. Then, a heterogeneous graph module is leveraged to polish the learned embeddings. We evaluate our model on various dialogue understanding tasks including dialogue relation extraction, dialogue emotion recognition, and dialogue act classification. Results show that our simple approach achieves state-of-the-art performance on all three tasks above. All our source code is publicly available at https://github.com/ShawX825/HiDialog.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
361,274
1811.04136
The GaussianSketch for Almost Relative Error Kernel Distance
We introduce two versions of a new sketch for approximately embedding the Gaussian kernel into Euclidean inner product space. These work by truncating infinite expansions of the Gaussian kernel, and carefully invoking the RecursiveTensorSketch [Ahle et al. SODA 2020]. After providing concentration and approximation properties of these sketches, we use them to approximate the kernel distance between points sets. These sketches yield almost $(1+\varepsilon)$-relative error, but with a small additive $\alpha$ term. In the first variants the dependence on $1/\alpha$ is poly-logarithmic, but has higher degree of polynomial dependence on the original dimension $d$. In the second variant, the dependence on $1/\alpha$ is still poly-logarithmic, but the dependence on $d$ is linear.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
113,001
2207.14131
PencilNet: Zero-Shot Sim-to-Real Transfer Learning for Robust Gate Perception in Autonomous Drone Racing
In autonomous and mobile robotics, one of the main challenges is the robust on-the-fly perception of the environment, which is often unknown and dynamic, as in autonomous drone racing. In this work, we propose a novel deep neural network-based perception method for racing gate detection -- PencilNet -- which relies on a lightweight neural network backbone on top of a pencil filter. This approach unifies predictions of the gates' 2D position, distance, and orientation in a single pose tuple. We show that our method is effective for zero-shot sim-to-real transfer learning that does not need any real-world training samples. Moreover, our framework is highly robust to illumination changes commonly seen under rapid flight compared to state-of-the-art methods. A thorough set of experiments demonstrates the effectiveness of this approach in multiple challenging scenarios, where the drone completes various tracks under different lighting conditions.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
310,490
2201.00350
The Interpretability of LSTM Models for Predicting Oil Company Stocks: Impact of Correlated Features
Oil companies are among the largest companies in the world, whose economic indicators in the global stock market have a great impact on the world economy and market due to their relation to gold, crude oil, and the dollar. This study investigates the impact of correlated features on the interpretability of Long Short-Term Memory (LSTM) models for predicting oil company stocks. To achieve this, we designed a Standard Long Short-Term Memory (LSTM) network and trained it using various correlated datasets. Our approach aims to improve the accuracy of stock price prediction by considering the multiple factors affecting the market, such as crude oil prices, gold prices, and the US dollar. The results demonstrate that adding a feature correlated with oil stocks does not improve the interpretability of LSTM models. These findings suggest that while LSTM models may be effective in predicting stock prices, their interpretability may be limited. Caution should be exercised when relying solely on LSTM models for stock price prediction, as their lack of interpretability may make it difficult to fully understand the underlying factors driving stock price movements. We have employed complexity analysis to support our argument, considering that financial markets encompass a form of physical complex system. One of the fundamental challenges faced in utilizing LSTM models for financial markets lies in interpreting the unexpected feedback dynamics within them.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
273,938
1911.05171
Incentive Compatible Active Learning
We consider active learning under incentive compatibility constraints. The main application of our results is to economic experiments, in which a learner seeks to infer the parameters of a subject's preferences: for example their attitudes towards risk, or their beliefs over uncertain events. By cleverly adapting the experimental design, one can save on the time spent by subjects in the laboratory, or maximize the information obtained from each subject in a given laboratory session; but the resulting adaptive design raises complications due to incentive compatibility. A subject in the lab may answer questions strategically, and not truthfully, so as to steer subsequent questions in a profitable direction. We analyze two standard economic problems: inference of preferences over risk from multiple price lists, and belief elicitation in experiments on choice over uncertainty. In the first setting, we tune a simple and fast learning algorithm to retain certain incentive compatibility properties. In the second setting, we provide an incentive compatible learning algorithm based on scoring rules with query complexity that differs from obvious methods of achieving fast learning rates only by subpolynomial factors. Thus, for these areas of application, incentive compatibility may be achieved without paying a large sample complexity price.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
153,188
cmp-lg/9807006
A Maximum-Entropy Partial Parser for Unrestricted Text
This paper describes a partial parser that assigns syntactic structures to sequences of part-of-speech tags. The program uses the maximum entropy parameter estimation method, which allows a flexible combination of different knowledge sources: the hierarchical structure, parts of speech and phrasal categories. In effect, the parser goes beyond simple bracketing and recognises even fairly complex structures. We give accuracy figures for different applications of the parser.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,900
2406.05543
VP-LLM: Text-Driven 3D Volume Completion with Large Language Models through Patchification
Recent conditional 3D completion works have mainly relied on CLIP or BERT to encode textual information, which cannot support complex instruction. Meanwhile, large language models (LLMs) have shown great potential in multi-modal understanding and generation tasks. Inspired by the recent advancements of LLM, we present Volume Patch LLM (VP-LLM), which leverages LLMs to perform conditional 3D completion in a single-forward pass. To integrate a 3D model into the LLM tokenization configuration, the incomplete 3D object is first divided into small patches that can be encoded independently. These encoded patches are then fed into an LLM along with the text prompt, instructing the LLM to capture the relations between these patches as well as injecting semantic meanings into the 3D object. Our results demonstrate a strong ability of LLMs to interpret complex text instructions and understand 3D objects, surpassing state-of-the-art diffusion-based 3D completion models in generation quality.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
462,180
2308.05219
Decoding Layer Saliency in Language Transformers
In this paper, we introduce a strategy for identifying textual saliency in large-scale language models applied to classification tasks. In visual networks where saliency is more well-studied, saliency is naturally localized through the convolutional layers of the network; however, the same is not true in modern transformer-stack networks used to process natural language. We adapt gradient-based saliency methods for these networks, propose a method for evaluating the degree of semantic coherence of each layer, and demonstrate consistent improvement over numerous other methods for textual saliency on multiple benchmark classification datasets. Our approach requires no additional training or access to labelled data, and is comparatively very computationally efficient.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
384,713
2103.01151
Propagation Measurements and Path Loss Models for sub-THz in Urban Microcells
Terahertz frequency bands will likely be used for the next-generation wireless communication systems to provide data rates of hundreds of Gbps or even Tbps because of the wide swaths of unused and unexplored spectrum. This paper presents two outdoor wideband measurement campaigns in downtown Brooklyn (urban microcell environment) in the sub-THz band of 140 GHz with TX-RX separation distance up to 100 m: i) terrestrial urban microcell measurement campaign, and ii) rooftop surrogate satellite and backhaul measurement campaign. Outdoor omnidirectional and directional path loss models for both line-of-sight and non-line-of-sight scenarios, as well as foliage loss (signal attenuation through foliage), are provided at 140 GHz for urban microcell environments. These measurements and models provide an understanding of both the outdoor terrestrial (e.g., 6G cellular and backhaul) and non-terrestrial (e.g., satellite and unmanned aerial vehicle communications) wireless channels, and prove the feasibility of using THz frequency bands for outdoor fixed and mobile cellular communications. This paper can be used for future outdoor wireless system design at frequencies above 100 GHz.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
222,533
2207.06537
Scheduling Out-of-Coverage Vehicular Communications Using Reinforcement Learning
Performance of vehicle-to-vehicle (V2V) communications depends highly on the employed scheduling approach. While centralized network schedulers offer high V2V communication reliability, their operation is conventionally restricted to areas with full cellular network coverage. In contrast, in out-of-cellular-coverage areas, comparatively inefficient distributed radio resource management is used. To exploit the benefits of the centralized approach for enhancing the reliability of V2V communications on roads lacking cellular coverage, we propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications \textit{before} vehicles leave the cellular network coverage. By training in simulated vehicular environments, VRLS can learn a scheduling policy that is robust and adaptable to environmental changes, thus eliminating the need for targeted (re-)training in complex real-life environments. We evaluate the performance of VRLS under varying mobility, network load, wireless channel, and resource configurations. VRLS outperforms the state-of-the-art distributed scheduling algorithm in zones without cellular network coverage by reducing the packet error rate by half in highly loaded conditions and achieving near-maximum reliability in low-load scenarios.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
307,908
2306.04751
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources
In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare models across the board and determine the utility of various resources. We provide a large set of instruction-tuned models from 6.7B to 65B parameters in size, trained on 12 instruction datasets ranging from manually curated (e.g., OpenAssistant) to synthetic and distilled (e.g., Alpaca) and systematically evaluate them on their factual knowledge, reasoning, multilinguality, coding, and open-ended instruction following abilities through a collection of automatic, model-based, and human-based metrics. We further introduce T\"ulu, our best performing instruction-tuned model suite finetuned on a combination of high-quality open resources. Our experiments show that different instruction-tuning datasets can uncover or enhance specific skills, while no single dataset (or combination) provides the best performance across all evaluations. Interestingly, we find that model and human preference-based evaluations fail to reflect differences in model capabilities exposed by benchmark-based evaluations, suggesting the need for the type of systemic evaluation performed in this work. Our evaluations show that the best model in any given evaluation reaches on average 87% of ChatGPT performance, and 73% of GPT-4 performance, suggesting that further investment in building better base models and instruction-tuning data is required to close the gap. We release our instruction-tuned models, including a fully finetuned 65B T\"ulu, along with our code, data, and evaluation framework at https://github.com/allenai/open-instruct to facilitate future research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
371,881
2109.03092
Modelling Strategic Deceptive Planning in Adversarial Multi-Agent Systems
Deception is virtually ubiquitous in warfare, and should be a central consideration for military operations research. However, studies of agent behaviour in simulated operations have typically neglected to include explicit models of deception. This paper proposes that a computational model that approximates the human deceptive planning process would enable the authentic representation of strategic deception in multi-agent systems. The proposed deceptive planning model provides a framework for studying, explaining, and discovering deceptive behaviours, enabling the generation of novel solutions to adversarial planning problems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
253,954
2105.08086
Neural Error Mitigation of Near-Term Quantum Simulations
Near-term quantum computers provide a promising platform for finding ground states of quantum systems, which is an essential task in physics, chemistry, and materials science. Near-term approaches, however, are constrained by the effects of noise as well as the limited resources of near-term quantum hardware. We introduce "neural error mitigation," which uses neural networks to improve estimates of ground states and ground-state observables obtained using near-term quantum simulations. To demonstrate our method's broad applicability, we employ neural error mitigation to find the ground states of the H$_2$ and LiH molecular Hamiltonians, as well as the lattice Schwinger model, prepared via the variational quantum eigensolver (VQE). Our results show that neural error mitigation improves numerical and experimental VQE computations to yield low energy errors, high fidelities, and accurate estimations of more-complex observables like order parameters and entanglement entropy, without requiring additional quantum resources. Furthermore, neural error mitigation is agnostic with respect to the quantum state preparation algorithm used, the quantum hardware it is implemented on, and the particular noise channel affecting the experiment, contributing to its versatility as a tool for quantum simulation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
235,644
2402.16763
ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks
Behavior can be described as a temporal sequence of actions driven by neural activity. To learn complex sequential patterns in neural networks, memories of past activities need to persist on significantly longer timescales than the relaxation times of single-neuron activity. While recurrent networks can produce such long transients, training these networks is a challenge. Learning via error propagation confers models such as FORCE, RTRL or BPTT a significant functional advantage, but at the expense of biological plausibility. While reservoir computing circumvents this issue by learning only the readout weights, it does not scale well with problem complexity. We propose that two prominent structural features of cortical networks can alleviate these issues: the presence of a certain network scaffold at the onset of learning and the existence of dendritic compartments for enhancing neuronal information storage and computation. Our resulting model for Efficient Learning of Sequences (ELiSe) builds on these features to acquire and replay complex non-Markovian spatio-temporal patterns using only local, always-on and phase-free synaptic plasticity. We showcase the capabilities of ELiSe in a mock-up of birdsong learning, and demonstrate its flexibility with respect to parametrization, as well as its robustness to external disturbances.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
432,682
2006.03254
TCDesc: Learning Topology Consistent Descriptors
Triplet loss is widely used for learning local descriptors from image patches. However, triplet loss only minimizes the Euclidean distance between matching descriptors and maximizes that between the non-matching descriptors, which neglects the topology similarity between two descriptor sets. In this paper, we propose a topology measure besides Euclidean distance to learn topology consistent descriptors by considering kNN descriptors of the positive sample. First we establish a novel topology vector for each descriptor followed by Locally Linear Embedding (LLE) to indicate the topological relation among the descriptor and its kNN descriptors. Then we define topology distance between descriptors as the difference of their topology vectors. Last we employ the dynamic weighting strategy to fuse Euclidean distance and topology distance of matching descriptors and take the fusion result as the positive sample distance in the triplet loss. Experimental results on several benchmarks show that our method performs better than state-of-the-art results and effectively improves the performance of triplet loss.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
180,266
1608.05610
A Strongly Quasiconvex PAC-Bayesian Bound
We propose a new PAC-Bayesian bound and a way of constructing a hypothesis space, so that the bound is convex in the posterior distribution and also convex in a trade-off parameter between empirical performance of the posterior distribution and its complexity. The complexity is measured by the Kullback-Leibler divergence to a prior. We derive an alternating procedure for minimizing the bound. We show that the bound can be rewritten as a one-dimensional function of the trade-off parameter and provide sufficient conditions under which the function has a single global minimum. When the conditions are satisfied the alternating minimization is guaranteed to converge to the global minimum of the bound. We provide experimental results demonstrating that rigorous minimization of the bound is competitive with cross-validation in tuning the trade-off between complexity and empirical performance. In all our experiments the trade-off turned out to be quasiconvex even when the sufficient conditions were violated.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
60,003
2405.11281
Cooperative Cognitive Dynamic System in UAV Swarms: Reconfigurable Mechanism and Framework
As the demands for immediate and effective responses increase in both civilian and military domains, unmanned aerial vehicle (UAV) swarms emerge as effective solutions, in which multiple cooperative UAVs can work together to achieve specific goals. However, how to manage such complex systems to ensure real-time adaptability lacks sufficient research. Hence, in this paper, we propose the cooperative cognitive dynamic system (CCDS) to optimize the management of UAV swarms. CCDS leverages a hierarchical and cooperative control structure that enables real-time data processing and decision-making. Accordingly, CCDS optimizes UAV swarm management via dynamic reconfigurability and adaptive intelligent optimization. In addition, CCDS can be integrated with a biomimetic mechanism to efficiently allocate tasks for UAV swarms. Further, the distributed coordination of CCDS ensures reliable and resilient control, thus enhancing adaptability and robustness. Finally, the potential challenges and future directions are analyzed, to provide insights into managing UAV swarms in dynamic heterogeneous networking.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
455,075
2004.08206
Vehicle Position Estimation with Aerial Imagery from Unmanned Aerial Vehicles
The availability of real-world data is a key element for novel developments in the fields of automotive and traffic research. Aerial imagery has the major advantage of recording multiple objects simultaneously and overcomes limitations such as occlusions. However, there are only few data sets available. This work describes a process to estimate a precise vehicle position from aerial imagery. A robust object detection is crucial for reliable results, hence the state-of-the-art deep neural network Mask-RCNN is applied for that purpose. Two training data sets are employed: The first one is optimized for detecting the test vehicle, while the second one consists of randomly selected images recorded on public roads. To reduce errors, several aspects are accounted for, such as the drone movement and the perspective projection from a photograph. The estimated position is compared with a reference system installed in the test vehicle. It is shown that a mean accuracy of 20 cm can be achieved with flight altitudes up to 100 m, Full-HD resolution and a frame-by-frame detection. A reliable position estimation is the basis for further data processing, such as obtaining additional vehicle state variables. The source code, training weights, labeled data and example videos are made publicly available. This supports researchers to create new traffic data sets with specific local conditions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,003
1108.4445
SNF Project Locomotion: Progress report 2008-2009
Summary of results (project period 1. 10. 2008 - 30. 9. 2009) of SNFS Project "From locomotion to cognition" The research that we have been involved in, and will continue to do, starts from the insight that in order to understand and design intelligent behavior, we must adopt an embodied perspective, i.e. we must take the entire agent, including its shape or morphology, the materials out of which it is built, and its interaction with the environment into account, in addition to the neural control. A lot of our research in the past has been on relatively low-level sensory-motor tasks such as locomotion (e.g. walking, running, jumping), navigation, and grasping. While this research is of interest in itself, in the context of artificial intelligence and cognitive science, this leads to the question of what these kinds of tasks have to do with higher levels of cognition, or to put it more provocatively, "What does walking have to do with thinking?" This question is of course reminiscent of the notorious "symbol grounding problem". In contrast to most of the research on symbol grounding, we propose to exploit the dynamic interaction between the embodied agent and the environment as the basis for grounding. We use the term "morphological computation" to designate the fact that some of the control or computation can be taken over by the dynamic interaction derived from morphological properties (e.g. the passive forward swing of the leg in walking, the spring-like properties of the muscles, and the weight distribution). By taking morphological computation into account, an agent will be able to achieve not only faster, more robust, and more energy-efficient behavior, but also more situated exploration by the agent for the comprehensive understanding of the environment.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
11,771
2205.00256
Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs. Most existing HGNN-based approaches are supervised or semi-supervised learning methods requiring graphs to be annotated, which is costly and time-consuming. Self-supervised contrastive learning has been proposed to address the problem of requiring annotated data by mining intrinsic information hidden within the given data. However, the existing contrastive learning methods are inadequate for heterogeneous graphs because they construct contrastive views only based on data perturbation or pre-defined structural properties (e.g., meta-path) in graph data while ignoring the noise that may exist in both node attributes and graph topologies. We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views on respective guidance of node attributes and graph topologies and integrates and enhances them by a reciprocally contrastive mechanism to better model heterogeneous graphs. In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately. We further use both attribute similarity and topological correlation to construct high-quality contrastive samples. Extensive experiments on three large real-world heterogeneous graphs demonstrate the superiority and robustness of HGCL over state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
294,185
2301.01201
Uncertainty in Real-Time Semantic Segmentation on Embedded Systems
Applications of semantic segmentation models in areas such as autonomous vehicles and human computer interaction require real-time predictive capabilities. The challenges of addressing real-time application are amplified by the need to operate on resource constrained hardware. Whilst development of real-time methods for these platforms has increased, these models are unable to sufficiently reason about uncertainty present when applied on embedded real-time systems. This paper addresses this by combining deep feature extraction from pre-trained models with Bayesian regression and moment propagation for uncertainty aware predictions. We demonstrate how the proposed method can yield meaningful epistemic uncertainty on embedded hardware in real-time whilst maintaining predictive performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
339,163
1511.01512
Mean-field inference of Hawkes point processes
We propose a fast and efficient estimation method that is able to accurately recover the parameters of a d-dimensional Hawkes point-process from a set of observations. We exploit a mean-field approximation that is valid when the fluctuations of the stochastic intensity are small. We show that this is notably the case in situations when interactions are sufficiently weak, when the dimension of the system is high or when the fluctuations are self-averaging due to the large number of past events they involve. In such a regime the estimation of a Hawkes process can be mapped on a least-squares problem for which we provide an analytic solution. Though this estimator is biased, we show that its precision can be comparable to the one of the Maximum Likelihood Estimator while its computation speed is shown to be improved considerably. We give a theoretical control on the accuracy of our new approach and illustrate its efficiency using synthetic datasets, in order to assess the statistical estimation error of the parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
48,509
2405.00361
AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts
We introduce AdaMoLE, a novel method for fine-tuning large language models (LLMs) through an Adaptive Mixture of Low-Rank Adaptation (LoRA) Experts. Moving beyond conventional methods that employ a static top-k strategy for activating experts, AdaMoLE dynamically adjusts the activation threshold using a dedicated threshold network, adaptively responding to the varying complexities of different tasks. By replacing a single LoRA in a layer with multiple LoRA experts and integrating a gating function with the threshold mechanism, AdaMoLE effectively selects and activates the most appropriate experts based on the input context. Our extensive evaluations across a variety of commonsense reasoning and natural language processing tasks show that AdaMoLE exceeds baseline performance. This enhancement highlights the advantages of AdaMoLE's adaptive selection of LoRA experts, improving model effectiveness without a corresponding increase in the expert count. The experimental validation not only confirms AdaMoLE as a robust approach for enhancing LLMs but also suggests valuable directions for future research in adaptive expert selection mechanisms, potentially broadening the scope for optimizing model performance across diverse language processing tasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
450,892
2303.09826
Learning Data-Driven Vector-Quantized Degradation Model for Animation Video Super-Resolution
Existing real-world video super-resolution (VSR) methods focus on designing a general degradation pipeline for open-domain videos while ignoring data intrinsic characteristics which strongly limit their performance when applied to specific domains (e.g., animation videos). In this paper, we thoroughly explore the characteristics of animation videos and leverage the rich priors in real-world animation data for a more practical animation VSR model. In particular, we propose a multi-scale Vector-Quantized Degradation model for animation video Super-Resolution (VQD-SR) to decompose the local details from global structures and transfer the degradation priors in real-world animation videos to a learned vector-quantized codebook for degradation modeling. A rich-content Real Animation Low-quality (RAL) video dataset is collected for extracting the priors. We further propose a data enhancement strategy for high-resolution (HR) training videos based on our observation that existing HR videos are mostly collected from the Web which contains conspicuous compression artifacts. The proposed strategy is valid to lift the upper bound of animation VSR performance, regardless of the specific VSR model. Experimental results demonstrate the superiority of the proposed VQD-SR over state-of-the-art methods, through extensive quantitative and qualitative evaluations of the latest animation video super-resolution benchmark. The code and pre-trained models can be downloaded at https://github.com/researchmm/VQD-SR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
352,206
1712.06580
Path Loss and Directional Gain Measurements at 28 GHz for non-Line-of-Sight Coverage of Indoors with Corridors
Adequate coverage with high gain antennas is key to realizing the full promise of the bandwidth available at mm/cm wave bands. We report extensive indoor measurements at 28 GHz (1000 links, 9.9 million individual power measurements, 10 offices, 2 buildings), with/without line-of-sight (LOS) using a continuous wave channel sounder, with a 10° spinning horn, capable of capturing a full azimuth scan every 200 ms, in up to 171 dB path loss to characterize coverage with 90% confidence level. The environment had prominent corridors and rooms, as opposed to open/mixed offices in latest 3GPP standards. Guiding in corridors leads to much lower RMS azimuth spread (7° median in corridor non-LOS vs. 42° in 3GPP) and higher penetration loss into rooms and around corners (30-32 dB, some 12 dB more loss than 3GPP at 20 m non-LOS). Measured path gain in non-LOS is predicted by a mode-diffusion model with 3.9 dB RMS error. Scattering degraded azimuth gain by up to 4 dB in the corridor and 7 dB in rooms with 90% probability. Link simulations in a canonical building indicate every corridor needs an access point to provide 1 Gbps rate to adjoining rooms within 50 m using 400 MHz of bandwidth.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
86,910
1402.5475
Soft Consistency Reconstruction: A Robust 1-bit Compressive Sensing Algorithm
A class of recovering algorithms for 1-bit compressive sensing (CS) named Soft Consistency Reconstructions (SCRs) are proposed. Recognizing that CS recovery is essentially an optimization problem, we endeavor to improve the characteristics of the objective function under noisy environments. With a family of re-designed consistency criteria, SCRs achieve remarkable counter-noise performance gain over the existing counterparts, thus acquiring the desired robustness in many real-world applications. The benefits of soft decisions are exemplified through structural analysis of the objective function, with intuition described for better understanding. As expected, through comparisons with existing methods in simulations, SCRs demonstrate preferable robustness against noise in low signal-to-noise ratio (SNR) regime, while maintaining comparable performance in high SNR regime.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
31,064
2009.09381
Stochastic Model Predictive Control with a Safety Guarantee for Automated Driving: Extended Version
Automated vehicles require efficient and safe planning to maneuver in uncertain environments. Largely this uncertainty is caused by other traffic participants, e.g., surrounding vehicles. Future motion of surrounding vehicles is often difficult to predict. Whereas robust control approaches achieve safe, yet conservative motion planning for automated vehicles, Stochastic Model Predictive Control (SMPC) provides efficient planning in the presence of uncertainty. Probabilistic constraints are applied to ensure that the maximal risk remains below a predefined level. However, safety cannot be ensured as probabilistic constraints may be violated, which is not acceptable for automated vehicles. Here, we propose an efficient trajectory planning framework with safety guarantees for automated vehicles. SMPC is applied to obtain efficient vehicle trajectories for a finite horizon. Based on the first optimized SMPC input, a guaranteed safe backup trajectory is planned using reachable sets. This backup is used to overwrite the SMPC input if necessary for safety. Recursive feasibility of the safe SMPC algorithm is proved. Highway simulations show the effectiveness of the proposed method regarding performance and safety.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
196,564
1504.03810
Text Localization in Video Using Multiscale Weber's Local Descriptor
In this paper, we propose a novel approach for detecting the text present in videos and scene images based on the Multiscale Weber's Local Descriptor (MWLD). Given an input video, the shots are identified and the key frames are extracted based on their spatio-temporal relationship. From each key frame, we detect the local region information using WLD with different radius and neighborhood relationship of pixel values and hence obtained intensity enhanced key frames at multiple scales. These multiscale WLD key frames are merged together and then the horizontal gradients are computed using morphological operations. The obtained results are then binarized and the false positives are eliminated based on geometrical properties. Finally, we employ connected component analysis and morphological dilation operation to determine the text regions that aids in text localization. The experimental results obtained on publicly available standard Hua, Horizontal-1 and Horizontal-2 video dataset illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts and colors in videos.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
42,071
2303.17418
Semantic Image Translation for Repairing the Texture Defects of Building Models
The accurate representation of 3D building models in urban environments is significantly hindered by challenges such as texture occlusion, blurring, and missing details, which are difficult to mitigate through standard photogrammetric texture mapping pipelines. Current image completion methods often struggle to produce structured results and effectively handle the intricate nature of highly-structured fa\c{c}ade textures with diverse architectural styles. Furthermore, existing image synthesis methods encounter difficulties in preserving high-frequency details and artificial regular structures, which are essential for achieving realistic fa\c{c}ade texture synthesis. To address these challenges, we introduce a novel approach for synthesizing fa\c{c}ade texture images that authentically reflect the architectural style from a structured label map, guided by a ground-truth fa\c{c}ade image. In order to preserve fine details and regular structures, we propose a regularity-aware multi-domain method that capitalizes on frequency information and corner maps. We also incorporate SEAN blocks into our generator to enable versatile style transfer. To generate plausible structured images without undesirable regions, we employ image completion techniques to remove occlusions according to semantics prior to image inference. Our proposed method is also capable of synthesizing texture images with specific styles for fa\c{c}ades that lack pre-existing textures, using manually annotated labels. Experimental results on publicly available fa\c{c}ade image and 3D model datasets demonstrate that our method yields superior results and effectively addresses issues associated with flawed textures. The code and datasets will be made publicly available for further research and development.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
355,199
2410.20312
Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
``Distribution shift'' is the main obstacle to the success of offline reinforcement learning. A learning policy may take actions beyond the behavior policy's knowledge, referred to as Out-of-Distribution (OOD) actions. The Q-values for these OOD actions can be easily overestimated. As a result, the learning policy is biased by using incorrect Q-value estimates. One common approach to avoid Q-value overestimation is to make a pessimistic adjustment. Our key idea is to penalize the Q-values of OOD actions associated with high uncertainty. In this work, we propose Q-Distribution Guided Q-Learning (QDQ), which applies a pessimistic adjustment to Q-values in OOD regions based on uncertainty estimation. This uncertainty measure relies on the conditional Q-value distribution, learned through a high-fidelity and efficient consistency model. Additionally, to prevent overly conservative estimates, we introduce an uncertainty-aware optimization objective for updating the Q-value function. The proposed QDQ demonstrates solid theoretical guarantees for the accuracy of Q-value distribution learning and uncertainty measurement, as well as the performance of the learning policy. QDQ consistently shows strong performance on the D4RL benchmark and achieves significant improvements across many tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
502,759
2008.04887
Impact of natural disasters on consumer behavior: case of the 2017 El Nino phenomenon in Peru
El Nino is an extreme weather event featuring unusual warming of surface waters in the eastern equatorial Pacific Ocean. This phenomenon is characterized by heavy rains and floods that negatively affect the economic activities of the impacted areas. Understanding how this phenomenon influences consumption behavior at different granularity levels is essential for recommending strategies to normalize the situation. With this aim, we performed a multi-scale analysis of data associated with bank transactions involving credit and debit cards. Our findings can be summarized into two main results: Coarse-grained analysis reveals the presence of the El Ni\~no phenomenon and the recovery time in a given territory, while fine-grained analysis demonstrates a change in individuals' purchasing patterns and in merchant relevance as a consequence of the climatic event. The results also indicate that society successfully withstood the natural disaster owing to the economic structure built over time. In this study, we present a new method that may be useful for better characterizing future extreme events.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
191,351
0912.1830
Gesture Recognition with a Focus on Important Actions by Using a Path Searching Method in Weighted Graph
This paper proposes a method of gesture recognition with a focus on the important actions for distinguishing similar gestures. The method generates a partial action sequence by using optical flow images, expresses the sequence in the eigenspace, and checks the feature vector sequence by applying an optimum path-searching method on a weighted graph to focus on the important actions. Also presented are the results of an experiment on the recognition of similar sign language words.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
5,132
2310.12551
Iterative PnP and its application in 3D-2D vascular image registration for robot navigation
This paper reports on a new real-time robot-centered 3D-2D vascular image alignment algorithm, which is robust to outliers and can align nonrigid shapes. Few works have managed to achieve both real-time and accurate performance for vascular intervention robots. This work bridges high-accuracy 3D-2D registration techniques and computational efficiency requirements in intervention robot applications. We categorize centerline-based vascular 3D-2D image registration problems as an iterative Perspective-n-Point (PnP) problem and propose to use the Levenberg-Marquardt solver on the Lie manifold. Then, the recently developed Reproducing Kernel Hilbert Space (RKHS) algorithm is introduced to overcome the ``big-to-small'' problem in typical robotic scenarios. Finally, an iterative reweighted least squares is applied to solve the RKHS-based formulation efficiently. Experiments indicate that the proposed algorithm processes registration at over 50 Hz (rigid) and 20 Hz (nonrigid) and obtains registration accuracy competitive with other works. Results indicate that our Iterative PnP is suitable for future vascular intervention robot applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
401,065
2111.01264
Human-Level Control without Server-Grade Hardware
Deep Q-Network (DQN) marked a major milestone for reinforcement learning, demonstrating for the first time that human-level control policies could be learned directly from raw visual inputs via reward maximization. Even years after its introduction, DQN remains highly relevant to the research community since many of its innovations have been adopted by successor methods. Nevertheless, despite significant hardware advances in the interim, DQN's original Atari 2600 experiments remain costly to replicate in full. This poses an immense barrier to researchers who cannot afford state-of-the-art hardware or lack access to large-scale cloud computing resources. To facilitate improved access to deep reinforcement learning research, we introduce a DQN implementation that leverages a novel concurrent and synchronized execution framework designed to maximally utilize a heterogeneous CPU-GPU desktop system. With just one NVIDIA GeForce GTX 1080 GPU, our implementation reduces the training time of a 200-million-frame Atari experiment from 25 hours to just 9 hours. The ideas introduced in our paper should be generalizable to a large number of off-policy deep reinforcement learning methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
264,503
2402.11235
ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs
With the development of foundation models such as large language models, zero-shot transfer learning has become increasingly significant. This is highlighted by the generative capabilities of NLP models like GPT-4, and the retrieval-based approaches of CV models like CLIP, both of which effectively bridge the gap between seen and unseen data. In the realm of graph learning, the continuous emergence of new graphs and the challenges of human labeling also amplify the necessity for zero-shot transfer learning, driving the exploration of approaches that can generalize across diverse graph data without necessitating dataset-specific and label-specific fine-tuning. In this study, we extend such paradigms to zero-shot transferability in graphs by introducing ZeroG, a new framework tailored to enable cross-dataset generalization. Addressing the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer, we leverage a language model to encode both node attributes and class semantics, ensuring consistent feature dimensions across datasets. We also propose a prompt-based subgraph sampling module that enriches the semantic information and structure information of extracted subgraphs using prompting nodes and neighborhood aggregation, respectively. We further adopt a lightweight fine-tuning strategy that reduces the risk of overfitting and maintains the zero-shot learning efficacy of the language model. The results underscore the effectiveness of our model in achieving significant cross-dataset zero-shot transferability, opening pathways for the development of graph foundation models. Codes and data are available at https://github.com/NineAbyss/ZeroG.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
430,299
2411.14496
Multi-agent reinforcement learning strategy to maximize the lifetime of Wireless Rechargeable
The thesis proposes a generalized charging framework for multiple mobile chargers to maximize the network lifetime and ensure target coverage and connectivity in large-scale WRSNs. Moreover, a multi-point charging model is leveraged to enhance charging efficiency, where the MC can charge multiple sensors simultaneously at each charging location. The thesis proposes an effective Decentralized Partially Observable Semi-Markov Decision Process (Dec POSMDP) model that promotes cooperation among Mobile Chargers (MCs) and detects optimal charging locations based on real-time network information. Furthermore, the proposal allows reinforcement learning algorithms to be applied to different networks without requiring extensive retraining. To solve the Dec POSMDP model, the thesis proposes an Asynchronous Multi Agent Reinforcement Learning algorithm (AMAPPO) based on the Proximal Policy Optimization algorithm (PPO).
false
false
false
false
false
false
true
false
false
false
false
true
false
false
true
false
false
true
510,200
2209.02167
Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents
Adversarial examples can be useful for identifying vulnerabilities in AI systems before they are deployed. In reinforcement learning (RL), adversarial policies can be developed by training an adversarial agent to minimize a target agent's rewards. Prior work has studied black-box versions of these attacks where the adversary only observes the world state and treats the target agent as any other part of the environment. However, this does not take into account additional structure in the problem. In this work, we study white-box adversarial policies and show that having access to a target agent's internal state can be useful for identifying its vulnerabilities. We make two contributions. (1) We introduce white-box adversarial policies where an attacker observes both a target's internal state and the world state at each timestep. We formulate ways of using these policies to attack agents in 2-player games and text-generating language models. (2) We demonstrate that these policies can achieve higher initial and asymptotic performance against a target agent than black-box controls. Code is available at https://github.com/thestephencasper/lm_white_box_attacks
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
316,122
2003.08513
Virtual Control Contraction Metrics: Convex Nonlinear Feedback Design via Behavioral Embedding
This paper presents a systematic approach to nonlinear state-feedback control design that has three main advantages: (i) it ensures exponential stability and $ \mathcal{L}_2 $-gain performance with respect to a user-defined set of reference trajectories; (ii) it provides constructive conditions based on convex optimization and a path-integral-based control realization; and (iii) it is less restrictive than previous similar approaches. In the proposed approach, first a virtual representation of the nonlinear dynamics is constructed for which a behavioral (parameter-varying) embedding is generated. Then, by introducing a virtual control contraction metric, a convex control synthesis formulation is derived. Finally, a control realization with a virtual reference generator is computed, which is guaranteed to achieve exponential stability and $ \mathcal{L}_2 $-gain performance for all trajectories of the targeted reference behavior. We show that the proposed methodology is a unified generalization of the two distinct categories of linear-parameter-varying (LPV) state-feedback control approaches: global and local methods. Moreover, it provides rigorous stability and performance guarantees as a method for nonlinear tracking control, while such properties are not guaranteed for tracking control using standard LPV approaches.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
168,758
1906.04358
Weight Agnostic Neural Networks
Not all neural network architectures are created equal, some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights. Interactive version of this paper at https://weightagnostic.github.io/
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
134,688
2404.10946
Information encoding and decoding in in-vitro neural networks on micro electrode arrays through stimulation timing
A primary challenge in utilizing in-vitro biological neural networks for computations is finding good encoding and decoding schemes for inputting and decoding data to and from the networks. Furthermore, identifying the optimal parameter settings for a given combination of encoding and decoding schemes adds additional complexity to this challenge. In this study we explore stimulation timing as an encoding method, i.e. we encode information as the delay between stimulation pulses and identify the bounds and acuity of stimulation timings which produce linearly separable spike responses. We also examine the optimal readout parameters for a linear decoder in the form of epoch length, time bin size and epoch offset. Our results suggest that stimulation timings between 36 and 436ms may be optimal for encoding and that different combinations of readout parameters may be optimal at different parts of the evoked spike response.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
447,317
2408.07955
GERestaurant: A German Dataset of Annotated Restaurant Reviews for Aspect-Based Sentiment Analysis
We present GERestaurant, a novel dataset consisting of 3,078 German language restaurant reviews manually annotated for Aspect-Based Sentiment Analysis (ABSA). All reviews were collected from Tripadvisor, covering a diverse selection of restaurants, including regional and international cuisine with various culinary styles. The annotations encompass both implicit and explicit aspects, including all aspect terms, their corresponding aspect categories, and the sentiments expressed towards them. Furthermore, we provide baseline scores for the four ABSA tasks Aspect Category Detection, Aspect Category Sentiment Analysis, End-to-End ABSA and Target Aspect Sentiment Detection as a reference point for future advances. The dataset fills a gap in German language resources and facilitates exploration of ABSA in the restaurant domain.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
480,792
2002.04374
Convolutional Neural Networks and a Transfer Learning Strategy to Classify Parkinson's Disease from Speech in Three Different Languages
Parkinson's disease patients develop different speech impairments that affect their communication capabilities. The automatic assessment of the speech of the patients allows the development of computer-aided tools to support the diagnosis and the evaluation of the disease severity. This paper introduces a methodology to classify Parkinson's disease from speech in three different languages: Spanish, German, and Czech. The proposed approach considers convolutional neural networks trained with time-frequency representations and a transfer learning strategy among the three languages. The transfer learning scheme aims to improve the accuracy of the models when the weights of the neural network are initialized with utterances from a language different from the one used for the test set. The results suggest that the proposed strategy improves the accuracy of the models by up to 8\% when the base model used to initialize the weights of the classifier is robust enough. In addition, the results obtained after the transfer learning are in most cases more balanced in terms of specificity-sensitivity than those trained without the transfer learning strategy.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
163,591
2103.08826
GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks
Node classification is an important research topic in graph learning. Graph neural networks (GNNs) have achieved state-of-the-art performance on node classification. However, existing GNNs address the problem where node samples for different classes are balanced, while in many real-world scenarios some classes may have much fewer instances than others. Directly training a GNN classifier in this case would under-represent samples from those minority classes and result in sub-optimal performance. Therefore, it is very important to develop GNNs for imbalanced node classification. However, the work on this is rather limited. Hence, we seek to extend previous imbalanced learning techniques for i.i.d. data to the imbalanced node classification task to facilitate GNN classifiers. In particular, we choose to adopt synthetic minority over-sampling algorithms, as they are found to be the most effective and stable. This task is non-trivial, as previous synthetic minority over-sampling algorithms fail to provide relation information for newly synthesized samples, which is vital for learning on graphs. Moreover, node attributes are high-dimensional. Directly over-sampling in the original input domain could generate out-of-domain samples, which may impair the accuracy of the classifier. We propose a novel framework, GraphSMOTE, in which an embedding space is constructed to encode the similarity among the nodes. New samples are synthesized in this space to assure genuineness. In addition, an edge generator is trained simultaneously to model the relation information and provide it for those new samples. This framework is general and can be easily extended into different variations. The proposed framework is evaluated using three different datasets, and it outperforms all baselines by a large margin.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
224,996
2307.11468
Zero-touch realization of Pervasive Artificial Intelligence-as-a-service in 6G networks
The vision of the upcoming 6G technologies, characterized by ultra-dense networks, low latency, and fast data rates, is to support Pervasive AI (PAI) using zero-touch solutions enabling self-X (e.g., self-configuration, self-monitoring, and self-healing) services. However, the research on 6G is still in its infancy, and only the first steps have been taken to conceptualize its design, investigate its implementation, and plan for use cases. Toward this end, academia and industry communities have gradually shifted from theoretical studies of AI distribution to real-world deployment and standardization. Still, designing an end-to-end framework that systematizes the AI distribution by allowing easier access to the service using a third-party application assisted by zero-touch service provisioning has not been well explored. In this context, we introduce a novel platform architecture to deploy a zero-touch PAI-as-a-Service (PAIaaS) in 6G networks supported by a blockchain-based smart system. This platform aims to standardize the pervasive AI at all levels of the architecture and unify the interfaces in order to facilitate the service deployment across application and infrastructure domains, relieve users' worries about cost, security, and resource allocation, and, at the same time, respect the stringent 6G performance requirements. As a proof of concept, we present a Federated Learning-as-a-service use case where we evaluate the ability of our proposed system to self-optimize and self-adapt to the dynamics of 6G networks in addition to minimizing the users' perceived costs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
380,918
2105.09561
Generating Significant Examples for Conceptual Schema Validation
This report bases itself on the idea of using concrete examples to verify conceptual schemas, and in particular cardinality constraints. When novice ORM modellers model domains, the selection of proper cardinality constraints for relationship types is quite often prone to errors. In this report we propose a mechanism for the generation of significant examples for selected subschemas. The generated examples are significant in the sense that they illustrate the possible combinations of instances that are allowed with respect to the cardinality constraints on the involved relationship types. In this report we firstly provide a brief informal discussion of the basic idea. Then we present a syntactic mechanism to select the subschema for which example instances are to be generated. This is followed by the actual example generation algorithm itself. We will also present, as a {\em spin-off}, an algorithm that allows us to detect possible flaws in the conceptual schema by calculating the number of instances that can be used to populate the types in the schema.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
236,105
1607.00510
Harnessing Self-Interference in Full-Duplex Relaying: An Analog Filter-and-Forward Approach
This paper studies a full-duplex filter-and-forward (FD-FF) relay system in frequency-selective channels. Conventionally, the loop-back signal at the FD relay is treated as harmful self-interference and needs to be significantly suppressed via both analog- and digital-domain cancellation. However, the performance of the conventional self-interference cancellation approach is fundamentally limited due to the quantization error induced by the analog-to-digital converter (ADC) with limited dynamic range. In this paper, we consider an analog filter-and-forward design to help avoid the quantization error, and surprisingly show that the maximum achievable rate of such an FD-FF relay system is in fact independent of the loop-back channel at the FD relay. We characterize the maximum achievable rate of this channel by jointly optimizing the transmit power allocation over frequency at the source and the frequency response of the filter at the relay, subject to their individual power constraints. Although this problem is non-convex, we obtain its optimal solution by applying the Lagrange duality method. By simulations it is shown that the proposed joint source and relay optimization achieves rate gains over other heuristic designs, and is also advantageous over the conventional approach of cancelling the relay loop-back signal as self-interference, especially when the residual self-interference after cancellation is still significant.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
58,090
1508.06095
OCReP: An Optimally Conditioned Regularization for Pseudoinversion Based Neural Training
In this paper we consider the training of single hidden layer neural networks by pseudoinversion, which, in spite of its popularity, is sometimes affected by numerical instability issues. Regularization is known to be effective in such cases, so that we introduce, in the framework of Tikhonov regularization, a matricial reformulation of the problem which allows us to use the condition number as a diagnostic tool for identification of instability. By imposing well-conditioning requirements on the relevant matrices, our theoretical analysis allows the identification of an optimal value for the regularization parameter from the standpoint of stability. We compare with the value derived by cross-validation for overfitting control and optimisation of the generalization performance. We test our method for both regression and classification tasks. The proposed method is quite effective in terms of predictivity, often with some improvement on performance with respect to the reference cases considered. This approach, due to analytical determination of the regularization parameter, dramatically reduces the computational load required by many other techniques.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
46,290
2004.04054
Semi-supervised acoustic and language model training for English-isiZulu code-switched speech recognition
We present an analysis of semi-supervised acoustic and language model training for English-isiZulu code-switched ASR using soap opera speech. Approximately 11 hours of untranscribed multilingual speech was transcribed automatically using four bilingual code-switching transcription systems operating in English-isiZulu, English-isiXhosa, English-Setswana and English-Sesotho. These transcriptions were incorporated into the acoustic and language model training sets. Results showed that the TDNN-F acoustic models benefit from the additional semi-supervised data and that even better performance could be achieved by including additional CNN layers. Using these CNN-TDNN-F acoustic models, a first iteration of semi-supervised training achieved an absolute mixed-language WER reduction of 3.4%, and a further 2.2% after a second iteration. Although the languages in the untranscribed data were unknown, the best results were obtained when all automatically transcribed data was used for training and not just the utterances classified as English-isiZulu. Despite reducing perplexity, the semi-supervised language model was not able to improve the ASR performance.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
171,766
2309.02218
Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models
The rise of deepfake images, especially of well-known personalities, poses a serious threat to the dissemination of authentic information. To tackle this, we present a thorough investigation into how deepfakes are produced and how they can be identified. The cornerstone of our research is a rich collection of artificial celebrity faces, titled DeepFakeFace (DFF). We crafted the DFF dataset using advanced diffusion models and have shared it with the community through online platforms. This data serves as a robust foundation to train and test algorithms designed to spot deepfakes. We carried out a thorough review of the DFF dataset and suggest two evaluation methods to gauge the strength and adaptability of deepfake recognition tools. The first method tests whether an algorithm trained on one type of fake images can recognize those produced by other methods. The second evaluates the algorithm's performance with imperfect images, like those that are blurry, of low quality, or compressed. Given varied results across deepfake methods and image changes, our findings stress the need for better deepfake detectors. Our DFF dataset and tests aim to boost the development of more effective tools against deepfakes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,975
1802.02208
Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network
Automated pavement crack detection is a challenging task that has been researched for decades due to the complicated pavement conditions in the real world. In this paper, a supervised method based on deep learning is proposed, which has the capability of dealing with different pavement conditions. Specifically, a convolutional neural network (CNN) is used to learn the structure of the cracks from raw images, without any preprocessing. Small patches are extracted from crack images as inputs to generate a large training database, a CNN is trained, and crack detection is modeled as a multi-label classification problem. Typically, crack pixels are much fewer than non-crack pixels. To deal with the problem of severely imbalanced data, a strategy of modifying the ratio of positive to negative samples is proposed. The method is tested on two public databases and compared with five existing methods. Experimental results show that it outperforms the other methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,729
2012.03242
Esophageal Tumor Segmentation in CT Images using Dilated Dense Attention Unet (DDAUnet)
Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, as well as the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Obtaining this additional information is time-consuming, while the results are error-prone and might lead to non-deterministic results. In this paper we aim to investigate if and to what extent a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients including varying anatomies with \mbox{air pockets}, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a $\mathrm{DSC}$ value of $0.79 \pm 0.20$, a mean surface distance of $5.4 \pm 20.2mm$ and $95\%$ Hausdorff distance of $14.7 \pm 25.0mm$ for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via \url{https://github.com/yousefis/DenseUnet_Esophagus_Segmentation}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
210,046
1408.2053
Predicting the behavior of interacting humans by fusing data from multiple sources
Multi-fidelity methods combine inexpensive low-fidelity simulations with costly but high-fidelity simulations to produce an accurate model of a system of interest at minimal cost. They have proven useful in modeling physical systems and have been applied to engineering problems such as wing-design optimization. During human-in-the-loop experimentation, it has become increasingly common to use online platforms, like Mechanical Turk, to run low-fidelity experiments to gather human performance data in an efficient manner. One concern with these experiments is that the results obtained from the online environment generalize poorly to the actual domain of interest. To address this limitation, we extend traditional multi-fidelity approaches to allow us to combine fewer data points from high-fidelity human-in-the-loop experiments with plentiful but less accurate data from low-fidelity experiments to produce accurate models of how humans interact. We present both model-based and model-free methods, and summarize the predictive performance of each method under different conditions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
35,250
2409.17791
Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness
Recently, there has been significant interest in replacing the reward model in Reinforcement Learning with Human Feedback (RLHF) methods for Large Language Models (LLMs), such as Direct Preference Optimization (DPO) and its variants. These approaches commonly use a binary cross-entropy mechanism on pairwise samples, i.e., minimizing and maximizing the loss based on preferred or dis-preferred responses, respectively. However, while this training strategy omits the reward model, it also overlooks the varying preference degrees within different responses. We hypothesize that this is a key factor hindering LLMs from sufficiently understanding human preferences. To address this problem, we propose a novel Self-supervised Preference Optimization (SPO) framework, which constructs a self-supervised preference degree loss combined with the alignment loss, thereby helping LLMs improve their ability to understand the degree of preference. Extensive experiments are conducted on two widely used datasets of different tasks. The results demonstrate that SPO can be seamlessly integrated with existing preference optimization methods and significantly boost their performance to achieve state-of-the-art performance. We also conduct detailed analyses to offer comprehensive insights into SPO, which verifies its effectiveness. The code is available at https://github.com/lijian16/SPO.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
491,981
1811.10783
A True Random Number Generator Method Embedded in Wireless Communication Systems
With the increasing number of wireless devices, e.g., mobile or IoT terminals, cryptosystems are essential for secure communications. In this regard, random number generation is crucial because cryptosystems rely on it to work properly. This paper proposes a true random number generator (TRNG) method capable of working in wireless communication systems. By embedding a TRNG in such systems, no additional analog circuits are required and working conditions can be limited as long as wireless communication systems are functioning properly, making the TRNG method cost-effective. We also present some theoretical background and considerations. We next conduct experimental verification, which strongly supports the viability of the proposed method.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
114,583
2005.10283
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Recent studies have revealed a number of pathologies of neural machine translation (NMT) systems. Hypotheses explaining these mostly suggest there is something fundamentally wrong with NMT as a model or its training algorithm, maximum likelihood estimation (MLE). Most of this evidence was gathered using maximum a posteriori (MAP) decoding, a decision rule aimed at identifying the highest-scoring translation, i.e. the mode. We argue that the evidence corroborates the inadequacy of MAP decoding more than it casts doubt on the model and its training algorithm. In this work, we show that translation distributions do reproduce various statistics of the data well, but that beam search strays from such statistics. We show that some of the known pathologies and biases of NMT are due to MAP decoding and not to NMT's statistical assumptions nor MLE. In particular, we show that the most likely translations under the model accumulate so little probability mass that the mode can be considered essentially arbitrary. We therefore advocate for the use of decision rules that take into account the translation distribution holistically. We show that an approximation to minimum Bayes risk decoding gives competitive results, confirming that NMT models do capture important aspects of translation well in expectation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
178,132
1910.13689
ON-TRAC Consortium End-to-End Speech Translation Systems for the IWSLT 2019 Shared Task
This paper describes the ON-TRAC Consortium translation systems developed for the end-to-end model task of IWSLT Evaluation 2019 for the English-to-Portuguese language pair. ON-TRAC Consortium is composed of researchers from three French academic laboratories: LIA (Avignon Universit\'e), LIG (Universit\'e Grenoble Alpes), and LIUM (Le Mans Universit\'e). A single end-to-end model built as a neural encoder-decoder architecture with attention mechanism was used for two primary submissions corresponding to the two EN-PT evaluation sets: (1) TED (MuST-C) and (2) How2. In this paper, we notably investigate the impact of pooling heterogeneous corpora for training, the impact of target tokenization (characters or BPEs), and the impact of speech input segmentation, and we also compare our best end-to-end model (BLEU of 26.91 on MuST-C and 43.82 on How2 validation sets) to a pipeline (ASR+MT) approach.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
151,457
1911.11071
Learning to Optimize Variational Quantum Circuits to Solve Combinatorial Problems
Quantum computing is a computational paradigm with the potential to outperform classical methods for a variety of problems. Proposed recently, the Quantum Approximate Optimization Algorithm (QAOA) is considered as one of the leading candidates for demonstrating quantum advantage in the near term. QAOA is a variational hybrid quantum-classical algorithm for approximately solving combinatorial optimization problems. The quality of the solution obtained by QAOA for a given problem instance depends on the performance of the classical optimizer used to optimize the variational parameters. In this paper, we formulate the problem of finding optimal QAOA parameters as a learning task in which the knowledge gained from solving training instances can be leveraged to find high-quality solutions for unseen test instances. To this end, we develop two machine-learning-based approaches. Our first approach adopts a reinforcement learning (RL) framework to learn a policy network to optimize QAOA circuits. Our second approach adopts a kernel density estimation (KDE) technique to learn a generative model of optimal QAOA parameters. In both approaches, the training procedure is performed on small-sized problem instances that can be simulated on a classical computer; yet the learned RL policy and the generative model can be used to efficiently solve larger problems. Extensive simulations using the IBM Qiskit Aer quantum circuit simulator demonstrate that our proposed RL- and KDE-based approaches reduce the optimality gap by factors up to 30.15 when compared with other commonly used off-the-shelf optimizers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
155,017
2410.08164
Agent S: An Open Agentic Framework that Uses Computers Like a Human
We present Agent S, an open agentic framework that enables autonomous interaction with computers through a Graphical User Interface (GUI), aimed at transforming human-computer interaction by automating complex, multi-step tasks. Agent S aims to address three key challenges in automating computer tasks: acquiring domain-specific knowledge, planning over long task horizons, and handling dynamic, non-uniform interfaces. To this end, Agent S introduces experience-augmented hierarchical planning, which learns from external knowledge search and internal experience retrieval at multiple levels, facilitating efficient task planning and subtask execution. In addition, it employs an Agent-Computer Interface (ACI) to better elicit the reasoning and control capabilities of GUI agents based on Multimodal Large Language Models (MLLMs). Evaluation on the OSWorld benchmark shows that Agent S outperforms the baseline by 9.37% on success rate (an 83.6% relative improvement) and achieves a new state-of-the-art. Comprehensive analysis highlights the effectiveness of individual components and provides insights for future improvements. Furthermore, Agent S demonstrates broad generalizability to different operating systems on a newly-released WindowsAgentArena benchmark. Code available at https://github.com/simular-ai/Agent-S.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
496,981
2403.06366
Finite-Time Error Analysis of Soft Q-Learning: Switching System Approach
Soft Q-learning is a variation of Q-learning designed to solve entropy regularized Markov decision problems where an agent aims to maximize the entropy regularized value function. Despite its empirical success, there have been limited theoretical studies of soft Q-learning to date. This paper aims to offer a novel and unified finite-time, control-theoretic analysis of soft Q-learning algorithms. We focus on two types of soft Q-learning algorithms: one utilizing the log-sum-exp operator and the other employing the Boltzmann operator. By using dynamical switching system models, we derive novel finite-time error bounds for both soft Q-learning algorithms. We hope that our analysis will deepen the current understanding of soft Q-learning by establishing connections with switching system models and may even pave the way for new frameworks in the finite-time analysis of other reinforcement learning algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
436,415