Dataset schema (one record per arXiv paper):

  id                  string, 9-16 characters (arXiv identifier)
  title               string, 4-278 characters
  abstract            string, 3-4.08k characters
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool, 2 classes each (one flag per category label)
  __index_level_0__   int64, 0-541k

Each record below lists the paper's id, title, and abstract, followed by its active category labels and its __index_level_0__ value.
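As a quick illustration of how these columns fit together, the sketch below collapses the 18 boolean flag columns into a per-record label list. It assumes the dump has been loaded into a pandas DataFrame with the column names above; the file name in the usage comment is hypothetical.

```python
import pandas as pd

# The 18 boolean flag columns, in schema order.
LABEL_COLS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
              "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
              "cs.MA", "cs.NE", "cs.DB", "Other"]

def active_labels(row: pd.Series) -> list:
    """Collapse the 18 boolean flag columns into a list of active labels."""
    return [c for c in LABEL_COLS if bool(row[c])]

# Hypothetical usage (the file name is an assumption):
# df = pd.read_parquet("arxiv_multilabel.parquet")
# df["labels"] = df.apply(active_labels, axis=1)
# print(df[["id", "title", "labels"]].head())
```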
1911.03916
Intelligent Reflecting Surface with Discrete Phase Shifts: Channel Estimation and Passive Beamforming
In this paper, we consider an intelligent reflecting surface (IRS)-aided single-user system where an IRS with discrete phase shifts is deployed to assist the uplink communication. A practical transmission protocol is proposed to execute channel estimation and passive beamforming successively. To minimize the mean square error (MSE) of channel estimation, we first formulate an optimization problem for designing the IRS reflection pattern in the training phase under the constraints of unit-modulus, discrete phase, and full rank. This problem, however, is NP-hard and thus difficult to solve in general. As such, we propose a low-complexity yet efficient method to solve it sub-optimally, by constructing a near-orthogonal reflection pattern based on either discrete Fourier transform (DFT)-matrix quantization or Hadamard-matrix truncation. Based on the estimated channel, we then formulate an optimization problem to maximize the achievable rate by designing the discrete-phase passive beamforming at the IRS with the training overhead and channel estimation error taken into account. To reduce the computational complexity of exhaustive search, we further propose a low-complexity successive refinement algorithm with a properly-designed initialization to obtain a high-quality suboptimal solution. Numerical results are presented to show the significant rate improvement of our proposed IRS training reflection pattern and passive beamforming designs as compared to other benchmark schemes.
Labels: cs.IT, Other
__index_level_0__: 152,828
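The successive refinement algorithm mentioned in the abstract above lends itself to a compact illustration. The sketch below is a generic version for a single-antenna uplink, not the authors' exact design: it assumes the direct channel h_d and the cascaded per-element channels h have already been estimated, and it cycles through the IRS elements, trying every discrete phase level for one element while holding the others fixed.

```python
import numpy as np

def successive_refinement(h_d: complex, h: np.ndarray, bits: int = 2,
                          max_sweeps: int = 50) -> np.ndarray:
    """Refine each IRS element's discrete phase in turn, holding the rest fixed."""
    levels = np.exp(1j * 2 * np.pi * np.arange(2 ** bits) / 2 ** bits)
    theta = np.ones(len(h), dtype=complex)            # start from all-zero phases
    for _ in range(max_sweeps):
        changed = False
        for n in range(len(h)):
            rest = h_d + h @ theta - h[n] * theta[n]  # channel without element n
            best = levels[np.argmax(np.abs(rest + h[n] * levels) ** 2)]
            if best != theta[n]:
                theta[n], changed = best, True
        if not changed:       # a full sweep changed nothing: local optimum reached
            break
    return theta

# Hypothetical usage with random channels:
# rng = np.random.default_rng(0)
# h = (rng.normal(size=64) + 1j * rng.normal(size=64)) / np.sqrt(2)
# theta = successive_refinement(0.1 + 0.1j, h, bits=1)
```

Each per-element update can only increase the effective channel gain, so the sweep terminates at a local optimum; the properly designed initialization the abstract refers to would replace the all-zero starting phases used here.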
1907.09190
ELI5: Long Form Question Answering
We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum ``Explain Like I'm Five'' (ELI5) where an online community provides answers to questions which are comprehensible by five year olds. Compared to existing datasets, ELI5 comprises diverse questions requiring multi-sentence answers. We provide a large set of web documents to help answer the question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline. However, our best model is still far from human performance since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement.
Labels: cs.CL
__index_level_0__: 139,297
2111.14565
A new Sinkhorn algorithm with Deletion and Insertion operations
This technical report is devoted to the continuous estimation of an epsilon-assignment. Roughly speaking, an epsilon-assignment between two sets V1 and V2 may be understood as a bijective mapping between a subset of V1 and a subset of V2. The remaining elements of V1 (not included in this mapping) are mapped onto an epsilon pseudo-element of V2; we say that such elements are deleted. Conversely, the remaining elements of V2 correspond to the image of the epsilon pseudo-element of V1; we say that these elements are inserted. As a result, our method provides a result similar to that of the Sinkhorn algorithm, with the additional ability to reject some elements, which are either inserted or deleted. It thus naturally handles sets V1 and V2 of different sizes and decides mappings/insertions/deletions in a unified way. Our algorithms are iterative and differentiable and may thus be easily inserted within a backpropagation-based learning framework such as artificial neural networks.
Labels: cs.LG, cs.NE
__index_level_0__: 268,634
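One common way to realize the deletion/insertion behavior described above is to augment the cost matrix with an epsilon ("dustbin") row and column before running Sinkhorn-style normalization. The sketch below follows that generic recipe under assumed parameters; the paper's exact iteration may differ.

```python
import numpy as np

def sinkhorn_eps(cost: np.ndarray, eps_cost: float = 1.0,
                 tau: float = 0.1, iters: int = 200) -> np.ndarray:
    """Sinkhorn normalisation on a cost matrix augmented with an epsilon row/column."""
    n1, n2 = cost.shape
    C = np.full((n1 + 1, n2 + 1), eps_cost)   # epsilon pseudo-element on each side
    C[:n1, :n2] = cost
    K = np.exp(-C / tau)                      # Gibbs kernel
    for _ in range(iters):
        K[:n1] /= K[:n1].sum(axis=1, keepdims=True)        # real rows sum to 1
        K[:, :n2] /= K[:, :n2].sum(axis=0, keepdims=True)  # real columns sum to 1
    # K[i, n2]: mass of element i of V1 being deleted;
    # K[n1, j]: mass of element j of V2 being inserted.
    return K
```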
1812.10381
Decision Support System for Renal Transplantation
The burgeoning need for kidney transplantation mandates immediate attention. Mismatch of deceased donor-recipient kidneys leads to post-transplant death. To ensure an ideal kidney donor-recipient match and minimize post-transplant deaths, the paper develops a prediction model that identifies factors determining the probability of success of renal transplantation, that is, whether the kidney procured from the deceased donor can be transplanted or should be discarded. The paper conducts a study covering data for 584 imported kidneys collected from 12 transplant centers associated with an organ procurement organization located in New York City, NY. The prediction model yielding the best performance measures can be beneficial to the healthcare industry. Transplant centers and organ procurement organizations can take advantage of the prediction model to efficiently predict the outcome of kidney transplantation. Consequently, it will reduce the mortality rate caused by mismatching of donor-recipient kidneys during transplantation surgery.
Labels: cs.LG, cs.CY
__index_level_0__: 117,347
1506.03338
Neural Adaptive Sequential Monte Carlo
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Kullback-Leibler divergence between the true posterior and the proposal distribution. The method is very flexible, applicable to any parameterized proposal distribution and it supports online and batch variants. We use the new framework to adapt powerful proposal distributions with rich parameterizations based upon neural networks leading to Neural Adaptive Sequential Monte Carlo (NASMC). Experiments indicate that NASMC significantly improves inference in a non-linear state space model outperforming adaptive proposal methods including the Extended Kalman and Unscented Particle Filters. Experiments also indicate that improved inference translates into improved parameter learning when NASMC is used as a subroutine of Particle Marginal Metropolis Hastings. Finally we show that NASMC is able to train a latent variable recurrent neural network (LV-RNN) achieving results that compete with the state-of-the-art for polymorphic music modelling. NASMC can be seen as bridging the gap between adaptive SMC methods and the recent work in scalable, black-box variational inference.
Labels: cs.LG
__index_level_0__: 44,034
2107.07443
Multi-label Chaining with Imprecise Probabilities
We present two different strategies to extend the classical multi-label chaining approach to handle imprecise probability estimates. These estimates use convex sets of distributions (or credal sets) to describe our uncertainty, rather than precise point estimates. The main reasons for using such estimates are (1) to make cautious predictions (or no decision at all) when high uncertainty is detected in the chaining, and (2) to make better precise predictions by avoiding biases introduced by early decisions in the chaining. We adapt both strategies to the case of the naive credal classifier, showing that these adaptations are computationally efficient. Our experimental results on missing labels, which investigate how reliable the predictions of both approaches are, indicate that our approaches produce relevant cautiousness on those hard-to-predict instances where the precise models fail.
Labels: cs.AI, cs.LG
__index_level_0__: 246,423
1911.05934
Multi-Attribute Bayesian Optimization With Interactive Preference Learning
We consider black-box global optimization of time-consuming-to-evaluate functions on behalf of a decision-maker (DM) whose preferences must be learned. Each feasible design is associated with a time-consuming-to-evaluate vector of attributes and each vector of attributes is assigned a utility by the DM's utility function, which may be learned approximately using preferences expressed over pairs of attribute vectors. Past work has used a point estimate of this utility function as if it were error-free within single-objective optimization. However, utility estimation errors may yield a poor suggested design. Furthermore, this approach produces a single suggested "best" design, whereas DMs often prefer to choose from a menu. We propose a novel multi-attribute Bayesian optimization with preference learning approach. Our approach acknowledges the uncertainty in preference estimation and implicitly chooses designs to evaluate that are good not just for a single estimated utility function but a range of likely ones. The outcome of our approach is a menu of designs and evaluated attributes from which the DM makes a final selection. We demonstrate the value and flexibility of our approach in a variety of experiments.
Labels: cs.LG
__index_level_0__: 153,413
2303.11896
Quantifying the Safety of Trajectories using Peak-Minimizing Control
This work quantifies the safety of trajectories of a dynamical system by the perturbation intensity required to render a system unsafe (crash into the unsafe set). Computation of this measure of safety is posed as a peak-minimizing optimal control problem. Convergent lower bounds on the minimal peak value of controller effort are computed using polynomial optimization and the moment-Sum-of-Squares hierarchy. The crash-safety framework is extended towards data-driven safety analysis by measuring safety as the maximum amount of data corruption required to crash into the unsafe set.
Labels: cs.SY
__index_level_0__: 353,052
2502.11057
A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems
As autonomous systems become more ubiquitous in daily life, ensuring high performance with guaranteed safety is crucial. However, safety and performance could be competing objectives, which makes their co-optimization difficult. Learning-based methods, such as Constrained Reinforcement Learning (CRL), achieve strong performance but lack formal safety guarantees due to safety being enforced as soft constraints, limiting their use in safety-critical settings. Conversely, formal methods such as Hamilton-Jacobi (HJ) Reachability Analysis and Control Barrier Functions (CBFs) provide rigorous safety assurances but often neglect performance, resulting in overly conservative controllers. To bridge this gap, we formulate the co-optimization of safety and performance as a state-constrained optimal control problem, where performance objectives are encoded via a cost function and safety requirements are imposed as state constraints. We demonstrate that the resultant value function satisfies a Hamilton-Jacobi-Bellman (HJB) equation, which we approximate efficiently using a novel physics-informed machine learning framework. In addition, we introduce a conformal prediction-based verification strategy to quantify the learning errors, recovering a high-confidence safety value function, along with a probabilistic error bound on performance degradation. Through several case studies, we demonstrate the efficacy of the proposed framework in enabling scalable learning of safe and performant controllers for complex, high-dimensional autonomous systems.
Labels: cs.AI, cs.RO, cs.SY
__index_level_0__: 534,168
2410.21717
Generating Realistic Tabular Data with Large Language Models
While most generative models show strong results on image data generation, few have been developed for tabular data generation. Recently, owing to the success of large language models (LLMs) in diverse tasks, they have also been used for tabular data generation. However, these methods do not capture the correct correlation between the features and the target variable, hindering their application in downstream predictive tasks. To address this problem, we propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data. First, we propose a novel permutation strategy for the input data in the fine-tuning phase. Second, we propose a feature-conditional sampling approach to generate synthetic samples. Finally, we generate the labels by constructing prompts based on the generated samples to query our fine-tuned LLM. Our extensive experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks. It also produces highly realistic synthetic samples in terms of quality and diversity. More importantly, classifiers trained with our synthetic data can even compete with classifiers trained with the original data on half of the benchmark datasets, which is a significant achievement in tabular data generation.
Labels: cs.AI, cs.LG
__index_level_0__: 503,366
2012.13021
K-Means Kernel Classifier
We combine K-means clustering with the least-squares kernel classification method. K-means clustering is used to extract a set of representative vectors for each class. The least-squares kernel method uses these representative vectors as a training set for the classification task. We show that this combination of unsupervised and supervised learning algorithms performs very well, and we illustrate this approach using the MNIST dataset.
Labels: cs.LG
__index_level_0__: 213,077
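The two-stage pipeline in the abstract above is straightforward to sketch. The RBF kernel, the number of centroids per class, and the ridge term below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, k_per_class=10, gamma=0.05, reg=1e-6):
    classes = np.unique(y)
    centers, labels = [], []
    for c in classes:                                   # unsupervised step:
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(X[y == c])
        centers.append(km.cluster_centers_)             # per-class centroids
        labels += [c] * k_per_class
    Xr = np.vstack(centers)                             # representative vectors
    Y = (np.array(labels)[:, None] == classes[None, :]).astype(float)  # one-hot
    K = rbf(Xr, Xr, gamma)                              # supervised step:
    alpha = np.linalg.solve(K + reg * np.eye(len(Xr)), Y)  # regularised LS fit
    return Xr, alpha, classes

def predict(Xr, alpha, classes, X_test, gamma=0.05):
    return classes[(rbf(X_test, Xr, gamma) @ alpha).argmax(axis=1)]
```

Because only the centroids enter the kernel system, the linear solve is over a small matrix regardless of the original training-set size, which is the practical appeal of the combination.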
1404.2267
Transparallel mind: Classical computing with quantum power
Inspired by the extraordinary computing power promised by quantum computers, the quantum mind hypothesis postulated that quantum mechanical phenomena are the source of neuronal synchronization, which, in turn, might underlie consciousness. Here, I present an alternative inspired by a classical computing method with quantum power. This method relies on special distributed representations called hyperstrings. Hyperstrings are superpositions of up to an exponential number of strings, which -- by a single-processor classical computer -- can be evaluated in a transparallel fashion, that is, simultaneously as if only one string were concerned. Building on a neurally plausible model of human visual perceptual organization, in which hyperstrings are formal counterparts of transient neural assemblies, I postulate that synchronization in such assemblies is a manifestation of transparallel information processing. This accounts for the high combinatorial capacity and speed of human visual perceptual organization and strengthens ideas that self-organizing cognitive architecture bridges the gap between neurons and consciousness.
Labels: cs.AI
__index_level_0__: 32,196
2404.10259
Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
The widespread use of social media has led to a surge in popularity for automated methods of analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques due to the constant shifting of the focus. On the other hand, traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal overarching patterns that might not capture specific nuances. Consequently, a significant portion of research into social media discourse still depends on labor-intensive manual coding techniques and a human-in-the-loop approach, which are both time-consuming and costly. In this work, we study the problem of discovering arguments associated with a specific theme. We propose a generic LLMs-in-the-Loop strategy that leverages the advanced capabilities of Large Language Models (LLMs) to extract latent arguments from social media messaging. To demonstrate our approach, we apply our framework to contentious topics. We use two publicly available datasets: (1) the climate campaigns dataset of 14k Facebook ads with 25 themes and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads with 14 themes. Additionally, we design a downstream task as stance prediction by leveraging talking points in climate debates. Furthermore, we analyze demographic targeting and the adaptation of messaging based on real-world events.
Labels: cs.SI, cs.AI, cs.LG, cs.CL, cs.CY
__index_level_0__: 447,016
2207.01773
Approximating Discontinuous Nash Equilibrial Values of Two-Player General-Sum Differential Games
Finding Nash equilibrial policies for two-player differential games requires solving Hamilton-Jacobi-Isaacs (HJI) PDEs. Self-supervised learning has been used to approximate solutions of such PDEs while circumventing the curse of dimensionality. However, this method fails to learn discontinuous PDE solutions due to its sampling nature, leading to poor safety performance of the resulting controllers in robotics applications when player rewards are discontinuous. This paper investigates two potential solutions to this problem: a hybrid method that leverages both supervised Nash equilibria and the HJI PDE, and a value-hardening method where a sequence of HJIs is solved with a gradually hardening reward. We compare these solutions using the resulting generalization and safety performance in two vehicle interaction simulation studies with 5D and 9D state spaces, respectively. Results show that with informative supervision (e.g., collision and near-collision demonstrations) and the low cost of self-supervised learning, the hybrid method achieves better safety performance than the supervised, self-supervised, and value-hardening approaches on an equal computational budget. Value hardening fails to generalize in the higher-dimensional case without informative supervision. Lastly, we show that the neural activation function needs to be continuously differentiable for learning PDEs, and that its choice can be case-dependent.
Labels: cs.LG, cs.RO, Other
__index_level_0__: 306,286
1905.11979
Causal Confusion in Imitation Learning
Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive "causal misidentification" phenomenon: access to more information can yield worse performance. We investigate how this problem arises, and propose a solution to combat it through targeted interventions---either environment interaction or expert queries---to determine the correct causal model. We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and validate our solution against DAgger and other baselines and ablations.
Labels: cs.LG
__index_level_0__: 132,612
2310.03965
Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models
Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \textit{from scratch}. To address these issues, we propose \textbf{\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\% improvement of human preference in Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent Planning.
Labels: cs.AI, cs.CL
__index_level_0__: 397,490
1809.04938
LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts
We introduce the task of automatic live commenting. Live commenting, also called `video barrage', is an emerging feature on online video sites that allows real-time comments from viewers to fly across the screen like bullets or roll at the right side of the screen. The live comments are a mixture of opinions about the video and chit-chat with other commenters. Automatic live commenting requires AI agents to comprehend the videos and interact with human viewers who also make comments, so it is a good testbed of an AI agent's ability to deal with both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. Then, we introduce two neural models to generate live comments based on the visual and textual contexts, which achieve better performance than previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting where the model is asked to sort a set of candidate comments based on the log-likelihood score, and is evaluated on metrics such as mean reciprocal rank. Putting it all together, we demonstrate the first `LiveBot'.
Labels: cs.CL
__index_level_0__: 107,680
1909.08234
Value of Information in Probabilistic Logic Programs
In medical decision making, we have to choose among several expensive diagnostic tests such that the certainty about a patient's health is maximized while remaining within the bounds of resources like time and money. The expected increase in certainty in the patient's condition due to performing a test is called the value of information (VoI) for that test. In general, VoI relates to acquiring additional information to improve decision-making based on probabilistic reasoning in an uncertain system. This paper presents a framework for acquiring information based on VoI in uncertain systems modeled as Probabilistic Logic Programs (PLPs). Optimal decision-making in uncertain systems modeled as PLPs has already been studied before, but acquiring additional information to further improve the results of making the optimal decision has remained open in this context. We model decision-making in an uncertain system with a PLP and a set of top-level queries, with a set of utility measures over the distributions of these queries. The PLP is annotated with a set of atoms labeled as "observable"; in the medical diagnosis example, the observable atoms will be results of diagnostic tests. Each observable atom has an associated cost. This setting of optimally selecting observations based on VoI is more general than that considered by any prior work. Given a limited budget, optimally choosing observable atoms based on VoI is intractable in general. We give a greedy algorithm for constructing a "conditional plan" of observations: a schedule where the selection of what atom to observe next depends on earlier observations. We show that preempting the algorithm anytime before completion provides a usable result, that the result improves over time, and that, in the absence of a well-defined budget, it converges to the optimal solution.
Labels: cs.AI
__index_level_0__: 145,916
2209.01194
CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations
Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties. However, NeRFs often fail for large, unbounded scenes that are captured under very sparse views with the scene content concentrated far away from the camera, as is typical for field robotics applications. In particular, NeRF-style algorithms perform poorly: (1) when there are insufficient views with little pose diversity, (2) when scenes contain saturation and shadows, and (3) when finely sampling large unbounded scenes with fine structures becomes computationally intensive. This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes that are observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively. In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this paper demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.
Labels: cs.CV
__index_level_0__: 315,802
2403.04489
Optimal Denial-of-Service Attacks Against Status Updating
In this paper, we investigate denial-of-service attacks against status updating. The target system is modeled by a Markov chain and an unreliable wireless channel, and the performance of status updating in the target system is measured based on two metrics: age of information and age of incorrect information. Our objective is to devise optimal attack policies that strike a balance between the deterioration of the system's performance and the adversary's energy. We model the optimization problem as a Markov decision process and prove rigorously that the optimal jamming policy is a threshold-based policy under both metrics. In addition, we provide a low-complexity algorithm to obtain the optimal threshold value of the jamming policy. Our numerical results show that the networked system with the age-of-incorrect-information metric is less sensitive to jamming attacks than with the age-of-information metric.
Labels: cs.IT
__index_level_0__: 435,619
2210.00286
NeuroEvo: A Cloud-based Platform for Automated Design and Training of Neural Networks using Evolutionary and Particle Swarm Algorithms
Evolutionary algorithms (EAs) provide unique advantages for optimizing neural networks in complex search spaces. This paper introduces a new web platform, NeuroEvo (neuroevo.io), that allows users to interactively design and train neural network classifiers using evolutionary and particle swarm algorithms. The classification problem and training data are provided by the user and, upon completion of the training process, the best classifier is made available to download and implement in Python, Java, and JavaScript. NeuroEvo is a cloud-based application that leverages GPU parallelization to improve the speed with which the independent evolutionary steps, such as mutation, crossover, and fitness evaluation, are executed across the population. This paper outlines the training algorithms and opportunities for users to specify design decisions and hyperparameter settings. The algorithms described in this paper are also made available as a Python package, neuroevo (PyPI: https://pypi.org/project/neuroevo/).
Labels: cs.LG, cs.NE
__index_level_0__: 320,808
2004.12240
A Smartphone enabled Approach to Manage COVID-19 Lockdown and Economic Crisis
The emergence of the novel COVID-19 disease is causing an overload of health systems and a high mortality rate. The key priority is to contain the epidemic and reduce the infection rate. In this context, many countries are now in some degree of lockdown to ensure extreme social distancing of the entire population and hence slow down the epidemic spread. Further, authorities use case quarantine strategies and manual second/third contact tracing to contain the COVID-19 disease. However, manual contact tracing is a time-consuming and labor-intensive task which tremendously overloads public health systems. In this paper, we develop a smartphone-based approach to automatically and widely trace the contacts of confirmed COVID-19 cases. In particular, the contact-tracing approach creates a list of individuals in the vicinity and notifies contacts or officials of confirmed COVID-19 cases. This approach not only makes individuals aware that they are in proximity to an infected area, but also tracks the incidental contacts that the COVID-19 carrier might not recall. Thereafter, we develop a dashboard to provide a plan for government officials on how lockdown/mass quarantine can be safely lifted, and hence tackle the economic crisis. The dashboard is used to predict the lockdown level of an area based on collected positions and distance measurements of the registered users in the vicinity. The prediction model uses the K-means algorithm, an unsupervised machine learning technique, for lockdown management.
Labels: cs.HC, cs.SI, cs.CY
__index_level_0__: 174,178
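As a rough illustration of the K-means-based lockdown-level prediction mentioned above: the abstract does not specify the features, so the per-cluster share of confirmed cases and the density-based ranking below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def lockdown_levels(positions: np.ndarray, confirmed: np.ndarray, k: int = 5):
    """positions: (n, 2) user coordinates; confirmed: (n,) 0/1 case flags."""
    km = KMeans(n_clusters=k, n_init=10).fit(positions)
    # Share of confirmed cases in each spatial cluster (assumed feature).
    density = np.array([confirmed[km.labels_ == c].mean() for c in range(k)])
    ranks = density.argsort().argsort()   # rank clusters by case density
    return km, ranks                      # higher rank: stricter lockdown level
```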
1811.08560
Adjustable Real-time Style Transfer
Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying "optimum" for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.
Labels: cs.CV
__index_level_0__: 114,062
1806.01045
Topic Modelling of Empirical Text Corpora: Validity, Reliability, and Reproducibility in Comparison to Semantic Maps
Using the 6,638 case descriptions of societal impact submitted for evaluation in the Research Excellence Framework (REF 2014), we replicate the topic model (Latent Dirichlet Allocation or LDA) made in this context and compare the results with factor-analytic results using a traditional word-document matrix (Principal Component Analysis or PCA). Removing a small fraction of documents from the sample, for example, has on average a much larger impact on LDA than on PCA-based models, to the extent that the largest distortion in the case of PCA has less effect than the smallest distortion of LDA-based models. In terms of semantic coherence, however, LDA models outperform PCA-based models. The topic models inform us about the statistical properties of the document sets under study, but the results are statistical and should not be used for a semantic interpretation - for example, in grant selections and micro-decision making, or scholarly work - without follow-up using domain-specific semantic maps.
Labels: cs.CL
__index_level_0__: 99,469
cs/0607136
Competing with Markov prediction strategies
Assuming that the loss function is convex in the prediction, we construct a prediction strategy universal for the class of Markov prediction strategies, not necessarily continuous. Allowing randomization, we remove the requirement of convexity.
Labels: cs.LG
__index_level_0__: 539,616
2208.08938
Meta Sparse Principal Component Analysis
We study meta-learning for support (i.e., the set of non-zero entries) recovery in high-dimensional Principal Component Analysis. We reduce the sufficient sample complexity in a novel task with the information that is learned from auxiliary tasks. We assume each task to be a different random Principal Component (PC) matrix with a possibly different support and that the support union of the PC matrices is small. We then pool the data from all the tasks to execute an improper estimation of a single PC matrix by maximising the $l_1$-regularised predictive covariance, establishing that with high probability the true support union can be recovered provided a sufficient number of tasks $m$ and a sufficient number of samples $ O\left(\frac{\log(p)}{m}\right)$ for each task, for $p$-dimensional vectors. Then, for a novel task, we prove that the maximisation of the $l_1$-regularised predictive covariance with the additional constraint that the support is a subset of the estimated support union could reduce the sufficient sample complexity of successful support recovery to $O(\log |J|)$, where $J$ is the support union recovered from the auxiliary tasks. Typically, $|J|$ would be much less than $p$ for sparse matrices. Finally, we validate our results through numerical simulations.
Labels: cs.LG
__index_level_0__: 313,542
2309.13092
Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks
The variety and complexity of relations in multimedia data lead to Heterogeneous Information Networks (HINs). Capturing the semantics from such networks requires approaches capable of utilizing the full richness of the HINs. Existing methods for modeling HINs employ techniques originally designed for graph neural networks and HIN decomposition analysis, such as using manually predefined metapaths. In this paper, we introduce a novel prototype-enhanced hypergraph learning approach for node classification in HINs. Using hypergraphs instead of graphs, our method captures higher-order relationships among nodes and extracts semantic information without relying on metapaths. Our method leverages the power of prototypes to improve the robustness of the hypergraph learning process and creates the potential to provide human-interpretable insights into the underlying network structure. Extensive experiments on three real-world HINs demonstrate the effectiveness of our method.
Labels: cs.SI, cs.LG
__index_level_0__: 394,055
2101.02674
Waveform and Beamforming Design for Intelligent Reflecting Surface Aided Wireless Power Transfer: Single-User and Multi-User Solutions
In this paper, we study the waveform and passive beamforming design for intelligent reflecting surface (IRS)-aided wireless power transfer (WPT). Generalized multi-user and low-complexity single-user algorithms are derived based on an alternating optimization (AO) framework to maximize the weighted sum output DC current, subject to transmit power constraints and unit-modulus constraints on the passive beamforming phases. The input signal waveform and IRS passive beamforming phase shifts are jointly designed as a function of users' individual frequency-selective channel state information (CSI). The energy harvester nonlinearity is explored, and two IRS deployment schemes, namely frequency-selective IRS (FS-IRS) and frequency-flat IRS (FF-IRS), are modeled and analyzed. This paper highlights the fact that IRS can provide an extra passive beamforming gain in output DC power over conventional WPT designs and significantly influence the waveform design by leveraging the benefits of passive beamforming, frequency diversity, and energy harvester nonlinearity. Even though FF-IRS exhibits a lower output DC current than FS-IRS, it still achieves substantially increased DC power over conventional WPT designs. Performance evaluations confirm the significant benefits of a joint waveform and passive beamforming design accounting for the energy harvester nonlinearity to boost the performance of single-user and multi-user WPT systems.
Labels: cs.IT
__index_level_0__: 214,703
2011.13511
Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture
We present a Physics-Informed Neural Network (PINN) to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave. In particular, we solve the governing coupled system of differential equations -- including conductive heat transfer and resin cure kinetics -- by optimizing the parameters of a deep neural network (DNN) using a physics-based loss function. To account for the vastly different behaviour of thermal conduction and resin cure, we design a PINN consisting of two disconnected subnetworks, and develop a sequential training algorithm that mitigates instability present in traditional training methods. Further, we incorporate explicit discontinuities into the DNN at the composite-tool interface and enforce known physical behaviour directly in the loss function to improve the solution near the interface. We train the PINN with a technique that automatically adapts the weights on the loss terms corresponding to PDE, boundary, interface, and initial conditions. Finally, we demonstrate that one can include problem parameters as an input to the model -- resulting in a surrogate that provides real-time simulation for a range of problem settings -- and that one can use transfer learning to significantly reduce the training time for problem settings similar to that of an initial trained model. The performance of the proposed PINN is demonstrated in multiple scenarios with different material thicknesses and thermal boundary conditions.
Labels: cs.LG
__index_level_0__: 208,497
2403.11026
EfficientMorph: Parameter-Efficient Transformer-Based Architecture for 3D Image Registration
Transformers have emerged as the state-of-the-art architecture in medical image registration, outperforming convolutional neural networks (CNNs) by addressing their limited receptive fields and overcoming gradient instability in deeper models. Despite their success, transformer-based models require substantial resources for training, including data, memory, and computational power, which may restrict their applicability for end users with limited resources. In particular, existing transformer-based 3D image registration architectures face two critical gaps that challenge their efficiency and effectiveness. Firstly, although window-based attention mechanisms reduce the quadratic complexity of full attention by focusing on local regions, they often struggle to effectively integrate both local and global information. Secondly, the granularity of tokenization, a crucial factor in registration accuracy, presents a performance trade-off: smaller voxel-size tokens enhance detail capture but come with increased computational complexity, higher memory usage, and a greater risk of overfitting. We present EfficientMorph, a transformer-based architecture for unsupervised 3D image registration that balances local and global attention in 3D volumes through a plane-based attention mechanism and employs a Hi-Res tokenization strategy with merging operations, thus capturing finer details without compromising computational efficiency. Notably, EfficientMorph sets a new benchmark for performance on the OASIS dataset with 16-27x fewer parameters. https://github.com/MedVIC-Lab/Efficient_Morph_Registration
Labels: cs.CV
__index_level_0__: 438,485
1305.3596
Robust Streaming Erasure Codes based on Deterministic Channel Approximations
We study near optimal error correction codes for real-time communication. In our setup the encoder must operate on an incoming source stream in a sequential manner, and the decoder must reconstruct each source packet within a fixed playback deadline of $T$ packets. The underlying channel is a packet erasure channel that can introduce both burst and isolated losses. We first consider a class of channels that in any window of length ${T+1}$ introduce either a single erasure burst of a given maximum length $B,$ or a certain maximum number $N$ of isolated erasures. We demonstrate that for a fixed rate and delay, there exists a tradeoff between the achievable values of $B$ and $N,$ and propose a family of codes that is near optimal with respect to this tradeoff. We also consider another class of channels that introduce both a burst {\em and} an isolated loss in each window of interest and develop the associated streaming codes. All our constructions are based on a layered design and provide significant improvements over baseline codes in simulations over the Gilbert-Elliott channel.
Labels: cs.IT
__index_level_0__: 24,620
1208.0468
Probabilistic interconnection between interdependent networks promotes cooperation in the public goods game
Most previous works study the evolution of cooperation in a structured population, commonly by employing an isolated single network. However, realistic systems are composed of many interdependent networks coupled with each other, rather than an isolated single one. In this paper, we consider a system of two interacting networks of the same size, entangled with each other by the introduction of probabilistic interconnections. We introduce the public goods game into such a system, and study how the probabilistic interconnection influences the evolution of cooperation of the whole system and the coupling effect between the two layers of interdependent networks. Simulation results show that there exists an intermediate region of interconnection probability leading to the maximum cooperation level in the whole system. Interestingly, we find that at the optimal interconnection probability the fraction of internal links between cooperators in the two layers is maximal. Also, even if initially there are no cooperators in one layer of the interdependent networks, cooperation can still be promoted by probabilistic interconnection, and the cooperation levels in both layers can more easily reach an agreement at the intermediate interconnection probability. Our results may be helpful in understanding cooperative behavior in some realistic interdependent networks and thus highlight the importance of probabilistic interconnection in the evolution of cooperation.
Labels: cs.SI, Other
__index_level_0__: 17,918
2301.00802
Deep Clustering of Tabular Data by Weighted Gaussian Distribution Learning
Deep learning methods are primarily proposed for supervised learning of images or text with limited applications to clustering problems. In contrast, tabular data with heterogeneous features pose unique challenges in representation learning, where deep learning has yet to replace traditional machine learning. This paper addresses these challenges in developing one of the first deep clustering methods for tabular data: Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS). G-CEALS is an unsupervised deep clustering framework for learning the parameters of multivariate Gaussian cluster distributions by iteratively updating individual cluster weights. The G-CEALS method presents average rank orderings of 2.9(1.7) and 2.8(1.7) based on clustering accuracy and adjusted Rand index (ARI) scores on sixteen tabular data sets, respectively, and outperforms nine state-of-the-art clustering methods. G-CEALS substantially improves clustering performance compared to traditional K-means and GMM, which are still de facto methods for clustering tabular data. Similar computationally efficient and high-performing deep clustering frameworks are imperative to reap the myriad benefits of deep learning on tabular data over traditional machine learning.
Labels: cs.AI, cs.LG
__index_level_0__: 339,029
2502.08011
Training-Free Safe Denoisers for Safe Use of Diffusion Models
There is growing concern over the safety of powerful diffusion models (DMs), as they are often misused to produce inappropriate, not-safe-for-work (NSFW) content or generate copyrighted material or data of individuals who wish to be forgotten. Many existing methods tackle these issues by heavily relying on text-based negative prompts or extensively retraining DMs to eliminate certain features or samples. In this paper, we take a radically different approach, directly modifying the sampling trajectory by leveraging a negation set (e.g., unsafe images, copyrighted data, or datapoints needed to be excluded) to avoid specific regions of data distribution, without needing to retrain or fine-tune DMs. We formally derive the relationship between the expected denoised samples that are safe and those that are not safe, leading to our $\textit{safe}$ denoiser which ensures its final samples are away from the area to be negated. Inspired by the derivation, we develop a practical algorithm that successfully produces high-quality samples while avoiding negation areas of the data distribution in text-conditional, class-conditional, and unconditional image generation scenarios. These results hint at the great potential of our training-free safe denoiser for using DMs more safely.
Labels: cs.AI
__index_level_0__: 532,861
1806.08908
Zeta Distribution and Transfer Learning Problem
We explore the relations between the zeta distribution and algorithmic information theory via a new model of the transfer learning problem. The program distribution is approximated by a zeta distribution with parameter near $1$. We model the training sequence as a stochastic process. We analyze the upper temporal bound for learning a training sequence and its entropy rates, assuming an oracle for the transfer learning problem. We argue from empirical evidence that power-law models are suitable for natural processes. Four sequence models are proposed. The random typing model is a no-free-lunch setting in which transfer learning does not work. The zeta process independently samples programs from the zeta distribution. A model of common sub-programs inspired by genetics uses a database of sub-programs. An evolutionary zeta process samples mutations from the zeta distribution. The analysis of stochastic processes inspired by evolution suggests that AI may be feasible in nature, countering no-free-lunch sorts of arguments.
Labels: cs.AI
__index_level_0__: 101,248
2006.10225
On Sparsity in Overparametrised Shallow ReLU Networks
The analysis of neural network training beyond their linearization regime remains an outstanding open question, even in the simplest setup of a single hidden-layer. The limit of infinitely wide networks provides an appealing route forward through the mean-field perspective, but a key challenge is to bring learning guarantees back to the finite-neuron setting, where practical algorithms operate. Towards closing this gap, and focusing on shallow neural networks, in this work we study the ability of different regularisation strategies to capture solutions requiring only a finite amount of neurons, even on the infinitely wide regime. Specifically, we consider (i) a form of implicit regularisation obtained by injecting noise into training targets [Blanc et al.~19], and (ii) the variation-norm regularisation [Bach~17], compatible with the mean-field scaling. Under mild assumptions on the activation function (satisfied for instance with ReLUs), we establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation. We study the consequences of such property and describe the settings where one form of regularisation is favorable over the other.
Labels: cs.LG
__index_level_0__: 182,809
1304.1507
Deciding Consistency of Databases Containing Defeasible and Strict Information
We propose a norm of consistency for a mixed set of defeasible and strict sentences, based on a probabilistic semantics. This norm establishes a clear distinction between knowledge bases depicting exceptions and those containing outright contradictions. We then define a notion of entailment based also on probabilistic considerations and provide a characterization of the relation between consistency and entailment. We derive necessary and sufficient conditions for consistency, and provide a simple decision procedure for testing consistency and deciding whether a sentence is entailed by a database. Finally, it is shown that if all sentences are Horn clauses, consistency and entailment can be tested in polynomial time.
Labels: cs.AI
__index_level_0__: 23,540
2003.07492
AutoCogniSys: IoT Assisted Context-Aware Automatic Cognitive Health Assessment
Cognitive impairment has become epidemic in the older adult population. The recent advent of tiny wearable and ambient devices, a.k.a. the Internet of Things (IoT), provides ample platforms for continuous functional and cognitive health assessment of older adults. In this paper, we design, implement and evaluate AutoCogniSys, a context-aware automated cognitive health assessment system combining the sensing powers of wearable physiological (Electrodermal Activity, Photoplethysmography) and physical (Accelerometer, Object) sensors in conjunction with ambient sensors. We design appropriate signal processing and machine learning techniques, and develop an automatic cognitive health assessment system in a natural older adult living environment. We validate our approaches using two datasets: (i) naturalistic sensor data streams related to Activities of Daily Living and mental arousal of 22 older adults recruited in a retirement community center, individually living in their own apartments, using a customized inexpensive IoT system (IRB #HP-00064387), and (ii) a publicly available dataset for emotion detection. The performance of AutoCogniSys attests a maximum of 93% accuracy in assessing the cognitive health of older adults.
Labels: cs.HC, cs.AI, cs.LG
__index_level_0__: 168,445
2109.14573
Towards Flexible Blind JPEG Artifacts Removal
Training a single deep blind model to handle different quality factors for JPEG image artifacts removal has been attracting considerable attention due to its convenience for practical usage. However, existing deep blind methods usually directly reconstruct the image without predicting the quality factor, thus lacking the flexibility to control the output that non-blind methods have. To remedy this problem, in this paper we propose a flexible blind convolutional neural network, namely FBCNN, that predicts an adjustable quality factor to control the trade-off between artifacts removal and details preservation. Specifically, FBCNN decouples the quality factor from the JPEG image via a decoupler module and then embeds the predicted quality factor into the subsequent reconstructor module through a quality factor attention block for flexible control. Besides, we find existing methods are prone to fail on non-aligned double JPEG images even with only a one-pixel shift, and we thus propose a double JPEG degradation model to augment the training data. Extensive experiments on single JPEG images, more general double JPEG images, and real-world JPEG images demonstrate that our proposed FBCNN achieves favorable performance against state-of-the-art methods in terms of both quantitative metrics and visual quality.
Labels: cs.CV
__index_level_0__: 258,001
2002.02869
Differential Evolution with Reversible Linear Transformations
Differential evolution (DE) is a well-known type of evolutionary algorithm (EA). Similarly to other EA variants, it can suffer from small populations and lose diversity too quickly. This paper presents a new approach to mitigate this issue: we propose to generate new candidate solutions by utilizing reversible linear transformations applied to triplets of solutions from the population. In other words, the population is enlarged using newly generated individuals without evaluating their fitness. We assess our methods on three problems: (i) benchmark function optimization, (ii) discovering parameter values of the gene repressilator system, and (iii) learning neural networks. The empirical results indicate that the proposed approach outperforms vanilla DE and a version of DE that applies differential mutation three times on all testbeds.
Labels: cs.NE
__index_level_0__: 163,058
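The population enlargement described above can be sketched as follows. The triplet map below is one plausible invertible (reversible) affine instantiation consistent with the abstract; the paper's exact transformation may differ. Note that no fitness evaluations are performed when the new individuals are generated.

```python
import numpy as np

def enlarge_population(pop: np.ndarray, F: float = 0.5, seed: int = 0) -> np.ndarray:
    """Map disjoint random triplets of solutions through an invertible affine update."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pop))
    new = []
    for i in range(0, len(idx) - 2, 3):
        x1, x2, x3 = pop[idx[i]], pop[idx[i + 1]], pop[idx[i + 2]]
        y1 = x1 + F * (x2 - x3)       # each update is affine and, for generic F,
        y2 = x2 + F * (x3 - y1)       # invertible, so the original triplet is
        y3 = x3 + F * (y1 - y2)       # recoverable from (y1, y2, y3)
        new += [y1, y2, y3]
    return np.vstack([pop, np.asarray(new)]) if new else pop
```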
1502.01853
Generalized Inpainting Method for Hyperspectral Image Acquisition
A recently designed hyperspectral imaging device enables multiplexed acquisition of an entire data volume in a single snapshot thanks to monolithically-integrated spectral filters. Such an agile imaging technique comes at the cost of a reduced spatial resolution and the need for a demosaicing procedure on its interleaved data. In this work, we address both issues and propose an approach inspired by recent developments in compressed sensing and analysis sparse models. We formulate our superresolution and demosaicing task as a 3-D generalized inpainting problem. Interestingly, the target spatial resolution can be adjusted for mitigating the compression level of our sensing. The reconstruction procedure uses a fast greedy method called Pseudo-inverse IHT. We also show on simulations that a random arrangement of the spectral filters on the sensor is preferable to regular mosaic layout as it improves the quality of the reconstruction. The efficiency of our technique is demonstrated through numerical experiments on both synthetic and real data as acquired by the snapshot imager.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
39,973
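A minimal sketch of the iterative-hard-thresholding idea behind the reconstruction step above, for a toy masking operator; it assumes sparsity directly in the signal domain, whereas the paper uses an analysis-sparse model and a pseudo-inverse variant (Pseudo-inverse IHT).

```python
import numpy as np

def iht_inpaint(y: np.ndarray, mask: np.ndarray, s: int, n_iter: int = 100):
    """Naive IHT for y = mask * x with x assumed s-sparse.
    mask is a 0/1 array, so the gradient step below is also an exact
    data-consistency projection on the observed entries."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = x + mask * (y - mask * x)          # data-consistency step
        # Hard threshold: keep only the s largest-magnitude entries.
        idx = np.argsort(np.abs(x).ravel())[:-s]
        x.ravel()[idx] = 0.0
    return x
```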
1912.06353
Elastic registration based on compliance analysis and biomechanical graph matching
An automatic elastic registration method suited for vascularized organs is proposed. The vasculature in both the preoperative and intra-operative images is represented as a graph. A typical application of this method is the fusion of pre-operative information onto the organ during surgery, to compensate for the limited details provided by the intra-operative imaging modality (e.g. CBCT) and to cope with changes in the shape of the organ. Due to differences between the image modalities and organ deformation, each graph has a different topology and shape. The Adaptive Compliance Graph Matching (ACGM) method presented here does not require any manual initialization, handles intra-operative nonrigid deformations of up to 65 mm, and computes a complete displacement field over the organ from only the matched vasculature. ACGM improves on the previous Biomechanical Graph Matching (BGM) method because it uses an efficient biomechanical vascularized liver model to compute the organ's transformation and the compliance of the vessel bifurcations. This allows the best graph matches to be found efficiently with a novel compliance-based adaptive search. These contributions are evaluated on ten realistic synthetic and two real porcine automatically segmented datasets. ACGM obtains a better target registration error (TRE) than BGM, with an average TRE in the real datasets of 4.2 mm compared to 6.5 mm. It is also up to one order of magnitude faster, less dependent on the parameters used, and more robust to noise.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,332
1111.1941
Semantic-Driven e-Government: Application of Uschold and King Ontology Building Methodology for Semantic Ontology Models Development
Electronic government (e-government) has been one of the most active areas of ontology development during the past six years. In e-government, ontologies are being used to describe and specify e-government services (e-services) because they enable easy composition, matching, mapping and merging of various e-government services. More importantly, they also facilitate the semantic integration and interoperability of e-government services. However, it is still unclear in the current literature how an existing ontology building methodology can be applied to develop semantic ontology models in a government service domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic ontology models in a government service domain. Firstly, the Uschold and King methodology is presented, discussed and applied to build a government domain ontology. Secondly, the domain ontology is evaluated for semantic consistency using its semi-formal representation in Description Logic. Thirdly, an alignment of the domain ontology with the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) upper-level ontology is drawn to allow its wider visibility and facilitate its integration with existing metadata standards. Finally, the domain ontology is formally written in Web Ontology Language (OWL) to enable its automatic processing by computers. The study aims to provide direction for the application of existing ontology building methodologies in the Semantic Web development processes of e-government domain-specific ontology models, which would enable their repeatability in other e-government projects and strengthen the adoption of semantic technologies in e-government.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
12,955
2502.11308
ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation
With the growing popularity of Large Language Models (LLMs) and vector databases, private textual data is increasingly processed and stored as numerical embeddings. However, recent studies have proven that such embeddings are vulnerable to inversion attacks, where original text is reconstructed to reveal sensitive information. Previous research has largely assumed access to millions of sentences to train attack models, e.g., through data leakage or nearly unrestricted API access. With our method, a single data point is sufficient for a partially successful inversion attack. With as little as 1k data samples, performance reaches an optimum across a range of black-box encoders, without training on leaked data. We present a Few-shot Textual Embedding Inversion Attack using ALignment and GENeration (ALGEN), by aligning victim embeddings to the attack space and using a generative model to reconstruct text. We find that ALGEN attacks can be effectively transferred across domains and languages, revealing key information. We further examine a variety of defense mechanisms against ALGEN, and find that none are effective, highlighting the vulnerabilities posed by inversion attacks. By significantly lowering the cost of inversion and proving that embedding spaces can be aligned through one-step optimization, we establish a new textual embedding inversion paradigm with broader applications for embedding alignment in NLP.
false
false
false
false
true
false
false
false
true
false
false
false
true
false
false
false
false
false
534,298
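The "one-step optimization" alignment mentioned in the ALGEN abstract can be illustrated by a closed-form least-squares fit between a few paired embeddings; the variable names and the choice of plain least squares are our assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def align_embeddings(victim_emb: np.ndarray, attack_emb: np.ndarray):
    """One-step least-squares alignment: find W minimizing ||V W - A||_F,
    mapping victim embeddings V (n, d_v) into the attack model's embedding
    space A (n, d_a). With only a handful of paired samples (few-shot),
    this single solve suffices to transfer embeddings for decoding."""
    W, *_ = np.linalg.lstsq(victim_emb, attack_emb, rcond=None)
    return W

# Usage: aligned = new_victim_embeddings @ W, then decode `aligned`
# with a generative model to reconstruct text.
```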
2408.07484
GRFormer: Grouped Residual Self-Attention for Lightweight Single Image Super-Resolution
Previous works have shown that reducing parameter overhead and computations for transformer-based single image super-resolution (SISR) models (e.g., SwinIR) usually leads to a reduction of performance. In this paper, we present GRFormer, an efficient and lightweight method, which not only reduces the parameter overhead and computations, but also greatly improves performance. The core of GRFormer is Grouped Residual Self-Attention (GRSA), which is specifically oriented towards two fundamental components. Firstly, it introduces a novel grouped residual layer (GRL) to replace the Query, Key, Value (QKV) linear layer in self-attention, aimed at efficiently reducing parameter overhead, computations, and performance loss at the same time. Secondly, it integrates a compact Exponential-Space Relative Position Bias (ES-RPB) as a substitute for the original relative position bias to improve the ability to represent position information while further minimizing the parameter count. Extensive experimental results demonstrate that GRFormer outperforms state-of-the-art transformer-based methods for $\times$2, $\times$3 and $\times$4 SISR tasks, notably outperforming SOTA by a maximum PSNR of 0.23dB when trained on the DIV2K dataset, while reducing the number of parameters and MACs in the self-attention module alone by about \textbf{60\%} and \textbf{49\%}, respectively. We hope that our simple and effective method, which can easily be applied to SR models based on window-division self-attention, can serve as a useful tool for further research in image super-resolution. The code is available at \url{https://github.com/sisrformer/GRFormer}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
480,604
2204.00426
A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness
Existing models that achieve state-of-the-art (SOTA) performance on both clean and adversarially-perturbed images rely on convolution operations conditioned with feature-wise linear modulation (FiLM) layers. These layers require many new parameters and are hyperparameter sensitive. They significantly increase training time, memory cost, and potential latency, which can prove costly for resource-limited or real-time applications. In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, presents a unique weight-conditioned learning that requires no additional layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training. In particular, we add configurable scaled noise to the weight tensors that enables a trade-off between clean and adversarial performance. Extensive experiments show that FLOAT can yield SOTA performance, improving both clean and perturbed image classification by up to ~6% and ~10%, respectively. Moreover, real hardware measurements show that FLOAT can reduce the training time by up to 1.43x, with up to 1.47x fewer model parameters, in iso-hyperparameter settings compared to the FiLM-based alternatives. Additionally, to further improve memory efficiency, we introduce FLOAT sparse (FLOATS), a form of non-iterative model pruning, and provide a detailed empirical analysis of the three-way accuracy-robustness-complexity trade-off for this new class of pruned, conditionally trained models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
289,251
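A minimal sketch of the weight-conditioned idea above: add configurable scaled noise to a weight tensor, with a scalar knob trading clean accuracy for robustness. The exact noise parameterization and schedule are assumptions.

```python
import torch

def noisy_weight(w: torch.Tensor, alpha: float, noise_sd: float = 0.1):
    """FLOAT-style conditioning sketch: perturb weights with scaled noise.
    alpha = 0 recovers clean behaviour; larger alpha pushes the trade-off
    toward adversarial robustness at some cost in clean accuracy."""
    return w + alpha * noise_sd * torch.randn_like(w)
```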
1811.04441
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al. to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and is scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embeddings by successfully utilizing the graph's connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that combines the benefits of GCN and ConvE. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on the standard FB15k-237 and WN18RR datasets, where it gives about a 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
113,090
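A compact sketch of the weighted GCN aggregation described for SACN's encoder, where each relation type contributes through a learnable scalar weight; shapes and the choice of activation are assumptions.

```python
import numpy as np

def wgcn_layer(H, adjacencies, rel_weights, W):
    """One weighted-GCN step in the spirit of SACN's WGCN encoder:
    H' = sigma( sum_t alpha_t * A_t @ H @ W ), where A_t is the adjacency
    matrix of relation type t and alpha_t its learnable scalar weight."""
    out = sum(a * (A @ H @ W) for A, a in zip(adjacencies, rel_weights))
    return np.tanh(out)
```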
2404.08188
Fundamental Limits of Communication-Assisted Sensing in ISAC Systems
In this paper, we introduce a novel communication-assisted sensing (CAS) framework that explores the potential coordination gains offered by the integrated sensing and communication (ISAC) technique. The CAS system endows users with beyond-line-of-sight sensing capabilities, supported by a dual-functional base station that enables simultaneous sensing and communication. To delve into the system's fundamental limits, we characterize the information-theoretic framework of the CAS system in terms of rate-distortion theory. We reveal the achievable overall distortion between the target's state and the reconstructions at the end-user, referred to as the sensing quality of service, within a special case where the distortion metric is separable for the sensing and communication processes. As a case study, we employ a typical application to demonstrate distortion minimization under the ISAC signaling strategy, showcasing the potential of CAS in enhancing sensing capabilities.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
446,145
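For reference, the classical single-letter rate-distortion function that such an analysis builds on (the standard definition from rate-distortion theory, not the paper's CAS-specific characterization):

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X})
```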
2305.19836
Inverse-design of nonlinear mechanical metamaterials via video denoising diffusion models
The accelerated inverse design of complex material properties - such as identifying a material with a given stress-strain response over a nonlinear deformation path - holds great potential for addressing challenges from soft robotics to biomedical implants and impact mitigation. While machine learning models have provided such inverse mappings, they are typically restricted to linear target properties such as stiffness. To tailor the nonlinear response, we here show that video diffusion generative models trained on full-field data of periodic stochastic cellular structures can successfully predict and tune their nonlinear deformation and stress response under compression in the large-strain regime, including buckling and contact. Unlike commonly encountered black-box models, our framework intrinsically provides an estimate of the expected deformation path, including the full-field internal stress distribution, closely agreeing with finite element simulations. This work thus has the potential to simplify and accelerate the identification of materials with complex target performance.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
369,701
2407.07829
Disentangled Representation Learning with the Gromov-Monge Gap
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. Although remarkably challenging to solve in theory, disentanglement is often achieved in practice through prior matching. Furthermore, recent works have shown that prior matching approaches can be enhanced by leveraging geometrical considerations, e.g., by learning representations that preserve geometric features of the data, such as distances or angles between points. However, matching the prior while preserving geometric features is challenging, as a mapping that fully preserves these features while aligning the data distribution with the prior does not exist in general. To address these challenges, we introduce a novel approach to disentangled representation learning based on quadratic optimal transport. We formulate the problem using Gromov-Monge maps that transport one distribution onto another with minimal distortion of predefined geometric features, preserving them as much as can be achieved. To compute such maps, we propose the Gromov-Monge-Gap (GMG), a regularizer quantifying whether a map moves a reference distribution with minimal geometry distortion. We demonstrate the effectiveness of our approach for disentanglement across four standard benchmarks, outperforming other methods leveraging geometric considerations.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
471,919
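One common way to write the Gromov-Monge problem that the GMG regularizer builds on: among maps transporting $\mu$ onto $\nu$, find one that minimally distorts a pairwise geometric feature (our notation; the paper's precise objective may differ):

```latex
\inf_{T \,:\, T_{\#}\mu=\nu}\;\iint \bigl|\, c_X(x,x') - c_Y\bigl(T(x),T(x')\bigr) \bigr|^2 \, d\mu(x)\, d\mu(x')
```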
1911.08265
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
154,145
2211.09791
MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors
In this paper, we propose MOTRv2, a simple yet effective pipeline to bootstrap end-to-end multi-object tracking with a pretrained object detector. Existing end-to-end methods, MOTR and TrackFormer, are inferior to their tracking-by-detection counterparts mainly due to their poor detection performance. We aim to improve MOTR by elegantly incorporating an extra object detector. We first adopt the anchor formulation of queries and then use an extra object detector to generate proposals as anchors, providing a detection prior for MOTR. This simple modification greatly eases the conflict between the jointly learned detection and association tasks in MOTR. MOTRv2 keeps the query propagation feature and scales well on large-scale benchmarks. MOTRv2 ranked first (73.4% HOTA on DanceTrack) in the 1st Multiple People Tracking in Group Dance Challenge. Moreover, MOTRv2 reaches state-of-the-art performance on the BDD100K dataset. We hope this simple and effective pipeline can provide some new insights to the end-to-end MOT community. Code is available at \url{https://github.com/megvii-research/MOTRv2}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,102
1605.00441
Gaussian States Minimize the Output Entropy of the One-Mode Quantum Attenuator
We prove that Gaussian thermal input states minimize the output von Neumann entropy of the one-mode Gaussian quantum-limited attenuator for fixed input entropy. The Gaussian quantum-limited attenuator models the attenuation of an electromagnetic signal in the quantum regime. The Shannon entropy of an attenuated real-valued classical signal is a simple function of the entropy of the original signal. A striking consequence of energy quantization is that the output von Neumann entropy of the quantum-limited attenuator is no longer a function of the input entropy alone. The proof starts from the majorization result of De Palma et al., IEEE Trans. Inf. Theory 62, 2895 (2016), and is based on a new isoperimetric inequality. Our result implies that geometric input probability distributions minimize the output Shannon entropy of the thinning for fixed input entropy. Moreover, our result opens the way to the multimode generalization, which would permit determining both the triple trade-off region of the Gaussian quantum-limited attenuator and the classical capacity region of the Gaussian degraded quantum broadcast channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,343
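The classical fact alluded to in the abstract, that attenuating a real-valued signal shifts its differential entropy by a constant (so the output entropy is a simple function of the input entropy), is

```latex
h\!\left(\sqrt{\eta}\,X\right) \;=\; h(X) + \tfrac{1}{2}\log \eta , \qquad 0<\eta\le 1 ,
```

and it is exactly this property that fails for the quantum-limited attenuator once energy is quantized.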
2408.15210
Data-enabled Predictive Repetitive Control
Many systems are subject to periodic disturbances and exhibit repetitive behaviour. Model-based repetitive control employs knowledge of such periodicity to attenuate periodic disturbances and has seen a wide range of successful industrial implementations. The aim of this paper is to develop a data-driven repetitive control method. In the developed framework, linear periodically time-varying (LPTV) behaviour is lifted to linear time-invariant (LTI) behaviour. Periodic disturbance mitigation is enabled by developing an extension of Willems' fundamental lemma for systems with exogenous disturbances. The resulting Data-enabled Predictive Repetitive Control (DeePRC) technique accounts for periodic system behaviour to perform attenuation of a periodic disturbance. Simulations demonstrate the ability of DeePRC to effectively mitigate periodic disturbances in the presence of noise.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
483,839
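A small sketch of the block-Hankel construction at the heart of fundamental-lemma-based methods like the one above: for persistently exciting data, its columns span the length-L trajectories of the underlying LTI system, so data-driven predictive schemes build their predictors from this matrix. The function below is generic, not the paper's DeePRC-specific lifting.

```python
import numpy as np

def block_hankel(w: np.ndarray, L: int) -> np.ndarray:
    """Block Hankel matrix of depth L from a trajectory w of shape (T, m);
    column i stacks the length-L window w[i], ..., w[i+L-1]."""
    T, m = w.shape
    cols = T - L + 1
    H = np.zeros((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].ravel()
    return H
```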
2201.05273
Pretrained Language Models for Text Generation: A Survey
Text generation aims to produce plausible and readable text in a human language from input data. The resurgence of deep learning has greatly advanced this field, in particular with the help of neural generation models based on pre-trained language models (PLMs). Text generation based on PLMs is viewed as a promising approach in both academia and industry. In this paper, we provide a survey on the utilization of PLMs in text generation. We begin by introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations preserving input semantics which can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and to ensure that the generated texts satisfy special text properties. Then, we show the major challenges that arise in these aspects, as well as possible solutions for them. We also include a summary of various useful resources and typical text generation applications based on PLMs. Finally, we highlight the future research directions which will further improve these PLMs for text generation. This comprehensive survey is intended to help researchers interested in text generation problems to learn the core concepts, the main techniques and the latest developments in this area based on PLMs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
275,338
1510.02947
Concentration of Measure Inequalities and Their Communication and Information-Theoretic Applications
During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communications and coding theory, computer science, and learning theory. One common theme which emerges in these fields is probabilistic stability: complicated, nonlinear functions of a large number of independent or weakly dependent random variables often tend to concentrate sharply around their expected values. Information theory plays a key role in the derivation of concentration inequalities. Indeed, both the entropy method and the approach based on transportation-cost inequalities are two major information-theoretic paths toward proving concentration. This brief survey is based on a recent monograph of the authors in the Foundations and Trends in Communications and Information Theory (available online at http://arxiv.org/pdf/1212.4663v8.pdf), and a tutorial given by the authors at ISIT 2015. It introduces information theorists to three main techniques for deriving concentration inequalities: the martingale method, the entropy method, and the transportation-cost inequalities. Some applications in information theory, communications, and coding theory are used to illustrate the main ideas.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,784
1701.07544
Spherical Planetary Robot for Rugged Terrain Traversal
Wheeled planetary rovers such as the Mars Exploration Rovers (MERs) and Mars Science Laboratory (MSL) have provided unprecedented, detailed images of the Mars surface. However, these rovers are large and costly, as they need to carry sophisticated instruments and science laboratories. We propose the development of low-cost planetary rovers that are the size and shape of cantaloupes and that can be deployed from a larger rover. The rover, named SphereX, has a mass of 2 kg, is spherical and holonomic, and contains a hopping mechanism to jump over rugged terrain. A small low-cost rover complements a larger rover, particularly to traverse rugged terrain or roll down a canyon, cliff or crater to obtain images and science data. While it may be a one-way journey for these small robots, they could be used tactically to obtain high-reward science data. The robot is equipped with a pair of stereo cameras to perform visual navigation and has room for a science payload. In this paper, we analyze the design and development of a laboratory prototype. The results show a promising pathway towards the development of a field system.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
67,312
2405.03659
A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose
Novel view synthesis from a sparse set of input images is a challenging problem of great practical interest, especially when camera poses are absent or inaccurate. Direct optimization of camera poses and usage of estimated depths in neural radiance field algorithms usually do not produce good results because of the coupling between poses and depths, and inaccuracies in monocular depth estimation. In this paper, we leverage the recent 3D Gaussian splatting method to develop a novel construct-and-optimize method for sparse view synthesis without camera poses. Specifically, we construct a solution progressively by using monocular depth and projecting pixels back into the 3D world. During construction, we optimize the solution by detecting 2D correspondences between training views and the corresponding rendered images. We develop a unified differentiable pipeline for camera registration and adjustment of both camera poses and depths, followed by back-projection. We also introduce a novel notion of an expected surface in Gaussian splatting, which is critical to our optimization. These steps enable a coarse solution, which can then be low-pass filtered and refined using standard optimization methods. We demonstrate results on the Tanks and Temples and Static Hikes datasets with as few as three widely-spaced views, showing significantly better quality than competing methods, including those with approximate camera pose information. Moreover, our results improve with more views and outperform previous InstantNGP and Gaussian Splatting algorithms even when using half the dataset. Project page: https://raymondjiangkw.github.io/cogs.github.io/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
452,262
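"Projecting pixels back into the 3D world" from monocular depth, as in the construction step above, is standard pinhole back-projection; a minimal sketch follows (camera at the origin, intrinsics K assumed known, world-frame alignment omitted).

```python
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Unproject a depth map to 3D points: X = d * K^{-1} [u, v, 1]^T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix             # (3, h*w) viewing rays
    return (rays * depth.reshape(1, -1)).T    # (h*w, 3) 3D points
```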
cs/0507025
Comparison of Resampling Schemes for Particle Filtering
This contribution is devoted to the comparison of various resampling approaches that have been proposed in the literature on particle filtering. It is first shown using simple arguments that the so-called residual and stratified methods do yield an improvement over the basic multinomial resampling approach. A simple counter-example showing that this property does not hold true for systematic resampling is given. Finally, some results on the large-sample behavior of the simple bootstrap filter algorithm are given. In particular, a central limit theorem is established for the case where resampling is performed using the residual approach.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
538,820
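For concreteness, minimal implementations of two of the compared schemes: residual resampling is one of the variants shown to improve on basic multinomial resampling, while systematic resampling is the scheme for which the paper gives a counter-example to that improvement property.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw, evenly spaced thresholds."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def residual_resample(weights, rng=None):
    """Residual resampling: floor(n*w_i) deterministic copies, then a
    multinomial draw on the normalized residual weights."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(weights)
    counts = np.floor(n * weights).astype(int)
    residual = n * weights - counts
    n_left = n - counts.sum()
    if n_left > 0:
        residual /= residual.sum()
        extra = rng.choice(n, size=n_left, p=residual)
        counts += np.bincount(extra, minlength=n)
    return np.repeat(np.arange(n), counts)   # resampled particle indices
```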
2009.00577
A Short Review on Data Modelling for Vector Fields
Machine learning methods based on statistical principles have proven highly successful in dealing with a wide variety of data analysis and analytics tasks. Traditional data models are mostly concerned with independent, identically distributed data. The recent success of end-to-end modelling schemes using deep neural networks equipped with effective structures such as convolutional layers or skip connections allows the extension to more sophisticated and structured practical data, such as natural language, images, videos, etc. On the application side, vector fields are an extremely useful type of data in the empirical sciences, as well as in signal processing, e.g. non-parametric transformations of 3D point clouds using 3D vector fields, the modelling of fluid flow in earth science, and the modelling of physical fields. This review article is dedicated to recent computational tools for vector fields, including vector data representations, predictive models of spatial data, as well as applications in computer vision, signal processing, and the empirical sciences.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
194,079
2202.09679
Simple Genetic Operators are Universal Approximators of Probability Distributions (and other Advantages of Expressive Encodings)
This paper characterizes the inherent power of evolutionary algorithms. This power depends on the computational properties of the genetic encoding. With some encodings, two parents recombined with a simple crossover operator can sample from an arbitrary distribution of child phenotypes. Such encodings are termed \emph{expressive encodings} in this paper. Universal function approximators, including popular evolutionary substrates of genetic programming and neural networks, can be used to construct expressive encodings. Remarkably, this approach need not be applied only to domains where the phenotype is a function: Expressivity can be achieved even when optimizing static structures, such as binary vectors. Such simpler settings make it possible to characterize expressive encodings theoretically: Across a variety of test problems, expressive encodings are shown to achieve up to super-exponential convergence speed-ups over the standard direct encoding. The conclusion is that, across evolutionary computation areas as diverse as genetic programming, neuroevolution, genetic algorithms, and theory, expressive encodings can be a key to understanding and realizing the full power of evolution.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
281,278
1301.0957
On Large Scale Distributed Compression and Dispersive Information Routing for Networks
This paper considers the problem of distributed source coding for a large network. A major obstacle that poses an existential threat to practical deployment of conventional approaches to distributed coding is the exponential growth of the decoder complexity with the number of sources and the encoding rates. This growth in complexity renders many traditional approaches impractical even for moderately sized networks. In this paper, we propose a new decoding paradigm for large scale distributed compression wherein the decoder complexity is explicitly controlled during the design. Central to our approach is a module called the "bit-subset selector" whose role is to judiciously extract an appropriate subset of the received bits for decoding per individual source. We propose a practical design strategy, based on deterministic annealing (DA) for the joint design of the system components, that enables direct optimization of the decoder complexity-distortion trade-off, and thereby the desired scalability. We also point out the direct connections between the problem of large scale distributed compression and a related problem in sensor networks, namely, dispersive information routing of correlated sources. This allows us to extend the design principles proposed in the context of large scale distributed compression to design efficient routers for minimum cost communication of correlated sources across a network. Experiments on both real and synthetic data-sets provide evidence for substantial gains over conventional approaches.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
20,821
2405.07482
Towards Marginal Fairness Sliced Wasserstein Barycenter
The sliced Wasserstein barycenter (SWB) is a widely acknowledged method for efficiently generalizing the averaging operation within probability measure spaces. However, achieving marginal fairness SWB, i.e., ensuring approximately equal distances from the barycenter to the marginals, remains unexplored. The uniformly weighted SWB is not necessarily the optimal choice to obtain the desired marginal fairness barycenter, due to the heterogeneous structure of the marginals and the non-optimality of the optimization. As the first attempt to tackle the problem, we define the marginal fairness sliced Wasserstein barycenter (MFSWB) as a constrained SWB problem. Due to the computational disadvantages of the formal definition, we propose two hyperparameter-free and computationally tractable surrogate MFSWB problems that implicitly minimize the distances to the marginals and encourage marginal fairness at the same time. To further improve efficiency, we perform slicing distribution selection and obtain a third surrogate definition by introducing a new slicing distribution that focuses more on marginally unfair projecting directions. We discuss the relationships among the three proposed problems and their connection to the sliced multi-marginal Wasserstein distance. Finally, we conduct experiments on 3D point-cloud averaging, color harmonization, and the training of a sliced Wasserstein autoencoder with class-fairness representation to show the favorable performance of the proposed surrogate MFSWB problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
453,732
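The basic sliced-Wasserstein computation underlying any SWB-style method: project onto random 1-D directions, sort, and compare quantiles. A minimal Monte Carlo sketch for equally sized point sets, with a uniform slicing distribution assumed (unlike the selected slicing distribution discussed above):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, rng=None):
    """Monte Carlo sliced 2-Wasserstein distance between two point sets of
    equal size: 1-D optimal transport on each random projection reduces to
    matching sorted projections."""
    rng = rng if rng is not None else np.random.default_rng()
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    xp = np.sort(X @ theta.T, axis=0)   # (n, n_proj) sorted projections
    yp = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(np.mean((xp - yp) ** 2))
```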
2206.00787
On the Generalization of Neural Combinatorial Optimization Heuristics
Neural Combinatorial Optimization approaches have recently leveraged the expressiveness and flexibility of deep neural networks to learn efficient heuristics for hard Combinatorial Optimization (CO) problems. However, most of the current methods lack generalization: for a given CO problem, heuristics which are trained on instances with certain characteristics underperform when tested on instances with different characteristics. While some previous works have focused on varying the training instances' properties, we postulate that a one-size-fits-all model is out of reach. Instead, we formalize solving a CO problem over a given instance distribution as a separate learning task and investigate meta-learning techniques to learn a model on a variety of tasks, in order to optimize its capacity to adapt to new tasks. Through extensive experiments on two CO problems, using both synthetic and realistic instances, we show that our proposed meta-learning approach significantly improves the generalization of two state-of-the-art models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
300,250
2312.06198
Optimized View and Geometry Distillation from Multi-view Diffuser
Generating multi-view images from a single input view using image-conditioned diffusion models is a recent advancement and has shown considerable potential. However, issues such as the lack of consistency in synthesized views and over-smoothing in extracted geometry persist. Previous methods integrate multi-view consistency modules or impose additional supervision to enhance view consistency, while compromising on the flexibility of camera positioning and limiting the versatility of view synthesis. In this study, we consider the radiance field optimized during geometry extraction as a more rigid consistency prior, compared to the volume and ray aggregation used in previous works. We further identify and rectify a critical bias in the traditional radiance field optimization process through score distillation from a multi-view diffuser. We introduce an Unbiased Score Distillation (USD) that utilizes unconditioned noises from a 2D diffusion model, greatly refining the radiance field fidelity. We leverage the rendered views from the optimized radiance field as the basis and develop a two-step specialization process of a 2D diffusion model, which is adept at conducting object-specific denoising and generating high-quality multi-view images. Finally, we recover faithful geometry and texture directly from the refined multi-view images. Empirical evaluations demonstrate that our optimized geometry and view distillation technique generates comparable results to the state-of-the-art models trained on extensive datasets, all while maintaining freedom in camera positioning. Please see our project page at https://youjiazhang.github.io/USD/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
414,419
2311.01882
Indicative Summarization of Long Discussions
Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one's own arguments, but they may also gather a broad cross-section of others' arguments. However, the resulting long discussions are difficult to overview. This paper presents a novel unsupervised approach that uses large language models (LLMs) to generate indicative summaries for long discussions that essentially serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames, resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19 LLMs for generative cluster labeling and frame classification. To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called Discussion Explorer: it shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
405,214
2405.02183
Metalearners for Ranking Treatment Effects
Efficiently allocating treatments with a budget constraint constitutes an important challenge across various domains. In marketing, for example, the use of promotions to target potential customers and boost conversions is limited by the available budget. While much research focuses on estimating causal effects, there is relatively limited work on learning to allocate treatments while considering the operational context. Existing methods for uplift modeling or causal inference primarily estimate treatment effects, without considering how this relates to a profit maximizing allocation policy that respects budget constraints. The potential downside of using these methods is that the resulting predictive model is not aligned with the operational context. Therefore, prediction errors are propagated to the optimization of the budget allocation problem, subsequently leading to a suboptimal allocation policy. We propose an alternative approach based on learning to rank. Our proposed methodology directly learns an allocation policy by prioritizing instances in terms of their incremental profit. We propose an efficient sampling procedure for the optimization of the ranking model to scale our methodology to large-scale data sets. Theoretically, we show how learning to rank can maximize the area under a policy's incremental profit curve. Empirically, we validate our methodology and show its effectiveness in practice through a series of experiments on both synthetic and real-world data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
451,658
2405.15100
Mobile Robot Sensory Coverage in 2-D Environments: An Optimization Approach with Efficiency Bounds
This paper considers three related mobile robot multi-target sensory coverage and inspection planning problems in 2-D environments. In the first problem, a mobile robot must find the shortest path to observe multiple targets with a limited range sensor in an obstacle free environment. In the second problem, the mobile robot must efficiently observe multiple targets while taking advantage of multi-target views in an obstacle free environment. The third problem considers multi-target sensory coverage in the presence of obstacles that obstruct sensor views of the targets. We show how all three problems can be formulated in a MINLP optimization framework. Because exact solutions to these problems are NP-hard, we introduce polynomial time approximation algorithms for each problem. These algorithms combine polynomial-time methods to approximate the optimal target sensing order, combined with efficient convex optimization methods that incorporate the constraints posed by the robot sensor footprint and obstacles in the environment. Importantly, we develop bounds that limit the gap between the exact and approximate solutions. Algorithms for all problems are fully implemented and illustrated with examples. Beyond the utility of our algorithms, the bounds derived in the paper contribute to the theory of optimal coverage planning algorithms.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
456,743
2104.00303
MeanShift++: Extremely Fast Mode-Seeking With Applications to Segmentation and Object Tracking
MeanShift is a popular mode-seeking clustering algorithm used in a wide range of applications in machine learning. However, it is known to be prohibitively slow, with quadratic runtime per iteration. We propose MeanShift++, an extremely fast mode-seeking algorithm based on MeanShift that uses a grid-based approach to speed up the mean shift step, replacing the computationally expensive neighbor search with a density-weighted mean of adjacent grid cells. In addition, we show that this grid-based technique for density estimation comes with theoretical guarantees. The runtime is linear in the number of points and exponential in dimension, which makes MeanShift++ ideal for low-dimensional applications such as image segmentation and object tracking. We provide extensive experimental analysis showing that MeanShift++ can be more than 10,000x faster than MeanShift, with competitive clustering results on benchmark datasets and nearly identical image segmentations to MeanShift. Finally, we show promising results for object tracking.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
227,950
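A minimal sketch of the grid-based update described above: snap points to cells of side h, then move each point to the density-weighted mean over its own and adjacent cells; details such as the grid side and the iteration schedule are assumptions. The 3^d adjacent-cell loop is what makes the runtime exponential in dimension but linear in the number of points.

```python
import numpy as np
from collections import defaultdict

def grid_meanshift_step(points: np.ndarray, h: float) -> np.ndarray:
    """One MeanShift++-style update without any pairwise neighbor search."""
    d = points.shape[1]
    cells = defaultdict(lambda: [0, np.zeros(d)])   # cell -> [count, sum]
    for p in points:
        key = tuple((p // h).astype(int))
        cells[key][0] += 1
        cells[key][1] += p
    offsets = np.array(np.meshgrid(*([[-1, 0, 1]] * d))).T.reshape(-1, d)
    new_pts = np.empty_like(points)
    for i, p in enumerate(points):
        key = (p // h).astype(int)
        cnt, acc = 0, np.zeros(d)
        for off in offsets:                         # 3^d adjacent cells
            c = cells.get(tuple(key + off))
            if c is not None:
                cnt += c[0]
                acc += c[1]
        new_pts[i] = acc / cnt                      # density-weighted mean
    return new_pts
```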
2412.12433
Refining Dimensions for Improving Clustering-based Cross-lingual Topic Models
Recent works in clustering-based topic models perform well in monolingual topic identification by introducing a pipeline to cluster the contextualized representations. However, the pipeline is suboptimal in identifying topics across languages due to the presence of language-dependent dimensions (LDDs) generated by multilingual language models. To address this issue, we introduce a novel, SVD-based dimension refinement component into the pipeline of the clustering-based topic model. This component effectively neutralizes the negative impact of LDDs, enabling the model to accurately identify topics across languages. Our experiments on three datasets demonstrate that the updated pipeline with the dimension refinement component generally outperforms other state-of-the-art cross-lingual topic models.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
517,861
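A compact sketch of an SVD-based dimension refinement in the spirit described above: remove the dominant singular directions of the (centred) embedding matrix before clustering, since dominant directions can encode language identity rather than topic. The choice of k, and whether the paper removes or rescales directions, are assumptions.

```python
import numpy as np

def remove_top_svd_dims(E: np.ndarray, k: int) -> np.ndarray:
    """Project a mean-centred embedding matrix E (n, d) onto the orthogonal
    complement of its top-k right singular vectors."""
    E = E - E.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    top = Vt[:k]                        # (k, d) dominant directions
    return E - E @ top.T @ top          # neutralize those dimensions
```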
2009.09648
Measuring the effect of Non-Pharmaceutical Interventions (NPIs) on mobility during the COVID-19 pandemic using global mobility data
The implementation of governmental Non-Pharmaceutical Interventions (NPIs) has been the primary means of controlling the spread of the COVID-19 disease. The intended effect of these NPIs has been to reduce mobility. A strong reduction in mobility is believed to have a positive effect on the reduction of COVID-19 transmission by limiting the opportunity for the virus to spread in the population. Due to the huge costs of implementing these NPIs, it is essential to have a good understanding of their efficacy. Using global mobility data, released by Apple and Google, and ACAPS NPI data, we investigate the proportional contribution of NPIs on i) size of the change (magnitude) of transition between pre- and post-lockdown mobility levels and ii) rate (gradient) of this transition. Using generalized linear models to find the best fit model we found similar results using Apple or Google data. NPIs found to impact the magnitude of the change in mobility were: Lockdown measures (Apple, Google Retail and Recreation (RAR) and Google Transit and Stations (TS)), declaring a state of emergency (Apple, Google RAR and Google TS), closure of businesses and public services (Google RAR) and school closures (Apple). Using cluster analysis and chi square tests we found that closure of businesses and public services, school closures and limiting public gatherings as well as border closures and international flight suspensions were closely related. The implementation of lockdown measures and limiting public gatherings had the greatest effect on the rate of mobility change. In conclusion, we were able to quantitatively assess the efficacy of NPIs in reducing mobility, which enables us to understand their fine grained effects in a timely manner and therefore facilitate well-informed and cost-effective interventions.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
196,645
2005.10195
Learning natural locomotion behaviors for humanoid robots using human knowledge
This paper presents a new learning framework that leverages the knowledge from imitation learning, deep reinforcement learning, and control theory to achieve human-style locomotion that is natural, dynamic, and robust for humanoids. We propose novel approaches to introduce human bias, i.e. motion capture data and a special Multi-Expert network structure. We use the Multi-Expert network structure to smoothly blend behavioral features, and use an augmented reward design for the task and imitation rewards. Our reward design is composable, tunable, and explainable by using fundamental concepts from conventional humanoid control. We rigorously validated and benchmarked the learning framework, which consistently produced robust locomotion behaviors in various test scenarios. Further, we demonstrated the capability of learning robust and versatile policies in the presence of disturbances, such as terrain irregularities and external pushes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
178,113
2212.02882
Electromagnetic Information Theory: Fundamentals, Modeling, Applications, and Open Problems
Traditional massive multiple-input multiple-output (MIMO) information theory adopts non-physically-consistent assumptions, including white-noise, scalar, far-field, discretized, and monochromatic electromagnetic (EM) fields, which do not match the nature of the underlying EM fields supporting the physical layer of wireless communication systems. To incorporate EM laws into the design procedures of the physical layer, we first propose the novel concept of the EM physical layer, whose backbone theory is called EM information theory (EIT). In this article, we systematically investigate the basic ideas and main results of EIT. First, we review the fundamental analytical tools of classical information theory and EM theory. Then, we introduce the modeling and analysis methodologies of EIT, including continuous field modeling, degrees of freedom, and mutual information analyses. Several EIT-inspired applications are discussed to illustrate how EIT guides the design of practical wireless systems. Finally, we point out the open problems of EIT, where further research efforts are required to construct a unified interdisciplinary theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
334,921
2203.03480
Reinforcement Learning for Location-Aware Scheduling
Recent techniques in dynamical scheduling and resource management have found applications in warehouse environments due to their ability to organize and prioritize tasks at a higher temporal resolution. The rise of deep reinforcement learning as a learning paradigm has enabled decentralized agent populations to discover complex coordination strategies. However, training multiple agents simultaneously introduces many obstacles, as observation and action spaces become exponentially large. In our work, we experimentally quantify how various aspects of the warehouse environment (e.g., floor plan complexity, information about agents' live location, level of task parallelizability) affect performance and execution priority. To achieve efficiency, we propose a compact representation of the state and action space for location-aware multi-agent systems, wherein each agent has knowledge of only self and task coordinates, and hence only partial observability of the underlying Markov Decision Process. Finally, we show how agents trained in certain environments maintain performance in completely unseen settings and also correlate performance degradation with floor plan geometry.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
284,102
2307.16813
Capturing Co-existing Distortions in User-Generated Content for No-reference Video Quality Assessment
Video Quality Assessment (VQA), which aims to predict the perceptual quality of a video, has attracted rising attention with the rapid development of streaming media technology, such as Facebook, TikTok, Kwai, and so on. Compared with other sequence-based visual tasks (\textit{e.g.,} action recognition), VQA faces two underestimated challenges that remain unresolved in User Generated Content (UGC) videos. \textit{First}, it is not rare that several frames containing serious distortions (\textit{e.g.,} blocking, blurriness) can determine the perceptual quality of the whole video, while other sequence-based tasks require more frames of equal importance for representations. \textit{Second}, the perceptual quality of a video exhibits a multi-distortion distribution, due to the differences in the duration and probability of occurrence of various distortions. In order to solve the above challenges, we propose the \textit{Visual Quality Transformer (VQT)} to extract quality-related sparse features more efficiently. Methodologically, a Sparse Temporal Attention (STA) is proposed to sample keyframes by analyzing the temporal correlation between frames, which reduces the computational complexity from $O(T^2)$ to $O(T \log T)$. Structurally, a Multi-Pathway Temporal Network (MPTN) utilizes multiple STA modules with different degrees of sparsity in parallel, capturing co-existing distortions in a video. Experimentally, VQT demonstrates superior performance to many \textit{state-of-the-art} methods on three public no-reference VQA datasets. Furthermore, VQT shows better performance on four full-reference VQA datasets against widely-adopted industrial algorithms (\textit{i.e.,} VMAF and AVQT).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
382,745
2402.12694
Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling
Predicting multivariate time series is crucial, demanding precise modeling of intricate patterns, including inter-series dependencies and intra-series variations. Distinctive trend characteristics in each time series pose challenges, and existing methods, relying on basic moving-average kernels, may struggle with the non-linear structure and complex trends in real-world data. Given this, we introduce a learnable decomposition strategy to capture dynamic trend information more reasonably. Additionally, we propose a dual attention module tailored to capture inter-series dependencies and intra-series variations simultaneously for better time series forecasting, implemented by channel-wise self-attention and autoregressive self-attention. To evaluate the effectiveness of our method, we conducted experiments across eight open-source datasets and compared it with the state-of-the-art methods. The comparison results show that our Leddam (LEarnable Decomposition and Dual Attention Module) not only demonstrates significant advances in predictive performance, but also that the proposed decomposition strategy can be plugged into other methods with a large performance boost, reducing MSE error by 11.87% to 48.56%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
430,951
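A minimal sketch of a learnable trend/seasonal decomposition in the spirit of the abstract above: replace the fixed moving-average kernel with a learnable, softmax-normalized averaging kernel, so the trend extractor is trained jointly with the forecaster. Kernel size and the exact parameterization are assumptions.

```python
import torch
import torch.nn as nn

class LearnableDecomposition(nn.Module):
    """Split a series into trend and seasonal parts with a learnable
    averaging kernel (softmax keeps the weights positive, summing to 1)."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(kernel_size))
        self.pad = kernel_size // 2     # odd kernel -> same output length

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, time); the same kernel is applied per channel.
        b, c, t = x.shape
        k = torch.softmax(self.logits, dim=0).view(1, 1, -1).repeat(c, 1, 1)
        trend = nn.functional.conv1d(x, k, padding=self.pad, groups=c)
        return trend, x - trend         # trend and seasonal components
```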
2005.13781
A Maneuver-based Urban Driving Dataset and Model for Cooperative Vehicle Applications
The short-term future of automated driving can be imagined as a hybrid scenario in which both automated and human-driven vehicles co-exist in the same environment. In order to address the needs of such a road configuration, many technology solutions, such as vehicular communication and predictive control for automated vehicles, have been introduced in the literature. Both of the aforementioned solutions rely on driving data from the human driver. In this work, we investigate the currently available driving datasets and introduce a real-world, maneuver-based driving dataset collected during our urban driving data collection campaign. We also provide a model that embeds the patterns in maneuver-specific samples. Such a model can be employed for classification and prediction purposes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
179,098
2410.22081
Choosy Babies Need One Coach: Inducing Mode-Seeking Behavior in BabyLlama with Reverse KL Divergence
This study presents our submission to the Strict-Small Track of the 2nd BabyLM Challenge. We use a teacher-student distillation setup with the BabyLLaMa model (Timiryasov and Tastet, 2023) as a backbone. To make the student's learning process more focused, we replace the objective function with a reverse Kullback-Leibler divergence, known to cause mode-seeking (rather than mode-averaging) behaviour in computational learners. We further experiment with having a single teacher (instead of an ensemble of two teachers) and implement additional optimization strategies to improve the distillation process. Our experiments show that under reverse KL divergence, a single-teacher model often outperforms or matches multiple-teacher models across most tasks. Additionally, incorporating advanced optimization techniques further enhances model performance, demonstrating the effectiveness and robustness of our proposed approach. These findings support our idea that "choosy babies need one coach".
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
503,491
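The objective swap described above is compact enough to show directly: reverse KL distillation computes KL(student || teacher) over the vocabulary, which is mode-seeking because the student is penalized for placing mass where the teacher does not. A minimal sketch (the temperature handling is an assumption):

```python
import torch
import torch.nn.functional as F

def reverse_kl_distill_loss(student_logits, teacher_logits, T: float = 1.0):
    """Reverse KL distillation loss: KL(p_student || p_teacher).
    Minimizing this concentrates the student on the teacher's high-
    probability regions instead of averaging over all teacher modes."""
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    log_p_t = F.log_softmax(teacher_logits / T, dim=-1)
    p_s = log_p_s.exp()
    return (p_s * (log_p_s - log_p_t)).sum(dim=-1).mean()
```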
2305.17034
Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with the RS. Justification and transparency represent two crucial goals in explainable recommendation. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand results given by the RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What-Why-How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N=12) to investigate the potential effects of providing Why and How explanations together in an explainable RS on the users' perceptions regarding transparency, trust, and satisfaction. Our study showed qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
true
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
368,385
1903.06246
SuperTML: Two-Dimensional Word Embedding for the Precognition on Structured Tabular Data
Tabular data is the most commonly used form of data in industry. Gradient Boosting Trees, Support Vector Machines, Random Forests, and Logistic Regression are typically used for classification tasks on tabular data. DNN models using categorical embeddings are also applied to this task, but all attempts thus far have used one-dimensional embeddings. The recent Super Characters method, which uses two-dimensional word embeddings, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embeddings to address the problem of classification on tabular data. For each input of tabular data, the features are first projected into two-dimensional embeddings like an image, and then this image is fed into fine-tuned two-dimensional CNN models for classification. Experimental results show that the proposed SuperTML method achieved state-of-the-art results on both large and small datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,328
2502.09658
Neuro-Conceptual Artificial Intelligence: Integrating OPM with Deep Learning to Enhance Question Answering Quality
Knowledge representation and reasoning are critical challenges in Artificial Intelligence (AI), particularly in integrating neural and symbolic approaches to achieve explainable and transparent AI systems. Traditional knowledge representation methods often fall short of capturing complex processes and state changes. We introduce Neuro-Conceptual Artificial Intelligence (NCAI), a specialization of the neuro-symbolic AI approach that integrates conceptual modeling using Object-Process Methodology (OPM) ISO 19450:2024 with deep learning to enhance question-answering (QA) quality. By converting natural language text into OPM models using in-context learning, NCAI leverages the expressive power of OPM to represent complex OPM elements (processes, objects, and states) beyond what traditional triplet-based knowledge graphs can easily capture. This rich structured knowledge representation improves reasoning transparency and answer accuracy in an OPM-QA system. We further propose transparency evaluation metrics to quantitatively measure how faithfully the predicted reasoning aligns with OPM-based conceptual logic. Our experiments demonstrate that NCAI outperforms traditional methods, highlighting its potential for advancing neuro-symbolic AI by providing rich knowledge representations, measurable transparency, and improved reasoning.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
533,544
1407.1232
On Quantum Codes Obtained From Cyclic Codes Over F_2+vF_2+v^2F_2
A new Gray map, which is both an isometry and a weight-preserving map from R=F_2+vF_2+v^2F_2 to (F_2)^3, is defined. A construction for quantum error-correcting codes from cyclic codes over the finite ring R=F_2+vF_2+v^2F_2, where v^3=v, is given. The parameters of the quantum codes obtained from cyclic codes over R are determined.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
34,411
1908.04174
Domain-Specific Embedding Network for Zero-Shot Recognition
Zero-Shot Learning (ZSL) seeks to recognize a sample from either the seen or the unseen domain by projecting the image data and semantic labels into a joint embedding space. However, most existing methods directly adapt a well-trained projection from one domain to another, thereby ignoring the serious bias problem caused by domain differences. To address this issue, we propose a novel Domain-Specific Embedding Network (DSEN) that can apply specific projections to different domains for unbiased embedding, together with several domain constraints. In contrast to previous methods, the DSEN decomposes the domain-shared projection function into one domain-invariant and two domain-specific sub-functions to explore the similarities and differences between the two domains. To prevent the two specific projections from breaking the semantic relationship, a semantic reconstruction constraint is proposed by applying the same decoder function to them in a cycle-consistent way. Furthermore, a domain division constraint is developed to directly penalize the margin between real and pseudo image features in the respective seen and unseen domains, which can enlarge the inter-domain difference of visual features. Extensive experiments on four public benchmarks demonstrate the effectiveness of DSEN, with an average improvement of $9.2\%$ in terms of harmonic mean. The code is available at \url{https://github.com/mboboGO/DSEN-for-GZSL}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
141,417
2501.05368
Developing a Foundation of Vector Symbolic Architectures Using Category Theory
At the risk of overstating the case, connectionist approaches to machine learning, i.e. neural networks, are enjoying a small vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category-theoretic analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we conduct a brief literature survey demonstrating the limited intersection of these two topics, provide a list of desiderata for VSAs, and propose that VSAs may be understood as a (division) rig in a category enriched over a monoid in Met (the category of Lawvere metric spaces). This final contribution suggests that VSAs may be generalised beyond current implementations. It is our hope that grounding VSAs in category theory will lead to more rigorous connections with other research, both within and beyond learning and cognition.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
523,552
2105.10267
Towards a Universal NLG for Dialogue Systems and Simulators with Future Bridging
In a dialogue system pipeline, a natural language generation (NLG) unit converts the dialogue direction and content into a corresponding natural language realization. A recent trend for dialogue systems is to first pre-train on large datasets and then fine-tune in a supervised manner using datasets annotated with application-specific features. Though novel behaviours can be learned from custom annotation, the required effort severely bounds the quantity of the training set, and the application-specific nature limits reuse. In light of the recent success of data-driven approaches, we propose the novel future bridging NLG (FBNLG) concept for dialogue systems and simulators. The critical step is for an FBNLG to accept a future user or system utterance to bridge the present context towards. Future bridging enables self-supervised training over annotation-free datasets, decoupling the training of the NLG from the rest of the system. An FBNLG, pre-trained on massive datasets, is expected to apply to classical or new dialogue scenarios with minimal adaptation effort. We evaluate a prototype FBNLG to show that future bridging can be a viable approach to a universal few-shot NLG for task-oriented and chit-chat dialogues.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
236,334
1907.01984
Cooperative Schedule-Driven Intersection Control with Connected and Autonomous Vehicles
Recent work in decentralized, schedule-driven traffic control has demonstrated the ability to improve the efficiency of traffic flow in complex urban road networks. In this approach, a scheduling agent is associated with each intersection. Each agent senses the traffic approaching its intersection and in real time constructs a schedule that minimizes the cumulative wait time of vehicles approaching the intersection over the current look-ahead horizon. In this paper, we propose a cooperative algorithm that utilizes both connected and autonomous vehicles (CAVs) and schedule-driven traffic control to create better traffic flow in the city. The algorithm enables an intersection scheduling agent to adjust the arrival time of an approaching platoon through the use of wireless communication to control the velocity of vehicles. The sequence of approaching platoons is thus shifted toward a new shape that has a smaller cumulative delay. We demonstrate how this algorithm outperforms the original approach in a real-time traffic signal control problem.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
137,490
0806.2035
Nodal distances for rooted phylogenetic trees
Dissimilarity measures for (possibly weighted) phylogenetic trees based on the comparison of their vectors of path lengths between pairs of taxa have been present in the systematics literature since the early seventies. But, as far as rooted phylogenetic trees go, these vectors can only separate non-weighted binary trees, and therefore these dissimilarity measures are metrics only on this class. In this paper we overcome this problem by splitting, in a suitable way, each path length between two taxa into two lengths. We prove that the resulting split path-length matrices single out arbitrary rooted phylogenetic trees with nested taxa and arcs weighted in the set of positive real numbers. This allows the definition of metrics on this general class by comparing these matrices by means of metrics in spaces of real-valued $n\times n$ matrices. We conclude this paper by establishing some basic facts about the metrics for non-weighted phylogenetic trees defined in this way using $L^p$ metrics on these spaces of matrices.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
1,914
1101.5317
A Novel Unified Expression for the Capacity and Bit Error Probability of Wireless Communication Systems over Generalized Fading Channels
Analyses of the average bit error probability (ABEP) and average capacity (AC) of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function (MGF)-based \emph{unified expression} for the ABEP and AC of single- and multiple-link communication with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution covering a majority of the well-known generalized fading models. As such, we offer a generic unified performance expression that can be easily calculated and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of our newly derived results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
8,939
2311.05937
Genetic Algorithm enhanced by Deep Reinforcement Learning in parent selection mechanism and mutation : Minimizing makespan in permutation flow shop scheduling problems
This paper introduces a reinforcement learning (RL) approach to address the challenges associated with configuring and optimizing genetic algorithms (GAs) for solving difficult combinatorial or non-linear problems. The proposed RL+GA method was specifically tested on the flow shop scheduling problem (FSP). The hybrid algorithm incorporates neural networks (NNs) and uses the off-policy method Q-learning or the on-policy method Sarsa(0) to control two key genetic algorithm (GA) operators: the parent selection mechanism and mutation. At each generation, the RL agent's action determines the selection method, the parent-selection probability, and the offspring-mutation probability. This allows the RL agent to dynamically adjust selection and mutation based on its learned policy. The results of the study highlight the effectiveness of the RL+GA approach in improving the performance of the primitive GA. They also demonstrate its ability to learn and adapt from population diversity and solution improvements over time. This adaptability leads to improved scheduling solutions compared to static parameter configurations while maintaining population diversity throughout the evolutionary process.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
406,767
2404.08279
Convolutional neural network classification of cancer cytopathology images: taking breast cancer as an example
Breast cancer is relatively common among gynecological cancers. Its diagnosis often relies on the pathology of cells in the lesion. The pathological diagnosis of breast cancer not only requires professionals and time, but also sometimes involves subjective judgment. To address the challenges of dependence on pathologists' expertise and the time-consuming nature of achieving accurate breast pathological image classification, this paper introduces an approach utilizing convolutional neural networks (CNNs) for the rapid categorization of pathological images, aiming to enhance the efficiency of breast pathological image detection. The approach enables the rapid and automatic classification of pathological images into benign and malignant groups. The methodology involves a CNN model leveraging the Inceptionv3 architecture and a transfer learning algorithm for extracting features from pathological images, together with a neural network with fully connected layers and a SoftMax function for image classification. Additionally, the concept of image partitioning is introduced to handle high-resolution images. To obtain the final classification outcome, the classification probabilities of the image blocks are aggregated using three algorithms: summation, product, and maximum. Experimental validation was conducted on the BreaKHis public dataset, resulting in accuracy rates surpassing 0.92 across all four magnification factors (40X, 100X, 200X, and 400X). This demonstrates that the proposed method effectively enhances the accuracy of classifying pathological images of breast cancer.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
446,176
2011.12099
Model Order Reduction for Gas and Energy Networks
To counter the volatile nature of renewable energy sources, gas networks play a vital role. However, to ensure fulfillment of contracts under these circumstances, a vast number of possible scenarios, incorporating uncertain supply and demand, has to be simulated ahead of time. This many-query gas network simulation task can be accelerated by model reduction; yet large-scale, nonlinear, parametric, hyperbolic partial differential(-algebraic) equation systems modeling natural gas transport are a challenging application for model order reduction algorithms. For this industrial application, we bring together the scientific computing topics of mathematical modeling of gas transport networks, numerical simulation of hyperbolic partial differential equations, and parametric model reduction for nonlinear systems. This research resulted in the "morgen" (Model Order Reduction for Gas and Energy Networks) software platform, which enables modular testing of various combinations of models, solvers, and model reduction methods. In this work we present the theoretical background on systemic modeling and structured, data-driven, system-theoretic model reduction for gas networks, as well as the implementation of "morgen" and associated numerical experiments testing model reduction adapted to gas network models.
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
208,054
2302.04556
Auditing Recommender Systems -- Putting the DSA into practice with a risk-scenario-based approach
Today's online platforms rely heavily on recommendation systems to serve content to their users; social media is a prime example. In turn, recommendation systems largely depend on artificial intelligence algorithms to decide who gets to see what. While the content social media platforms deliver is as varied as the users who engage with them, it has been shown that platforms can contribute to serious harm to individuals, groups and societies. Studies have suggested that these negative impacts range from worsening an individual's mental health to driving society-wide polarisation capable of putting democracies at risk. To better safeguard people from these harms, the European Union's Digital Services Act (DSA) requires platforms, especially those with large numbers of users, to make their algorithmic systems more transparent and follow due diligence obligations. These requirements constitute an important legislative step towards mitigating the systemic risks posed by online platforms. However, the DSA lacks concrete guidelines to operationalise a viable audit process that would allow auditors to hold these platforms accountable. This void could foster the spread of 'audit-washing', that is, platforms exploiting audits to legitimise their practices and neglect responsibility. To fill this gap, we propose a risk-scenario-based audit process. We explain in detail what audits and assessments of recommender systems according to the DSA should look like. Our approach also considers the evolving nature of platforms and emphasises the observability of their recommender systems' components. The resulting audit facilitates internal (among audits of the same system at different moments in time) and external comparability (among audits of different platforms) while also affording the evaluation of mitigation measures implemented by the platforms themselves.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
344,749
2404.03623
Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph
Large Language Models (LLMs) demonstrate an impressive capacity to recall a vast range of factual knowledge. However, understanding their underlying reasoning and internal mechanisms for exploiting this knowledge remains a key research area. This work unveils the factual information an LLM represents internally for sentence-level claim verification. We propose an end-to-end framework to decode factual knowledge embedded in token representations from a vector space to a set of ground predicates, showing its layer-wise evolution using a dynamic knowledge graph. Our framework employs activation patching, a vector-level technique that alters a token representation during inference, to extract encoded knowledge. Accordingly, we rely on neither training nor external models. Using factual and common-sense claims from two claim verification datasets, we showcase interpretability analyses at the local and global levels. The local analysis highlights entity centrality in LLM reasoning, from claim-related information and multi-hop reasoning to representation errors causing erroneous evaluation. The global analysis, in turn, reveals trends in the underlying evolution, such as word-based knowledge evolving into claim-related facts. By interpreting semantics from LLM latent representations and enabling graph-related analyses, this work enhances the understanding of the factual knowledge resolution process.
false
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
444,333
1904.08496
Tensor Sparse PCA and Face Recognition: A Novel Approach
Face recognition is an important field in machine learning and pattern recognition research. It has many applications in areas such as the military, finance, and public security, to name a few. In this paper, the combination of tensor sparse PCA with the nearest-neighbor method (and with the kernel ridge regression method) is proposed and applied to a face dataset. Experimental results show that the combination of tensor sparse PCA with a given classification system does not always reach the best accuracy performance measures. However, the accuracy of the combination of the sparse PCA method with one specific classification system is always better than the accuracy of the combination of the PCA method with that classification system, and is always better than the accuracy of the classification system itself.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
128,082
2101.09810
FakeFlow: Fake News Detection by Modeling the Flow of Affective Information
Fake news articles often stir the readers' attention by means of emotional appeals that arouse their feelings. Unlike in short news texts, authors of longer articles can exploit such affective factors to manipulate readers by adding exaggerations or fabricating events, in order to affect the readers' emotions. To capture this, we propose in this paper to model the flow of affective information in fake news articles using a neural architecture. The proposed model, FakeFlow, learns this flow by combining topic and affective information extracted from text. We evaluate the model's performance with several experiments on four real-world datasets. The results show that FakeFlow achieves superior results when compared against state-of-the-art methods, thus confirming the importance of capturing the flow of the affective information in news articles.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
216,721
2412.00068
Enhanced Lung Cancer Survival Prediction using Semi-Supervised Pseudo-Labeling and Learning from Diverse PET/CT Datasets
Objective: This study explores a semi-supervised learning (SSL), pseudo-labeling strategy using diverse datasets to enhance lung cancer (LCa) survival predictions, analyzing Handcrafted and Deep Radiomic Features (HRF/DRF) from PET/CT scans with Hybrid Machine Learning Systems (HMLS). Methods: We collected 199 LCa patients with both PET and CT images, obtained from The Cancer Imaging Archive (TCIA) and our local database, alongside 408 head-and-neck cancer (HNCa) PET/CT images from TCIA. We extracted 215 HRFs and 1024 DRFs from segmented primary tumors by PySERA and a 3D autoencoder, respectively, within the ViSERA software. The supervised learning (SL) strategy employed HMLSs: PCA connected with 4 classifiers on both HRFs and DRFs. The SSL strategy expanded the datasets by adding the 408 pseudo-labeled HNCa cases (labeled by a Random Forest algorithm) to the 199 LCa cases, using the same HMLS techniques. Furthermore, Principal Component Analysis (PCA) linked with 4 survival prediction algorithms was utilized in survival hazard ratio analysis. Results: The SSL strategy outperformed the SL method (p-value<0.05), achieving an average accuracy of 0.85 with DRFs from PET and PCA + Multi-Layer Perceptron (MLP), compared to 0.65 for the SL strategy using DRFs from CT and PCA + K-Nearest Neighbor (KNN). Additionally, PCA linked with Component-wise Gradient Boosting Survival Analysis on both HRFs and DRFs extracted from CT had an average c-index of 0.80 with a log-rank p-value<<0.001, confirmed by external testing. Conclusions: Shifting from HRFs and SL to DRFs and SSL strategies, particularly in contexts with limited data points, enabled CT or PET alone to achieve significantly high predictive performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
512,470
2212.09723
MANER: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages
This paper investigates the problem of Named Entity Recognition (NER) for extreme low-resource languages with only a few hundred tagged data samples. NER is a fundamental task in Natural Language Processing (NLP). A critical driver accelerating NER systems' progress is the existence of large-scale language corpora that enable NER systems to achieve outstanding performance in languages such as English and French with abundant training data. However, NER for low-resource languages remains relatively unexplored. In this paper, we introduce Mask Augmented Named Entity Recognition (MANER), a new methodology that leverages the distributional hypothesis of pre-trained masked language models (MLMs) for NER. The <mask> token in pre-trained MLMs encodes valuable semantic contextual information. MANER re-purposes the <mask> token for NER prediction. Specifically, we prepend the <mask> token to every word in a sentence for which we would like to predict the named entity tag. During training, we jointly fine-tune the MLM and a new NER prediction head attached to each <mask> token. We demonstrate that MANER is well-suited for NER in low-resource languages; our experiments show that for 100 languages with as few as 100 training examples, it improves on state-of-the-art methods by up to 48% and by 12% on average on F1 score. We also perform detailed analyses and ablation studies to understand the scenarios that are best-suited to MANER.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
337,204
2401.02292
GridFormer: Point-Grid Transformer for Surface Reconstruction
Implicit neural networks have emerged as a crucial technology in 3D surface reconstruction. To reconstruct continuous surfaces from discrete point clouds, encoding the input points into regular grid features (plane or volume) has been commonly employed in existing approaches. However, these methods typically use the grid as an index for uniformly scattering point features. Compared with the irregular point features, the regular grid features may sacrifice some reconstruction details but improve efficiency. To take full advantage of these two types of features, we introduce a novel and high-efficiency attention mechanism between the grid and point features named Point-Grid Transformer (GridFormer). This mechanism treats the grid as a transfer point connecting the space and point cloud. Our method maximizes the spatial expressiveness of grid features and maintains computational efficiency. Furthermore, optimizing predictions over the entire space could potentially result in blurred boundaries. To address this issue, we further propose a boundary optimization strategy incorporating margin binary cross-entropy loss and boundary sampling. This approach enables us to achieve a more precise representation of the object structure. Our experiments validate that our method is effective and outperforms the state-of-the-art approaches under widely used benchmarks by producing more precise geometry reconstructions. The code is available at https://github.com/list17/GridFormer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
419,662
2307.07798
Opinion mining using Double Channel CNN for Recommender System
Much unstructured data has been produced with the growth of the Internet and social media. A significant volume of this textual data consists of users' opinions about products in online stores and on social media. By exploring and categorizing these opinions, helpful information can be acquired, including customer satisfaction, user feedback about a particular event, predictions of the sales of a specific product, and other similar insights. In this paper, we present an approach for sentiment analysis with a deep learning model and use it to recommend products. A two-channel convolutional neural network model, which has five layers and extracts essential features from the data, is used for opinion mining. We increased the number of comments by applying the SMOTE algorithm to the initial dataset and balanced the data. We then cluster the aspects and assign a weight to each cluster using tensor decomposition algorithms, which improves the recommender system's performance. Our proposed method reached 91.6% accuracy, a significant improvement over previous aspect-based approaches.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
379,546
1311.2912
A Misanthropic Reinterpretation of the Chinese Room Problem
The Chinese room problem asks if computers can think; I ask here if most humans can.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
28,368