Dataset schema:

- id: string, length 9–16 (arXiv identifier)
- title: string, length 4–278
- abstract: string, length 3–4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (multi-label subject flags)
- __index_level_0__: int64, range 0–541k
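Each record's 18 boolean columns form a multi-hot subject vector. The sketch below shows one way to collapse those flags into a list of subject names and back; it assumes each record is exposed as a plain Python dict, and the `LABEL_COLUMNS` constant, helper names, and example record are illustrative, not part of the dataset itself.

```python
# Order of the boolean subject columns as they appear in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def record_labels(record):
    """Collapse the per-subject boolean flags of one record into label names."""
    return [name for name in LABEL_COLUMNS if record.get(name)]

def labels_to_multihot(labels):
    """Inverse direction: a label list back to an 18-dim 0/1 vector."""
    present = set(labels)
    return [1 if name in present else 0 for name in LABEL_COLUMNS]

# Hypothetical record shaped like the first row of the preview below.
row = {name: False for name in LABEL_COLUMNS}
row.update({"id": "2403.05693", "cs.LG": True})
print(record_labels(row))  # ['cs.LG']
```

Keeping the column order fixed matters for training: index 6 (zero-based) of the multi-hot vector, for instance, always corresponds to cs.LG.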
id: 2403.05693
title: Shielded Deep Reinforcement Learning for Complex Spacecraft Tasking
Autonomous spacecraft control via Shielded Deep Reinforcement Learning (SDRL) has become a rapidly growing research area. However, the construction of shields and the definition of tasking remain informal, resulting in policies with no safety guarantees and ambiguous goals for the RL agent. In this paper, we first explore the use of formal languages, namely Linear Temporal Logic (LTL), to formalize spacecraft tasks and safety requirements. We then define a manner in which to construct a reward function automatically from a co-safe LTL specification for effective training in the SDRL framework. We also investigate methods for constructing a shield from a safe LTL specification for spacecraft applications and propose three designs that provide probabilistic guarantees. We show how these shields interact with different policies, and the flexibility of the reward structure, through several experiments.
labels: cs.LG
__index_level_0__: 436,107

id: 2006.14267
title: Large-Scale Fading Precoding for Spatially Correlated Rician Fading with Phase Shifts
We consider large-scale fading precoding (LSFP), which is a two-layer precoding scheme in the downlink of multi-cell massive MIMO (multiple-input multiple-output) systems to suppress inter-cell interference. We obtain the closed-form spectral efficiency (SE) with LSFP at the central network controller and maximum ratio precoding at the base stations (BSs) using the linear minimum mean-squared error or least squares channel estimators. The LSFP weights are designed based on the long-term channel statistics and two important performance metrics are optimized under the per-BS transmit power constraints. These metrics are sum SE and proportional fairness, where the resulting optimization problems are non-convex. Two efficient algorithms are developed to solve these problems by using the weighted minimum mean-squared error and the alternating direction method of multipliers methods. Moreover, two partial LSFP schemes are proposed to reduce the fronthaul signaling requirements. Simulations quantify the performance improvement of LSFP over standard single-layer precoding schemes and identify the specific advantage of each optimization problem.
labels: cs.IT
__index_level_0__: 184,172

id: 1904.12534
title: Casting Geometric Constraints in Semantic Segmentation as Semi-Supervised Learning
We propose a simple yet effective method to learn to segment new indoor scenes from video frames: State-of-the-art methods trained on one dataset, even as large as the SUNRGB-D dataset, can perform poorly when applied to images that are not part of the dataset, because of the dataset bias, a common phenomenon in computer vision. To make semantic segmentation more useful in practice, one can exploit geometric constraints. Our main contribution is to show that these constraints can be cast conveniently as semi-supervised terms, which enforce the fact that the same class should be predicted for the projections of the same 3D location in different images. This is interesting as we can exploit general existing techniques developed for semi-supervised learning to efficiently incorporate the constraints. We show that this approach can efficiently and accurately learn to segment target sequences of ScanNet and our own target sequences using only annotations from SUNRGB-D, and geometric relations between the video frames of target sequences.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 129,126

id: 1808.06940
title: End to End Vehicle Lateral Control Using a Single Fisheye Camera
Convolutional neural networks are commonly used to control the steering angle for autonomous cars. Most of the time, multiple long-range cameras are used to generate lateral failure cases. In this paper we present a novel model to generate this data and label augmentation using only one short-range fisheye camera. We present our simulator and how it can be used as a consistent metric for lateral end-to-end control evaluation. Experiments are conducted on a custom dataset corresponding to more than 10,000 km and 200 hours of open road driving. Finally, we evaluate this model on real-world driving scenarios, open road and a custom test track with challenging obstacle avoidance and sharp turns. In our simulator based on real-world videos, the final model was capable of more than 99% autonomy on urban roads.
labels: cs.LG, cs.RO
__index_level_0__: 105,641

id: 2408.08824
title: LEVIS: Large Exact Verifiable Input Spaces for Neural Networks
The robustness of neural networks is paramount in safety-critical applications. While most current robustness verification methods assess the worst-case output under the assumption that the input space is known, identifying a verifiable input space $\mathcal{C}$, where no adversarial examples exist, is crucial for effective model selection, robustness evaluation, and the development of reliable control strategies. To address this challenge, we introduce a novel framework, $\texttt{LEVIS}$, comprising $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$. $\texttt{LEVIS}$-$\alpha$ locates the largest possible verifiable ball within the central region of $\mathcal{C}$ that intersects at least two boundaries. In contrast, $\texttt{LEVIS}$-$\beta$ integrates multiple verifiable balls to encapsulate the entirety of the verifiable space comprehensively. Our contributions are threefold: (1) We propose $\texttt{LEVIS}$ equipped with three pioneering techniques that identify the maximum verifiable ball and the nearest adversarial point along collinear or orthogonal directions. (2) We offer a theoretical analysis elucidating the properties of the verifiable balls acquired through $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$. (3) We validate our methodology across diverse applications, including electrical power flow regression and image classification, showcasing performance enhancements and visualizations of the searching characteristics.
labels: cs.LG
__index_level_0__: 481,168

id: 2107.10484
title: Neural Ordinary Differential Equation Model for Evolutionary Subspace Clustering and Its Applications
The neural ordinary differential equation (neural ODE) model has attracted increasing attention in time series analysis for its capability to process irregular time steps, i.e., data are not observed over equally-spaced time intervals. In multi-dimensional time series analysis, a task is to conduct evolutionary subspace clustering, aiming at clustering temporal data according to their evolving low-dimensional subspace structures. Many existing methods can only process time series with regular time steps while time series are unevenly sampled in many situations such as missing data. In this paper, we propose a neural ODE model for evolutionary subspace clustering to overcome this limitation and a new objective function with subspace self-expressiveness constraint is introduced. We demonstrate that this method can not only interpolate data at any time step for the evolutionary subspace clustering task, but also achieve higher accuracy than other state-of-the-art evolutionary subspace clustering methods. Both synthetic and real-world data are used to illustrate the efficacy of our proposed method.
labels: cs.AI, cs.LG
__index_level_0__: 247,318

id: 2410.14786
title: BDDC Preconditioning on GPUs for Cardiac Simulations
In order to understand cardiac arrhythmia, computer models for electrophysiology are essential. In the EuroHPC MicroCARD project, we adapt the current models and leverage modern computing resources to model diseased hearts and their microstructure accurately. Towards this objective, we develop a portable, highly efficient, and high-performing BDDC preconditioner and solver implementation, demonstrating scalability with over 90% efficiency on up to 100 GPUs.
labels: cs.CE, Other
__index_level_0__: 500,209

id: 2209.01418
title: Outsourcing Control requires Control Complexity
An embodied agent constantly influences its environment and is influenced by it. We use the sensorimotor loop to model these interactions and thereby we can quantify different information flows in the system by various information theoretic measures. This includes a measure for the interaction among the agent's body and its environment, called Morphological Computation. Additionally, we examine the controller complexity by two measures, one of which can be seen in the context of the Integrated Information Theory of consciousness. Applying this framework to an experimental setting with simulated agents allows us to analyze the interaction between an agent and its environment, as well as the complexity of its controller, the brain of the agent. Previous research reveals an antagonistic relationship between the controller complexity and Morphological Computation. A morphology adapted well to a task can reduce the necessary complexity of the controller significantly. This creates the problem that embodied intelligence is correlated with a reduced necessity of a controller, a brain. However, in order to interact well with their surroundings, the agents first have to understand the relevant dynamics of the environment. By analyzing learning agents we observe that an increased controller complexity can facilitate a better interaction between an agent's body and its environment. Hence, learning requires an increased controller complexity and the controller complexity and Morphological Computation influence each other.
labels: cs.IT
__index_level_0__: 315,884

id: 2304.08712
title: Impossibility of Characterizing Distribution Learning -- a simple solution to a long-standing problem
We consider the long-standing question of finding a parameter of a class of probability distributions that characterizes its PAC learnability. We provide a rather surprising answer - no such parameter exists. Our techniques allow us to show similar results for several general notions of characterizing learnability and for several learning tasks. We show that there is no notion of dimension that characterizes the sample complexity of learning distribution classes. We then consider the weaker requirement of only characterizing learnability (rather than the quantitative sample complexity function). We propose some natural requirements for such a characterization and go on to show that there exists no characterization of learnability that satisfies these requirements for classes of distributions. Furthermore, we show that our results hold for various other learning problems. In particular, we show that there is no notion of dimension characterizing (or characterization of learnability) for any of the tasks: classification learning for distribution classes, learning of binary classifications w.r.t. a restricted set of marginal distributions, and learnability of classes of real-valued functions with continuous losses.
labels: cs.LG
__index_level_0__: 358,794

id: 1507.07148
title: Quantifying knowledge exchange in R&D networks: A data-driven model
We propose a model that reflects two important processes in R&D activities of firms, the formation of R&D alliances and the exchange of knowledge as a result of these collaborations. In a data-driven approach, we analyze two large-scale data sets extracting unique information about 7500 R&D alliances and 5200 patent portfolios of firms. This data is used to calibrate the model parameters for network formation and knowledge exchange. We obtain probabilities for incumbent and newcomer firms to link to other incumbents or newcomers which are able to reproduce the topology of the empirical R&D network. The position of firms in a knowledge space is obtained from their patents using two different classification schemes, IPC in 8 dimensions and ISI-OST-INPI in 35 dimensions. Our dynamics of knowledge exchange assumes that collaborating firms approach each other in knowledge space at a rate $\mu$ for an alliance duration $\tau$. Both parameters are obtained in two different ways, by comparing knowledge distances from simulations and empirics and by analyzing the collaboration efficiency $\mathcal{\hat{C}}_{n}$. This is a new measure, that takes also in account the effort of firms to maintain concurrent alliances, and is evaluated via extensive computer simulations. We find that R&D alliances have a duration of around two years and that the subsequent knowledge exchange occurs at a very low rate. Hence, a firm's position in the knowledge space is rather a determinant than a consequence of its R&D alliances. From our data-driven approach we also find model configurations that can be both realistic and optimized with respect to the collaboration efficiency $\mathcal{\hat{C}}_{n}$. Effective policies, as suggested by our model, would incentivize shorter R&D alliances and higher knowledge exchange rates.
labels: cs.SI
__index_level_0__: 45,453

id: 2106.04938
title: Attacking Adversarial Attacks as A Defense
It is well known that adversarial attacks can fool deep neural networks with imperceptible perturbations. Although adversarial training significantly improves model robustness, failure cases of defense still broadly exist. In this work, we find that the adversarial attacks can also be vulnerable to small perturbations. Namely, on adversarially-trained models, perturbing adversarial examples with a small random noise may invalidate their misled predictions. After carefully examining state-of-the-art attacks of various kinds, we find that all these attacks have this deficiency to different extents. Enlightened by this finding, we propose to counter attacks by crafting more effective defensive perturbations. Our defensive perturbations leverage the advantage that adversarial training endows the ground-truth class with smaller local Lipschitzness. By simultaneously attacking all the classes, the misled predictions with larger Lipschitzness can be flipped into correct ones. We verify our defensive perturbation with both empirical experiments and theoretical analyses on a linear model. On CIFAR10, it boosts the state-of-the-art model from 66.16% to 72.66% against the four attacks of AutoAttack, including 71.76% to 83.30% against the Square attack. On ImageNet, the top-1 robust accuracy of FastAT is improved from 33.18% to 38.54% under the 100-step PGD attack.
labels: cs.LG, cs.CR
__index_level_0__: 239,911

id: 1706.01303
title: The Singularity May Be Near
Toby Walsh in 'The Singularity May Never Be Near' gives six arguments to support his point of view that technological singularity may happen but that it is unlikely. In this paper, we provide analysis of each one of his arguments and arrive at similar conclusions, but with more weight given to the 'likely to happen' probability.
labels: cs.AI, cs.CY
__index_level_0__: 74,778

id: 1208.2515
title: A Sub-Nyquist Radar Prototype: Hardware and Algorithms
Traditional radar sensing typically involves matched filtering between the received signal and the shape of the transmitted pulse. Under the confinement of the classic sampling theorem, this requires that the received signals first be sampled at twice the baseband bandwidth in order to avoid aliasing. The growing demands for target-distinction capability and spatial resolution imply significant growth in the bandwidth of the transmitted pulse. Thus, correlation-based radar systems require high sampling rates, and with the large amounts of data sampled also necessitate vast memory capacity. In addition, real-time processing of the data typically results in high power consumption. Recently, new approaches for radar sensing and detection were introduced, based on the Finite Rate of Innovation and Xampling frameworks. These techniques allow a significant reduction in sampling rate, implying potential power savings, while maintaining the system's detection capabilities at high enough SNR. Here we present for the first time a design and implementation of a Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate through real-time analog experiments that our system is able to maintain reasonable detection capabilities while sampling radar signals, which would nominally require sampling at a rate of about 30 MHz, at a total rate of 1 MHz.
labels: cs.IT
__index_level_0__: 18,051

id: 2311.02790
title: CausalCite: A Causal Formulation of Paper Citations
Citation count of a paper is a commonly used proxy for evaluating the significance of a paper in the scientific community. Yet citation measures are widely criticized for failing to accurately reflect the true impact of a paper. Thus, we propose CausalCite, a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers. CausalCite is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings. TextMatch encodes each paper using text embeddings from large language models (LLMs), extracts similar samples by cosine similarity, and synthesizes a counterfactual sample as the weighted average of similar papers according to their similarity values. We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts on a previous dataset of 1K papers, (test-of-time) awards for past papers, and its stability across various subfields of AI. We also provide a set of findings that can serve as suggested ways for future researchers to use our metric for a better understanding of the quality of a paper. Our code is available at https://github.com/causalNLP/causal-cite.
labels: cs.AI, cs.IR, cs.LG, cs.CL, cs.CY
__index_level_0__: 405,581

id: 1802.08941
title: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization
In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that primal-dual algorithm is capable of finding ss2 when only using first-order information, it also extends the existing results for first-order, but primal-only algorithms. An important implication of our result is that it also gives rise to the first global convergence result to the ss2, for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
labels: cs.IT
__index_level_0__: 91,225

id: 2408.15293
title: Learning Granularity Representation for Temporal Knowledge Graph Completion
Temporal Knowledge Graphs (TKGs) incorporate temporal information to reflect the dynamic structural knowledge and evolutionary patterns of real-world facts. Nevertheless, TKGs are still limited in downstream applications due to the problem of incompleteness. Consequently, TKG completion (also known as link prediction) has been widely studied, with recent research focusing on incorporating independent embeddings of time or combining them with entities and relations to form temporal representations. However, most existing methods overlook the impact of history from a multi-granularity aspect. The inherent semantics of human-defined temporal granularities, such as ordinal dates, reveal general patterns to which facts typically adhere. To counter this limitation, this paper proposes \textbf{L}earning \textbf{G}ranularity \textbf{Re}presentation (termed $\mathsf{LGRe}$) for TKG completion. It comprises two main components: Granularity Representation Learning (GRL) and Adaptive Granularity Balancing (AGB). Specifically, GRL employs time-specific multi-layer convolutional neural networks to capture interactions between entities and relations at different granularities. After that, AGB generates adaptive weights for these embeddings according to temporal semantics, resulting in expressive representations of predictions. Moreover, to reflect similar semantics of adjacent timestamps, a temporal loss function is introduced. Extensive experimental results on four event benchmarks demonstrate the effectiveness of $\mathsf{LGRe}$ in learning time-related representations. To ensure reproducibility, our code is available at https://github.com/KcAcoZhang/LGRe.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 483,886

id: 2411.17215
title: Interval-based validation of a nonlinear estimator
In engineering, models are often used to represent the behavior of a system. Estimators are then needed to approximate the values of the model's parameters based on observations. This approximation implies a difference between the values predicted by the model and the observations that have been made. It creates an uncertainty that can lead to dangerous decision making. Interval analysis tools can be used to guarantee some properties of an estimator, even when the estimator itself doesn't rely on interval analysis (Adam, 2019) (Adam, 2015). This paper contributes to this dynamic by proposing an interval-based and guaranteed method to validate a nonlinear estimator. It is based on the Moore-Skelboe algorithm (van Emden, 2004). This method returns a guaranteed maximum error that the estimator will never exceed. We will show that we can guarantee properties even when working with non-guaranteed estimators such as neural networks.
labels: cs.RO
__index_level_0__: 511,355

id: 2012.01295
title: Generating Descriptions for Sequential Images with Local-Object Attention and Global Semantic Context Modelling
In this paper, we propose an end-to-end CNN-LSTM model for generating descriptions for sequential images with a local-object attention mechanism. To generate coherent descriptions, we capture global semantic context using a multi-layer perceptron, which learns the dependencies between sequential images. A paralleled LSTM network is exploited for decoding the sequence descriptions. Experimental results show that our model outperforms the baseline across three different evaluation metrics on the datasets published by Microsoft.
labels: cs.CL, cs.CV
__index_level_0__: 209,381

id: 1907.06484
title: A study on the Interpretability of Neural Retrieval Models using DeepSHAP
A recent trend in IR has been the usage of neural networks to learn retrieval models for text based adhoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community several approaches for explaining decisions made by deep neural networks have been proposed -- including DeepSHAP which modifies the DeepLift algorithm to estimate the relative importance (shapley values) of input features for a given decision by comparing the activations in the network for a given image against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good "black" image in the context of IR? In this paper we explored various reference input document construction techniques. Additionally, we compared the explanations generated by DeepSHAP to LIME (a model agnostic approach) and found that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.
labels: cs.IR, cs.LG
__index_level_0__: 138,634

id: 2407.03021
title: Predictions and Decision Making for Resilient Intelligent Sustainable Energy Systems
Future energy systems are subject to various uncertain influences. As resilient systems they should maintain a constantly high operational performance whatever happens. We explore different levels and time scales of decision making in energy systems, highlighting different uncertainty sources that are relevant in different domains. We discuss how the uncertainties can be represented and how one can react to them. The article closes by summarizing, which uncertainties are already well examined and which ones still need further scientific inquiry to obtain resilient energy systems.
labels: cs.SY
__index_level_0__: 469,990

id: 1505.04803
title: Predicting Important Objects for Egocentric Video Summarization
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
labels: cs.CV
__index_level_0__: 43,226

id: 2301.10611
title: Discriminator-free Unsupervised Domain Adaptation for Multi-label Image Classification
In this paper, a discriminator-free adversarial-based Unsupervised Domain Adaptation (UDA) for Multi-Label Image Classification (MLIC) referred to as DDA-MLIC is proposed. Recently, some attempts have been made for introducing adversarial-based UDA methods in the context of MLIC. However, these methods which rely on an additional discriminator subnet present one major shortcoming. The learning of domain-invariant features may harm their task-specific discriminative power, since the classification and discrimination tasks are decoupled. Herein, we propose to overcome this issue by introducing a novel adversarial critic that is directly deduced from the task-specific classifier. Specifically, a two-component Gaussian Mixture Model (GMM) is fitted on the source and target predictions in order to distinguish between two clusters. This allows extracting a Gaussian distribution for each component. The resulting Gaussian distributions are then used for formulating an adversarial loss based on a Frechet distance. The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift. The obtained results demonstrate that DDA-MLIC outperforms existing state-of-the-art methods in terms of precision while requiring a lower number of parameters. The code is publicly available at github.com/cvi2snt/DDA-MLIC.
labels: cs.CV
__index_level_0__: 341,861

id: 2410.08666
title: DeltaDQ: Ultra-High Delta Compression for Fine-Tuned LLMs via Group-wise Dropout and Separate Quantization
Large language models achieve exceptional performance on various downstream tasks through supervised fine-tuning. However, the diversity of downstream tasks and practical requirements makes deploying multiple full-parameter fine-tuned models challenging. Current methods that compress the delta weight struggle to achieve ultra-high compression, failing to minimize the deployment overhead. To address the above issue, we propose a novel distribution-driven delta compression framework DeltaDQ, which utilizes Group-wise Dropout and Separate Quantization to achieve ultra-high compression for the delta weight. We have observed that the matrix-computed intermediate results for the delta weight exhibit extremely small variance and min-max range characteristics, referred to as Balanced Intermediate Results. Exploiting this phenomenon, we introduce Group-wise Dropout to perform dropout on the delta weight using an optimal group size. Furthermore, using Separate Quantization, sparse weights are quantized and decomposed to achieve a lower bit. Experimental results show that DeltaDQ achieves 16x compression with improved accuracy compared to baselines for WizardMath and WizardCoder models across different parameter scales. Moreover, DeltaDQ demonstrates the ability for ultra-high compression ratio, achieving 128x compression for the WizardMath-7B model and 512x compression for the WizardMath-70B model.
labels: cs.AI, cs.LG
__index_level_0__: 497,225

id: 1709.03698
title: Reversible Architectures for Arbitrarily Deep Residual Neural Networks
Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memory-efficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
80,514
2401.06559
A General Benchmark Framework is Dynamic Graph Neural Network Need
Dynamic graph learning is crucial for modeling real-world systems with evolving relationships and temporal dynamics. However, the lack of a unified benchmark framework in current research has led to inaccurate evaluations of dynamic graph models. This paper highlights the significance of dynamic graph learning and its applications in various domains. It emphasizes the need for a standardized benchmark framework that captures temporal dynamics, evolving graph structures, and downstream task requirements. Establishing a unified benchmark will help researchers understand the strengths and limitations of existing models, foster innovation, and advance dynamic graph learning. In conclusion, this paper identifies the lack of a standardized benchmark framework as a current limitation in dynamic graph learning research. Such a framework will facilitate accurate model evaluation, drive advancements in dynamic graph learning techniques, and enable the development of more effective models for real-world applications.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
421,201
2008.00829
Deep Network Ensemble Learning applied to Image Classification using CNN Trees
Traditional machine learning approaches may fail to perform satisfactorily when dealing with complex data. In this context, the importance of data mining evolves w.r.t. building an efficient knowledge discovery and mining framework. Ensemble learning is aimed at integration of fusion, modeling and mining of data into a unified model. However, traditional ensemble learning methods are complex and have optimization or tuning problems. In this paper, we propose a simple, sequential, efficient, ensemble learning approach using multiple deep networks. The deep network used in the ensembles is ResNet50. The model draws inspiration from binary decision/classification trees. The proposed approach is compared against the baseline viz. the single classifier approach i.e. using a single multiclass ResNet50 on the ImageNet and Natural Images datasets. Our approach outperforms the baseline on all experiments on the ImageNet dataset. Code is available in https://github.com/mueedhafiz1982/CNNTreeEnsemble.git
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
190,132
2111.07415
Read-and-Run Constrained Coding for Modern Flash Devices
The pivotal storage density win achieved by solid-state devices over magnetic devices in 2015 is a result of multiple innovations in physics, architecture, and signal processing. One of the most important innovations in that regard is enabling the storage of more than one bit per cell in the Flash device, i.e., having more than two charge levels per cell. Constrained coding is used in Flash devices to increase reliability via mitigating inter-cell interference that stems from charge propagation among cells. Recently, capacity-achieving constrained codes were introduced to serve that purpose in modern Flash devices, which have more than two levels per cell. While these codes result in minimal redundancy via exploiting the underlying physics, they result in non-negligible complexity increase and access speed limitation since pages cannot be read separately. In this paper, we suggest new constrained coding schemes that have low-complexity and preserve the desirable high access speed in modern Flash devices. The idea is to eliminate error-prone patterns by coding data only on the left-most page while leaving data on all the remaining pages uncoded. Our coding schemes work for any number of levels per cell, offer systematic encoding and decoding, and are capacity-approaching. Since the proposed schemes enable the separation of pages, we refer to them as read-and-run (RR) constrained coding schemes as opposed to schemes adopting read-and-wait for other pages. We analyze the new RR coding schemes and discuss their impact on the probability of occurrence of different charge levels. We also demonstrate the performance improvement achieved via RR coding on a practical triple-level cell Flash device.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
266,361
2310.01686
OFDM-RSMA: Robust Transmission under Inter-Carrier Interference
Rate-splitting multiple access (RSMA) is a multiple access scheme to mitigate the effects of the multi-user interference (MUI) in multi-antenna systems. In this study, we leverage the interference management capabilities of RSMA to tackle the issue of inter-carrier interference (ICI) in orthogonal frequency division multiplexing (OFDM) waveform. We formulate a sum-rate maximization problem to find the optimal subcarrier and power allocation for downlink transmission in a two-user system using RSMA and OFDM. A weighted minimum mean-square error (WMMSE)-based algorithm is proposed to obtain a solution for the formulated non-convex problem. We show that the marriage of rate-splitting (RS) with OFDM provides complementary strengths to cope with the peculiar characteristics of the wireless medium and its performance-limiting challenges, including inter-symbol interference (ISI), MUI, ICI, and inter-numerology interference (INI). The sum-rate performance of the proposed OFDM-RSMA scheme is numerically compared with that of conventional orthogonal frequency division multiple access (OFDMA) and OFDM-non-orthogonal multiple access (NOMA). It is shown that the proposed OFDM-RSMA outperforms OFDM-NOMA and OFDMA in diverse propagation channel conditions owing to its flexible structure and robust interference management capabilities.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
396,524
1911.00804
Generalizing to unseen domains via distribution matching
Supervised learning results typically rely on assumptions of i.i.d. data. Unfortunately, those assumptions are commonly violated in practice. In this work, we tackle such problem by focusing on domain generalization: a formalization where the data generating process at test time may yield samples from never-before-seen domains (distributions). Our work relies on the following lemma: by minimizing a notion of discrepancy between all pairs from a set of given domains, we also minimize the discrepancy between any pairs of mixtures of domains. Using this result, we derive a generalization bound for our setting. We then show that low risk over unseen domains can be achieved by representing the data in a space where (i) the training distributions are indistinguishable, and (ii) relevant information for the task at hand is preserved. Minimizing the terms in our bound yields an adversarial formulation which estimates and minimizes pairwise discrepancies. We validate our proposed strategy on standard domain generalization benchmarks, outperforming a number of recently introduced methods. Notably, we tackle a real-world application where the underlying data corresponds to multi-channel electroencephalography time series from different subjects, each considered as a distinct domain.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
151,922
0912.3599
Robust Principal Component Analysis?
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,180
1804.09154
DOOM Level Generation using Generative Adversarial Networks
We applied Generative Adversarial Networks (GANs) to learn a model of DOOM levels from human-designed content. Initially, we analysed the levels and extracted several topological features. Then, for each level, we extracted a set of images identifying the occupied area, the height map, the walls, and the position of game objects. We trained two GANs: one using plain level images, one using both the images and some of the features extracted during the preliminary analysis. We used the two networks to generate new levels and compared the results to assess whether the network also trained using the topological features could generate levels more similar to human-designed ones. Our results show that GANs can capture the intrinsic structure of DOOM levels and appear to be a promising approach to level generation in first person shooter games.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
95,914
1909.07205
Knee Compliance Reduces Peak Swing Phase Collision Forces in a Lower-Limb Exoskeleton Leg: A Test Bench Evaluation
Powered lower limb exoskeletons are a viable solution for people with a spinal cord injury to regain mobility for their daily activities. However, the commonly employed rigid actuation and pre-programmed trajectories increase the risk of falling in case of collisions with external objects. Compliant actuation may reduce forces during collisions, thus protecting hardware and user. However, experimental data of collisions specific to lower limb exoskeletons are not available. In this work, we investigated how a variable stiffness actuator at the knee joint influences collision forces transmitted to the user via the exoskeleton. In a test bench experiment, we compared three configurations of an exoskeleton leg with a variable stiffness knee actuator in (i) compliant or (ii) stiff configurations, and with (iii) a rigid actuator. The peak torque observed at the pelvis was reduced from 260.2 Nm to 116.2 Nm as stiffness decreased. In addition, the mechanical impulse was reduced by a factor of three. These results indicate that compliance in the knee joint of an exoskeleton can be favorable in case of collision and should be considered when designing powered lower limb exoskeletons. Overall, this could decrease the effort necessary to maintain balance after a collision and improved collision handling in exoskeletons could result in safer use and benefit their usefulness in daily life.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
145,619
1707.00123
Joint Time Allocation and Power Control in Multi-Cell Networks with Load Coupling: Energy Saving and Rate Improvement
In this paper, we consider the problems of minimizing sum power and maximizing sum rate for multi-cell networks with load coupling, where coupling relation occurs among cells due to inter-cell interference. This coupling relation is characterized by the signal-to-interference-and-noise-ratio (SINR) coupling model with cell load vector and cell power vector as the variables. Due to the nonlinear SINR coupling model, the optimization problems for multi-cell networks with load coupling are nonconvex. To solve these nonconvex problems, we first consider the optimization problems for single-cell networks. Through variable transformations, the optimization problems can be equivalently transformed into convex problems. By solving the Karush-Kuhn-Tucker (KKT) conditions, the optimal solutions to power minimization and rate maximization problems can be obtained in closed form. Based on the theoretical findings of optimization problems for single-cell networks, we develop a distributed time allocation and power control algorithm with low complexity for sum power minimization in multi-cell networks. This algorithm is proved to be convergent and globally optimal by using the properties of standard interference function. For sum rate optimization in multi-cell networks, we also provide a distributed algorithm which yields suboptimal solution. Besides, the convergence for this distributed algorithm is proved. Numerical results illustrate the theoretical findings, showing the superiority of our solutions compared to the conventional solution of allocating uniform power for users in the same cell.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
76,299
1505.01716
Spacetimes with Semantics (II), Scaling of agency, semantics, and tenancy
Using Promise Theory as a calculus, I review how to define agency in a scalable way, for the purpose of understanding semantic spacetimes. By following simple scaling rules, replacing individual agents with `super-agents' (sub-spaces), it is shown how agency can be scaled both dynamically and semantically. The notion of occupancy and tenancy, or how space is used and filled in different ways, is also defined, showing how spacetime can be shared between independent parties, both by remote association and local encapsulation. I describe how to build up dynamic and semantic continuity, by joining discrete individual atoms and molecules of space into quasi-continuous lattices.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
42,873
2208.03419
Multi-view deep learning for reliable post-disaster damage classification
This study aims to enable more reliable automated post-disaster building damage classification using artificial intelligence (AI) and multi-view imagery. The current practices and research efforts in adopting AI for post-disaster damage assessment are generally (a) qualitative, lacking refined classification of building damage levels based on standard damage scales, and (b) trained based on aerial or satellite imagery with limited views, which, although indicative, are not completely descriptive of the damage scale. To enable more accurate and reliable automated quantification of damage levels, the present study proposes the use of more comprehensive visual data in the form of multiple ground and aerial views of the buildings. To have such a spatially-aware damage prediction model, a Multi-view Convolution Neural Network (MV-CNN) architecture is used that combines the information from different views of a damaged building. This spatial 3D context damage information will result in more accurate identification of damages and reliable quantification of damage levels. The proposed model is trained and validated on reconnaissance visual dataset containing expert-labeled, geotagged images of the inspected buildings following hurricane Harvey. The developed model demonstrates reasonably good accuracy in predicting the damage levels and can be used to support more informed and reliable AI-assisted disaster management practices.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
311,778
1705.02753
Optimizing Pilot Overhead for Ultra-Reliable Short-Packet Transmission
In this paper we optimize the pilot overhead for ultra-reliable short-packet transmission and investigate the dependence of this overhead on packet size and error probability. In particular, we consider a point-to-point communication in which one sensor sends messages to a central node, or base-station, over AWGN with Rayleigh fading channel. We formalize the optimization in terms of approximate achievable rates at a given block length, pilot length, and error probability. This leads to more accurate pilot overhead optimization. Simulation results show that it is important to take into account the packet size and the error probability when optimizing the pilot overhead.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
73,057
2310.18509
Deep Reinforcement Learning for Weapons to Targets Assignment in a Hypersonic strike
We use deep reinforcement learning (RL) to optimize a weapons to target assignment (WTA) policy for multi-vehicle hypersonic strike against multiple targets. The objective is to maximize the total value of destroyed targets in each episode. Each randomly generated episode varies the number and initial conditions of the hypersonic strike weapons (HSW) and targets, the value distribution of the targets, and the probability of a HSW being intercepted. We compare the performance of this WTA policy to that of a benchmark WTA policy derived using non-linear integer programming (NLIP), and find that the RL WTA policy gives near optimal performance with a 1000X speedup in computation time, allowing real time operation that facilitates autonomous decision making in the mission end game.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
403,571
2209.14910
Graph Modeling in Computer Assisted Automotive Development
We consider graph modeling for a knowledge graph for vehicle development, with a focus on crash safety. An organized schema that incorporates information from various structured and unstructured data sources is provided, which includes relevant concepts within the domain. In particular, we propose semantics for crash computer aided engineering (CAE) data, which enables searchability, filtering, recommendation, and prediction for crash CAE data during the development process. This graph modeling considers the CAE data in the context of the R&D development process and vehicle safety. Consequently, we connect CAE data to the protocols that are used to assess vehicle safety performances. The R&D process includes CAD engineering and safety attributes, with a focus on multidisciplinary problem-solving. We describe previous efforts in graph modeling in comparison to our proposal, discuss its strengths and limitations, and identify areas for future work.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
320,392
2310.09468
Randomized Benchmarking of Local Zeroth-Order Optimizers for Variational Quantum Systems
In the field of quantum information, classical optimizers play an important role. From experimentalists optimizing their physical devices to theorists exploring variational quantum algorithms, many aspects of quantum information require the use of a classical optimizer. For this reason, there are many papers that benchmark the effectiveness of different optimizers for specific quantum optimization tasks and choices of parameterized algorithms. However, for researchers exploring new algorithms or physical devices, the insights from these studies don't necessarily translate. To address this concern, we compare the performance of classical optimizers across a series of partially-randomized tasks to more broadly sample the space of quantum optimization problems. We focus on local zeroth-order optimizers due to their generally favorable performance and query-efficiency on quantum systems. We discuss insights from these experiments that can help motivate future works to improve these optimizers for use on quantum systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,785
2304.07969
Learning to "Segment Anything" in Thermal Infrared Images through Knowledge Distillation with a Large Scale Dataset SATIR
The Segment Anything Model (SAM) is a promptable segmentation model recently introduced by Meta AI that has demonstrated its prowess across various fields beyond just image segmentation. SAM can accurately segment images across diverse fields and generate various masks. We discovered that this ability of SAM can be leveraged to pretrain models for specific fields. Accordingly, we have proposed a framework that utilizes SAM to generate pseudo labels for pretraining thermal infrared image segmentation tasks. Our proposed framework can effectively improve the accuracy of segmentation results of specific categories beyond the SOTA ImageNet pretrained model. Our framework presents a novel approach to collaborate with models trained with large data like SAM to address problems in special fields. Also, we generated a large scale thermal infrared segmentation dataset used for pretraining, which contains over 100,000 images with pixel-annotation labels. This approach offers an effective solution for working with large models in special fields where label annotation is challenging. Our code is available at https://github.com/chenjzBUAA/SATIR
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
358,542
1911.11394
LaFIn: Generative Landmark Guided Face Inpainting
It is challenging to inpaint face images in the wild, due to the large variation of appearance, such as different poses, expressions and occlusions. A good inpainting algorithm should guarantee the realism of output, including the topological structure among eyes, nose and mouth, as well as the attribute consistency on pose, gender, ethnicity, expression, etc. This paper studies an effective deep learning based strategy to deal with these issues, which comprises a facial landmark predicting subnet and an image inpainting subnet. Concretely, given partial observation, the landmark predictor aims to provide the structural information (e.g. topological relationship and expression) of incomplete faces, while the inpaintor is to generate plausible appearance (e.g. gender and ethnicity) conditioned on the predicted landmarks. Experiments on the CelebA-HQ and CelebA datasets are conducted to reveal the efficacy of our design and to demonstrate its superiority over state-of-the-art alternatives both qualitatively and quantitatively. In addition, we assume that high-quality completed faces together with their landmarks can be utilized as augmented data to further improve the performance of (any) landmark predictor, which is corroborated by experimental results on the 300W and WFLW datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
155,112
1808.07864
Secure Relaying in Non-Orthogonal Multiple Access: Trusted and Untrusted Scenarios
A downlink single-input single-output non-orthogonal multiple access setting is considered, in which a base station (BS) is communicating with two legitimate users in two possible scenarios of unsecure environments: existence of an external eavesdropper and communicating through an untrusted relay. For the first scenario, a number of trusted cooperative half-duplex relays is employed to assist with the BS's transmission and secure its signals from the external eavesdropper. Various relaying schemes are proposed and analyzed for that matter: cooperative jamming, decode-and-forward, and amplify-and-forward. For each scheme, secure beamforming signals are devised at the relays to maximize the achievable secrecy rate regions. For the second scenario, with the untrusted relay, achievable secrecy rate regions are derived for two different relaying schemes: compress-and-forward and amplify-and-forward, under two different modes of operation. In the first mode, coined passive user mode, the users receive signals from both the BS and the untrusted relay, and combine them to decode their messages. In the second mode, coined active user mode, the users transmit a cooperative jamming signal simultaneously with the BS's transmission to further confuse the relay. Focusing on half-duplex nodes, the users cannot receive the BS's signal while jamming the relay, i.e., while being active, and rely only on the signals forwarded to them by the relay. It is shown that the best relaying scheme highly depends on the system parameters, in particular distances between the nodes, and also on which part of the secrecy rate region the system is to operate at.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
105,826
2406.16143
Review of Zero-Shot and Few-Shot AI Algorithms in The Medical Domain
In this paper, different techniques of few-shot, zero-shot, and regular object detection have been investigated. The need for few-shot learning and zero-shot learning techniques is crucial and arises from the limitations and challenges in traditional machine learning, deep learning, and computer vision methods where they require large amounts of data, plus the poor generalization of those traditional methods. Those techniques can give us prominent results by using only a few training sets, reducing the required amounts of data and improving the generalization. This survey will highlight the recent papers of the last three years that introduce the usage of few-shot learning and zero-shot learning techniques in addressing the challenges mentioned earlier. In this paper we reviewed the zero-shot, few-shot and regular object detection methods and categorized them in an understandable manner. Based on the comparison made within each category, it has been found that the approaches are quite impressive. This integrated review of diverse papers on few-shot, zero-shot, and regular object detection reveals a shared focus on advancing the field through novel frameworks and techniques. A noteworthy observation is the scarcity of detailed discussions regarding the difficulties encountered during the development phase. Contributions include the introduction of innovative models, such as ZSD-YOLO and GTNet, often showcasing improvements with various metrics such as mean average precision (mAP), Recall@100 (RE@100), the area under the receiver operating characteristic curve (AUROC) and precision. These findings underscore a collective move towards leveraging vision-language models for versatile applications, with potential areas for future research including a more thorough exploration of limitations and domain-specific adaptations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
467,013
2002.11369
Lipschitz standardization for multivariate learning
Probabilistic learning is increasingly being tackled as an optimization problem, with gradient-based approaches as predominant methods. When modelling multivariate likelihoods, a usual but undesirable outcome is that the learned model fits only a subset of the observed variables, overlooking the rest. In this work, we study this problem through the lens of multitask learning (MTL), where similar effects have been broadly studied. While MTL solutions do not directly apply in the probabilistic setting (as they cannot handle the likelihood constraints) we show that similar ideas may be leveraged during data preprocessing. First, we show that data standardization often helps under common continuous likelihoods, but it is not enough in the general case, especially under mixed continuous and discrete likelihood models. In order to balance multivariate learning, we then propose a novel data preprocessing, Lipschitz standardization, which balances the local Lipschitz smoothness across variables. Our experiments on real-world datasets show that Lipschitz standardization leads to more accurate multivariate models than the ones learned using existing data preprocessing techniques. The models and datasets employed in the experiments can be found in https://github.com/adrianjav/lipschitz-standardization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
165,689
2011.00241
Methods for Pruning Deep Neural Networks
This paper presents a survey of methods for pruning deep neural networks. It begins by categorising over 150 studies based on the underlying approach used and then focuses on three categories: methods that use magnitude based pruning, methods that utilise clustering to identify redundancy, and methods that use sensitivity analysis to assess the effect of pruning. Some of the key influencing studies within these categories are presented to highlight the underlying approaches and results achieved. Most studies present results which are distributed in the literature as new architectures, algorithms and data sets have developed with time, making comparisons across different studies difficult. The paper therefore provides a resource for the community that can be used to quickly compare the results from many different methods on a variety of data sets, and a range of architectures, including AlexNet, ResNet, DenseNet and VGG. The resource is illustrated by comparing the results published for pruning AlexNet and ResNet50 on ImageNet and ResNet56 and VGG16 on the CIFAR10 data to reveal which pruning methods work well in terms of retaining accuracy whilst achieving good compression rates. The paper concludes by identifying some promising directions for future research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
204,143
2403.11887
SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
Low-rank adaptation (LoRA) and its variants are widely employed in fine-tuning large models, including large language models for natural language processing and diffusion models for computer vision. This paper proposes a generalized framework called SuperLoRA that unifies and extends different LoRA variants, which can be realized under different hyper-parameter settings. Introducing grouping, folding, shuffling, projecting, and tensor factoring, SuperLoRA offers high flexibility compared with other LoRA variants and demonstrates superior performance for transfer learning tasks especially in the extremely few-parameter regimes.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
438,914
1808.01426
Abstractive Summarization Improved by WordNet-based Extractive Sentences
Recently, the seq2seq abstractive summarization models have achieved good results on the CNN/Daily Mail dataset. Still, how to improve abstractive methods with extractive methods is a good research direction, since extractive methods have the potential of exploiting various efficient features for extracting important sentences in one text. In this paper, in order to improve the semantic relevance of abstractive summaries, we adopt the WordNet based sentence ranking algorithm to extract the sentences which are most semantically related to the text. Then, we design a dual attentional seq2seq framework to generate summaries with consideration of the extracted information. At the same time, we combine pointer-generator and coverage mechanisms to solve the problems of out-of-vocabulary (OOV) words and duplicate words which exist in the abstractive models. Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-the-art ROUGE scores. Human evaluations also show that the summaries generated by our models have high semantic relevance to the original text.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
104,566
2110.00338
Summarize and Search: Learning Consensus-aware Dynamic Convolution for Co-Saliency Detection
Humans perform co-saliency detection by first summarizing the consensus knowledge in the whole group and then searching corresponding objects in each image. Previous methods usually lack robustness, scalability, or stability for the first process and simply fuse consensus features with image features for the second process. In this paper, we propose a novel consensus-aware dynamic convolution model to explicitly and effectively perform the "summarize and search" process. To summarize consensus image features, we first summarize robust features for every single image using an effective pooling method and then aggregate cross-image consensus cues via the self-attention mechanism. By doing this, our model meets the scalability and stability requirements. Next, we generate dynamic kernels from consensus features to encode the summarized consensus knowledge. Two kinds of kernels are generated in a supplementary way to summarize fine-grained image-specific consensus object cues and the coarse group-wise common knowledge, respectively. Then, we can effectively perform object searching by employing dynamic convolution at multiple scales. Besides, a novel and effective data synthesis method is also proposed to train our network. Experimental results on four benchmark datasets verify the effectiveness of our proposed method. Our code and saliency maps are available at \url{https://github.com/nnizhang/CADC}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
258,361
2008.10634
DiverseNet: When One Right Answer is not Enough
Many structured prediction tasks in machine vision have a collection of acceptable answers, instead of one definitive ground truth answer. Segmentation of images, for example, is subject to human labeling bias. Similarly, there are multiple possible pixel values that could plausibly complete occluded image regions. State-of-the-art supervised learning methods are typically optimized to make a single test-time prediction for each query, failing to find other modes in the output space. Existing methods that allow for sampling often sacrifice speed or accuracy. We introduce a simple method for training a neural network, which enables diverse structured predictions to be made for each test-time query. For a single input, we learn to predict a range of possible answers. We compare favorably to methods that seek diversity through an ensemble of networks. Such stochastic multiple choice learning faces mode collapse, where one or more ensemble members fail to receive any training signal. Our best performing solution can be deployed for various tasks, and just involves small modifications to the existing single-mode architecture, loss function, and training regime. We demonstrate that our method results in quantitative improvements across three challenging tasks: 2D image completion, 3D volume estimation, and flow prediction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,054
2302.05997
The FOLE Database
This paper continues the discussion of the representation and interpretation of ontologies in the first-order logical environment {\ttfamily FOLE} (Kent). Ontologies are represented and interpreted in (many-sorted) first-order logic. Five papers provide a rigorous mathematical representation for the {\ttfamily ERA} (entity-relationship-attribute) data model (Chen) in particular, and ontologies in general, within the first-order logical environment {\ttfamily FOLE}. Two papers (Kent and another paper) represent the formalism and semantics of (many-sorted) first-order logic in a \emph{classification form} corresponding to ideas discussed in the Information Flow Framework (IFF). Two papers (Kent and the current paper) represent (many-sorted) first-order logic in an \emph{interpretation form} expanding on material found in the paper (Kent). A fifth paper (Kent) demonstrates that the classification form of {\ttfamily FOLE} is "informationally equivalent" to the interpretation form of {\ttfamily FOLE}, thereby defining the formalism and semantics of first-order logical/relational database systems. Although the classification form follows the entity-relationship-attribute data model of Chen, the interpretation form incorporates the relational data model of Codd. Two further papers discuss the "relational algebra" (Kent) and the "relational calculus". In general, the {\ttfamily FOLE} representation uses a conceptual structures approach, that is completely compatible with the theory of institutions (Goguen and Burstall), formal concept analysis (Ganter and Wille), and information flow (Barwise and Seligman).
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
345,249
1205.6695
On The Spatiotemporal Burstiness of Terms
Thousands of documents are made available to the users via the web on a daily basis. One of the most extensively studied problems in the context of such document streams is burst identification. Given a term t, a burst is generally exhibited when an unusually high frequency is observed for t. While spatial and temporal burstiness have been studied individually in the past, our work is the first to simultaneously track and measure spatiotemporal term burstiness. In addition, we use the mined burstiness information toward an efficient document-search engine: given a user's query of terms, our engine returns a ranked list of documents discussing influential events with a strong spatiotemporal impact. We demonstrate the efficiency of our methods with an extensive experimental evaluation on real and synthetic datasets.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
16,241
1801.02281
Winograd Schema - Knowledge Extraction Using Narrative Chains
The Winograd Schema Challenge (WSC) is a test of machine intelligence, designed to be an improvement on the Turing test. A Winograd Schema consists of a sentence and a corresponding question. To successfully answer these questions, one requires the use of commonsense knowledge and reasoning. This work focuses on extracting common sense knowledge which can be used to generate answers for the Winograd schema challenge. Common sense knowledge is extracted based on events (or actions) and their participants; called Event-Based Conditional Commonsense (ECC). I propose an approach using Narrative Event Chains [Chambers et al., 2008] to extract ECC knowledge. These are stored in templates, to be later used for answering the WSC questions. This approach works well with respect to a subset of WSC tasks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
87,899
2011.02578
Learning and Evaluating Representations for Deep One-class Classification
We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations. The framework not only allows to learn better representations, but also permits building one-class classifiers that are faithful to the target task. We argue that classifiers inspired by the statistical perspective in generative or discriminative models are more effective than existing approaches, such as a normality score from a surrogate classifier. We thoroughly evaluate different self-supervised representation learning algorithms under the proposed framework for one-class classification. Moreover, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations. In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks, including novelty and anomaly detection. Finally, we present visual explanations, confirming that the decision-making process of deep one-class classifiers is intuitive to humans. The code is available at https://github.com/google-research/deep_representation_one_class.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
204,964
2110.14051
A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
Machine learning models often encounter samples that are diverged from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assign that sample to an in-class label significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains tackle the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection. Despite having similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they seem to only focus on a specific domain without examining the relationship between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodology synergistically. Furthermore, to the best of our knowledge, while there are surveys in anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
263,395
2308.13191
Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers
Although dominant in natural language processing, transformer-based models remain challenged by the task of long-sequence processing, because the computational cost of self-attention operations in transformers swells quadratically with the input sequence length. To alleviate the complexity of long-sequence processing, we propose a simple framework to enable the off-the-shelf pre-trained transformers to process much longer sequences, while the computation and memory costs remain growing linearly with the input sequence lengths. More specifically, our method divides each long-sequence input into a batch of chunks, then aligns the inter-chunk information during the encoding steps, and finally selects the most representative hidden states from the encoder for the decoding process. To extract inter-chunk semantic information, we align the start and end token embeddings among chunks in each encoding transformer block. To learn an effective hidden selection policy, we design a dual updating scheme inspired by reinforcement learning, which regards the decoders of transformers as environments, and the downstream performance metrics as the rewards to evaluate the hidden selection actions. Our empirical results on real-world long-text summarization and reading comprehension tasks demonstrate effective improvements compared to prior long-sequence processing baselines.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
387,818
2408.05749
Efficient and Versatile Robust Fine-Tuning of Zero-shot Models
Large-scale image-text pre-trained models enable zero-shot classification and provide consistent accuracy across various data distributions. Nonetheless, optimizing these models in downstream tasks typically requires fine-tuning, which reduces generalization to out-of-distribution (OOD) data and demands extensive computational resources. We introduce Robust Adapter (R-Adapter), a novel method for fine-tuning zero-shot models to downstream tasks while simultaneously addressing both these issues. Our method integrates lightweight modules into the pre-trained model and employs novel self-ensemble techniques to boost OOD robustness and reduce storage expenses substantially. Furthermore, we propose MPM-NCE loss designed for fine-tuning on vision-language downstream tasks. It ensures precise alignment of multiple image-text pairs and discriminative feature learning. By extending the benchmark for robust fine-tuning beyond classification to include diverse tasks such as cross-modal retrieval and open vocabulary segmentation, we demonstrate the broad applicability of R-Adapter. Our extensive experiments demonstrate that R-Adapter achieves state-of-the-art performance across a diverse set of tasks, tuning only 13% of the parameters of the CLIP encoders.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
479,918
2309.11655
Achieving Autonomous Cloth Manipulation with Optimal Control via Differentiable Physics-Aware Regularization and Safety Constraints
Cloth manipulation is a category of deformable object manipulation of great interest to the robotics community, from applications of automated laundry-folding and home organizing and cleaning to textiles and flexible manufacturing. Despite the desire for automated cloth manipulation, the thin-shell dynamics and under-actuation nature of cloth present significant challenges for robots to effectively interact with them. Many recent works omit explicit modeling in favor of learning-based methods that may yield control policies directly. However, these methods require large training sets that must be collected and curated. In this regard, we create a framework for differentiable modeling of cloth dynamics leveraging an Extended Position-based Dynamics (XPBD) algorithm. Together with the desired control objective, physics-aware regularization terms are designed for better results, including trajectory smoothness and elastic potential energy. In addition, safety constraints, such as avoiding obstacles, can be specified using signed distance functions (SDFs). We formulate the cloth manipulation task with safety constraints as a constrained optimization problem, which can be effectively solved by mainstream gradient-based optimizers thanks to the end-to-end differentiability of our framework. Finally, we assess the proposed framework for manipulation tasks with various safety thresholds and demonstrate the feasibility of result trajectories on a surgical robot. The effects of the regularization terms are analyzed in an additional ablation study.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
393,485
2209.05247
TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories
Robustly classifying ground infrastructure such as roads and street crossings is an essential task for mobile robots operating alongside pedestrians. While many semantic segmentation datasets are available for autonomous vehicles, models trained on such datasets exhibit a large domain gap when deployed on robots operating in pedestrian spaces. Manually annotating images recorded from pedestrian viewpoints is both expensive and time-consuming. To overcome this challenge, we propose TrackletMapper, a framework for annotating ground surface types such as sidewalks, roads, and street crossings from object tracklets without requiring human-annotated data. To this end, we project the robot ego-trajectory and the paths of other traffic participants into the ego-view camera images, creating sparse semantic annotations for multiple types of ground surfaces from which a ground segmentation model can be trained. We further show that the model can be self-distilled for additional performance benefits by aggregating a ground surface map and projecting it into the camera images, creating a denser set of training annotations compared to the sparse tracklet annotations. We qualitatively and quantitatively attest our findings on a novel large-scale dataset for mobile robots operating in pedestrian areas. Code and dataset will be made available at http://trackletmapper.cs.uni-freiburg.de.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
317,037
2407.10359
Evolved Developmental Artificial Neural Networks for Multitasking with Advanced Activity Dependence
Recently, Cartesian Genetic Programming has been used to evolve developmental programs to guide the formation of artificial neural networks (ANNs). This approach has demonstrated success in enabling ANNs to perform multiple tasks while avoiding catastrophic forgetting. One unique aspect of this approach is the use of separate developmental programs evolved to regulate the development of separate soma and dendrite units. An opportunity afforded by this approach is the ability to incorporate Activity Dependence (AD) into the model such that environmental feedback can help to regulate the behavior of each type of unit. Previous work has shown a limited version of AD (influencing neural bias) to provide marginal improvements over non-AD ANNs. In this work, we present promising results from new extensions to AD. Specifically, we demonstrate a more significant improvement via AD on new neural parameters including health and position, as well as a combination of all of these along with bias. We report on the implications of this work and suggest several promising directions for future work.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
472,955
2403.15192
SFOD: Spiking Fusion Object Detector
Event cameras, characterized by high temporal resolution, high dynamic range, low power consumption, and high pixel bandwidth, offer unique capabilities for object detection in specialized contexts. Despite these advantages, the inherent sparsity and asynchrony of event data pose challenges to existing object detection algorithms. Spiking Neural Networks (SNNs), inspired by the way the human brain codes and processes information, offer a potential solution to these difficulties. However, their performance in object detection using event cameras is limited in current implementations. In this paper, we propose the Spiking Fusion Object Detector (SFOD), a simple and efficient approach to SNN-based object detection. Specifically, we design a Spiking Fusion Module, achieving the first-time fusion of feature maps from different scales in SNNs applied to event cameras. Additionally, through integrating our analysis and experiments conducted during the pretraining of the backbone network on the NCAR dataset, we delve deeply into the impact of spiking decoding strategies and loss functions on model performance. Thereby, we establish state-of-the-art classification results based on SNNs, achieving 93.7\% accuracy on the NCAR dataset. Experimental results on the GEN1 detection dataset demonstrate that the SFOD achieves a state-of-the-art mAP of 32.1\%, outperforming existing SNN-based approaches. Our research not only underscores the potential of SNNs in object detection with event cameras but also propels the advancement of SNNs. Code is available at https://github.com/yimeng-fan/SFOD.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
440,439
1005.2443
Network Coded Transmission of Fountain Codes over Cooperative Relay Networks
In this paper, a transmission strategy of fountain codes over cooperative relay networks is proposed. When more than one relay nodes are available, we apply network coding to fountain-coded packets. By doing this, partial information is made available to the destination node about the upcoming message block. It is therefore able to reduce the required number of transmissions over erasure channels, hence increasing the effective throughput. Its application to wireless channels with Rayleigh fading and AWGN noise is also analysed, whereby the role of analogue network coding and optimal weight selection is demonstrated.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,484
2209.12362
Multi-dataset Training of Transformers for Robust Action Recognition
We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for its efficacy. Although we have witnessed great progress for video action recognition in the past decade, it remains a challenging yet valuable problem to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets, Kinetics-400, Kinetics-700, Moments-in-Time, Activitynet and Something-something-v2 datasets. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are released.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
319,508
1105.2988
Anatomy of a Bit: Information in a Time Series Observation
Appealing to several multivariate information measures---some familiar, some new here---we analyze the information embedded in discrete-valued stochastic time series. We dissect the uncertainty of a single observation to demonstrate how the measures' asymptotic behavior sheds structural and semantic light on the generating process's internal information dynamics. The measures scale with the length of time window, which captures both intensive (rates of growth) and subextensive components. We provide interpretations for the components, developing explicit relationships between them. We also identify the informational component shared between the past and the future that is not contained in a single observation. The existence of this component directly motivates the notion of a process's effective (internal) states and indicates why one must build models.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,376
1401.2690
Distance Landmarks Revisited for Road Graphs
Computing shortest distances is one of the fundamental problems on graphs, and remains a {\em challenging} task today. {\em Distance} {\em landmarks} have been recently studied for shortest distance queries with an auxiliary data structure, referred to as {\em landmark} {\em covers}. This paper studies how to apply distance landmarks for fast {\em exact} shortest distance query answering on large road graphs. However, the {\em direct} application of distance landmarks is {\em impractical} due to the high space and time cost. To rectify this problem, we investigate novel techniques that can be seamlessly combined with distance landmarks. We first propose a notion of {\em hybrid landmark covers}, a revision of landmark covers. Second, we propose a notion of {\em agents}, each of which represents a small subgraph and holds good properties for fast distance query answering. We also show that agents can be computed in {\em linear time}. Third, we introduce graph partitions to deal with the remaining subgraph that cannot be captured by agents. Fourth, we develop a unified framework that seamlessly integrates our proposed techniques and existing optimization techniques, for fast shortest distance query answering. Finally, we experimentally verify that our techniques significantly improve the efficiency of shortest distance queries, using real-life road graphs.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
29,776
1907.06189
Emergency DC Power Support Strategy Based on Coordinated Droop Control in Multi-Infeed HVDC System
With the complex hybrid AC-DC power system in China coming into being, the HVDC faults, such as DC block faults, have an enormous effect on the frequency stability of the AC side. In multi-infeed HVDC (MIDC) system, to improve the frequency stability of the receiving-end system, this paper proposes an emergency DC power support (EDCPS) strategy, which is based on a designed droop characteristic between the active power of LCC-HVDC lines and the receiving-end system frequency. Then, a coordinated optimization method is proposed to determine the droop coefficients of the MIDC lines. To test the proposed EDCPS method, the electromagnetic transient (EMT) model of a MIDC system is built on the CloudPSS platform, and the test results verify the effectiveness of the proposed EDCPS strategy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
138,555
2404.00288
Seeing the Unseen: A Frequency Prompt Guided Transformer for Image Restoration
How to explore useful features from images as prompts to guide the deep image restoration models is an effective way to solve image restoration. In contrast to mining spatial relations within images as prompt, which leads to characteristics of different frequencies being neglected and further remaining subtle or undetectable artifacts in the restored image, we develop a Frequency Prompting image restoration method, dubbed FPro, which can effectively provide prompt components from a frequency perspective to guide the restoration model in addressing these differences. Specifically, we first decompose input features into separate frequency parts via dynamically learned filters, where we introduce a gating mechanism for suppressing the less informative elements within the kernels. To propagate useful frequency information as prompt, we then propose a dual prompt block, consisting of a low-frequency prompt modulator (LPM) and a high-frequency prompt modulator (HPM), to handle signals from different bands respectively. Each modulator contains a generation process to incorporate prompting components into the extracted frequency maps, and a modulation part that modifies the prompt feature with the guidance of the decoder features. Experimental results on commonly used benchmarks have demonstrated the favorable performance of our pipeline against SOTA methods on 5 image restoration tasks, including deraining, deraindrop, demoir\'eing, deblurring, and dehazing. The source code and pre-trained models will be available at https://github.com/joshyZhou/FPro.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
442,852
1911.13038
Indirect Local Attacks for Context-aware Semantic Segmentation Networks
Recently, deep networks have achieved impressive semantic segmentation performance, in particular thanks to their use of larger contextual information. In this paper, we show that the resulting networks are sensitive not only to global attacks, where perturbations affect the entire input image, but also to indirect local attacks where perturbations are confined to a small image region that does not overlap with the area that we aim to fool. To this end, we introduce several indirect attack strategies, including adaptive local attacks, aiming to find the best image location to perturb, and universal local attacks. Furthermore, we propose attack detection techniques both for the global image level and to obtain a pixel-wise localization of the fooled regions. Our results are unsettling: Because they exploit a larger context, more accurate semantic segmentation networks are more sensitive to indirect local attacks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
155,566
2311.16431
An exact mathematical description of computation with transient spatiotemporal dynamics in a complex-valued neural network
We study a complex-valued neural network (cv-NN) with linear, time-delayed interactions. We report the cv-NN displays sophisticated spatiotemporal dynamics, including partially synchronized ``chimera'' states. We then use these spatiotemporal dynamics, in combination with a nonlinear readout, for computation. The cv-NN can instantiate dynamics-based logic gates, encode short-term memories, and mediate secure message passing through a combination of interactions and time delays. The computations in this system can be fully described in an exact, closed-form mathematical expression. Finally, using direct intracellular recordings of neurons in slices from neocortex, we demonstrate that computations in the cv-NN are decodable by living biological neurons. These results demonstrate that complex-valued linear systems can perform sophisticated computations, while also being exactly solvable. Taken together, these results open future avenues for design of highly adaptable, bio-hybrid computing systems that can interface seamlessly with other neural networks.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
410,881
1906.06362
Comparison of Diverse Decoding Methods from Conditional Language Models
While conditional language models have greatly improved in their ability to output high-quality natural language, many NLP applications benefit from being able to generate a diverse set of candidate sequences. Diverse decoding strategies aim to, within a given-sized candidate list, cover as much of the space of high-quality outputs as possible, leading to improvements for tasks that re-rank and combine candidate outputs. Standard decoding methods, such as beam search, optimize for generating high likelihood sequences rather than diverse ones, though recent work has focused on increasing diversity in these methods. In this work, we perform an extensive survey of decoding-time strategies for generating diverse outputs from conditional language models. We also show how diversity can be improved without sacrificing quality by over-sampling additional candidates, then filtering to the desired number.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
135,274
2310.16162
Brainchop: Next Generation Web-Based Neuroimaging Application
Performing volumetric image processing directly within the browser, particularly with medical data, presents unprecedented challenges compared to conventional backend tools. These challenges arise from limitations inherent in browser environments, such as constrained computational resources and the availability of frontend machine learning libraries. Consequently, there is a shortage of neuroimaging frontend tools capable of providing comprehensive end-to-end solutions for whole brain preprocessing and segmentation while preserving end-user data privacy and residency. In light of this context, we introduce Brainchop (http://www.brainchop.org) as a groundbreaking in-browser neuroimaging tool that enables volumetric analysis of structural MRI using pre-trained full-brain deep learning models, all without requiring technical expertise or intricate setup procedures. Beyond its commitment to data privacy, this frontend tool offers multiple features, including scalability, low latency, user-friendly operation, cross-platform compatibility, and enhanced accessibility. This paper outlines the processing pipeline of Brainchop and evaluates the performance of models across various software and hardware configurations. The results demonstrate the practicality of client-side processing for volumetric data, owing to the robust MeshNet architecture, even within the resource-constrained environment of web browsers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
402,605
1808.04503
Shared Multi-Task Imitation Learning for Indoor Self-Navigation
Deep imitation learning enables robots to learn from expert demonstrations to perform tasks such as lane following or obstacle avoidance. However, in the traditional imitation learning framework, one model only learns one task, and thus it lacks the capability to support a robot in performing various different navigation tasks with one model in indoor environments. This paper proposes a new framework, Shared Multi-headed Imitation Learning (SMIL), that allows a robot to perform multiple tasks with one model without switching among different models. We model each task as a sub-policy and design a multi-headed policy to learn the shared information among related tasks by summing up activations from all sub-policies. Compared to single or non-shared multi-headed policies, this framework is able to leverage correlated information among tasks to increase performance. We have implemented this framework using a robot based on NVIDIA TX2 and performed extensive experiments in indoor environments with different baseline solutions. The results demonstrate that SMIL has doubled the performance over the non-shared multi-headed policy.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
105,166
2210.02365
SoccerNet 2022 Challenges Results
The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team. In 2022, the challenges were composed of 6 vision-based tasks: (1) action spotting, focusing on retrieving action timestamps in long untrimmed videos, (2) replay grounding, focusing on retrieving the live moment of an action shown in a replay, (3) pitch localization, focusing on detecting line and goal part elements, (4) camera calibration, dedicated to retrieving the intrinsic and extrinsic camera parameters, (5) player re-identification, focusing on retrieving the same players across multiple views, and (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams. Compared to last year's challenges, tasks (1-2) had their evaluation metrics redefined to consider tighter temporal accuracies, and tasks (3-6) were novel, including their underlying data and annotations. More information on the tasks, challenges and leaderboards are available on https://www.soccer-net.org. Baselines and development kits are available on https://github.com/SoccerNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
321,623
2402.17231
MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical Reasoning
Tool-augmented Large Language Models (TALMs) are known to enhance the skillset of large language models (LLMs), thereby leading to their improved reasoning abilities across many tasks. While TALMs have been successfully employed in different question-answering benchmarks, their efficacy on complex mathematical reasoning benchmarks, and the potential complementary benefits offered by tools for knowledge retrieval and mathematical equation solving, are open research questions. In this work, we present MathSensei, a tool-augmented large language model for mathematical reasoning. We study the complementary benefits of the tools - knowledge retriever (Bing Web Search), program generator + executor (Python), and symbolic equation solver (Wolfram-Alpha API) - through evaluations on mathematical reasoning datasets. We perform exhaustive ablations on MATH, a popular dataset for evaluating mathematical reasoning on diverse mathematical disciplines. We also conduct experiments involving well-known tool planners to study the impact of tool sequencing on the model performance. MathSensei achieves 13.5% better accuracy over gpt-3.5-turbo with Chain-of-Thought on the MATH dataset. We further observe that TALMs are not as effective for simpler math word problems (in GSM-8K), and the benefit increases as the complexity and required knowledge increases (progressively over AQuA, MMLU-Math, and higher level complex questions in MATH). The code and data are available at https://github.com/Debrup-61/MathSensei.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
432,901
2208.12666
Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages
End-to-end (E2E) models have become the default choice for state-of-the-art speech recognition systems. Such models are trained on large amounts of labelled data, which are often not available for low-resource languages. Techniques such as self-supervised learning and transfer learning hold promise, but have not yet been effective in training accurate models. On the other hand, collecting labelled datasets on a diverse set of domains and speakers is very expensive. In this work, we demonstrate an inexpensive and effective alternative to these approaches by "mining" text and audio pairs for Indian languages from public sources, specifically from the public archives of All India Radio. As a key component, we adapt the Needleman-Wunsch algorithm to align sentences with corresponding audio segments given a long audio and a PDF of its transcript, while being robust to errors due to OCR, extraneous text, and non-transcribed speech. We thus create Shrutilipi, a dataset which contains over 6,400 hours of labelled audio across 12 Indian languages totalling 4.95M sentences. On average, Shrutilipi results in a 2.3x increase over publicly available labelled data. We establish the quality of Shrutilipi with 21 human evaluators across the 12 languages. We also establish the diversity of Shrutilipi in terms of represented regions, speakers, and mentioned named entities. Significantly, we show that adding Shrutilipi to the training set of Wav2Vec models leads to an average decrease in WER of 5.8% for 7 languages on the IndicSUPERB benchmark. For Hindi, which has the most benchmarks (7), the average WER falls from 18.8% to 13.5%. This improvement extends to efficient models: We show a 2.3% drop in WER for a Conformer model (10x smaller than Wav2Vec). Finally, we demonstrate the diversity of Shrutilipi by showing that the model trained with it is more robust to noisy input.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
314,818
1708.05296
A Survey of Parallel A*
A* is a best-first search algorithm for finding optimal-cost paths in graphs. A* benefits significantly from parallelism because in many applications, A* is limited by memory usage, so distributed memory implementations of A* that use all of the aggregate memory on the cluster enable problems that can not be solved by serial, single-machine implementations to be solved. We survey approaches to parallel A*, focusing on decentralized approaches to A* which partition the state space among processors. We also survey approaches to parallel, limited-memory variants of A* such as parallel IDA*.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
79,107
1412.8099
Extraction of Web Usage Profiles using Simulated Annealing Based Biclustering Approach
In this paper, a Simulated Annealing (SA) based biclustering approach is proposed, in which SA is used as an optimization tool for biclustering of web usage data to identify the optimal user profile from the given web usage data. Extracted biclusters consist of correlated users whose usage behaviors are similar across a subset of web pages of a web site, whereas these users are uncorrelated for the remaining pages of the web site. These results are very useful in web personalization, so that a site communicates better with its users, and for making customized predictions. They are also useful for providing customized web services. Experiments were conducted on the real web usage dataset called the CTI dataset. Results show that the proposed SA based biclustering approach can extract highly correlated user groups from the preprocessed web usage data.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
38,890
2210.02391
Geometry Driven Progressive Warping for One-Shot Face Animation
Face animation aims at creating photo-realistic portrait videos with animated poses and expressions. A common practice is to generate displacement fields that are used to warp pixels and features from source to target. However, prior attempts often produce sub-optimal displacements. In this work, we present a geometry driven model and propose two geometric patterns as guidance: 3D face rendered displacement maps and posed neural codes. The model can optionally use one of the patterns as guidance for displacement estimation. To model displacements at locations not covered by the face model (e.g., hair), we resort to source image features for contextual information and propose a progressive warping module that alternates between feature warping and displacement estimation at increasing resolutions. We show that the proposed model can synthesize portrait videos with high fidelity and achieve the new state-of-the-art results on the VoxCeleb1 and VoxCeleb2 datasets for both cross identity and same identity reconstruction.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
321,630
2001.09735
An AI model for Rapid and Accurate Identification of Chemical Agents in Mass Casualty Incidents
In this report we examine the effectiveness of WISER in identification of a chemical culprit during a chemical based Mass Casualty Incident (MCI). We also evaluate and compare Binary Decision Tree (BDT) and Artificial Neural Networks (ANN) using the same experimental conditions as WISER. The reverse engineered set of Signs/Symptoms from the WISER application was used as the training set and 31,100 simulated patient records were used as the testing set. Three sets of simulated patient records were generated by 5%, 10% and 15% perturbation of the Signs/Symptoms of each chemical record. While all three methods achieved 100% training accuracy, WISER, BDT and ANN produced performances in the ranges 1.8%-0%, 65%-26%, and 67%-21%, respectively. A preliminary investigation of dimensional reduction using ANN illustrated a dimensional collapse from 79 variables to 40 with little loss of classification performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,655
2310.19099
Web3 Meets AI Marketplace: Exploring Opportunities, Analyzing Challenges, and Suggesting Solutions
Web3 and AI have been among the most discussed fields over the recent years, with substantial hype surrounding each field's potential to transform the world as we know it. However, as the hype settles, it's evident that neither AI nor Web3 can address all challenges independently. Consequently, the intersection of AI and Web3 is gaining increased attention, emerging as a new field with the potential to address the limitations of each. In this article, we will focus on the integration of web3 and the AI marketplace, where AI services and products can be provided in a decentralized manner (DeAI). A comprehensive review is provided by summarizing the opportunities and challenges on this topic. Additionally, we offer analyses and solutions to address these challenges. We've developed a framework that lets users pay with any kind of cryptocurrency to get AI services. Additionally, they can also enjoy AI services for free on our platform by simply locking up their assets temporarily in the protocol. This unique approach is a first in the industry. Before this, offering free AI services in the web3 community wasn't possible. Our solution opens up exciting opportunities for the AI marketplace in the web3 space to grow and be widely adopted.
false
true
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
true
403,851
2308.14632
Comparing AutoML and Deep Learning Methods for Condition Monitoring using Realistic Validation Scenarios
This study extensively compares conventional machine learning methods and deep learning for condition monitoring tasks using an AutoML toolbox. The experiments reveal consistent high accuracy in random K-fold cross-validation scenarios across all tested models. However, when employing leave-one-group-out (LOGO) cross-validation on the same datasets, no clear winner emerges, indicating the presence of domain shift in real-world scenarios. Additionally, the study assesses the scalability and interpretability of conventional methods and neural networks. Conventional methods offer explainability with their modular structure aiding feature identification. In contrast, neural networks require specialized interpretation techniques like occlusion maps to visualize important regions in the input data. Finally, the paper highlights the significance of feature selection, particularly in condition monitoring tasks with limited class variations. Low-complexity models prove sufficient for such tasks, as only a few features from the input signal are typically needed. In summary, these findings offer crucial insights into the strengths and limitations of various approaches, providing valuable benchmarks and identifying the most suitable methods for condition monitoring applications, thereby enhancing their applicability in real-world scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
388,396
2311.16151
Estimating Post-Synaptic Effects for Online Training of Feed-Forward SNNs
Facilitating online learning in spiking neural networks (SNNs) is a key step in developing event-based models that can adapt to changing environments and learn from continuous data streams in real-time. Although forward-mode differentiation enables online learning, its computational requirements restrict scalability. This is typically addressed through approximations that limit learning in deep models. In this study, we propose Online Training with Postsynaptic Estimates (OTPE) for training feed-forward SNNs, which approximates Real-Time Recurrent Learning (RTRL) by incorporating temporal dynamics not captured by current approximations, such as Online Training Through Time (OTTT) and Online Spatio-Temporal Learning (OSTL). We show improved scaling for multi-layer networks using a novel approximation of temporal effects on the subsequent layer's activity. This approximation incurs minimal overhead in the time and space complexity compared to similar algorithms, and the calculation of temporal effects remains local to each layer. We characterize the learning performance of our proposed algorithms on multiple SNN model configurations for rate-based and time-based encoding. OTPE exhibits the highest directional alignment to exact gradients, calculated with backpropagation through time (BPTT), in deep networks and, on time-based encoding, outperforms other approximate methods. We also observe sizeable gains in average performance over similar algorithms in offline training of Spiking Heidelberg Digits with equivalent hyper-parameters (OTTT/OSTL - 70.5%; OTPE - 75.2%; BPTT - 78.1%).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
410,795
2302.00654
Task2KB: A Public Task-Oriented Knowledge Base
Search engines and conversational assistants are commonly used to help users complete their everyday tasks such as booking travel, cooking, etc. While there are some existing datasets that can be used for this purpose, their coverage is limited to very few domains. In this paper, we propose a novel knowledge base, 'Task2KB', which is constructed using data crawled from WikiHow, an online knowledge resource offering instructional articles on a wide range of tasks. Task2KB encapsulates various types of task-related information and attributes, such as requirements, detailed step descriptions, and available methods to complete tasks. Due to its higher coverage compared to existing related knowledge graphs, Task2KB can be highly useful in the development of general purpose task completion assistants.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
343,295
2310.17034
Follow-on Question Suggestion via Voice Hints for Voice Assistants
The adoption of voice assistants like Alexa or Siri has grown rapidly, allowing users to instantly access information via voice search. Query suggestion is a standard feature of screen-based search experiences, allowing users to explore additional topics. However, this is not trivial to implement in voice-based settings. To enable this, we tackle the novel task of suggesting questions with compact and natural voice hints to allow users to ask follow-up questions. We define the task, ground it in syntactic theory and outline linguistic desiderata for spoken hints. We propose baselines and an approach using sequence-to-sequence Transformers to generate spoken hints from a list of questions. Using a new dataset of 6681 input questions and human written hints, we evaluated the models with automatic metrics and human evaluation. Results show that a naive approach of concatenating suggested questions creates poor voice hints. Our approach, which applies a linguistically-motivated pretraining task, was strongly preferred by humans for producing the most natural hints.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
402,967
2201.13027
BOAT: Bilateral Local Attention Vision Transformer
Vision Transformers achieved outstanding performance in many computer vision tasks. Early Vision Transformers such as ViT and DeiT adopt global self-attention, which is computationally expensive when the number of patches is large. To improve efficiency, recent Vision Transformers adopt local self-attention mechanisms, where self-attention is computed within local windows. Despite the fact that window-based local self-attention significantly boosts efficiency, it fails to capture the relationships between distant but similar patches in the image plane. To overcome this limitation of image-space local attention, in this paper, we further exploit the locality of patches in the feature space. We group the patches into multiple clusters using their features, and self-attention is computed within every cluster. Such feature-space local attention effectively captures the connections between patches that lie in different local windows but are still relevant. We propose a Bilateral lOcal Attention vision Transformer (BOAT), which integrates feature-space local attention with image-space local attention. We further integrate BOAT with both Swin and CSWin models, and extensive experiments on several benchmark datasets demonstrate that our BOAT-CSWin model clearly and consistently outperforms existing state-of-the-art CNN models and vision Transformers.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
277,865
2009.03730
Large-scale Neural Solvers for Partial Differential Equations
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs. However, recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing. Scanning the parameters of the underlying model significantly increases the runtime as the simulations have to be cold-started for each parameter configuration. Machine-learning-based surrogate models offer promising ways of learning the complex relationships among inputs, parameters and solutions. However, recent generative neural networks require large amounts of training data, i.e., full simulation runs, making them costly. In contrast, we examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs), solely requiring initial/boundary values and validation points for training but no simulation data. The induced curse of dimensionality is approached by learning a domain decomposition that steers the number of neurons per unit volume and significantly improves runtime. Distributed training on large-scale cluster systems also promises great utilization of large quantities of GPUs, which we assess by a comprehensive evaluation study. Finally, we discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
194,876
2403.13512
Scale Decoupled Distillation
Logit knowledge distillation attracts increasing attention due to its practicality in recent studies. However, it often suffers inferior performance compared to the feature knowledge distillation. In this paper, we argue that existing logit-based methods may be sub-optimal since they only leverage the global logit output that couples multiple semantic knowledge. This may transfer ambiguous knowledge to the student and mislead its learning. To this end, we propose a simple but effective method, i.e., Scale Decoupled Distillation (SDD), for logit knowledge distillation. SDD decouples the global logit output into multiple local logit outputs and establishes distillation pipelines for them. This helps the student to mine and inherit fine-grained and unambiguous logit knowledge. Moreover, the decoupled knowledge can be further divided into consistent and complementary logit knowledge that transfers the semantic information and sample ambiguity, respectively. By increasing the weight of complementary parts, SDD can guide the student to focus more on ambiguous samples, improving its discrimination ability. Extensive experiments on several benchmark datasets demonstrate the effectiveness of SDD for wide teacher-student pairs, especially in the fine-grained classification task. Code is available at: https://github.com/shicaiwei123/SDD-CVPR2024
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
439,662
2405.07759
MADRL-Based Rate Adaptation for 360° Video Streaming with Multi-Viewpoint Prediction
Over the last few years, 360° video traffic on the network has grown significantly. A key challenge of 360° video playback is ensuring a high quality of experience (QoE) with limited network bandwidth. Currently, most studies focus on tile-based adaptive bitrate (ABR) streaming based on single viewport prediction to reduce bandwidth consumption. However, the performance of models for single-viewpoint prediction is severely limited by the inherent uncertainty in head movement, which cannot cope with the sudden movement of users very well. This paper first presents a multimodal spatial-temporal attention transformer to generate multiple viewpoint trajectories with their probabilities given a historical trajectory. The proposed method models viewpoint prediction as a classification problem and uses attention mechanisms to capture the spatial and temporal characteristics of input video frames and viewpoint trajectories for multi-viewpoint prediction. After that, a multi-agent deep reinforcement learning (MADRL)-based ABR algorithm utilizing multi-viewpoint prediction for 360° video streaming is proposed for maximizing different QoE objectives under various network conditions. We formulate the ABR problem as a decentralized partially observable Markov decision process (Dec-POMDP) problem and present a MAPPO algorithm based on the centralized training and decentralized execution (CTDE) framework to solve the problem. The experimental results show that our proposed method improves the defined QoE metric by up to 85.5% compared to existing ABR methods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
453,842
1804.04348
Identification of Risk Significant Automotive Scenarios Under Hardware Failures
The level of autonomous functions in vehicular control systems has been on a steady rise. This rise makes it more challenging for control system engineers to ensure a high level of safety, especially against unexpected failures such as stochastic hardware failures. A generic Backtracking Process Algorithm (BPA) based on a deductive implementation of the Markov/Cell-to-Cell Mapping technique is proposed for the identification of critical scenarios leading to the violation of safety goals. A discretized state-space representation of the system allows tracing of fault propagation throughout the system, and the quantification of probabilistic system evolution in time. A case study of a Hybrid State Control System for an autonomous vehicle prone to a brake-by-wire failure is constructed. The hazard of interest is collision with a stationary vehicle. The BPA is implemented to identify the risk significant scenarios leading to the hazard of interest.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
94,826
2408.08258
Snuffy: Efficient Whole Slide Image Classifier
Whole Slide Image (WSI) classification with multiple instance learning (MIL) in digital pathology faces significant computational challenges. Current methods mostly rely on extensive self-supervised learning (SSL) for satisfactory performance, requiring long training periods and considerable computational resources. At the same time, forgoing pre-training degrades performance due to the domain shift from natural images to WSIs. We introduce the Snuffy architecture, a novel MIL-pooling method based on sparse transformers that mitigates performance loss with limited pre-training and enables continual few-shot pre-training as a competitive option. Our sparsity pattern is tailored for pathology and is theoretically proven to be a universal approximator with the tightest probabilistic sharp bound on the number of layers for sparse transformers, to date. We demonstrate Snuffy's effectiveness on the CAMELYON16 and TCGA Lung cancer datasets, achieving superior WSI and patch-level accuracies. The code is available on https://github.com/jafarinia/snuffy.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
480,924
1401.7584
XLSearch: A Search Engine for Spreadsheets
Spreadsheets are end-user programs and domain models that are heavily employed in administration, financial forecasting, education, and science because of their intuitive, flexible, and direct approach to computation. As a result, institutions are swamped by millions of spreadsheets that are becoming increasingly difficult to manage, access, and control. This note presents the XLSearch system, a novel search engine for spreadsheets. It indexes spreadsheet formulae and efficiently answers formula queries via unification (a complex query language that allows metavariables in both the query as well as the index). But a web-based search engine is only one application of the underlying technology: Spreadsheet formula export to web standards like MathML combined with formula indexing can be used to find similar spreadsheets or common formula errors.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
30,470
2003.11102
Learning to Play Soccer by Reinforcement and Applying Sim-to-Real to Compete in the Real World
This work presents an application of Reinforcement Learning (RL) for the complete control of real soccer robots of the IEEE Very Small Size Soccer (VSSS), a traditional league in the Latin American Robotics Competition (LARC). In the VSSS league, two teams of three small robots play against each other. We propose a simulated environment in which continuous or discrete control policies can be trained, and a Sim-to-Real method to allow using the obtained policies to control a robot in the real world. The results show that the learned policies display a broad repertoire of behaviors that are difficult to specify by hand. This approach, called VSSS-RL, was able to beat the human-designed policy for the striker of the team ranked 3rd place in the 2018 LARC, in 1-vs-1 matches.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
169,513
2201.08314
Adaptive neighborhood Metric learning
In this paper, we reveal that metric learning can suffer from a serious inseparability problem without informative sample mining. Since inseparable samples are often mixed with hard samples, current informative sample mining strategies used to deal with the inseparability problem may bring side-effects, such as instability of the objective function. To alleviate this problem, we propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML). In ANML, we design two thresholds to adaptively identify the inseparable similar and dissimilar samples in the training procedure, so that inseparable sample removal and metric parameter learning are implemented in the same procedure. Due to the non-continuity of the proposed ANML, we develop an ingenious function, named the log-exp mean function, to construct a continuous formulation to surrogate it, which can be efficiently solved by the gradient descent method. Similar to Triplet loss, ANML can be used to learn both linear and deep embeddings. By analyzing the proposed method, we find it has some interesting properties. For example, when ANML is used to learn the linear embedding, current famous metric learning algorithms such as the large margin nearest neighbor (LMNN) and neighbourhood components analysis (NCA) are special cases of the proposed ANML obtained by setting the parameters to different values. When it is used to learn deep features, state-of-the-art deep metric learning algorithms such as Triplet loss, Lifted structure loss, and Multi-similarity loss become special cases of ANML. Furthermore, the log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods such as Prox-NCA and N-pairs loss. Finally, promising experimental results demonstrate the effectiveness of the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
276,303
2501.05292
Detection of Rumors and Their Sources in Social Networks: A Comprehensive Survey
With the recent advancements in social network platform technology, an overwhelming amount of information is spreading rapidly. In this situation, it can become increasingly difficult to discern what information is false or true. If false information proliferates significantly, it can lead to undesirable outcomes. Hence, when we receive some information, we can pose the following two questions: (i) Is the information true? (ii) If not, who initially spread that information? The first problem is the rumor detection issue, while the second is the rumor source detection problem. A rumor-detection problem involves identifying and mitigating false or misleading information spread via various communication channels, particularly online platforms and social media. Rumors can range from harmless ones to deliberately misleading content aimed at deceiving or manipulating audiences. Detecting misinformation is crucial for maintaining the integrity of information ecosystems and preventing harmful effects such as the spread of false beliefs, polarization, and even societal harm. Therefore, it is very important to quickly distinguish such misinformation while simultaneously finding its source to block it from spreading on the network. However, most of the existing surveys have analyzed these two issues separately. In this work, we survey the existing research on the rumor detection and rumor source detection problems, together with joint detection approaches, simultaneously. This survey deals with these two issues together so that their relationship can be observed, and it shows how the two problems are similar and different. The limitations arising from the rumor detection, rumor source detection, and their combination problems are also explained, and some challenges to be addressed in future works are presented.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
523,535
1711.07657
Condition directed Multi-domain Adversarial Learning for Loop Closure Detection
Loop closure detection (LCD) is a key module in appearance-based simultaneous localization and mapping (SLAM). However, in real life, the appearance of visual inputs is usually affected by illumination and texture changes under different weather conditions. Traditional LCD methods usually rely on handcrafted features; however, such methods are unable to capture descriptions common across different weather conditions, such as rainy, foggy, and sunny, and they cannot capture a high-level understanding of the local scenes. In this paper, we propose a novel condition-directed multi-domain adversarial learning method, where the weather condition directs the feature inference. Based on generative adversarial networks (GANs) and a classification network, the proposed method extracts high-level weather-invariant features directly from the raw data. The only labels required are the weather condition of each visual input. Experiments are conducted in the GTAV game simulator, which can generate lifelike outdoor scenes under different weather conditions. The LCD results show that our method significantly outperforms the state of the art.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
85,044
1311.4235
On the definition of a general learning system with user-defined operators
In this paper, we push forward the idea of machine learning systems whose operators can be modified and fine-tuned for each problem. This allows us to propose a learning paradigm where users can write (or adapt) their operators according to the problem, the data representation, and the way the information should be navigated. To achieve this goal, data instances, background knowledge, rules, programs, and operators are all written in the same functional language, Erlang. Since changing operators affects how the search space needs to be explored, heuristics are learnt as the result of a decision process based on reinforcement learning, where each action is defined as a choice of operator and rule. As a result, the architecture can be seen as a 'system for writing machine learning systems' or for exploring new operators, where policy reuse (as a kind of transfer learning) is allowed. States and actions are represented in a Q matrix, which is actually a table, from which a supervised model is learnt. This makes it possible to have a more flexible mapping between old and new problems, since we work with an abstraction of rules and actions. We include some examples of policy reuse and the application of the system gErl to IQ problems. In order to evaluate gErl, we test it against some structured problems: a selection of IQ test tasks and some experiments on structured prediction problems (list patterns).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
28,479
1604.05194
Preference Elicitation For Single Crossing Domain
Eliciting the preferences of a set of agents over a set of alternatives is a problem of fundamental importance in social choice theory. Prior work on this problem has studied the query complexity of preference elicitation for the unrestricted domain and for the domain of single peaked preferences. In this paper, we consider the domain of single crossing preference profiles and study the query complexity of preference elicitation under various settings. We consider two distinct situations: when an ordering of the voters with respect to which the profile is single crossing is known, versus when it is unknown. We also consider different access models: when the votes can be accessed at random, as opposed to when they arrive in a pre-defined sequence. In the sequential access model, we distinguish two cases when the ordering is known: when the sequence in which the votes appear is itself a single-crossing order, versus when it is not. The main contribution of our work is to provide polynomial time algorithms with low query complexity for preference elicitation in all six of the above cases. Further, we show that the query complexities of our algorithms are optimal up to constant factors for all but one of these cases. We then present preference elicitation algorithms for profiles which are close to being single crossing under various notions of closeness, for example, single crossing width, or the minimum number of candidates/voters whose deletion makes a profile single crossing.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
true
54,775
2106.07807
Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data
Most existing works in few-shot learning rely on meta-learning the network on a large base dataset which is typically from the same domain as the target dataset. We tackle the problem of cross-domain few-shot learning where there is a large shift between the base and target domains. The problem of cross-domain few-shot recognition with unlabeled target data is largely unaddressed in the literature. STARTUP was the first method to tackle this problem using self-training. However, it uses a fixed teacher pretrained on a labeled base dataset to create soft labels for the unlabeled target samples. As the base dataset and unlabeled dataset are from different domains, projecting the target images into the class domain of the base dataset with a fixed pretrained model might be sub-optimal. We propose a simple dynamic distillation-based approach to exploit unlabeled images from the novel/base dataset. We impose consistency regularization by computing predictions on the weakly augmented versions of the unlabeled images with a teacher network and matching them with predictions on the strongly augmented versions of the same images from a student network. The parameters of the teacher network are updated as an exponential moving average of the parameters of the student network. We show that the proposed network learns a representation that can be easily adapted to the target domain even though it has not been trained with target-specific classes during the pretraining phase. Our model outperforms the current state-of-the-art method by 4.4% for 1-shot and 3.6% for 5-shot classification on the BSCD-FSL benchmark, and also shows competitive performance on the traditional in-domain few-shot learning task.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
241,054
2411.00742
Modern, Efficient, and Differentiable Transport Equation Models using JAX: Applications to Population Balance Equations
Population balance equation (PBE) models have potential to automate many engineering processes with far-reaching implications. In the pharmaceutical sector, crystallization model-based design can contribute to shortening excessive drug development timelines. Even so, two major barriers, typical of most transport equations, not just PBEs, have limited this potential. Notably, the time taken to compute a solution to these models with representative accuracy is frequently limiting. Likewise, the model construction process is often tedious and wastes valuable time, owing to the reliance on human expertise to guess constituent models from empirical data. Hybrid models promise to overcome both barriers through tight integration of neural networks with physical PBE models. Towards eliminating experimental guesswork, hybrid models facilitate determining physical relationships from data, also known as 'discovering physics'. Here, we aim to prepare for planned Scientific Machine Learning (SciML) integration through a contemporary implementation of an existing PBE algorithm, one with computational efficiency and differentiability at the forefront. To accomplish this, we utilized JAX, a cutting-edge library for accelerated computing. We showcase the speed benefits of this modern take on PBE modelling by benchmarking our solver to others we prepared using older, more widespread software. Primarily among these software tools is the ubiquitous NumPy, where we show JAX achieves up to 300x relative acceleration in PBE simulations. Our solver is also fully differentiable, which we demonstrate is the only feasible option for integrating learnable data-driven models at scale. We show that differentiability can be 40x faster for optimizing larger models than conventional approaches, which represents the key to neural network integration for physics discovery in later work.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
504,725
2108.00705
Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning
This paper introduces a two-phase deep feature calibration framework for efficient learning of a semantics-enhanced text-image cross-modal joint embedding, which clearly separates the deep feature calibration in data preprocessing from the training of the joint embedding model. We use the Recipe1M dataset for the technical description and empirical validation. In preprocessing, we perform deep feature calibration by combining deep feature engineering with semantic context features derived from the raw text-image input data. We leverage LSTM to identify key terms and NLP methods to produce ranking scores for key terms before generating the key term feature. We leverage wideResNet50 to extract and encode the image category semantics to help semantic alignment of the learned recipe and image embeddings in the joint latent space. In joint embedding learning, we perform deep feature calibration by optimizing the batch-hard triplet loss function with soft margin and double negative sampling, also utilizing the category-based alignment loss and the discriminator-based alignment loss. Extensive experiments demonstrate that our SEJE approach with deep feature calibration significantly outperforms the state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
248,805
2405.11793
MM-Retinal: Knowledge-Enhanced Foundational Pretraining with Fundus Image-Text Expertise
Current fundus image analysis models are predominantly built for specific tasks relying on individual datasets. The learning process is usually based on data-driven paradigm without prior knowledge, resulting in poor transferability and generalizability. To address this issue, we propose MM-Retinal, a multi-modal dataset that encompasses high-quality image-text pairs collected from professional fundus diagram books. Moreover, enabled by MM-Retinal, we present a novel Knowledge-enhanced foundational pretraining model which incorporates Fundus Image-Text expertise, called KeepFIT. It is designed with image similarity-guided text revision and mixed training strategy to infuse expert knowledge. Our proposed fundus foundation model achieves state-of-the-art performance across six unseen downstream tasks and holds excellent generalization ability in zero-shot and few-shot scenarios. MM-Retinal and KeepFIT are available at https://github.com/lxirich/MM-Retinal.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
455,287