id (string, lengths 9–16) | title (string, lengths 4–278) | abstract (string, lengths 3–4.08k) | cs.HC (bool, 2 classes) | cs.CE (bool, 2 classes) | cs.SD (bool, 2 classes) | cs.SI (bool, 2 classes) | cs.AI (bool, 2 classes) | cs.IR (bool, 2 classes) | cs.LG (bool, 2 classes) | cs.RO (bool, 2 classes) | cs.CL (bool, 2 classes) | cs.IT (bool, 2 classes) | cs.SY (bool, 2 classes) | cs.CV (bool, 2 classes) | cs.CR (bool, 2 classes) | cs.CY (bool, 2 classes) | cs.MA (bool, 2 classes) | cs.NE (bool, 2 classes) | cs.DB (bool, 2 classes) | Other (bool, 2 classes) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0805.0131 | Diversity-Multiplexing Tradeoff in Selective-Fading Multiple-Access MIMO Channels | We establish the optimal diversity-multiplexing (DM) tradeoff of coherent selective-fading multiple-access multiple-input multiple-output (MIMO) channels and provide corresponding code design criteria. As a byproduct, on the conceptual level, we find an interesting relation between the DM tradeoff framework and the notion of dominant error event regions which was first introduced in the AWGN case by Gallager, IEEE Trans. IT, 1985. This relation allows us to accurately characterize the error mechanisms in MIMO fading multiple-access channels. In particular, we find that, for a given rate tuple, the maximum achievable diversity order is determined by the error event that dominates the total error probability exponentially in SNR. Finally, we show that the distributed space-time code construction proposed recently by Badr and Belfiore, Int. Zurich Seminar on Commun., 2008, satisfies the code design criteria derived in this paper. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,694 |
2402.02768 | Intent Profiling and Translation Through Emergent Communication | To effectively express and satisfy network application requirements, intent-based network management has emerged as a promising solution. In intent-based methods, users and applications express their intent in a high-level abstract language to the network. Although this abstraction simplifies network operation, it induces many challenges to efficiently express applications' intents and map them to different network capabilities. Therefore, in this work, we propose an AI-based framework for intent profiling and translation. We consider a scenario where applications interacting with the network express their needs for network services in their domain language. The machine-to-machine communication (i.e., between applications and the network) is complex since it requires networks to learn how to understand the domain languages of each application, which is neither practical nor scalable. Instead, a framework based on emergent communication is proposed for intent profiling, in which applications express their abstract quality-of-experience (QoE) intents to the network through emergent communication messages. Subsequently, the network learns how to interpret these communication messages and map them to network capabilities (i.e., slices) to guarantee the requested Quality-of-Service (QoS). Simulation results show that the proposed method outperforms self-learning slicing and other baselines, and achieves a performance close to the perfect knowledge baseline. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 426,721 |
2206.04789 | Comprehensive Fair Meta-learned Recommender System | In recommender systems, one common challenge is the cold-start problem, where interactions are very limited for fresh users in the systems. To address this challenge, recently, many works introduce the meta-optimization idea into the recommendation scenarios, i.e. learning to learn the user preference by only a few past interaction items. The core idea is to learn global shared meta-initialization parameters for all users and rapidly adapt them into local parameters for each user respectively. They aim at deriving general knowledge across preference learning of various users, so as to rapidly adapt to the future new user with the learned prior and a small amount of training data. However, previous works have shown that recommender systems are generally vulnerable to bias and unfairness. Despite the success of meta-learning at improving the recommendation performance with cold-start, the fairness issues are largely overlooked. In this paper, we propose a comprehensive fair meta-learning framework, named CLOVER, for ensuring the fairness of meta-learned recommendation models. We systematically study three kinds of fairness - individual fairness, counterfactual fairness, and group fairness in the recommender systems, and propose to satisfy all three kinds via a multi-task adversarial learning scheme. Our framework offers a generic training paradigm that is applicable to different meta-learned recommender systems. We demonstrate the effectiveness of CLOVER on the representative meta-learned user preference estimator on three real-world data sets. Empirical results show that CLOVER achieves comprehensive fairness without deteriorating the overall cold-start recommendation performance. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 301,766 |
1805.01556 | Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | To achieve parsimonious inference in per-pixel labeling tasks with a limited computational budget, we propose a \emph{Pixel-wise Attentional Gating} unit (\emph{PAG}) that learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily "plugged in" to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by $10\%$ without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,677 |
2410.05561 | DDES Study of Confined and Unconfined NACA Wing Sections Using Spectral Elements | We develop hybrid RANS-LES strategies within the spectral element code Nek5000 based on the $k-\tau$ class of turbulence models. We chose airfoil sections at small flight configurations as our target problem to comprehensively test the solver accuracy and performance. We present verification and validation results of an unconfined NACA0012 wing section in a pure RANS and in a hybrid RANS-LES setup for an angle of attack ranging from 0 to 90 degrees. The RANS results show good agreement with existing experimental and numerical datasets for low incoming flow angles. A small discrepancy appears at higher angles in comparison with the experiments, which is in line with our expectations from a RANS formulation. On the other hand, DDES captures both the attached and separated flow dynamics well when compared with available numerical datasets. We demonstrate that for the hybrid turbulence modeling approach a high-order spectral element discretization converges faster (i.e., with less resolution) and captures the flow dynamics more accurately than representative low-order finite-volume and finite-difference approaches. We also revise some of the guidelines on sample size requirements for statistics convergence. Furthermore, we analyze some of the observed discrepancies of our unconfined DDES at higher angles with the experiments by evaluating the side wall "blocking" effect. We carry out additional simulations in a confined 'numerical wind tunnel' and assess the observed differences as a function of Reynolds number. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 495,790 |
2102.08482 | Finding the Ground-Truth from Multiple Labellers: Why Parameters of the Task Matter | Employing multiple workers to label data for machine learning models has become increasingly important in recent years with greater demand to collect huge volumes of labelled data to train complex models while mitigating the risk of incorrect and noisy labelling. Whether it is large scale data gathering on popular crowd-sourcing platforms or smaller sets of workers in high-expertise labelling exercises, there are various methods recommended to gather a consensus from employed workers and establish ground-truth labels. However, there is very little research on how the various parameters of a labelling task can impact said methods. These parameters include the number of workers, worker expertise, number of labels in a taxonomy and sample size. In this paper, Majority Vote, CrowdTruth and Binomial Expectation Maximisation are investigated against the permutations of these parameters in order to provide a better understanding of the parameter settings that give an advantage in ground-truth inference. Findings show that both Expectation Maximisation and CrowdTruth are only likely to give an advantage over majority vote under certain parameter conditions, while there are many cases where the methods can be shown to have no major impact. Guidance is given as to which parameter settings the methods work best under, while the experimental framework provides a way of testing other established methods and also testing new methods that can attempt to provide advantageous performance where the methods in this paper did not. A greater level of understanding regarding optimal crowd-sourcing parameters is also achieved. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 220,472 |
1603.08389 | The Umwelt of an Embodied Agent -- A Measure-Theoretic Definition | We consider a general model of the sensorimotor loop of an agent interacting with the world. This formalises Uexk\"ull's notion of a \emph{function-circle}. Here, we assume a particular causal structure, mechanistically described in terms of Markov kernels. In this generality, we define two $\sigma$-algebras of events in the world that describe two respective perspectives: (1) the perspective of an external observer, (2) the intrinsic perspective of the agent. Not all aspects of the world, seen from the external perspective, are accessible to the agent. This is expressed by the fact that the second $\sigma$-algebra is a subalgebra of the first one. We propose the smaller one as formalisation of Uexk\"ull's \emph{Umwelt} concept. We show that, under continuity and compactness assumptions, the global dynamics of the world can be simplified without changing the internal process. This simplification can serve as a minimal world model that the system must have in order to be consistent with the internal process. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 53,780 |
1610.09932 | Support Vector Machines and Generalisation in HEP | We review the concept of support vector machines (SVMs) and discuss examples of their use. One of the benefits of SVM algorithms, compared with neural networks and decision trees, is that they can be less susceptible to overfitting than those other algorithms are to overtraining. This issue is related to the generalisation of a multivariate algorithm (MVA), a problem that has often been overlooked in particle physics. We discuss cross validation and how this can be used to improve the generalisation of an MVA in the context of High Energy Physics analyses. The examples presented use the Toolkit for Multivariate Analysis (TMVA) based on ROOT and describe our improvements to the SVM functionality and new tools introduced for cross validation within this framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,135 |
2307.14544 | Speed Reading Tool Powered by Artificial Intelligence for Students with ADHD, Dyslexia, or Short Attention Span | This paper presents a novel approach to assist students with dyslexia, ADHD, and short attention span in digesting any text-based information more efficiently. The proposed solution utilizes the Multilayer Perceptron (MLP) algorithm for complex text processing and summarization tasks. The tool leverages the T5 (Text-to-Text Transfer Transformer) model from Hugging Face, which treats every NLP task as a text generation task. The model is fine-tuned on specific tasks using a smaller dataset. The NLTK's Punkt Sentence Tokenizer is used to divide a text into a list of sentences. The application is served using Flask, a lightweight web server and framework. The tool also applies principles from Bionic Reading to enhance readability, which includes a bolding function and adjustments to line, word, and character spacing. The paper discusses the methodology, implementation, and results of the AI-based speed reading tool. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 381,966 |
1803.03242 | Probably Approximately Metric-Fair Learning | The seminal work of Dwork {\em et al.} [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of {\em approximate metric-fairness}: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metric-fairness {\em does} generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 92,212 |
2102.05172 | Differential Privacy for Binary Functions via Randomized Graph Colorings | We present a framework for designing differentially private (DP) mechanisms for binary functions via a graph representation of datasets. Datasets are nodes in the graph and any two neighboring datasets are connected by an edge. The true binary function we want to approximate assigns a value (or true color) to a dataset. Randomized DP mechanisms are then equivalent to randomized colorings of the graph. A key notion we use is that of the boundary of the graph. Any two neighboring datasets assigned a different true color belong to the boundary. Under this framework, we show that fixing the mechanism behavior at the boundary induces a unique optimal mechanism. Moreover, if the mechanism is to have a homogeneous behavior at the boundary, we present a closed expression for the optimal mechanism, which is obtained by means of a \emph{pullback} operation on the optimal mechanism of a line graph. For balanced mechanisms, not favoring one binary value over another, the optimal $(\epsilon,\delta)$-DP mechanism takes a particularly simple form, depending only on the minimum distance to the boundary, on $\epsilon$, and on $\delta$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 219,344 |
2405.07801 | Deep Learning-Based Object Pose Estimation: A Comprehensive Survey | Object pose estimation is a fundamental computer vision problem with broad applications in augmented reality and robotics. Over the past decade, deep learning models, due to their superior accuracy and robustness, have increasingly supplanted conventional algorithms reliant on engineered point pair features. Nevertheless, several challenges persist in contemporary methods, including their dependency on labeled training data, model compactness, robustness under challenging conditions, and their ability to generalize to novel unseen objects. A recent survey discussing the progress made on different aspects of this area, outstanding challenges, and promising future directions, is missing. To fill this gap, we discuss the recent advances in deep learning-based object pose estimation, covering all three formulations of the problem, \emph{i.e.}, instance-level, category-level, and unseen object pose estimation. Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks, providing the readers with a holistic understanding of this field. Additionally, it discusses training paradigms of different domains, inference modes, application areas, evaluation metrics, and benchmark datasets, as well as reports the performance of current state-of-the-art methods on these benchmarks, thereby facilitating the readers in selecting the most suitable method for their application. Finally, the survey identifies key challenges, reviews the prevailing trends along with their pros and cons, and identifies promising directions for future research. We also keep tracing the latest works at https://github.com/CNJianLiu/Awesome-Object-Pose-Estimation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,869 |
1905.13413 | Improving Open Information Extraction via Iterative Rank-Aware Learning | Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 133,119 |
2402.11294 | Power Optimization for Integrated Active and Passive Sensing in DFRC Systems | Most existing works on dual-function radar-communication (DFRC) systems mainly focus on active sensing, but ignore passive sensing. To leverage multi-static sensing capability, we explore integrated active and passive sensing (IAPS) in DFRC systems to improve sensing performance. The multi-antenna base station (BS) is responsible for communication and active sensing by transmitting signals to user equipments while detecting a target according to echo signals. In contrast, passive sensing is performed at the receive access points (RAPs). We consider both the cases where the capacity of the backhaul links between the RAPs and BS is unlimited or limited and adopt different fusion strategies. Specifically, when the backhaul capacity is unlimited, the BS and RAPs transfer sensing signals they have received to the central controller (CC) for signal fusion. The CC processes the signals and leverages the generalized likelihood ratio test detector to determine the presence of a target. However, when the backhaul capacity is limited, each RAP, as well as the BS, makes decisions independently and sends its binary inference results to the CC for result fusion via voting aggregation. Then, aiming to maximize the target detection probability under communication quality of service constraints, two power optimization algorithms are proposed. Finally, numerical simulations demonstrate that the sensing performance in the case of unlimited backhaul capacity is much better than that in the case of limited backhaul capacity. Moreover, the results imply that the proposed IAPS scheme outperforms only-passive and only-active sensing schemes, especially in the unlimited capacity case. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 430,323 |
2005.13284 | Convergence Analysis of Riemannian Stochastic Approximation Schemes | This paper analyzes the convergence of a large class of Riemannian stochastic approximation (SA) schemes, which aim at tackling stochastic optimization problems. In particular, the recursions we study use either the exponential map of the considered manifold (geodesic schemes) or more general retraction functions (retraction schemes) used as a proxy for the exponential map. Such approximations are of great interest since they are low-complexity alternatives to geodesic schemes. Under the assumption that the mean field of the SA is correlated with the gradient of a smooth Lyapunov function (possibly non-convex), we show that the above Riemannian SA schemes find an ${\mathcal{O}}(b_\infty + \log n / \sqrt{n})$-stationary point (in expectation) within ${\mathcal{O}}(n)$ iterations, where $b_\infty \geq 0$ is the asymptotic bias. Compared to previous works, the conditions we derive are considerably milder. First, all our analyses are global as we do not assume the iterates to be a priori bounded. Second, we study biased SA schemes. To be more specific, we consider the case where the mean-field function can only be estimated up to a small bias, and/or the case in which the samples are drawn from a controlled Markov chain. Third, the conditions on retractions required to ensure convergence of the related SA schemes are weak and hold for well-known examples. We illustrate our results on three machine learning problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 178,970 |
1808.04679 | An Optimal Policy for Patient Laboratory Tests in Intensive Care Units | Laboratory testing is an integral tool in the management of patient care in hospitals, particularly in intensive care units (ICUs). There exists an inherent trade-off in the selection and timing of lab tests between considerations of the expected utility in clinical decision-making of a given test at a specific time, and the associated cost or risk it poses to the patient. In this work, we introduce a framework that learns policies for ordering lab tests which optimizes for this trade-off. Our approach uses batch off-policy reinforcement learning with a composite reward function based on clinical imperatives, applied to data that include examples of clinicians ordering labs for patients. To this end, we develop and extend principles of Pareto optimality to improve the selection of actions based on multiple reward function components while respecting typical procedural considerations and prioritization of clinical goals in the ICU. Our experiments show that we can estimate a policy that reduces the frequency of lab tests and optimizes timing to minimize information redundancy. We also find that the estimated policies typically suggest ordering lab tests well ahead of critical onsets--such as mechanical ventilation or dialysis--that depend on the lab results. We evaluate our approach by quantifying how these policies may initiate earlier onset of treatment. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 105,209 |
2410.12784 | JudgeBench: A Benchmark for Evaluating LLM-based Judges | LLM-based judges have emerged as a scalable alternative to human evaluation and are increasingly used to assess, compare, and improve models. However, the reliability of LLM-based judges themselves is rarely scrutinized. As LLMs become more advanced, their responses grow more sophisticated, requiring stronger judges to evaluate them. Existing benchmarks primarily focus on a judge's alignment with human preferences, but often fail to account for more challenging tasks where crowdsourced human preference is a poor indicator of factual and logical correctness. To address this, we propose a novel evaluation framework to objectively evaluate LLM-based judges. Based on this framework, we propose JudgeBench, a benchmark for evaluating LLM-based judges on challenging response pairs spanning knowledge, reasoning, math, and coding. JudgeBench leverages a novel pipeline for converting existing difficult datasets into challenging response pairs with preference labels reflecting objective correctness. Our comprehensive evaluation on a collection of prompted judges, fine-tuned judges, multi-agent judges, and reward models shows that JudgeBench poses a significantly greater challenge than previous benchmarks, with many strong models (e.g., GPT-4o) performing just slightly better than random guessing. Overall, JudgeBench offers a reliable platform for assessing increasingly advanced LLM-based judges. Data and code are available at https://github.com/ScalerLab/JudgeBench . | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 499,186 |
1705.02893 | Multi Resolution LSTM For Long Term Prediction In Neural Activity Video | Epileptic seizures are caused by abnormal, overly synchronized, electrical activity in the brain. The abnormal electrical activity manifests as waves, propagating across the brain. Accurate prediction of the propagation velocity and direction of these waves could enable real-time responsive brain stimulation to suppress or prevent the seizures entirely. However, this problem is very challenging because the algorithm must be able to predict the neural signals in a sufficiently long time horizon to allow enough time for medical intervention. We consider how to accomplish long term prediction using an LSTM network. To alleviate the vanishing gradient problem, we propose two encoder-decoder-predictor structures, both using multi-resolution representation. The novel LSTM structure with multi-resolution layers could significantly outperform the single-resolution benchmark with a similar number of parameters. To overcome the blurring effect associated with video prediction in the pixel domain using standard mean square error (MSE) loss, we use energy-based adversarial training to improve the long-term prediction. We demonstrate and analyze how a discriminative model with an encoder-decoder structure using a 3D CNN model improves long term prediction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 73,078 |
2205.09589 | Learning Energy Networks with Generalized Fenchel-Young Losses | Energy-based models, a.k.a. energy networks, perform inference by optimizing an energy function, typically parametrized by a neural network. This allows one to capture potentially complex relationships between inputs and outputs. To learn the parameters of the energy function, the solution to that optimization problem is typically fed into a loss function. The key challenge for training energy networks lies in computing loss gradients, as this typically requires argmin/argmax differentiation. In this paper, building upon a generalized notion of conjugate function, which replaces the usual bilinear pairing with a general energy function, we propose generalized Fenchel-Young losses, a natural loss construction for learning energy networks. Our losses enjoy many desirable properties and their gradients can be computed efficiently without argmin/argmax differentiation. We also prove the calibration of their excess risk in the case of linear-concave energies. We demonstrate our losses on multilabel classification and imitation learning tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,330 |
1709.07330 | H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes | Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is highly demanded in clinical practice. Recently, fully convolutional neural networks (FCNs), including 2D and 3D FCNs, serve as the backbone in many volumetric image segmentation methods. However, 2D convolutions cannot fully leverage the spatial information along the third dimension while 3D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion (HFF) layer. We extensively evaluated our method on the dataset of the MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge and the 3DIRCADb Dataset. Our method outperformed other state-of-the-art methods on the segmentation results of tumors and achieved very competitive performance for liver segmentation even with a single model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 81,259 |
2106.13105 | The Option Keyboard: Combining Skills in Reinforcement Learning | The ability to combine known skills to create new ones may be crucial in the solution of complex reinforcement learning problems that unfold over extended periods. We argue that a robust way of combining skills is to define and manipulate them in the space of pseudo-rewards (or "cumulants"). Based on this premise, we propose a framework for combining skills using the formalism of options. We show that every deterministic option can be unambiguously represented as a cumulant defined in an extended domain. Building on this insight and on previous results on transfer learning, we show how to approximate options whose cumulants are linear combinations of the cumulants of known options. This means that, once we have learned options associated with a set of cumulants, we can instantaneously synthesise options induced by any linear combination of them, without any learning involved. We describe how this framework provides a hierarchical interface to the environment whose abstract actions correspond to combinations of basic skills. We demonstrate the practical benefits of our approach in a resource management problem and a navigation task involving a quadrupedal simulated robot. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 242,969 |
2209.00993 | Data Fusion in Neuromarketing: Multimodal Analysis of Biosignals, Lifecycle Stages, Current Advances, Datasets, Trends, and Challenges | The primary goal of any company is to increase its profits by improving both the quality of its products and how they are advertised. In this context, neuromarketing seeks to enhance the promotion of products and generate greater acceptance among potential buyers. Traditionally, neuromarketing studies have relied on a single biosignal to obtain feedback from presented stimuli. However, thanks to new devices and technological advances in this area of knowledge, recent trends indicate a shift towards the fusion of diverse biosignals. An example is the usage of electroencephalography for understanding the impact of an advertisement at the neural level and visual tracking to identify the stimuli that induce such impacts. This emerging pattern determines which biosignals to employ for achieving specific neuromarketing objectives. Furthermore, the fusion of data from multiple sources demands advanced processing methodologies. Despite these complexities, there is a lack of literature that adequately collates and organizes the various data sources and the applied processing techniques for the research objectives pursued. To address these challenges, the current paper conducts a comprehensive analysis of the objectives, biosignals, and data processing techniques employed in neuromarketing research. This study provides both the technical definition and a graphical distribution of the elements under revision. Additionally, it presents a categorization based on research objectives and provides an overview of the combinatory methodologies employed. After this, the paper examines primary public datasets designed for neuromarketing research together with others whose main purpose is not neuromarketing but which can be used for this matter. Ultimately, this work provides a historical perspective on the evolution of techniques across various phases over recent years and enumerates key lessons learned. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 315,749 |
1301.2115 | Domain Generalization via Invariant Feature Representation | This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 20,908 |
2307.15362 | Prompt Guided Transformer for Multi-Task Dense Prediction | Task-conditional architecture offers an advantage in parameter efficiency but falls short in performance compared to state-of-the-art multi-decoder methods. How to trade off performance and model parameters is an important and difficult problem. In this paper, we introduce a simple and lightweight task-conditional model called Prompt Guided Transformer (PGT) to address this challenge. Our approach designs a Prompt-conditioned Transformer block, which incorporates task-specific prompts in the self-attention mechanism to achieve global dependency modeling and parameter-efficient feature adaptation across multiple tasks. This block is integrated into both the shared encoder and decoder, enhancing the capture of intra- and inter-task features. Moreover, we design a lightweight decoder to further reduce parameter usage, which accounts for only 2.7% of the total model parameters. Extensive experiments on two multi-task dense prediction benchmarks, PASCAL-Context and NYUD-v2, demonstrate that our approach achieves state-of-the-art results among task-conditional methods while using fewer parameters, and maintains a significant balance between performance and parameter size. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,240 |
2206.12097 | Deep-Learning-Aided Distributed Clock Synchronization for Wireless Networks | The proliferation of wireless communications networks over the past decades, combined with the scarcity of the wireless spectrum, has motivated a significant effort towards increasing the throughput of wireless networks. One of the major factors which limits the throughput in wireless communications networks is the accuracy of the time synchronization between the nodes in the network, as a higher throughput requires higher synchronization accuracy. Existing time synchronization schemes, and particularly methods based on pulse-coupled oscillators (PCOs), which are the focus of the current work, have the advantage of simple implementation and achieve high accuracy when the nodes are closely located, yet tend to achieve poor synchronization performance for distant nodes. In this study, we propose a robust PCO-based time synchronization algorithm which retains the simple structure of existing approaches while operating reliably and converging quickly for both distant and closely located nodes. This is achieved by augmenting PCO-based synchronization with deep learning tools that are trainable in a distributed manner, thus allowing the nodes to train their neural network component of the synchronization algorithm without requiring additional exchange of information or central coordination. The numerical results show that our proposed deep learning-aided scheme is notably robust to propagation delays resulting from deployments over large areas, and to relative clock frequency offsets. It is also shown that the proposed approach rapidly attains full (i.e., clock frequency and phase) synchronization for all nodes in the wireless network, while the classic model-based implementation does not. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 304,477 |
2205.12787 | Impartial Games: A Challenge for Reinforcement Learning | While AlphaZero-style reinforcement learning (RL) algorithms excel in various board games, in this paper we show that they face challenges on impartial games where players share pieces. We present a concrete example of a game - namely the children's game of Nim - and other impartial games that seem to be a stumbling block for AlphaZero-style and similar self-play reinforcement learning algorithms. Our work is built on the challenges posed by the intricacies of data distribution on the ability of neural networks to learn parity functions, exacerbated by the noisy labels issue. Our findings are consistent with recent studies showing that AlphaZero-style algorithms are vulnerable to adversarial attacks and adversarial perturbations, showing the difficulty of learning to master the games in all legal states. We show that Nim can be learned on small boards, but the learning progress of AlphaZero-style algorithms dramatically slows down when the board size increases. Intuitively, the difference between impartial games like Nim and partisan games like Chess and Go can be explained by the fact that if a small part of the board is covered in an impartial game it is typically not possible to predict whether the position is won or lost, as there is often zero correlation between the visible part of a partly blanked-out position and its correct evaluation. This situation starkly contrasts with partisan games, where a partly blanked-out board position typically provides abundant or at least non-trivial information about the value of the fully uncovered position. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,712 |
2311.08983 | Edge Accelerated Robot Navigation With Collaborative Motion Planning | Low-cost distributed robots suffer from limited onboard computing power, resulting in excessive computation time when navigating in cluttered environments. This paper presents Edge Accelerated Robot Navigation (EARN), to achieve real-time collision avoidance by adopting collaborative motion planning (CMP). As such, each robot can dynamically switch between a conservative motion planner executed locally to guarantee safety (e.g., path-following) and an aggressive motion planner executed non-locally to guarantee efficiency (e.g., overtaking). In contrast to existing motion planning approaches that ignore the interdependency between low-level motion planning and high-level resource allocation, EARN adopts model predictive switching (MPS) that maximizes the expected switching gain with respect to robot states and actions under computation and communication resource constraints. The MPS problem is solved by a tightly-coupled decision making and motion planning framework based on bilevel mixed-integer nonlinear programming and penalty dual decomposition. We validate the performance of EARN in indoor simulation, outdoor simulation, and real-world environments. Experiments show that EARN achieves significantly smaller navigation time and higher success rates than state-of-the-art navigation approaches. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 407,941 |
1602.08908 | QoS-Aware Joint Mode Selection and Channel Assignment for D2D Communications | Underlaying device-to-device (D2D) communications to a cellular network is considered as a key technique to improve spectral efficiency in 5G networks. For such D2D systems, mode selection and resource allocation have been widely utilized for managing interference. However, previous works allowed at most one D2D link to access the same channel, while mode selection and resource allocation are typically separately designed. In this paper, we jointly optimize the mode selection and channel assignment in a cellular network with underlaying D2D communications, where multiple D2D links may share the same channel. Meanwhile, the QoS requirements for both cellular and D2D links are guaranteed, in terms of Signal-to-Interference-plus-Noise Ratio (SINR). We first propose an optimal dynamic programming (DP) algorithm, which provides a much lower computation complexity compared to exhaustive search and serves as the performance benchmark. A bipartite graph based greedy algorithm is then proposed to achieve a polynomial time complexity. Simulation results will demonstrate the advantage of allowing each channel to be accessed by multiple D2D links in dense D2D networks, as well as the effectiveness of the proposed algorithms. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 52,707 |
2011.03683 | Deeply-Supervised Density Regression for Automatic Cell Counting in Microscopy Images | Accurately counting the number of cells in microscopy images is required in many medical diagnoses and biological studies. This task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex background, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we proposed a new density regression-based method for automatically counting cells in microscopy images. The proposed method incorporates two innovations compared to other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) to employ multi-scale image features for the estimation of cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) are employed to assist in the training of intermediate layers of the designed C-FCRN to improve the DRM performance on unseen datasets. Experimental studies evaluated on four datasets demonstrate the superior performance of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 205,317 |
1903.08828 | Convolutional Neural Network on Semi-Regular Triangulated Meshes and its Application to Brain Image Data | We developed a convolutional neural network (CNN) on semi-regular triangulated meshes whose vertices have 6 neighbours. The key blocks of the proposed CNN, including convolution and down-sampling, are directly defined in a vertex domain. By exploiting the ordering property of semi-regular meshes, the convolution is defined on a vertex domain with strong motivation from the spatial definition of classic convolution. Moreover, the down-sampling of a semi-regular mesh embedded in a 3D Euclidean space can achieve a down-sampling rate of 4, 16, 64, etc. We demonstrated the use of this vertex-based graph CNN for the classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) based on 3169 MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI). We compared the performance of the vertex-based graph CNN with that of the spectral graph CNN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,917 |
2108.02226 | Terabyte-scale supervised 3D training and benchmarking dataset of the mouse kidney | The performance of machine learning algorithms, when used for segmenting 3D biomedical images, does not reach the level expected based on results achieved with 2D photos. This may be explained by the comparative lack of high-volume, high-quality training datasets, which require state-of-the-art imaging facilities, domain experts for annotation and large computational and personnel resources. The HR-Kidney dataset presented in this work bridges this gap by providing 1.7 TB of artefact-corrected synchrotron radiation-based X-ray phase-contrast microtomography images of whole mouse kidneys and validated segmentations of 33 729 glomeruli, which corresponds to a one to two orders of magnitude increase over currently available biomedical datasets. The image sets also contain the underlying raw data, threshold- and morphology-based semi-automatic segmentations of renal vasculature and uriniferous tubules, as well as true 3D manual annotations. We therewith provide a broad basis for the scientific community to build upon and expand in the fields of image processing, data augmentation and machine learning, in particular unsupervised and semi-supervised learning investigations, as well as transfer learning and generative adversarial networks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,250 |
2111.14026 | Bounds and Constructions for Insertion and Deletion Codes | The present paper mainly studies limits and constructions of insertion and deletion (insdel for short) codes. The paper can be divided into two parts. The first part focuses on various bounds, while the second part concentrates on constructions of insdel codes. Although the insdel-metric Singleton bound has been derived before, it is still unknown if there are any nontrivial codes achieving this bound. Our first result shows that any nontrivial insdel codes do not achieve the insdel-metric Singleton bound. The second bound shows that every $[n,k]$ Reed-Solomon code has insdel distance upper bounded by $2n-4k+4$ and it is known in literature that an $[n,k]$ Reed-Solomon code can have insdel distance $2n-4k+4$ as long as the field size is sufficiently large. The third bound shows a trade-off between insdel distance and code alphabet size for codes achieving the Hamming-metric Singleton bound. In the second part of the paper, we first provide a non-explicit construction of nonlinear codes that can approach the insdel-metric Singleton bound arbitrarily when the code alphabet size is sufficiently large. The second construction gives two-dimensional Reed-Solomon codes of length $n$ and insdel distance $2n-4$ with field size $q=O(n^5)$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 268,467 |
2502.05446 | Stochastic Forward-Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets | Recent diffusion-based generative models achieve remarkable results by training on massive datasets, yet this practice raises concerns about memorization and copyright infringement. A proposed remedy is to train exclusively on noisy data with potential copyright issues, ensuring the model never observes original content. However, through the lens of deconvolution theory, we show that although it is theoretically feasible to learn the data distribution from noisy samples, the practical challenge of collecting sufficient samples makes successful learning nearly unattainable. To overcome this limitation, we propose to pretrain the model with a small fraction of clean data to guide the deconvolution process. Combined with our Stochastic Forward-Backward Deconvolution (SFBD) method, we attain an FID of $6.31$ on CIFAR-10 with just $4\%$ clean images (and $3.58$ with $10\%$). Theoretically, we prove that SFBD guides the model to learn the true data distribution. The result also highlights the importance of pretraining on limited but clean data or the alternative from similar datasets. Empirical studies further support these findings and offer additional insights. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 531,612 |
2210.01719 | Learning Temporal Resolution in Spectrogram for Audio Classification | The audio spectrogram is a time-frequency representation that has been widely used for audio classification. One of the key attributes of the audio spectrogram is the temporal resolution, which depends on the hop size used in the Short-Time Fourier Transform (STFT). Previous works generally assume the hop size should be a constant value (e.g., 10 ms). However, a fixed temporal resolution is not always optimal for different types of sound. The temporal resolution affects not only classification accuracy but also computational cost. This paper proposes a novel method, DiffRes, that enables differentiable temporal resolution modeling for audio classification. Given a spectrogram calculated with a fixed hop size, DiffRes merges non-essential time frames while preserving important frames. DiffRes acts as a "drop-in" module between an audio spectrogram and a classifier and can be jointly optimized with the classification task. We evaluate DiffRes on five audio classification tasks, using mel-spectrograms as the acoustic features, followed by off-the-shelf classifier backbones. Compared with previous methods using the fixed temporal resolution, the DiffRes-based method can achieve the equivalent or better classification accuracy with at least 25% computational cost reduction. We further show that DiffRes can improve classification accuracy by increasing the temporal resolution of input acoustic features, without adding to the computational cost. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 321,362 |
2104.01940 | What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult | Although large neural language models (LMs) like BERT can be finetuned to yield state-of-the-art results on many NLP tasks, it is often unclear what these models actually learn. Here we study using such LMs to fill in entities in human-authored comparative questions, like "Which country is older, India or ______?" -- i.e., we study the ability of neural LMs to ask (not answer) reasonable questions. We show that accuracy in this fill-in-the-blank task is well-correlated with human judgements of whether a question is reasonable, and that these models can be trained to achieve nearly human-level performance in completing comparative questions in three different subdomains. However, analysis shows that what they learn fails to model any sort of broad notion of which entities are semantically comparable or similar -- instead the trained models are very domain-specific, and performance is highly correlated with co-occurrences between specific entities observed in the training set. This is true both for models that are pretrained on general text corpora, as well as models trained on a large corpus of comparison questions. Our study thus reinforces recent results on the difficulty of making claims about a deep model's world knowledge or linguistic competence based on performance on specific benchmark problems. We make our evaluation datasets publicly available to foster future research on complex understanding and reasoning in such models at standards of human interaction. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 228,528 |
2406.01467 | RaDe-GS: Rasterizing Depth in Gaussian Splatting | Gaussian Splatting (GS) has proven to be highly effective in novel view synthesis, achieving high-quality and real-time rendering. However, its potential for reconstructing detailed 3D shapes has not been fully explored. Existing methods often suffer from limited shape accuracy due to the discrete and unstructured nature of Gaussian splats, which complicates the shape extraction. While recent techniques like 2D GS have attempted to improve shape reconstruction, they often reformulate the Gaussian primitives in ways that reduce both rendering quality and computational efficiency. To address these problems, our work introduces a rasterized approach to render the depth maps and surface normal maps of general 3D Gaussian splats. Our method not only significantly enhances shape reconstruction accuracy but also maintains the computational efficiency intrinsic to Gaussian Splatting. It achieves a Chamfer distance error comparable to NeuraLangelo on the DTU dataset and maintains similar computational efficiency as the original 3D GS methods. Our method is a significant advancement in Gaussian Splatting and can be directly integrated into existing Gaussian Splatting-based methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 460,330 |
2312.06342 | Detecting Contextual Network Anomalies with Graph Neural Networks | Detecting anomalies on network traffic is a complex task due to the massive amount of traffic flows in today's networks, as well as the highly-dynamic nature of traffic over time. In this paper, we propose the use of Graph Neural Networks (GNN) for network traffic anomaly detection. We formulate the problem as contextual anomaly detection on network traffic measurements, and propose a custom GNN-based solution that detects traffic anomalies on origin-destination flows. In our evaluation, we use real-world data from Abilene (6 months), and make a comparison with other widely used methods for the same task (PCA, EWMA, RNN). The results show that the anomalies detected by our solution are quite complementary to those captured by the baselines (with a max. of 36.33% overlapping anomalies for PCA). Moreover, we manually inspect the anomalies detected by our method, and find that a large portion of them can be visually validated by a network expert (64% with high confidence, 18% with mid confidence, 18% normal traffic). Lastly, we analyze the characteristics of the anomalies through two paradigmatic cases that are quite representative of the bulk of anomalies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 414,479 |
2008.02555 | Reconfigurable Intelligent Surfaces with Reflection Pattern Modulation:
Beamforming Design and Performance Analysis | Recent considerations for reconfigurable intelligent surfaces (RISs) assume that RISs can convey information by reflection without the need of transmit radio frequency chains, which, however, is a challenging task. In this paper, we propose an RIS-enhanced multiple-input single-output system with reflection pattern modulation, where the RIS can configure its reflection state for boosting the received signal power via passive beamforming and simultaneously conveying its own information via reflection. We formulate an optimization problem to maximize the average received signal power by jointly optimizing the active beamforming at the access point (AP) and passive beamforming at the RIS for the case where the RIS's state information is statistically known by the AP, and propose a high-quality suboptimal solution based on the alternating optimization technique. We analyze the asymptotic outage probability of the proposed scheme under Rayleigh fading channels, for which a closed-form expression is derived. The achievable rate of the proposed scheme is also investigated for the case where the transmitted symbol is drawn from a finite constellation. Simulation results validate the effectiveness of the proposed scheme and reveal the effect of various system parameters on the achievable rate performance. It is shown that the proposed scheme outperforms the conventional RIS-assisted system without information transfer in terms of achievable rate performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 190,648 |
2009.09677 | CURIE: A Cellular Automaton for Concept Drift Detection | Data stream mining extracts information from large quantities of data flowing fast and continuously (data streams). They are usually affected by changes in the data distribution, giving rise to a phenomenon referred to as concept drift. Thus, learning models must detect and adapt to such changes, so as to exhibit a good predictive performance after a drift has occurred. In this regard, the development of effective drift detection algorithms becomes a key factor in data stream mining. In this work we propose CURIE, a drift detector relying on cellular automata. Specifically, in CURIE the distribution of the data stream is represented in the grid of a cellular automaton, whose neighborhood rule can then be utilized to detect possible distribution changes over the stream. Computer simulations are presented and discussed to show that CURIE, when hybridized with other base learners, renders a competitive behavior in terms of detection metrics and classification accuracy. CURIE is compared with well-established drift detectors over synthetic datasets with varying drift characteristics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 196,655
2201.05991 | Video Transformers: A Survey | Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we delve into how videos are handled at the input level first. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 275,583 |
1306.4606 | Keyphrase Cloud Generation of Broadcast News | This paper describes an enhanced automatic keyphrase extraction method applied to Broadcast News. The keyphrase extraction process is used to create a concept level for each news item, on top of words resulting from a speech recognition system output and news indexation, and it contributes to the generation of a tag/keyphrase cloud of the top news included in a Multimedia Monitoring Solution system for TV and Radio news/programs, running daily and monitoring 12 TV channels and 4 Radios. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 25,320
2406.15655 | ProBE: Proportioning Privacy Budget for Complex Exploratory Decision
Support | This paper studies privacy in the context of complex decision support queries composed of multiple conditions on different aggregate statistics combined using disjunction and conjunction operators. Utility requirements for such queries necessitate the need for private mechanisms that guarantee a bound on the false negative and false positive errors. This paper formally defines complex decision support queries and their accuracy requirements, and provides algorithms that proportion the existing budget to optimally minimize privacy loss while supporting a bounded guarantee on the accuracy. Our experimental results on multiple real-life datasets show that our algorithms successfully maintain such utility guarantees, while also minimizing privacy loss. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 466,808 |
2208.13687 | Categorical semantics of compositional reinforcement learning | Reinforcement learning (RL) often requires decomposing a problem into subtasks and composing learned behaviors on these tasks. Compositionality in RL has the potential to create modular subtask units that interface with other system capabilities. However, generating compositional models requires the characterization of minimal assumptions for the robustness of the compositional feature. We develop a framework for a \emph{compositional theory} of RL using a categorical point of view. Given the categorical representation of compositionality, we investigate sufficient conditions under which learning-by-parts results in the same optimal policy as learning on the whole. In particular, our approach introduces a category $\mathsf{MDP}$, whose objects are Markov decision processes (MDPs) acting as models of tasks. We show that $\mathsf{MDP}$ admits natural compositional operations, such as certain fiber products and pushouts. These operations make explicit compositional phenomena in RL and unify existing constructions, such as puncturing hazardous states in composite MDPs and incorporating state-action symmetry. We also model sequential task completion by introducing the language of zig-zag diagrams that is an immediate application of the pushout operation in $\mathsf{MDP}$. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | true | 315,114 |
2004.12164 | Randomized spectral co-clustering for large-scale directed networks | Directed networks are broadly used to represent asymmetric relationships among units. Co-clustering aims to cluster the senders and receivers of directed networks simultaneously. In particular, the well-known spectral clustering algorithm could be modified as the spectral co-clustering to co-cluster directed networks. However, large-scale networks pose great computational challenges to it. In this paper, we leverage sketching techniques and derive two randomized spectral co-clustering algorithms, one \emph{random-projection-based} and the other \emph{random-sampling-based}, to accelerate the co-clustering of large-scale directed networks. We theoretically analyze the resulting algorithms under two generative models -- the stochastic co-block model and the degree-corrected stochastic co-block model, and establish their approximation error rates and misclustering error rates, indicating better bounds than the state-of-the-art results of co-clustering literature. Numerically, we design and conduct simulations to support our theoretical results and test the efficiency of the algorithms on real networks with up to millions of nodes. A publicly available R package \textsf{RandClust} is developed for better usability and reproducibility of the proposed methods. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 174,150 |
2203.09516 | AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation | Powerful priors allow us to perform inference with insufficient information. In this paper, we propose an autoregressive prior for 3D shapes to solve multimodal 3D tasks such as shape completion, reconstruction, and generation. We model the distribution over 3D shapes as a non-sequential autoregressive distribution over a discretized, low-dimensional, symbolic grid-like latent representation of 3D shapes. This enables us to represent distributions over 3D shapes conditioned on information from an arbitrary set of spatially anchored query locations and thus perform shape completion in such arbitrary settings (e.g., generating a complete chair given only a view of the back leg). We also show that the learned autoregressive prior can be leveraged for conditional tasks such as single-view reconstruction and language-based generation. This is achieved by learning task-specific naive conditionals which can be approximated by light-weight models trained on minimal paired data. We validate the effectiveness of the proposed method using both quantitative and qualitative evaluation and show that the proposed method outperforms the specialized state-of-the-art methods trained for individual tasks. The project page with code and video visualizations can be found at https://yccyenchicheng.github.io/AutoSDF/. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 286,184 |
2008.08448 | Virus Transmission Risk in Urban Rail Systems: A Microscopic
Simulation-based Analysis of Spatio-temporal Characteristics | Transmission risk of air-borne diseases in public transportation systems is a concern. The paper proposes a modified Wells-Riley model for risk analysis in public transportation systems to capture the passenger flow characteristics, including spatial and temporal patterns in terms of the numbers of boarding and alighting passengers and the number of infectors. The model is utilized to assess overall risk as a function of origin-destination (OD) flows, actual operations, and factors such as mask wearing and ventilation. The model is integrated with a microscopic simulation model of subway operations (SimMETRO). Using actual data from a subway system, a case study explores the impact of different factors on transmission risk, including mask-wearing, ventilation rates, infectiousness levels of disease, and carrier rates. In general, mask-wearing and ventilation are effective under various demand levels, infectiousness levels, and carrier rates. Mask-wearing is more effective in mitigating risks. Impacts from operations and service frequency are also evaluated, emphasizing the importance of maintaining reliable, frequent operations in lowering transmission risks. Spatial risk patterns are also explored, highlighting locations of higher risk. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 192,426
2303.08050 | Subjective and Objective Quality Assessment for in-the-Wild Computer
Graphics Images | Computer graphics images (CGIs) are artificially generated by means of computer programs and are widely perceived under various scenarios, such as games, streaming media, etc. In practice, the quality of CGIs consistently suffers from poor rendering during production, inevitable compression artifacts during the transmission of multimedia applications, and low aesthetic quality resulting from poor composition and design. However, few works have been dedicated to dealing with the challenge of computer graphics image quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on databases consisting of NSIs with synthetic distortions, which are not suitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and carry out the subjective experiment in a well-controlled laboratory environment to obtain the accurate perceptual ratings of the CGIs. Then, we propose an effective deep learning-based no-reference (NR) IQA model by utilizing both distortion and aesthetic quality representation. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and other CGIQA-related databases. The database is released at https://github.com/zzc-1998/CGIQA6K. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,488 |
1903.06440 | Robots that Sync and Swarm: A Proof of Concept in ROS 2 | A unified mathematical model for synchronisation and swarming has recently been proposed. Each system entity, called a "swarmalator", coordinates its internal phase and location with the other entities in a way that these two attributes are mutually coupled. This paper realises and studies, for the first time, the concept of swarmalators in a technical system. We adapt and extend the original model for its use with mobile robots and implement it in the Robot Operating System 2 (ROS 2). Simulations and experiments with small robots demonstrate the feasibility of the model and show its potential to be applied to real-world systems. All types of space-time patterns achieved in theory can be reproduced in practice. Applications can be found in monitoring, exploration, entertainment and art, among other domains. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 124,385 |
2402.00236 | Positional Encoding Helps Recurrent Neural Networks Handle a Large
Vocabulary | This study reports an unintuitive finding that positional encoding enhances learning of recurrent neural networks (RNNs). Positional encoding is a high-dimensional representation of time indices on input data. Most famously, positional encoding complements the capabilities of Transformer neural networks, which lack an inherent mechanism for representing the data order. By contrast, RNNs can encode the temporal information of data points on their own, rendering their use of positional encoding seemingly redundant/unnecessary. Nonetheless, investigations through synthetic benchmarks reveal an advantage of coupling positional encoding and RNNs, especially for handling a large vocabulary that yields low-frequency tokens. Further scrutiny reveals that these low-frequency tokens destabilize the gradients of vanilla RNNs, and that positional encoding resolves this instability. These results shed new light on the utility of positional encoding beyond its canonical role as a timekeeper for Transformers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 425,526
2107.13629 | Discovering 3D Parts from Image Collections | Reasoning 3D shapes from 2D images is an essential yet challenging task, especially when only single-view images are at our disposal. While an object can have a complicated shape, individual parts are usually close to geometric primitives and thus are easier to model. Furthermore, parts provide a mid-level representation that is robust to appearance variations across objects in a particular category. In this work, we tackle the problem of 3D part discovery from only 2D image collections. Instead of relying on manually annotated parts for supervision, we propose a self-supervised approach, latent part discovery (LPD). Our key insight is to learn a novel part shape prior that allows each part to fit an object shape faithfully while constrained to have simple geometry. Extensive experiments on the synthetic ShapeNet, PartNet, and real-world Pascal 3D+ datasets show that our method discovers consistent object parts and achieves favorable reconstruction accuracy compared to the existing methods with the same level of supervision. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 248,249 |
1904.00937 | Early Diagnosis of Pneumonia with Deep Learning | Pneumonia has been one of the fatal diseases and has the potential to result in severe consequences within a short period of time, due to the flow of fluid in the lungs, which leads to drowning. If not acted upon by drugs at the right time, pneumonia may result in the death of individuals. Therefore, early diagnosis is a key factor in the course of the disease. This paper focuses on the biological progress of pneumonia and its detection by x-ray imaging, overviews the studies conducted on enhancing the level of diagnosis, and presents the methodology and results of an automated analysis of x-ray images based on various parameters in order to detect the disease at very early stages. In this study we propose our deep learning architecture for the classification task, which is trained with modified images, through multiple steps of preprocessing. Our classification method uses convolutional neural networks and residual network architecture for classifying the images. Our findings yield an accuracy of 78.73%, surpassing the previous top-scoring accuracy of 76.8%. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 126,020
1409.2944 | Collaborative Deep Learning for Recommender Systems | Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | true | false | false | 35,949 |
1203.6599 | Distributed Randomized Algorithms for the PageRank Computation | In the search engine of Google, the PageRank algorithm plays a crucial role in ranking the search results. The algorithm quantifies the importance of each web page based on the link structure of the web. We first provide an overview of the original problem setup. Then, we propose several distributed randomized schemes for the computation of the PageRank, where the pages can locally update their values by communicating to those connected by links. The main objective of the paper is to show that these schemes asymptotically converge in the mean-square sense to the true PageRank values. A detailed discussion on the close relations to the multi-agent consensus problems is also given. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 15,179 |
2204.08612 | Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors | Deepfakes utilise Artificial Intelligence (AI) techniques to create synthetic media where the likeness of one person is replaced with another. There are growing concerns that deepfakes can be maliciously used to create misleading and harmful digital contents. As deepfakes become more common, there is a dire need for deepfake detection technology to help spot deepfake media. Present deepfake detection models are able to achieve outstanding accuracy (>90%). However, most of them are limited to within-dataset scenario, where the same dataset is used for training and testing. Most models do not generalise well enough in cross-dataset scenario, where models are tested on unseen datasets from another source. Furthermore, state-of-the-art deepfake detection models rely on neural network-based classification models that are known to be vulnerable to adversarial attacks. Motivated by the need for a robust deepfake detection model, this study adapts metamorphic testing (MT) principles to help identify potential factors that could influence the robustness of the examined model, while overcoming the test oracle problem in this domain. Metamorphic testing is specifically chosen as the testing technique as it fits our demand to address learning-based system testing with probabilistic outcomes from largely black-box components, based on potentially large input domains. We performed our evaluations on MesoInception-4 and TwoStreamNet models, which are the state-of-the-art deepfake detection models. This study identified makeup application as an adversarial attack that could fool deepfake detectors. Our experimental results demonstrate that both the MesoInception-4 and TwoStreamNet models degrade in their performance by up to 30\% when the input data is perturbed with makeup. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 292,148 |
2408.15803 | ModalityMirror: Improving Audio Classification in Modality Heterogeneity
Federated Learning with Multimodal Distillation | Multimodal Federated Learning frequently encounters challenges of client modality heterogeneity, leading to undesired performance for the secondary modality in multimodal learning. This is particularly prevalent in audiovisual learning, with audio often assumed to be the weaker modality in recognition tasks. To address this challenge, we introduce ModalityMirror to improve audio model performance by leveraging knowledge distillation from an audiovisual federated learning model. ModalityMirror involves two phases: a modality-wise FL stage to aggregate uni-modal encoders; and a federated knowledge distillation stage on multi-modality clients to train an unimodal student model. Our results demonstrate that ModalityMirror significantly improves audio classification compared to state-of-the-art FL methods such as Harmony, particularly in audiovisual FL with missing video. Our approach unlocks the potential for exploiting the diverse modality spectrum inherent in multi-modal FL. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 484,085
1811.03821 | Skeptical Deep Learning with Distribution Correction | Recently deep neural networks have been successfully used for various classification tasks, especially for problems with massive perfectly labeled training data. However, it is often costly to have large-scale credible labels in real-world applications. One solution is to make supervised learning robust with imperfectly labeled input. In this paper, we develop a distribution correction approach that allows deep neural networks to avoid overfitting imperfect training data. Specifically, we treat the noisy input as samples from an incorrect distribution, which will be automatically corrected during our training process. We test our approach on several classification datasets with elaborately generated noisy labels. The results show significantly higher prediction and recovery accuracy with our approach compared to alternative methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 112,938 |
2405.19921 | MCDS-VSS: Moving Camera Dynamic Scene Video Semantic Segmentation by
Filtering with Self-Supervised Geometry and Motion | Autonomous systems, such as self-driving cars, rely on reliable semantic environment perception for decision making. Despite great advances in video semantic segmentation, existing approaches ignore important inductive biases and lack structured and interpretable internal representations. In this work, we propose MCDS-VSS, a structured filter model that learns in a self-supervised manner to estimate scene geometry and ego-motion of the camera, while also estimating the motion of external objects. Our model leverages these representations to improve the temporal consistency of semantic segmentation without sacrificing segmentation accuracy. MCDS-VSS follows a prediction-fusion approach in which scene geometry and camera motion are first used to compensate for ego-motion, then residual flow is used to compensate motion of dynamic objects, and finally the predicted scene features are fused with the current features to obtain a temporally consistent scene segmentation. Our model parses automotive scenes into multiple decoupled interpretable representations such as scene geometry, ego-motion, and object motion. Quantitative evaluation shows that MCDS-VSS achieves superior temporal consistency on video sequences while retaining competitive segmentation performance. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 459,125 |
1905.06707 | Inferring Javascript types using Graph Neural Networks | The recent use of `Big Code' with state-of-the-art deep learning methods offers promising avenues to ease program source code writing and correction. As a first step towards automatic code repair, we implemented a graph neural network model that predicts token types for Javascript programs. The predictions achieve an accuracy above $90\%$, which improves on previous similar work. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 131,060 |
2106.05124 | PCNet: A Structure Similarity Enhancement Method for Multispectral and
Multimodal Image Registration | Multispectral and multimodal images are of important usage in the field of multi-source visual information fusion. Due to the alternation or movement of image devices, the acquired multispectral and multimodal images are usually misaligned, and hence image registration is pre-requisite. Different from the registration of common images, the registration of multispectral or multimodal images is a challenging problem due to the nonlinear variation of intensity and gradient. To cope with this challenge, we propose the phase congruency network (PCNet) to enhance the structure similarity of multispectral or multimodal images. The images can then be aligned using the similarity-enhanced feature maps produced by the network. PCNet is constructed under the inspiration of the well-known phase congruency. The network embeds the phase congruency prior into two simple trainable layers and series of modified learnable Gabor kernels. Thanks to the prior knowledge, once trained, PCNet is applicable on a variety of multispectral and multimodal data such as flash/no-flash and RGB/NIR images without additional further tuning. The prior also makes the network lightweight. The trainable parameters of PCNet are 2400 times less than the deep-learning registration method DHN, while its registration performance surpasses DHN. Experimental results validate that PCNet outperforms current state-of-the-art conventional multimodal registration algorithms. Besides, PCNet can act as a complementary part of the deep-learning registration methods, which significantly boosts their registration accuracy. The percentage of the number of images under 1 pixel average corner error (ACE) of UDHN is raised from 0.2% to 89.9% after the processing of PCNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 239,979 |
2407.15435 | Enhancement of 3D Gaussian Splatting using Raw Mesh for Photorealistic
Recreation of Architectures | The photorealistic reconstruction and rendering of architectural scenes have extensive applications in industries such as film, games, and transportation. They also play an important role in urban planning, architectural design, and city promotion, especially in protecting historical and cultural relics. 3D Gaussian Splatting, due to its better performance over NeRF, has become a mainstream technology in 3D reconstruction. Its only input is a set of images, but it relies heavily on geometric parameters computed by the SfM process. At the same time, there is an abundance of existing raw 3D models that could inform the structural perception of certain buildings but cannot be directly applied. In this paper, we propose a straightforward method to harness these raw 3D models to guide 3D Gaussians in capturing the basic shape of the building and to improve the visual quality of textures and details when photos are captured non-systematically. This exploration opens up new possibilities for improving the effectiveness of 3D reconstruction techniques in the field of architectural design. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 475,185
1801.07633 | Human Activity Recognition for Mobile Robot | Due to the increasing number of mobile robots, including domestic robots for cleaning and maintenance, in developed countries, human activity recognition is indispensable for congruent human-robot interaction. Needless to say, this is indeed a challenging task for robots, yet it is expedient for autonomous mobile robots (AMRs) to learn human activities in order to navigate in an uncontrolled environment without any guidance. Building a correct classifier for complex human action is non-trivial since simple actions can be combined to recognize a complex human activity. In this paper, we trained a model for human activity recognition using a convolutional neural network. We trained and validated the model using the Vicon physical action dataset and also tested the model on our generated dataset (VMCUHK). Our experiments show that our method performs the human activity recognition task with high accuracy on both the Vicon physical action dataset and the VMCUHK dataset. | true | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 88,818
2112.14153 | Processing M.A. Castr\'en's Materials: Multilingual Typed and
Handwritten Manuscripts | The study forms a technical report of various tasks that have been performed on the materials collected and published by Finnish ethnographer and linguist, Matthias Alexander Castr\'en (1813-1852). The Finno-Ugrian Society is publishing Castr\'en's manuscripts as new critical and digital editions, and at the same time different research groups have also paid attention to these materials. We discuss the workflows and technical infrastructure used, and consider how datasets that benefit different computational tasks could be created to further improve the usability of these materials, and also to aid the further processing of similar archived collections. We specifically focus on the parts of the collections that are processed in a way that improves their usability in more technical applications, complementing the earlier work on the cultural and linguistic aspects of these materials. Most of these datasets are openly available in Zenodo. The study points to specific areas where further research is needed, and provides benchmarks for text recognition tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 273,461 |
1903.01921 | Closed-Loop Sparse Channel Estimation for Wideband Millimeter-Wave
Full-Dimensional MIMO Systems | This paper proposes a closed-loop sparse channel estimation (CE) scheme for wideband millimeter-wave hybrid full-dimensional multiple-input multiple-output and time division duplexing based systems, which exploits the channel sparsity in both angle and delay domains. At the downlink CE stage, random transmit precoder is designed at base station (BS) for channel sounding, and receive combiners at user devices (UDs) are designed to visualize hybrid array as a low-dimensional digital array for facilitating the multi-dimensional unitary ESPRIT (MDU-ESPRIT) algorithm to estimate respective angle-of-arrivals (AoAs). At the uplink CE stage, the estimated downlink AoAs, namely, uplink angle-of-departures (AoDs), are exploited to design multi-beam transmit precoder at UDs to enable BS to estimate the uplink AoAs, i.e., the downlink AoDs, and delays of different UDs using the MDU-ESPRIT algorithm based on the designed receive combiners at BS. Furthermore, a maximum likelihood approach is proposed to pair the channel parameters acquired at the two stages, and the path gains are then obtained using least squares estimator. According to spectrum estimation theory, our solution can acquire the super-resolution estimations of the AoAs/AoDs and delays of sparse multipath components with low training overhead. Simulation results verify the better CE performance and lower computational complexity of our solution over existing state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 123,373 |
2409.08283 | Activation function optimization method: Learnable series linear units
(LSLUs) | Effective activation functions introduce non-linear transformations, providing neural networks with stronger fitting capabilities, which help them better adapt to real data distributions. Huawei Noah's Lab believes that dynamic activation functions are more suitable than static activation functions for enhancing the non-linear capabilities of neural networks. Tsinghua University's related research also suggests using dynamically adjusted activation functions. Building on the ideas of using fine-tuned activation functions from Tsinghua University and Huawei Noah's Lab, we propose a series-based learnable activation function called LSLU (Learnable Series Linear Units). This method simplifies deep learning networks while improving accuracy. This method introduces learnable parameters {\theta} and {\omega} to control the activation function, adapting it to the current layer's training stage and improving the model's generalization. The principle is to increase non-linearity in each activation layer, boosting the network's overall non-linearity. We evaluate LSLU's performance on CIFAR10, CIFAR100, and specific task datasets (e.g., Silkworm), validating its effectiveness. The convergence behavior of the learnable parameters {\theta} and {\omega}, as well as their effects on generalization, are analyzed. Our empirical results show that LSLU enhances the generalization ability of the original model in various tasks while speeding up training. In VanillaNet training, parameter {\theta} initially decreases, then increases before stabilizing, while {\omega} shows an opposite trend. Ultimately, LSLU achieves a 3.17% accuracy improvement on CIFAR100 for VanillaNet (Table 3). Codes are available at https://github.com/vontran2021/Learnable-series-linear-units-LSLU. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 487,847
2305.19486 | Instance-dependent Noisy-label Learning with Graphical Model Based
Noise-rate Estimation | Deep learning faces a formidable challenge when handling noisy labels, as models tend to overfit samples affected by label noise. This challenge is further compounded by the presence of instance-dependent noise (IDN), a realistic form of label noise arising from ambiguous sample information. To address IDN, Label Noise Learning (LNL) incorporates a sample selection stage to differentiate clean and noisy-label samples. This stage uses an arbitrary criterion and a pre-defined curriculum that initially selects most samples as noisy and gradually decreases this selection rate during training. Such curriculum is sub-optimal since it does not consider the actual label noise rate in the training set. This paper addresses this issue with a new noise-rate estimation method that is easily integrated with most state-of-the-art (SOTA) LNL methods to produce a more effective curriculum. Synthetic and real-world benchmark results demonstrate that integrating our approach with SOTA LNL methods improves accuracy in most cases. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 369,547 |
1910.12770 | Skip-Clip: Self-Supervised Spatiotemporal Representation Learning by
Future Clip Order Ranking | Deep neural networks require collecting and annotating large amounts of data to train successfully. In order to alleviate the annotation bottleneck, we propose a novel self-supervised representation learning approach for spatiotemporal features extracted from videos. We introduce Skip-Clip, a method that utilizes temporal coherence in videos, by training a deep model for future clip order ranking conditioned on a context clip as a surrogate objective for video future prediction. We show that features learned using our method are generalizable and transfer strongly to downstream tasks. For action recognition on the UCF101 dataset, we obtain 51.8% improvement over random initialization and outperform models initialized using inflated ImageNet parameters. Skip-Clip also achieves results competitive with state-of-the-art self-supervision methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 151,193 |
2303.15407 | Dimensionality Collapse: Optimal Measurement Selection for Low-Error
Infinite-Horizon Forecasting | This work introduces a method to select linear functional measurements of a vector-valued time series optimized for forecasting distant time-horizons. By formulating and solving the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cram\'{e}r-Rao lower bound (CRLB) for forecasting as the cost, the most informative data can be collected irrespective of the eventual forecasting algorithm. By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived. This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting. Implementations of both an approximate dynamic programming formulation and the proposed alternative are illustrated using an extended Kalman filter for state estimation, with results on simulated systems with limit cycles and chaotic behavior demonstrating a linear improvement in the CRLB as a function of the number of collapsing dimensions of the system. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 354,477 |
2303.14404 | Bridging Precision and Confidence: A Train-Time Loss for Calibrating
Object Detection | Deep neural networks (DNNs) have enabled astounding progress in several vision-based problems. Despite showing high predictive accuracy, recently, several works have revealed that they tend to provide overconfident predictions and thus are poorly calibrated. The majority of the works addressing the miscalibration of DNNs fall under the scope of classification and consider only in-domain predictions. However, there is little to no progress in studying the calibration of DNN-based object detection models, which are central to many vision-based safety-critical applications. In this paper, inspired by the train-time calibration methods, we propose a novel auxiliary loss formulation that explicitly aims to align the class confidence of bounding boxes with the accurateness of predictions (i.e. precision). Since the original formulation of our loss depends on the counts of true positives and false positives in a minibatch, we develop a differentiable proxy of our loss that can be used during training with other application-specific loss functions. We perform extensive experiments on challenging in-domain and out-domain scenarios with six benchmark datasets including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios. Our source code and pre-trained models are available at https://github.com/akhtarvision/bpc_calibration | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,076 |
2302.14350 | Knowledge Augmented Relation Inference for Group Activity Recognition | Most existing group activity recognition methods construct spatial-temporal relations merely based on visual representation. Some methods introduce extra knowledge, such as action labels, to build semantic relations and use them to refine the visual representation. However, the knowledge they explore stays at the semantic level, which is insufficient for pursuing notable accuracy. In this paper, we propose to exploit knowledge concretization for group activity recognition, and develop a novel Knowledge Augmented Relation Inference framework that can effectively use the concretized knowledge to improve the individual representations. Specifically, the framework consists of a Visual Representation Module to extract individual appearance features, a Knowledge Augmented Semantic Relation Module to explore semantic representations of individual actions, and a Knowledge-Semantic-Visual Interaction Module to integrate visual and semantic information through the knowledge. Benefiting from these modules, the proposed framework can utilize knowledge to enhance the relation inference process and the individual representations, thus improving the performance of group activity recognition. Experimental results on two public datasets show that the proposed framework achieves competitive performance compared with state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 348,259
1508.03110 | Algorithmic Acceleration of Parallel ALS for Collaborative Filtering:
Speeding up Distributed Big Data Recommendation in Spark | Collaborative filtering algorithms are important building blocks in many practical recommendation systems. For example, many large-scale data processing environments include collaborative filtering models for which the Alternating Least Squares (ALS) algorithm is used to compute latent factor matrix decompositions. In this paper, we propose an approach to accelerate the convergence of parallel ALS-based optimization methods for collaborative filtering using a nonlinear conjugate gradient (NCG) wrapper around the ALS iterations. We also provide a parallel implementation of the accelerated ALS-NCG algorithm in the Apache Spark distributed data processing environment, and an efficient line search technique as part of the ALS-NCG implementation that requires only one pass over the data on distributed datasets. In serial numerical experiments on a linux workstation and parallel numerical experiments on a 16 node cluster with 256 computing cores, we demonstrate that the combined ALS-NCG method requires many fewer iterations and less time than standalone ALS to reach movie rankings with high accuracy on the MovieLens 20M dataset. In parallel, ALS-NCG can achieve an acceleration factor of 4 or greater in clock time when an accurate solution is desired; furthermore, the acceleration factor increases as greater numerical precision is required in the solution. In addition, the NCG acceleration mechanism is efficient in parallel and scales linearly with problem size on synthetic datasets with up to nearly 1 billion ratings. The acceleration mechanism is general and may also be applicable to other optimization methods for collaborative filtering. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 45,966 |
2211.02372 | Path Planning Using Wasserstein Distributionally Robust Deep Q-learning | We investigate the problem of risk averse robot path planning using the deep reinforcement learning and distributionally robust optimization perspectives. Our problem formulation involves modelling the robot as a stochastic linear dynamical system, assuming that a collection of process noise samples is available. We cast the risk averse motion planning problem as a Markov decision process and propose a continuous reward function design that explicitly takes into account the risk of collision with obstacles while encouraging the robot's motion towards the goal. We learn the risk-averse robot control actions through Lipschitz approximated Wasserstein distributionally robust deep Q-learning to hedge against the noise uncertainty. The learned control actions result in a safe and risk averse trajectory from the source to the goal, avoiding all the obstacles. Various supporting numerical simulations are presented to demonstrate our proposed approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 328,560
1911.11081 | Improving Feature Attribution through Input-specific Network Pruning | Attributing the output of a neural network to the contribution of given input elements is a way of shedding light on the black-box nature of neural networks. Due to the complexity of current network architectures, current gradient-based attribution methods provide very noisy or coarse results. We propose to prune a neural network for a given single input to keep only neurons that highly contribute to the prediction. We show that by input-specific pruning, network gradients change from reflecting local (noisy) importance information to global importance. Our proposed method is efficient and generates fine-grained attribution maps. We further provide a theoretical justification of the pruning approach relating it to perturbations and validate it through a novel experimental setup. Our method is evaluated by multiple benchmarks: sanity checks, pixel perturbation, and Remove-and-Retrain (ROAR). These benchmarks evaluate the method from different perspectives and our method performs better than other methods across all evaluations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 155,019 |
2211.16886 | A Unifying Theory of Distance from Calibration | We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well-understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE) fail to satisfy basic properties like continuity. We present a rigorous framework for analyzing calibration measures, inspired by the literature on property testing. We propose a ground-truth notion of distance from calibration: the $\ell_1$ distance to the nearest perfectly calibrated predictor. We define a consistent calibration measure as one that is polynomially related to this distance. Applying our framework, we identify three calibration measures that are consistent and can be estimated efficiently: smooth calibration, interval calibration, and Laplace kernel calibration. The former two give quadratic approximations to the ground truth distance, which we show is information-theoretically optimal in a natural model for measuring calibration which we term the prediction-only access model. Our work thus establishes fundamental lower and upper bounds on measuring the distance to calibration, and also provides theoretical justification for preferring certain metrics (like Laplace kernel calibration) in practice. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,788 |
2010.04785 | A Reactive Autonomous Camera System for the RAVEN II Surgical Robot | The endoscopic camera of a surgical robot provides surgeons with a magnified 3D view of the surgical field, but repositioning it increases mental workload and operation time. Poor camera placement contributes to safety-critical events when surgical tools move out of the view of the camera. This paper presents a proof of concept of an autonomous camera system for the Raven II surgical robot that aims to reduce surgeon workload and improve safety by providing an optimal view of the workspace showing all objects of interest. This system uses transfer learning to localize and classify objects of interest within the view of a stereoscopic camera. The positions and centroid of the objects are estimated and a set of control rules determines the movement of the camera towards a more desired view. Our perception module had an accuracy of 61.21% overall for identifying objects of interest and was able to localize both graspers and multiple blocks in the environment. Comparison of the commands proposed by our system with the desired commands from a survey of 13 participants indicates that the autonomous camera system proposes appropriate movements for the tilt and pan of the camera. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 199,859 |
2103.14441 | Visual Explanations from Spiking Neural Networks using Interspike
Intervals | Spiking Neural Networks (SNNs) compute and communicate with asynchronous binary temporal events that can lead to significant energy savings with neuromorphic hardware. Recent algorithmic efforts on training SNNs have shown competitive performance on a variety of classification tasks. However, a visualization tool for analysing and explaining the internal spike behavior of such temporal deep SNNs has not been explored. In this paper, we propose a new concept of bio-plausible visualization for SNNs, called Spike Activation Map (SAM). The proposed SAM circumvents the non-differentiable characteristic of spiking neurons by eliminating the need for calculating gradients to obtain visual explanations. Instead, SAM calculates a temporal visualization map by forward propagating input spikes over different time-steps. SAM yields an attention map corresponding to each time-step of input data by highlighting neurons with short inter-spike interval activity. Interestingly, without both the backpropagation process and the class label, SAM highlights the discriminative region of the image while capturing fine-grained details. With SAM, for the first time, we provide a comprehensive analysis on how internal spikes work in various SNN training configurations depending on optimization types, leak behavior, as well as when faced with adversarial examples. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 226,862 |
1703.02925 | Assessing Code Authorship: The Case of the Linux Kernel | Code authorship is a key piece of information in large-scale open source systems. Among others, it allows maintainers to assess division of work and identify key collaborators. Interestingly, open-source communities lack guidelines on how to manage authorship. This could be mitigated by setting out to build an empirical body of knowledge on how authorship-related measures evolve in successful open-source communities. Towards that direction, we perform a case study on the Linux kernel. Our results show that: (a) only a small portion of developers (26 %) makes significant contributions to the code base; (b) the distribution of the number of files per author is highly skewed --- a small group of top authors (3 %) is responsible for hundreds of files, while most authors (75 %) are responsible for at most 11 files; (c) most authors (62 %) have a specialist profile; (d) authors with a high number of co-authorship connections tend to collaborate with others with fewer connections. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 69,645
1605.04661 | Optimization of Graph Based Codes for Belief Propagation Decoding | A low-density parity-check (LDPC) code is a linear block code described by a sparse parity-check matrix, which can be efficiently represented by a bipartite Tanner graph. The standard iterative decoding algorithm, known as belief propagation, passes messages along the edges of this Tanner graph. Density evolution is an efficient method to analyze the performance of the belief propagation decoding algorithm for a particular LDPC code ensemble, enabling the determination of a decoding threshold. The basic problem addressed in this work is how to optimize the Tanner graph so that the decoding threshold is as large as possible. We introduce a new code optimization technique that restricts the search space range, which can be thought of as minimizing randomness in differential evolution or limiting the search range in exhaustive search. This technique is applied to the design of good irregular LDPC codes and multiedge type LDPC codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,901
1909.10300 | Conservative set valued fields, automatic differentiation, stochastic
gradient method and deep learning | Modern problems in AI or in numerical analysis require nonsmooth approaches with a flexible calculus. We introduce generalized derivatives called conservative fields for which we develop a calculus and provide representation formulas. Functions having a conservative field are called path differentiable: convex, concave, Clarke regular and any semialgebraic Lipschitz continuous functions are path differentiable. Using Whitney stratification techniques for semialgebraic and definable sets, our model provides variational formulas for nonsmooth automatic differentiation oracles, as for instance the famous backpropagation algorithm in deep learning. Our differential model is applied to establish the convergence in values of nonsmooth stochastic gradient methods as they are implemented in practice. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 146,508 |
2305.03040 | TUVF: Learning Generalizable Texture UV Radiance Fields | Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity texture given shapes of 3D assets, which has been relatively less explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generation process, such that one texture code can correspond to a particular appearance style independent of any input shapes from a category. We introduce Texture UV Radiance Fields (TUVF) that generate textures in a learnable UV sphere space rather than directly on the 3D shape. This allows the texture to be disentangled from the underlying shape and transferable to other shapes that share the same UV space, i.e., from the same category. We integrate the UV sphere space with the radiance field, which provides a more efficient and accurate representation of textures than traditional texture maps. We perform our experiments on synthetic and real-world object datasets where we achieve not only realistic synthesis but also substantial improvements over state-of-the-arts on texture controlling and editing. Project Page: https://www.anjiecheng.me/TUVF | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 362,250 |
1909.08246 | Extended Magic for Negation: Efficient Demand-Driven Evaluation of Stratified Datalog with Precise Complexity Guarantees | Given a set of Datalog rules, facts, and a query, answers to the query can be inferred bottom-up starting from the facts or top-down starting from the query. For efficiency, top-down evaluation is extended with memoization of inferred facts, and bottom-up evaluation is performed after transformations to make rules driven by the demand from the query. Prior work has shown their precise complexity analysis and relationships. However, when Datalog is extended with even stratified negation, which has a simple and universally accepted semantics, transformations to make rules demand-driven may result in non-stratified negation, which has had many complex semantics and evaluation methods. This paper presents (1) a simple extension to demand transformation, a transformation to make rules demand-driven for Datalog without negation, to support stratified negation, and (2) a simple extension to an optimal bottom-up evaluation method for Datalog with stratified negation, to handle non-stratified negation in the resulting rules. We show that the method provides precise complexity guarantees. It is also optimal in that only facts needed for top-down evaluation of the query are inferred and each firing of a rule to infer such a fact takes worst-case constant time. We extend the precise relationship between top-down evaluation and demand-driven bottom-up evaluation to Datalog with stratified negation. Finally, we show experimental results for performance, as well as applications to previously challenging examples. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | true | 145,924
1210.5171 | Identification of Group Changes in Blogosphere | The paper addresses a problem of change identification in social group evolution. A new SGCI method for discovering stable groups was proposed and compared with the existing GED method. The experimental studies on a Polish blogosphere service revealed that both methods are able to identify similar evolution events even though both use different concepts. Some differences were demonstrated as well. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 19,255
1805.09007 | A Transition-based Algorithm for Unrestricted AMR Parsing | Non-projective parsing can be useful to handle cycles and reentrancy in AMR graphs. We explore this idea and introduce a greedy left-to-right non-projective transition-based parser. At each parsing configuration, an oracle decides whether to create a concept or whether to connect a pair of existing concepts. The algorithm handles reentrancy and arbitrary cycles natively, i.e. within the transition system itself. The model is evaluated on the LDC2015E86 corpus, obtaining results close to the state of the art, including a Smatch of 64%, and showing good behavior on reentrant edges. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 98,317 |
1507.00789 | Secure Massive MIMO Transmission with an Active Eavesdropper | In this paper, we investigate secure and reliable transmission strategies for multi-cell multi-user massive multiple-input multiple-output (MIMO) systems with a multi-antenna active eavesdropper. We consider a time-division duplex system where uplink training is required and an active eavesdropper can attack the training phase to cause pilot contamination at the transmitter. This forces the precoder used in the subsequent downlink transmission phase to implicitly beamform towards the eavesdropper, thus increasing its received signal power. Assuming matched filter precoding and artificial noise (AN) generation at the transmitter, we derive an asymptotic achievable secrecy rate when the number of transmit antennas approaches infinity. For the case of a single-antenna active eavesdropper, we obtain a closed-form expression for the optimal power allocation policy for the transmit signal and the AN, and find the minimum transmit power required to ensure reliable secure communication. Furthermore, we show that the transmit antenna correlation diversity of the intended users and the eavesdropper can be exploited in order to improve the secrecy rate. In fact, under certain orthogonality conditions of the channel covariance matrices, the secrecy rate loss introduced by the eavesdropper can be completely mitigated. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 44,787 |
2502.10674 | Occlusion-aware Text-Image-Point Cloud Pretraining for Open-World 3D Object Recognition | Recent open-world representation learning approaches have leveraged CLIP to enable zero-shot 3D object recognition. However, performance on real point clouds with occlusions still falls short due to the unrealistic pretraining settings. Additionally, these methods incur high inference costs because they rely on Transformer's attention modules. In this paper, we make two contributions to address these limitations. First, we propose occlusion-aware text-image-point cloud pretraining to reduce the training-testing domain gap. From 52K synthetic 3D objects, our framework generates nearly 630K partial point clouds for pretraining, consistently improving real-world recognition performances of existing popular 3D networks. Second, to reduce computational requirements, we introduce DuoMamba, a two-stream linear state space model tailored for point clouds. By integrating two space-filling curves with 1D convolutions, DuoMamba effectively models spatial dependencies between point tokens, offering a powerful alternative to Transformer. When pretrained with our framework, DuoMamba surpasses current state-of-the-art methods while reducing latency and FLOPs, highlighting the potential of our approach for real-world applications. We will release our data and code to facilitate future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 533,993
2309.03754 | Convergence Analysis of Decentralized ASGD | Over the last decades, Stochastic Gradient Descent (SGD) has been intensively studied by the Machine Learning community. Despite its versatility and excellent performance, the optimization of large models via SGD still is a time-consuming task. To reduce training time, it is common to distribute the training process across multiple devices. Recently, it has been shown that the convergence of asynchronous SGD (ASGD) will always be faster than mini-batch SGD. However, despite these improvements in the theoretical bounds, most ASGD convergence-rate proofs still rely on a centralized parameter server, which is prone to become a bottleneck when scaling out the gradient computations across many distributed processes. In this paper, we present a novel convergence-rate analysis for decentralized and asynchronous SGD (DASGD) which does not require partial synchronization among nodes nor restrictive network topologies. Specifically, we provide a bound of $\mathcal{O}(\sigma\epsilon^{-2}) + \mathcal{O}(QS_{avg}\epsilon^{-3/2}) + \mathcal{O}(S_{avg}\epsilon^{-1})$ for the convergence rate of DASGD, where $S_{avg}$ is the average staleness between models, $Q$ is a constant that bounds the norm of the gradients, and $\epsilon$ is a (small) error that is allowed within the bound. Furthermore, when gradients are not bounded, we prove the convergence rate of DASGD to be $\mathcal{O}(\sigma\epsilon^{-2}) + \mathcal{O}(\sqrt{\hat{S}_{avg}\hat{S}_{max}}\epsilon^{-1})$, with $\hat{S}_{max}$ and $\hat{S}_{avg}$ representing a loose version of the average and maximum staleness, respectively. Our convergence proof holds for a fixed stepsize and any non-convex, homogeneous, and L-smooth objective function. We anticipate that our results will be of high relevance for the adoption of DASGD by a broad community of researchers and developers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 390,494 |
2403.14373 | A new control-oriented METANET model to encompass service stations on highways | In this paper, we propose the METANET with service station (METANET-s) model, a second-order macroscopic traffic model that, compared to the classical METANET, incorporates the dynamics of service stations on highways. Specifically, we employ the (so-called) store-and-forward links to model the stop of vehicles and the possible queue forming in the process of merging back into the highway mainstream. We explore the capability of the METANET-s to capture well both traffic back propagation and capacity drops, which are typically caused by the presence of vehicles joining again the mainstream traffic from the service station. Therefore, capturing these effects is crucial to improving the model's predictive capabilities. Finally, we perform a comparative analysis with the Cell Transmission Model with service station (CTM-s), showcasing that the METANET-s describes the traffic evolution much better than its first-order counterpart. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 440,045
1507.00784 | Twitter Sentiment Analysis Applied to Finance: A Case Study in the Retail Industry | This paper presents a financial analysis over Twitter sentiment analytics extracted from listed retail brands. We investigate whether there is statistically-significant information between the Twitter sentiment and volume, and stock returns and volatility. Traditional newswires are also considered as a proxy for the market sentiment for comparative purpose. The results suggest that social media is indeed a valuable source in the analysis of the financial dynamics in the retail sector even when compared to mainstream news such as the Wall Street Journal and Dow Jones Newswires. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 44,785
2502.09051 | AIDE: Agentically Improve Visual Language Model with Domain Experts | The enhancement of Visual Language Models (VLMs) has traditionally relied on knowledge distillation from larger, more capable models. This dependence creates a fundamental bottleneck for improving state-of-the-art systems, particularly when no superior models exist. We introduce AIDE (Agentic Improvement through Domain Experts), a novel framework that enables VLMs to autonomously enhance their capabilities by leveraging specialized domain expert models. AIDE operates through a four-stage process: (1) identifying instances for refinement, (2) engaging domain experts for targeted analysis, (3) synthesizing expert outputs with existing data, and (4) integrating enhanced instances into the training pipeline. Experiments on multiple benchmarks, including MMMU, MME, MMBench, etc., demonstrate AIDE's ability to achieve notable performance gains without relying on larger VLMs nor human supervision. Our framework provides a scalable, resource-efficient approach to continuous VLM improvement, addressing critical limitations in current methodologies, particularly valuable when larger models are unavailable to access. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | true | false | false | false | 533,284 |
2007.01480 | RSAC: Regularized Subspace Approximation Classifier for Lightweight Continuous Learning | Continuous learning seeks to perform the learning on the data that arrives from time to time. While prior works have demonstrated several possible solutions, these approaches require excessive training time as well as memory usage. This is impractical for applications where time and storage are constrained, such as edge computing. In this work, a novel training algorithm, regularized subspace approximation classifier (RSAC), is proposed to achieve lightweight continuous learning. RSAC contains a feature reduction module and classifier module with regularization. Extensive experiments show that RSAC is more efficient than prior continuous learning works and outperforms these works on various experimental settings. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 185,442
2403.00771 | XProspeCT: CT Volume Generation from Paired X-Rays | Computed tomography (CT) is a beneficial imaging tool for diagnostic purposes. CT scans provide detailed information concerning the internal anatomic structures of a patient, but present higher radiation dose and costs compared to X-ray imaging. In this paper, we build on previous research to convert orthogonal X-ray images into simulated CT volumes by exploring larger datasets and various model structures. Significant model variations include UNet architectures, custom connections, activation functions, loss functions, optimizers, and a novel back projection approach. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 434,088 |
2501.12896 | Irrational Complex Rotations Empower Low-bit Optimizers | In this paper, we propose a novel optimizer state compression algorithm, namely $\pi$-Quant, which leverages the properties of irrational numbers (e.g., $\pi$) for memory-efficient training. The core idea is based on our mathematical findings, which show that a pair of parameters can be represented by a single rotation angle using the complex rotation scheme. Building on this insight, we map the parameters into a complex space and perform quantization using the corresponding rotation angles. To efficiently integrate it into optimization process, we develop an efficient system of geometric equations that computes the precise rotation angles with linear complexity. We evaluate $\pi$-Quant on a wide range of tasks. Our experiments show that it can reduce the bit-width of parameters to 3.32-bit, achieving a 75% reduction in parameter scale and a 40% decrease in GPU memory usage, all while maintaining full accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 526,475 |
1507.07242 | Face Search at Scale: 80 Million Gallery | Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed face search system could find the younger brother's (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds on an 80M gallery. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 45,465 |
1802.06072 | 3-D Volumetric Gamma-ray Imaging and Source Localization with a Mobile Robot | Radiation detection has largely been a manual inspection process with point sensors such as Geiger-Muller counters and scintillation spectrometers to date. While their observations of source proximity prove useful, they lack the directional information necessary for efficient source localization and characterization in cluttered environments with multiple radiation sources. The recent commercialization of Compton gamma cameras provides directional information to the broader radiation detection community for the first time. This paper presents the integration of a Compton gamma camera with a self-localizing ground robot for accurate 3D radiation mapping. Using the position and orientation of the robot, radiation images from the gamma camera are accumulated over a traversed path in a shared frame of reference to construct a consistent voxel grid-based radiation map. The peaks of the map at pre-specified energy windows are selected as the source location estimates, which are compared to the ground truth source locations. The proposed approach localizes multiple sources to within an average of 0.2 m in two 5 x 4 m^2 and 14 x 6 m^2 laboratory environments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 90,584
1906.07901 | Multimodal Abstractive Summarization for How2 Videos | In this paper, we study abstractive summarization for open-domain videos. Unlike the traditional text news summarization, the goal is less to "compress" text information but rather to provide a fluent textual summary of information that has been collected and fused from different source modalities, in our case video and audio transcripts (or text). We show how a multi-source sequence-to-sequence model with hierarchical attention can integrate information from different modalities into a coherent output, compare various models trained with different modalities and present pilot experiments on the How2 corpus of instructional videos. We also propose a new evaluation metric (Content F1) for abstractive summarization task that measures semantic adequacy rather than fluency of the summaries, which is covered by metrics like ROUGE and BLEU. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | true | 135,729 |
1309.4860 | Modeling complex spatial dynamics of two-population interaction in urbanization process | This paper is mainly devoted to lay an empirical foundation for further research on complex spatial dynamics of two-population interaction. Based on the US population census data, a rural and urban population interaction model is developed. Subsequently a logistic equation on percentage urban is derived from the urbanization model so that spatial interaction can be connected mathematically with logistic growth. The numerical experiment by using the discretized urban-rural population interaction model of urbanization shows a period-doubling bifurcation and chaotic behavior, which is identical in patterns to those from the simple mathematical models of logistic growth in ecology. This suggests that the complicated dynamics of logistic growth may come from some kind of the nonlinear interaction. The results from this study help to understand urbanization, urban-rural population interaction, chaotic dynamics, and spatial complexity of geographical systems. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 27,124
1804.03429 | Graphical Generative Adversarial Networks | We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. Graphical-GAN conjoins the power of Bayesian networks on compactly representing the dependency structures among random variables and that of generative adversarial networks on learning expressive dependency functions. We introduce a structured recognition model to infer the posterior distribution of latent variables given observations. We generalize the Expectation Propagation (EP) algorithm to learn the generative model and recognition model jointly. Finally, we present two important instances of Graphical-GAN, i.e. Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), which can successfully learn the discrete and temporal structures on visual datasets, respectively. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 94,627 |
2104.00676 | Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study | This work aims to empirically clarify a recently discovered perspective that label smoothing is incompatible with knowledge distillation. We begin by introducing the motivation behind on how this incompatibility is raised, i.e., label smoothing erases relative information between teacher logits. We provide a novel connection on how label smoothing affects distributions of semantically similar and dissimilar classes. Then we propose a metric to quantitatively measure the degree of erased information in sample's representation. After that, we study its one-sidedness and imperfection of the incompatibility view through massive analyses, visualizations and comprehensive experiments on Image Classification, Binary Networks, and Neural Machine Translation. Finally, we broadly discuss several circumstances wherein label smoothing will indeed lose its effectiveness. Project page: http://zhiqiangshen.com/projects/LS_and_KD/index.html. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 228,078
2211.10014 | Users are Closer than they Appear: Protecting User Location from WiFi APs | WiFi-based indoor localization has now matured for over a decade. Most of the current localization algorithms rely on the WiFi access points (APs) in the enterprise network to localize the WiFi user accurately. Thus, the WiFi user's location information could be easily snooped by an attacker listening through a compromised WiFi AP. With indoor localization and navigation being the next step towards automation, it is important to give users the capability to defend against such attacks. In this paper, we present MIRAGE, a system that can utilize the downlink physical layer information to create a defense against an attacker snooping on a WiFi user's location information. MIRAGE achieves this by utilizing the beamforming capability of the transmitter that is already part of the WiFi protocols. With this initial idea, we have demonstrated that the user can obfuscate his/her location from the WiFi AP always with no compromise to the throughput of the existing WiFi communication system and reduce the user location accuracy of the attacker from 2.3m to more than 10m. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 331,182
1405.0144 | Discrete-Time Fractional-Order PID Controller: Definition, Tuning, Digital Realization and Experimental Results | In some of the complicated control problems we have to use the controllers that apply nonlocal operators to the error signal to generate the control. Currently, the most famous controller with nonlocal operators is the fractional-order PID (FOPID). Commonly, after tuning the parameters of FOPID controller, its transfer function is discretized (for realization purposes) using the so-called generating function. This discretization is the origin of some errors and unexpected results in feedback systems. It may even happen that the controller obtained by discretizing a FOPID controller works worse than a directly-tuned discrete-time classical PID controller. Moreover, FOPID controllers cannot directly be applied to the processes modeled by, e.g., the ARMA or ARMAX model. The aim of this paper is to propose a discrete-time version of the FOPID controller and discuss on its properties and applications. Similar to the FOPID controller, the proposed structure applies nonlocal operators (with adjustable memory length) to the error signal. Two methods for tuning the parameters of the proposed controller are developed and it is shown that the proposed controller has the capacity of solving complicated control problems with a high performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 32,745
1905.12982 | Meta-Surrogate Benchmarking for Hyperparameter Optimization | Despite the recent progress in hyperparameter optimization (HPO), available benchmarks that resemble real-world scenarios consist of a few and very large problem instances that are expensive to solve. This blocks researchers and practitioners not only from systematically running large-scale comparisons that are needed to draw statistically significant results but also from reproducing experiments that were conducted before. This work proposes a method to alleviate these issues by means of a meta-surrogate model for HPO tasks trained on off-line generated data. The model combines a probabilistic encoder with a multi-task model such that it can generate inexpensive and realistic tasks of the class of problems of interest. We demonstrate that benchmarking HPO methods on samples of the generative model allows us to draw more coherent and statistically significant conclusions that can be reached orders of magnitude faster than using the original tasks. We provide evidence of our findings for various HPO methods on a wide class of problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,960 |