| column | type | values |
|---|---|---|
| id | string | lengths 9 to 16 |
| title | string | lengths 4 to 278 |
| abstract | string | lengths 3 to 4.08k |
| cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other | bool | 2 classes each |
| __index_level_0__ | int64 | 0 to 541k |

In the sample records below, each entry gives the id and title, then the abstract, and finally the category columns that are true (all unlisted category columns are false) together with __index_level_0__.
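The eighteen boolean columns encode a multi-label category assignment per row. Below is a minimal sketch of collecting the true category names for each row, assuming the table has been loaded into a pandas DataFrame; the parquet file name is hypothetical.

```python
import pandas as pd

# Category flag columns, in the same order as the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def true_categories(row: pd.Series) -> list[str]:
    """Return the names of the category columns that are True for one row."""
    return [col for col in CATEGORY_COLUMNS if bool(row[col])]

if __name__ == "__main__":
    # Hypothetical file name; any loader that yields the columns above works.
    df = pd.read_parquet("arxiv_multilabel.parquet")
    df["categories"] = df.apply(true_categories, axis=1)
    print(df[["id", "title", "categories"]].head())
```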
| 2203.14291 | Video Polyp Segmentation: A Deep Learning Perspective |
We present the first comprehensive video polyp segmentation (VPS) study in the deep learning era. Over the years, developments in VPS are not moving forward with ease due to the lack of large-scale fine-grained segmentation annotations. To address this issue, we first introduce a high-quality frame-by-frame annotated VPS dataset, named SUN-SEG, which contains 158,690 colonoscopy frames from the well-known SUN-database. We provide additional annotations with diverse types, i.e., attribute, object mask, boundary, scribble, and polygon. Second, we design a simple but efficient baseline, dubbed PNS+, consisting of a global encoder, a local encoder, and normalized self-attention (NS) blocks. The global and local encoders receive an anchor frame and multiple successive frames to extract long-term and short-term spatial-temporal representations, which are then progressively updated by two NS blocks. Extensive experiments show that PNS+ achieves the best performance and real-time inference speed (170fps), making it a promising solution for the VPS task. Third, we extensively evaluate 13 representative polyp/object segmentation models on our SUN-SEG dataset and provide attribute-based comparisons. Finally, we discuss several open issues and suggest possible research directions for the VPS community.
| categories: cs.CV | __index_level_0__: 287,950 |
| 1501.04656 | Microscopic Advances with Large-Scale Learning: Stochastic Optimization for Cryo-EM |
Determining the 3D structures of biological molecules is a key problem for both biology and medicine. Electron Cryomicroscopy (Cryo-EM) is a promising technique for structure estimation which relies heavily on computational methods to reconstruct 3D structures from 2D images. This paper introduces the challenging Cryo-EM density estimation problem as a novel application for stochastic optimization techniques. Structure discovery is formulated as MAP estimation in a probabilistic latent-variable model, resulting in an optimization problem to which an array of seven stochastic optimization methods are applied. The methods are tested on both real and synthetic data, with some methods recovering reasonable structures in less than one epoch from a random initialization. Complex quasi-Newton methods are found to converge more slowly than simple gradient-based methods, but all stochastic methods are found to converge to similar optima. This method represents a major improvement over existing methods as it is significantly faster and is able to converge from a random initialization.
| categories: cs.LG, cs.CV | __index_level_0__: 39,391 |
| 2209.12592 | Generating Compressed Combinatory Proof Structures -- An Approach to Automated First-Order Theorem Proving |
Representing a proof tree by a combinator term that reduces to the tree lets subtle forms of duplication within the tree materialize as duplicated subterms of the combinator term. In a DAG representation of the combinator term these straightforwardly factor into shared subgraphs. To search for proofs, combinator terms can be enumerated, like clausal tableaux, interwoven with unification of formulas that are associated with nodes of the enumerated structures. To restrict the search space, the enumeration can be based on proof schemas defined as parameterized combinator terms. We introduce here this "combinator term as proof structure" approach to automated first-order proving, present an implementation and first experimental results. The approach builds on a term view of proof structures rooted in condensed detachment and the connection method. It realizes features known from the connection structure calculus, which has not been implemented so far.
| categories: cs.AI, Other | __index_level_0__: 319,584 |
| 1703.08198 | On Desirable Semantics of Functional Dependencies over Databases with Incomplete Information |
Codd's relational model describes just one possible world. To better cope with incomplete information, extended database models allow several possible worlds. Vague tables are one such convenient extended model where attributes accept sets of possible values (e.g., the manager is either Jill or Bob). However, conceptual database design in such cases remains an open problem. In particular, there is no canonical definition of functional dependencies (FDs) over possible worlds (e.g., each employee has just one manager). We identify several desirable properties that the semantics of such FDs should meet including Armstrong's axioms, the independence from irrelevant attributes, seamless satisfaction and implied by strong satisfaction. We show that we can define FDs such that they have all our desirable properties over vague tables. However, we also show that no notion of FD can satisfy all our desirable properties over a more general model (disjunctive tables). Our work formalizes a trade-off between having a general model and having well-behaved FDs.
| categories: cs.DB | __index_level_0__: 70,536 |
| 2407.06785 | Towards physics-informed neural networks for landslide prediction |
For decades, solutions to regional scale landslide prediction have mostly relied on data-driven models, by definition, disconnected from the physics of the failure mechanism. The success and spread of such tools came from the ability to exploit proxy variables rather than explicit geotechnical ones, as the latter are prohibitive to acquire over broad landscapes. Our work implements a Physics Informed Neural Network (PINN) approach, thereby adding to a standard data-driven architecture an intermediate constraint to solve for the permanent deformation typical of Newmark slope stability methods. This translates into a neural network tasked with explicitly retrieving geotechnical parameters from common proxy variables and then minimizing a loss function with respect to the available coseismic landslide inventory. The results are very promising, because our model not only produces excellent predictive performance in the form of standard susceptibility output, but in the process, also generates maps of the expected geotechnical properties at a regional scale. Such an architecture is therefore framed to tackle coseismic landslide prediction, something that, if confirmed in other studies, could open up towards PINN-based near-real-time predictions.
| categories: cs.AI, cs.LG | __index_level_0__: 471,534 |
| 2205.11027 | When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning |
In offline reinforcement learning (RL), one detrimental issue to policy learning is the error accumulation of deep Q function in out-of-distribution (OOD) areas. Unfortunately, existing offline RL methods are often over-conservative, inevitably hurting generalization performance outside data distribution. In our study, one interesting observation is that deep Q functions approximate well inside the convex hull of training data. Inspired by this, we propose a new method, DOGE (Distance-sensitive Offline RL with better GEneralization). DOGE marries dataset geometry with deep function approximators in offline RL, and enables exploitation in generalizable OOD areas rather than strictly constraining policy within data distribution. Specifically, DOGE trains a state-conditioned distance function that can be readily plugged into standard actor-critic methods as a policy constraint. Simple yet elegant, our algorithm enjoys better generalization compared to state-of-the-art methods on D4RL benchmarks. Theoretical analysis demonstrates the superiority of our approach to existing methods that are solely based on data distribution or support constraints.
| categories: cs.AI, cs.LG, cs.RO | __index_level_0__: 297,966 |
| 2407.05858 | Fast On-device LLM Inference with NPUs |
On-device inference for Large Language Models (LLMs), driven by increasing privacy concerns and advancements of mobile-sized models, has gained significant interest. However, even mobile-sized LLMs (e.g., Gemma-2B) encounter unacceptably high inference latency, often bottlenecked by the prefill stage in tasks like screen UI understanding. We present llm.npu, the first LLM inference system utilizing on-device Neural Processing Unit (NPU) offloading to reduce prefill latency. llm.npu enhances NPU offloading efficiency by re-constructing the prompt and model in three levels: (1) At prompt level, it divides variable-length prompts into multiple fixed-sized chunks while maintaining data dependencies; (2) At tensor level, it identifies and extracts significant outliers to run on the CPU/GPU in parallel with minimal overhead; (3) At block level, it schedules Transformer blocks in an out-of-order manner to the CPU/GPU and NPU based on their hardware affinity and sensitivity to accuracy. Compared to competitive baselines, llm.npu achieves 22.4x faster prefill speed and 30.7$\times$ energy savings on average, and up to 32.8x speedup in an end-to-end real-world application. For the first time, llm.npu achieves more than 1,000 tokens/sec prefilling for a billion-sized model.
| categories: cs.AI | __index_level_0__: 471,155 |
| 2502.02433 | A coding theoretic study of homogeneous Markovian predictive games |
This paper explores a predictive game in which a Forecaster announces odds based on a time-homogeneous Markov kernel, establishing a game-theoretic law of large numbers for the relative frequencies of occurrences of all finite strings. A key feature of our proof is a betting strategy built on a universal coding scheme, inspired by the martingale convergence theorem and algorithmic randomness theory, without relying on a diversified betting approach that involves countably many operating accounts. We apply these insights to thermodynamics, offering a game-theoretic perspective on Le\'o Szil\'ard's thought experiment.
| categories: cs.IT, Other | __index_level_0__: 530,311 |
| 2109.14638 | Privacy Policy Question Answering Assistant: A Query-Guided Extractive Summarization Approach |
Existing work on making privacy policies accessible has explored new presentation forms such as color-coding based on the risk factors or summarization to assist users with conscious agreement. To facilitate a more personalized interaction with the policies, in this work, we propose an automated privacy policy question answering assistant that extracts a summary in response to the input user query. This is a challenging task because users articulate their privacy-related questions in a very different language than the legal language of the policy, making it difficult for the system to understand their inquiry. Moreover, existing annotated data in this domain are limited. We address these problems by paraphrasing to bring the style and language of the user's question closer to the language of privacy policies. Our content scoring module uses the existing in-domain data to find relevant information in the policy and incorporates it in a summary. Our pipeline is able to find an answer for 89% of the user queries in the privacyQA dataset.
| categories: cs.IR, cs.CL, cs.CY | __index_level_0__: 258,011 |
| 2406.04608 | A Recover-then-Discriminate Framework for Robust Anomaly Detection |
Anomaly detection (AD) has been extensively studied and applied in a wide range of scenarios in the recent past. However, there are still gaps between achieved and desirable levels of recognition accuracy for making AD for practical applications. In this paper, we start from an insightful analysis of two types of fundamental yet representative failure cases in the baseline model, and reveal reasons that hinder current AD methods from achieving a higher recognition accuracy. Specifically, by Case-1, we found that the main reasons detrimental to current AD methods is that the inputs to the recovery model contain a large number of detailed features to be recovered, which leads to the normal/abnormal area has-not/has been recovered into its original state. By Case-2, we surprisingly found that the abnormal area that cannot be recognized in image-level representations can be easily recognized in the feature-level representation. Based on the above observations, we propose a novel Recover-then-Discriminate (ReDi) framework for AD. ReDi takes a self-generated feature map and a selected prompted image as explicit input information to solve problems in case-1. Concurrently, a feature-level discriminative network is proposed to enhance abnormal differences between the recovered representation and the input representation. Extensive experimental results on two popular yet challenging AD datasets validate that ReDi achieves the new state-of-the-art accuracy.
| categories: cs.CV | __index_level_0__: 461,755 |
| 1001.4122 | Distributed Control of the Laplacian Spectral Moments of a Network |
It is well-known that the eigenvalue spectrum of the Laplacian matrix of a network contains valuable information about the network structure and the behavior of many dynamical processes run on it. In this paper, we propose a fully decentralized algorithm that iteratively modifies the structure of a network of agents in order to control the moments of the Laplacian eigenvalue spectrum. Although the individual agents have knowledge of their local network structure only (i.e., myopic information), they are collectively able to aggregate this local information and decide on what links are most beneficial to be added or removed at each time step. Our approach relies on gossip algorithms to distributively compute the spectral moments of the Laplacian matrix, as well as ensure network connectivity in the presence of link deletions. We illustrate our approach in nontrivial computer simulations and show that a good final approximation of the spectral moments of the target Laplacian matrix is achieved for many cases of interest.
| categories: cs.CE, cs.MA | __index_level_0__: 5,496 |
| 2303.05892 | Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection |
Open-vocabulary object detection aims to provide object detectors trained on a fixed set of object categories with the generalizability to detect objects described by arbitrary text queries. Previous methods adopt knowledge distillation to extract knowledge from Pretrained Vision-and-Language Models (PVLMs) and transfer it to detectors. However, due to the non-adaptive proposal cropping and single-level feature mimicking processes, they suffer from information destruction during knowledge extraction and inefficient knowledge transfer. To remedy these limitations, we propose an Object-Aware Distillation Pyramid (OADP) framework, including an Object-Aware Knowledge Extraction (OAKE) module and a Distillation Pyramid (DP) mechanism. When extracting object knowledge from PVLMs, the former adaptively transforms object proposals and adopts object-aware mask attention to obtain precise and complete knowledge of objects. The latter introduces global and block distillation for more comprehensive knowledge transfer to compensate for the missing relation information in object distillation. Extensive experiments show that our method achieves significant improvement compared to current methods. Especially on the MS-COCO dataset, our OADP framework reaches $35.6$ mAP$^{\text{N}}_{50}$, surpassing the current state-of-the-art method by $3.3$ mAP$^{\text{N}}_{50}$. Code is released at https://github.com/LutingWang/OADP.
| categories: cs.CV | __index_level_0__: 350,624 |
| 2309.05254 | Towards Better Data Exploitation in Self-Supervised Monocular Depth Estimation |
Depth estimation plays an important role in the robotic perception system. Self-supervised monocular paradigm has gained significant attention since it can free training from the reliance on depth annotations. Despite recent advancements, existing self-supervised methods still underutilize the available training data, limiting their generalization ability. In this paper, we take two data augmentation techniques, namely Resizing-Cropping and Splitting-Permuting, to fully exploit the potential of training datasets. Specifically, the original image and the generated two augmented images are fed into the training pipeline simultaneously and we leverage them to conduct self-distillation. Additionally, we introduce the detail-enhanced DepthNet with an extra full-scale branch in the encoder and a grid decoder to enhance the restoration of fine details in depth maps. Experimental results demonstrate our method can achieve state-of-the-art performance on the KITTI benchmark, with both raw ground truth and improved ground truth. Moreover, our models also show superior generalization performance when transferring to Make3D and NYUv2 datasets. Our codes are available at https://github.com/Sauf4896/BDEdepth.
| categories: cs.CV | __index_level_0__: 391,015 |
| 2401.02606 | Exploiting Polarized Material Cues for Robust Car Detection |
Car detection is an important task that serves as a crucial prerequisite for many automated driving functions. The large variations in lighting/weather conditions and vehicle densities of the scenes pose significant challenges to existing car detection algorithms to meet the highly accurate perception demand for safety, due to the unstable/limited color information, which impedes the extraction of meaningful/discriminative features of cars. In this work, we present a novel learning-based car detection method that leverages trichromatic linear polarization as an additional cue to disambiguate such challenging cases. A key observation is that polarization, characteristic of the light wave, can robustly describe intrinsic physical properties of the scene objects in various imaging conditions and is strongly linked to the nature of materials for cars (e.g., metal and glass) and their surrounding environment (e.g., soil and trees), thereby providing reliable and discriminative features for robust car detection in challenging scenes. To exploit polarization cues, we first construct a pixel-aligned RGB-Polarization car detection dataset, which we subsequently employ to train a novel multimodal fusion network. Our car detection network dynamically integrates RGB and polarization features in a request-and-complement manner and can explore the intrinsic material properties of cars across all learning samples. We extensively validate our method and demonstrate that it outperforms state-of-the-art detection methods. Experimental results show that polarization is a powerful cue for car detection.
| categories: cs.CV | __index_level_0__: 419,775 |
| 2101.10495 | Transparency in Multi-Human Multi-Robot Interaction |
Transparency is a key factor in improving the performance of human-robot interaction. A transparent interface allows humans to be aware of the state of a robot and to assess the progress of the tasks at hand. When multi-robot systems are involved, transparency is an even greater challenge, due to the larger number of variables affecting the behavior of the robots as a whole. Significant effort has been devoted to studying transparency when single operators interact with multiple robots. However, studies on transparency that focus on multiple human operators interacting with a multi-robot system are limited. This paper aims to fill this gap by presenting a human-swarm interaction interface with graphical elements that can be enabled and disabled. Through this interface, we study which graphical elements contribute to transparency by comparing four "transparency modes": (i) no transparency (no operator receives information from the robots), (ii) central transparency (the operators receive information only relevant to their personal task), (iii) peripheral transparency (the operators share information on each others' tasks), and (iv) mixed transparency (both central and peripheral). We report the results in terms of awareness, trust, and workload of a user study involving 18 participants engaged in a complex multi-robot task.
| categories: cs.HC, cs.RO, cs.MA | __index_level_0__: 216,954 |
| 2011.12845 | Bounds for Algorithmic Mutual Information and a Unifilar Order Estimator |
Inspired by Hilberg's hypothesis, which states that mutual information between blocks for natural language grows like a power law, we seek for links between power-law growth rate of algorithmic mutual information and of some estimator of the unifilar order, i.e., the number of hidden states in the generating stationary ergodic source in its minimal unifilar hidden Markov representation. We consider an order estimator which returns the smallest order for which the maximum likelihood is larger than a weakly penalized universal probability. This order estimator is intractable and follows the ideas by Merhav, Gutman, and Ziv (1989) and by Ziv and Merhav (1992) but in its exact form seems overlooked despite some nice theoretical properties. In particular, we can prove both strong consistency of this order estimator and an upper bound of algorithmic mutual information in terms of it. Using both results, we show that all (also uncomputable) sources of a finite unifilar order exhibit sub-power-law growth of algorithmic mutual information and of the unifilar order estimator. In contrast, we also exhibit an example of unifilar processes of a countably infinite order, with a deterministic pushdown automaton and an algorithmically random oracle, for which the mentioned two quantities grow as a power law with the same exponent. We also relate our results to natural language research.
| categories: cs.IT | __index_level_0__: 208,291 |
| 2008.02796 | Learning to Factorize and Relight a City |
We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors. Inspired by the classic intrinsic image decomposition, our learning signal builds upon two insights: 1) combining the disentangled factors should reconstruct the original image, and 2) the permanent factors should stay constant across multiple temporal samples of the same scene. To facilitate training, we assemble a city-scale dataset of outdoor timelapse imagery from Google Street View, where the same locations are captured repeatedly through time. This data represents an unprecedented scale of spatio-temporal outdoor imagery. We show that our learned disentangled factors can be used to manipulate novel images in realistic ways, such as changing lighting effects and scene geometry. Please visit factorize-a-city.github.io for animated results.
| categories: cs.CV, Other | __index_level_0__: 190,716 |
| 1304.1494 | Now that I Have a Good Theory of Uncertainty, What Else Do I Need? |
Rather than discussing the isolated merits of a normative theory of uncertainty, this paper focuses on a class of problems, referred to as the Dynamic Classification Problem (DCP), which requires the integration of many theories, including a prescriptive theory of uncertainty. We start by analyzing the Dynamic Classification Problem and by defining its induced requirements on a supporting (plausible) reasoning system. We provide a summary of the underlying theory (based on the semantics of many-valued logics) and illustrate the constraints imposed upon it to ensure the modularity and computational performance required by the applications. We describe the technologies used for knowledge engineering (such as an object-based simulator to exercise requirements, and development tools to build the Knowledge Base and functionally validate it). We emphasize the difference between the development environment and the run-time system, describe the rule cross-compiler, and the real-time inference engine with meta-reasoning capabilities. Finally, we illustrate how our proposed technology satisfies the DCP's requirements and analyze some of the lessons learned from its applications to situation assessment problems for Pilot's Associate and Submarine Commander Associate.
| categories: cs.AI | __index_level_0__: 23,527 |
| 2205.11798 | Symbolic Expression Transformer: A Computer Vision Approach for Symbolic Regression |
Symbolic Regression (SR) is a type of regression analysis to automatically find the mathematical expression that best fits the data. Currently, SR still basically relies on various searching strategies so that a sample-specific model is required to be optimized for every expression, which significantly limits the model's generalization and efficiency. Inspired by the fact that human beings can infer a mathematical expression based on the curve of it, we propose Symbolic Expression Transformer (SET), a sample-agnostic model from the perspective of computer vision for SR. Specifically, the collected data is represented as images and an image caption model is employed for translating images to symbolic expressions. A large-scale dataset without overlap between training and testing sets in the image domain is released. Our results demonstrate the effectiveness of SET and suggest the promising direction of image-based model for solving the challenging SR problem.
| categories: cs.AI, cs.CV | __index_level_0__: 298,288 |
| 2412.04326 | Understanding Student Sentiment on Mental Health Support in Colleges Using Large Language Models |
Mental health support in colleges is vital in educating students by offering counseling services and organizing supportive events. However, evaluating its effectiveness faces challenges like data collection difficulties and lack of standardized metrics, limiting research scope. Student feedback is crucial for evaluation but often relies on qualitative analysis without systematic investigation using advanced machine learning methods. This paper uses public Student Voice Survey data to analyze student sentiments on mental health support with large language models (LLMs). We created a sentiment analysis dataset, SMILE-College, with human-machine collaboration. The investigation of both traditional machine learning methods and state-of-the-art LLMs showed the best performance of GPT-3.5 and BERT on this new dataset. The analysis highlights challenges in accurately predicting response sentiments and offers practical insights on how LLMs can enhance mental health-related research and improve college mental health services. This data-driven approach will facilitate efficient and informed mental health support evaluation, management, and decision-making.
| categories: cs.AI, cs.CL, cs.CY | __index_level_0__: 514,363 |
| 2303.15322 | Progressive Semantic-Visual Mutual Adaption for Generalized Zero-Shot Learning |
Generalized Zero-Shot Learning (GZSL) identifies unseen categories by knowledge transferred from the seen domain, relying on the intrinsic interactions between visual and semantic information. Prior works mainly localize regions corresponding to the sharing attributes. When various visual appearances correspond to the same attribute, the sharing attributes inevitably introduce semantic ambiguity, hampering the exploration of accurate semantic-visual interactions. In this paper, we deploy the dual semantic-visual transformer module (DSVTM) to progressively model the correspondences between attribute prototypes and visual features, constituting a progressive semantic-visual mutual adaption (PSVMA) network for semantic disambiguation and knowledge transferability improvement. Specifically, DSVTM devises an instance-motivated semantic encoder that learns instance-centric prototypes to adapt to different images, enabling the recast of the unmatched semantic-visual pair into the matched one. Then, a semantic-motivated instance decoder strengthens accurate cross-domain interactions between the matched pair for semantic-related instance adaption, encouraging the generation of unambiguous visual representations. Moreover, to mitigate the bias towards seen classes in GZSL, a debiasing loss is proposed to pursue response consistency between seen and unseen predictions. The PSVMA consistently yields superior performances against other state-of-the-art methods. Code will be available at: https://github.com/ManLiuCoder/PSVMA.
| categories: cs.CV | __index_level_0__: 354,450 |
| 2005.07647 | Finding Experts in Transformer Models |
In this work we study the presence of expert units in pre-trained Transformer Models (TM), and how they impact a model's performance. We define expert units to be neurons that are able to classify a concept with a given average precision, where a concept is represented by a binary set of sentences containing the concept (or not). Leveraging the OneSec dataset (Scarlini et al., 2019), we compile a dataset of 1641 concepts that allows diverse expert units in TM to be discovered. We show that expert units are important in several ways: (1) The presence of expert units is correlated ($r^2=0.833$) with the generalization power of TM, which allows ranking TM without requiring fine-tuning on suites of downstream tasks. We further propose an empirical method to decide how accurate such experts should be to evaluate generalization. (2) The overlap of top experts between concepts provides a sensible way to quantify concept co-learning, which can be used for explainability of unknown concepts. (3) We show how to self-condition off-the-shelf pre-trained language models to generate text with a given concept by forcing the top experts to be active, without requiring re-training the model or using additional parameters.
| categories: cs.AI, cs.LG, cs.CL | __index_level_0__: 177,335 |
| 1811.07725 | Cyclic bent functions and their applications in codes, codebooks, designs, MUBs and sequences |
Let $m$ be an even positive integer. A Boolean bent function $f$ on $\mathrm{GF}(2^{m-1}) \times \mathrm{GF}(2)$ is called a \emph{cyclic bent function} if for any $a\neq b\in \mathrm{GF}(2^{m-1})$ and $\epsilon \in \mathrm{GF}(2)$, $f(ax_1,x_2)+f(bx_1,x_2+\epsilon)$ is always bent, where $x_1\in \mathrm{GF}(2^{m-1})$, $x_2 \in \mathrm{GF}(2)$. Cyclic bent functions look extremely rare. This paper focuses on cyclic bent functions on $\mathrm{GF}(2^{m-1}) \times \mathrm{GF}(2)$ and their applications. The first objective of this paper is to construct a new class of cyclic bent functions, which includes all known constructions of cyclic bent functions as special cases. The second objective is to use cyclic bent functions to construct good mutually unbiased bases (MUBs), codebooks and sequence families. The third objective is to study cyclic semi-bent functions and their applications. The fourth objective is to present a family of binary codes containing the Kerdock code as a special case, and describe their support designs. The results of this paper show that cyclic bent functions and cyclic semi-bent functions have nice applications in several fields such as symmetric cryptography, quantum physics, compressed sensing and CDMA communication.
| categories: cs.IT | __index_level_0__: 113,851 |
| 2412.08100 | FuzzDistill: Intelligent Fuzzing Target Selection using Compile-Time Analysis and Machine Learning |
Fuzz testing is a fundamental technique employed to identify vulnerabilities within software systems. However, the process can be protracted and resource-intensive, especially when confronted with extensive codebases. In this work, I present FuzzDistill, an approach that harnesses compile-time data and machine learning to refine fuzzing targets. By analyzing compile-time information, such as function call graphs' features, loop information, and memory operations, FuzzDistill identifies high-priority areas of the codebase that are more probable to contain vulnerabilities. I demonstrate the efficacy of my approach through experiments conducted on real-world software, demonstrating substantial reductions in testing time.
| categories: cs.LG, cs.CR, Other | __index_level_0__: 515,938 |
| 1509.04110 | Cooperative Cognitive Radio Network with Energy Harvesting: Stability Analysis |
This paper investigates the maximum stable throughput of a cooperative cognitive radio system with energy harvesting Primary User and Secondary User. Each PU and SU has a data queue for data storage and a battery for energy storage. These batteries harvest energy from the environment and store it for data transmission in next time slots. The SU is allowed to access the PU channel only when the PU is idle. The SU cooperates with the PU for its data transmission, getting mutual benefits for both users, such that, the PU exploits the SU power to relay a fraction of its undelivered packets, and the SU gets more opportunities to access idle time slots.
| categories: cs.IT | __index_level_0__: 46,895 |
| 2311.03313 | Practical considerations for variable screening in the Super Learner |
Estimating a prediction function is a fundamental component of many data analyses. The Super Learner ensemble, a particular implementation of stacking, has desirable theoretical properties and has been used successfully in many applications. Dimension reduction can be accomplished by using variable screening algorithms, including the lasso, within the ensemble prior to fitting other prediction algorithms. However, the performance of a Super Learner using the lasso for dimension reduction has not been fully explored in cases where the lasso is known to perform poorly. We provide empirical results that suggest that a diverse set of candidate screening algorithms should be used to protect against poor performance of any one screen, similar to the guidance for choosing a library of prediction algorithms for the Super Learner.
| categories: cs.LG | __index_level_0__: 405,795 |
| 1906.04663 | Control contribution identifies top driver nodes in complex networks |
We propose a new measure to quantify the impact of a node $i$ in controlling a directed network. This measure, called `control contribution' $\mathcal{C}_{i}$, combines the probability for node $i$ to appear in a set of driver nodes and the probability for other nodes to be controlled by $i$. To calculate $\mathcal{C}_{i}$, we propose an optimization method based on random samples of minimum sets of drivers. Using real-world and synthetic networks, we find very broad distributions of $C_{i}$. Ranking nodes according to their $C_{i}$ values allows us to identify the top driver nodes that control most of the network. We show that this ranking is superior to rankings based on control capacity or control range. We find that control contribution indeed contains new information that cannot be traced back to degree, control capacity or control range of a node.
| categories: cs.SI | __index_level_0__: 134,792 |
| 2005.02659 | Towards Building Knowledge by Merging Multiple Ontologies with CoMerger: A Partitioning-based Approach |
Ontologies are the prime way of organizing data in the Semantic Web. Often, it is necessary to combine several, independently developed ontologies to obtain a knowledge graph fully representing a domain of interest. The complementarity of existing ontologies can be leveraged by merging them. Existing approaches for ontology merging mostly implement a binary merge. However, with the growing number and size of relevant ontologies across domains, scalability becomes a central challenge. A multi-ontology merging technique offers a potential solution to this problem. We present CoMerger, a scalable multiple ontologies merging method. For efficient processing, rather than successively merging complete ontologies pairwise, we group related concepts across ontologies into partitions and merge first within and then across those partitions. The experimental results on well-known datasets confirm the feasibility of our approach and demonstrate its superiority over binary strategies. A prototypical implementation is freely accessible through a live web portal.
| categories: cs.AI | __index_level_0__: 175,945 |
| 2207.11720 | Progressive Feature Learning for Realistic Cloth-Changing Gait Recognition |
Gait recognition is instrumental in crime prevention and social security, for it can be conducted at a long distance to figure out the identity of persons. However, existing datasets and methods cannot satisfactorily deal with the most challenging cloth-changing problem in practice. Specifically, the practical gait models are usually trained on automatically labeled data, in which the sequences' views and cloth conditions of each person have some restrictions. To be concrete, the cross-view sub-dataset only has normal walking condition without cloth-changing, while the cross-cloth sub-dataset has cloth-changing sequences but only in front views. As a result, the cloth-changing accuracy cannot meet practical requirements. In this work, we formulate the problem as Realistic Cloth-Changing Gait Recognition (abbreviated as RCC-GR) and we construct two benchmarks: CASIA-BN-RCC and OUMVLP-RCC, to simulate the above setting. Furthermore, we propose a new framework called Progressive Feature Learning that can be applied with off-the-shelf backbones to improve their performance in RCC-GR. Specifically, in our framework, we design Progressive Mapping and Progressive Uncertainty to extract cross-view features and then extract cross-cloth features on the basis. In this way, the feature from the cross-view sub-dataset can first dominate the feature space and relieve the uneven distribution caused by the adverse effect from the cross-cloth sub-dataset. The experiments on our benchmarks show that our framework can effectively improve recognition performance, especially in the cloth-changing conditions.
| categories: cs.LG, cs.CV | __index_level_0__: 309,753 |
| 2211.16106 | Encoder-Decoder Model for Suffix Prediction in Predictive Monitoring |
Predictive monitoring is a subfield of process mining that aims to predict how a running case will unfold in the future. One of its main challenges is forecasting the sequence of activities that will occur from a given point in time -- suffix prediction -- . Most approaches to the suffix prediction problem learn to predict the suffix by learning how to predict the next activity only, not learning from the whole suffix during the training phase. This paper proposes a novel architecture based on an encoder-decoder model with an attention mechanism that decouples the representation learning of the prefixes from the inference phase, predicting only the activities of the suffix. During the inference phase, this architecture is extended with a heuristic search algorithm that improves the selection of the activity for each index of the suffix. Our approach has been tested using 12 public event logs against 6 different state-of-the-art proposals, showing that it significantly outperforms these proposals.
| categories: cs.AI, cs.LG | __index_level_0__: 333,519 |
| 2406.17747 | Probing the effects of broken symmetries in machine learning |
Symmetry is one of the most central concepts in physics, and it is no surprise that it has also been widely adopted as an inductive bias for machine-learning models applied to the physical sciences. This is especially true for models targeting the properties of matter at the atomic scale. Both established and state-of-the-art approaches, with almost no exceptions, are built to be exactly equivariant to translations, permutations, and rotations of the atoms. Incorporating symmetries -- rotations in particular -- constrains the model design space and implies more complicated architectures that are often also computationally demanding. There are indications that non-symmetric models can easily learn symmetries from data, and that doing so can even be beneficial for the accuracy of the model. We put a model that obeys rotational invariance only approximately to the test, in realistic scenarios involving simulations of gas-phase, liquid, and solid water. We focus specifically on physical observables that are likely to be affected -- directly or indirectly -- by symmetry breaking, finding negligible consequences when the model is used in an interpolative, bulk, regime. Even for extrapolative gas-phase predictions, the model remains very stable, even though symmetry artifacts are noticeable. We also discuss strategies that can be used to systematically reduce the magnitude of symmetry breaking when it occurs, and assess their impact on the convergence of observables.
| categories: cs.LG | __index_level_0__: 467,704 |
| 2011.01223 | Comprehensible Counterfactual Explanation on Kolmogorov-Smirnov Test |
The Kolmogorov-Smirnov (KS) test is popularly used in many applications, such as anomaly detection, astronomy, database security and AI systems. One challenge that has remained untouched is how to obtain an explanation of why a test set fails the KS test. In this paper, we tackle the problem of producing counterfactual explanations for test data failing the KS test. Concept-wise, we propose the notion of most comprehensible counterfactual explanations, which accommodates both the KS test data and the user domain knowledge in producing explanations. Computation-wise, we develop an efficient algorithm MOCHE (for MOst CompreHensible Explanation) that avoids enumerating and checking an exponential number of subsets of the test set failing the KS test. MOCHE not only guarantees to produce the most comprehensible counterfactual explanations, but also is orders of magnitude faster than the baselines. Experiment-wise, we present a systematic empirical study on a series of benchmark real datasets to verify the effectiveness, efficiency and scalability of most comprehensible counterfactual explanations and MOCHE.
| categories: cs.LG | __index_level_0__: 204,528 |
| 2410.19814 | Stochastic Flow Matching for Resolving Small-Scale Physics |
Conditioning diffusion and flow models have proven effective for super-resolving small-scale details in natural images. However, in physical sciences such as weather, super-resolving small-scale details poses significant challenges due to: (i) misalignment between input and output distributions (i.e., solutions to distinct partial differential equations (PDEs) follow different trajectories), (ii) multi-scale dynamics, deterministic dynamics at large scales vs. stochastic at small scales, and (iii) limited data, increasing the risk of overfitting. To address these challenges, we propose encoding the inputs to a latent base distribution that is closer to the target distribution, followed by flow matching to generate small-scale physics. The encoder captures the deterministic components, while flow matching adds stochastic small-scale details. To account for uncertainty in the deterministic part, we inject noise into the encoder output using an adaptive noise scaling mechanism, which is dynamically adjusted based on maximum-likelihood estimates of the encoder predictions. We conduct extensive experiments on both the real-world CWA weather dataset and the PDE-based Kolmogorov dataset, with the CWA task involving super-resolving the weather variables for the region of Taiwan from 25 km to 2 km scales. Our results show that the proposed stochastic flow matching (SFM) framework significantly outperforms existing methods such as conditional diffusion and flows.
| categories: cs.CV | __index_level_0__: 502,502 |
| 1906.00904 | Deep ReLU Networks Have Surprisingly Few Activation Patterns |
The success of deep networks has been attributed in part to their expressivity: per parameter, deep networks can approximate a richer class of functions than shallow networks. In ReLU networks, the number of activation patterns is one measure of expressivity; and the maximum number of patterns grows exponentially with the depth. However, recent work has shown that the practical expressivity of deep networks - the functions they can learn rather than express - is often far from the theoretical maximum. In this paper, we show that the average number of activation patterns for ReLU networks at initialization is bounded by the total number of neurons raised to the input dimension. We show empirically that this bound, which is independent of the depth, is tight both at initialization and during training, even on memorization tasks that should maximize the number of activation patterns. Our work suggests that realizing the full expressivity of deep networks may not be possible in practice, at least with current methods.
| categories: cs.LG | __index_level_0__: 133,539 |
| 2103.06357 | ReportAGE: Automatically extracting the exact age of Twitter users based on self-reports in tweets |
Advancing the utility of social media data for research applications requires methods for automatically detecting demographic information about social media study populations, including users' age. The objective of this study was to develop and evaluate a method that automatically identifies the exact age of users based on self-reports in their tweets. Our end-to-end automatic natural language processing (NLP) pipeline, ReportAGE, includes query patterns to retrieve tweets that potentially mention an age, a classifier to distinguish retrieved tweets that self-report the user's exact age ("age" tweets) and those that do not ("no age" tweets), and rule-based extraction to identify the age. To develop and evaluate ReportAGE, we manually annotated 11,000 tweets that matched the query patterns. Based on 1000 tweets that were annotated by all five annotators, inter-annotator agreement (Fleiss' kappa) was 0.80 for distinguishing "age" and "no age" tweets, and 0.95 for identifying the exact age among the "age" tweets on which the annotators agreed. A deep neural network classifier, based on a RoBERTa-Large pretrained model, achieved the highest F1-score of 0.914 (precision = 0.905, recall = 0.942) for the "age" class. When the age extraction was evaluated using the classifier's predictions, it achieved an F1-score of 0.855 (precision = 0.805, recall = 0.914) for the "age" class. When it was evaluated directly on the held-out test set, it achieved an F1-score of 0.931 (precision = 0.873, recall = 0.998) for the "age" class. We deployed ReportAGE on more than 1.2 billion tweets posted by 245,927 users, and predicted ages for 132,637 (54%) of them. Scaling the detection of exact age to this large number of users can advance the utility of social media data for research applications that do not align with the predefined age groupings of extant binary or multi-class classification approaches.
| categories: cs.CL | __index_level_0__: 224,264 |
| 2405.20045 | Iterative Learning Control of Fast, Nonlinear, Oscillatory Dynamics (Preprint) |
The sudden onset of deleterious and oscillatory dynamics (often called instabilities) is a known challenge in many fluid, plasma, and aerospace systems. These dynamics are difficult to address because they are nonlinear, chaotic, and are often too fast for active control schemes. In this work, we develop an alternative active controls system using an iterative, trajectory-optimization and parameter-tuning approach based on Iterative Learning Control (ILC), Time-Lagged Phase Portraits (TLPP) and Gaussian Process Regression (GPR). The novelty of this approach is that it can control a system's dynamics despite the controller being much slower than the dynamics. We demonstrate this controller on the Lorenz system of equations where it iteratively adjusts (tunes) the system's input parameters to successfully reproduce a desired oscillatory trajectory or state. Additionally, we investigate the system's dynamical sensitivity to its control parameters, identify continuous and bounded regions of desired dynamical trajectories, and demonstrate that the controller is robust to missing information and uncontrollable parameters as long as certain requirements are met. The controller presented in this work provides a framework for low-speed control for a variety of fast, nonlinear systems that may aid in instability suppression and mitigation.
| categories: cs.LG, cs.SY | __index_level_0__: 459,171 |
| 2310.00627 | Intelligent Client Selection for Federated Learning using Cellular Automata |
Federated Learning (FL) has emerged as a promising solution for privacy-enhancement and latency minimization in various real-world applications, such as transportation, communications, and healthcare. FL endeavors to bring Machine Learning (ML) down to the edge by harnessing data from millions of devices and IoT sensors, thus enabling rapid responses to dynamic environments and yielding highly personalized results. However, the increased number of sensors across diverse applications poses challenges in terms of communication and resource allocation, hindering the participation of all devices in the federated process and prompting the need for effective FL client selection. To address this issue, we propose Cellular Automaton-based Client Selection (CA-CS), a novel client selection algorithm, which leverages Cellular Automata (CA) as models to effectively capture spatio-temporal changes in a fast-evolving environment. CA-CS considers the computational resources and communication capacity of each participating client, while also accounting for inter-client interactions between neighbors during the client selection process, enabling intelligent client selection for online FL processes on data streams that closely resemble real-world scenarios. In this paper, we present a thorough evaluation of the proposed CA-CS algorithm using MNIST and CIFAR-10 datasets, while making a direct comparison against a uniformly random client selection scheme. Our results demonstrate that CA-CS achieves comparable accuracy to the random selection approach, while effectively avoiding high-latency clients.
| categories: cs.AI, cs.LG, Other | __index_level_0__: 396,067 |
| 2311.04925 | Investigating Deep-Learning NLP for Automating the Extraction of Oncology Efficacy Endpoints from Scientific Literature |
Benchmarking drug efficacy is a critical step in clinical trial design and planning. The challenge is that much of the data on efficacy endpoints is stored in scientific papers in free text form, so extraction of such data is currently a largely manual task. Our objective is to automate this task as much as possible. In this study we have developed and optimised a framework to extract efficacy endpoints from text in scientific papers, using a machine learning approach. Our machine learning model predicts 25 classes associated with efficacy endpoints and leads to high F1 scores (harmonic mean of precision and recall) of 96.4% on the test set, and 93.9% and 93.7% on two case studies. These methods were evaluated against - and showed strong agreement with - subject matter experts and show significant promise in the future of automating the extraction of clinical endpoints from free text. Clinical information extraction from text data is currently a laborious manual task which scales poorly and is prone to human error. Demonstrating the ability to extract efficacy endpoints automatically shows great promise for accelerating clinical trial design moving forwards.
| categories: cs.AI, cs.CL | __index_level_0__: 406,402 |
| 2401.05069 | MISS: Multiclass Interpretable Scoring Systems |
In this work, we present a novel, machine-learning approach for constructing Multiclass Interpretable Scoring Systems (MISS) - a fully data-driven methodology for generating single, sparse, and user-friendly scoring systems for multiclass classification problems. Scoring systems are commonly utilized as decision support models in healthcare, criminal justice, and other domains where interpretability of predictions and ease of use are crucial. Prior methods for data-driven scoring, such as SLIM (Supersparse Linear Integer Model), were limited to binary classification tasks and extensions to multiclass domains were primarily accomplished via one-versus-all-type techniques. The scores produced by our method can be easily transformed into class probabilities via the softmax function. We demonstrate techniques for dimensionality reduction and heuristics that enhance the training efficiency and decrease the optimality gap, a measure that can certify the optimality of the model. Our approach has been extensively evaluated on datasets from various domains, and the results indicate that it is competitive with other machine learning models in terms of classification performance metrics and provides well-calibrated class probabilities.
| categories: cs.LG | __index_level_0__: 420,637 |
| 2005.05961 | Private Two-Terminal Hypothesis Testing |
We study private two-terminal hypothesis testing with simple hypotheses where the privacy goal is to ensure that participating in the testing protocol reveals little additional information about the other user's observation when a user is told what the correct hypothesis is. We show that, in general, meaningful correctness and privacy cannot be achieved if the users do not have access to correlated (but, not common) randomness. We characterize the optimal correctness and privacy error exponents when the users have access to non-trivial correlated randomness (those that permit secure multiparty computation).
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 176,881
|
2104.10103
|
Space Partitioning and Regression Mode Seeking via a Mean-Shift-Inspired
Algorithm
|
The mean shift (MS) algorithm is a nonparametric method used to cluster sample points and find the local modes of kernel density estimates, using an idea based on iterative gradient ascent. In this paper we develop a mean-shift-inspired algorithm to estimate the modes of regression functions and partition the sample points in the input space. We prove convergence of the sequences generated by the algorithm and derive the non-asymptotic rates of convergence of the estimated local modes for the underlying regression model. We also demonstrate the utility of the algorithm for data-enabled discovery through an application to biomolecular structure data. An extension to the subspace constrained mean shift (SCMS) algorithm, used to extract ridges of regression functions, is briefly discussed.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 231,465
|
1712.00716
|
Convolutional Phase Retrieval via Gradient Descent
|
We study the convolutional phase retrieval problem, of recovering an unknown signal $\mathbf x \in \mathbb C^n $ from $m$ measurements consisting of the magnitude of its cyclic convolution with a given kernel $\mathbf a \in \mathbb C^m $. This model is motivated by applications such as channel estimation, optics, and underwater acoustic communication, where the signal of interest is acted on by a given channel/filter, and phase information is difficult or impossible to acquire. We show that when $\mathbf a$ is random and the number of observations $m$ is sufficiently large, with high probability $\mathbf x$ can be efficiently recovered up to a global phase shift using a combination of spectral initialization and generalized gradient descent. The main challenge is coping with dependencies in the measurement operator. We overcome this challenge by using ideas from decoupling theory, suprema of chaos processes and the restricted isometry property of random circulant matrices, and recent analysis of alternating minimization methods.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| true
| 85,969
|
2012.09276
|
Measuring Disentanglement: A Review of Metrics
|
Learning to disentangle and represent factors of variation in data is an important problem in AI. While many advances have been made to learn these representations, it is still unclear how to quantify disentanglement. While several metrics exist, little is known about their implicit assumptions, what they truly measure, and their limits. As a consequence, it is difficult to interpret results when comparing different representations. In this work, we survey supervised disentanglement metrics and thoroughly analyze them. We propose a new taxonomy in which all metrics fall into one of three families: intervention-based, predictor-based and information-based. We conduct extensive experiments in which we isolate properties of disentangled representations, allowing stratified comparison along several axes. From our experimental results and analysis, we provide insights on the relations between disentangled representation properties. Finally, we share guidelines on how to measure disentanglement.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 212,008
|
2207.10436
|
Mining Relations among Cross-Frame Affinities for Video Semantic
Segmentation
|
The essence of video semantic segmentation (VSS) is how to leverage temporal information for prediction. Previous efforts are mainly devoted to developing new techniques to calculate cross-frame affinities, such as optical flow and attention. Instead, this paper contributes from a different angle by mining relations among cross-frame affinities, upon which better temporal information aggregation can be achieved. We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations. Inspired by traditional feature processing, we propose Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA). To make it feasible to execute MAA, we propose a Selective Token Masking (STM) strategy to select a subset of consistent reference tokens for different scales when calculating affinities, which also improves the efficiency of our method. Finally, the cross-frame affinities strengthened by SAR and MAA are adopted for adaptively aggregating temporal information. Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods. The code is publicly available at https://github.com/GuoleiSun/VSS-MRCFA
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 309,266
|
2001.05060
|
Recognizing Video Events with Varying Rhythms
|
Recognizing video events in long, complex videos with multiple sub-activities has received persistent attention recently. This task is more challenging than traditional action recognition with short, relatively homogeneous video clips. In this paper, we investigate the problem of recognizing long and complex events with varying action rhythms, which has not been considered in the literature but is a practical challenge. Our work is inspired in part by how humans identify events with varying rhythms: quickly catching the frames that contribute most to a specific event. We propose a two-stage \emph{end-to-end} framework, in which the first stage selects the most significant frames while the second stage recognizes the event using the selected frames. Our model needs only \emph{event-level labels} in the training stage, and thus is more practical when the sub-activity labels are missing or difficult to obtain. The results of extensive experiments show that our model can achieve significant improvement in event recognition from long videos while maintaining high accuracy even if the test videos suffer from severe rhythm changes. This demonstrates the potential of our method for real-world video-based applications, where test and training videos can differ drastically in the rhythms of sub-activities.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 160,428
|
2405.18523
|
MM-Mixing: Multi-Modal Mixing Alignment for 3D Understanding
|
We introduce MM-Mixing, a multi-modal mixing alignment framework for 3D understanding. MM-Mixing applies mixing-based methods to multi-modal data, preserving and optimizing cross-modal connections while enhancing diversity and improving alignment across modalities. Our proposed two-stage training pipeline combines feature-level and input-level mixing to optimize the 3D encoder. The first stage employs feature-level mixing with contrastive learning to align 3D features with their corresponding modalities. The second stage incorporates both feature-level and input-level mixing, introducing mixed point cloud inputs to further refine 3D feature representations. MM-Mixing enhances intermodality relationships, promotes generalization, and ensures feature consistency while providing diverse and realistic training samples. We demonstrate that MM-Mixing significantly improves baseline performance across various learning scenarios, including zero-shot 3D classification, linear probing 3D classification, and cross-modal 3D shape retrieval. Notably, we improved the zero-shot classification accuracy on ScanObjectNN from 51.3% to 61.9%, and on Objaverse-LVIS from 46.8% to 51.4%. Our findings highlight the potential of multi-modal mixing-based alignment to significantly advance 3D object recognition and understanding while remaining straightforward to implement and integrate into existing frameworks.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 458,457
|
1410.2862
|
On the Oblivious Transfer Capacity of Generalized Erasure Channels
against Malicious Adversaries
|
Noisy channels are a powerful resource for cryptography as they can be used to obtain information-theoretically secure key agreement, commitment and oblivious transfer protocols, among others. Oblivious transfer (OT) is a fundamental primitive since it is complete for secure multi-party computation, and the OT capacity characterizes how efficiently a channel can be used for obtaining string oblivious transfer. Ahlswede and Csisz\'{a}r (\emph{ISIT'07}) presented upper and lower bounds on the OT capacity of generalized erasure channels (GEC) against passive adversaries. In the case of GEC with erasure probability at least 1/2, the upper and lower bounds match and therefore the OT capacity was determined. It was later proved by Pinto et al. (\emph{IEEE Trans. Inf. Theory 57(8)}) that in this case there is also a protocol against malicious adversaries achieving the same lower bound, and hence the OT capacity is identical for passive and malicious adversaries. In the case of GEC with erasure probability smaller than 1/2, the known lower bound against passive adversaries that was established by Ahlswede and Csisz\'{a}r does not match their upper bound and it was unknown whether this OT rate could be achieved against malicious adversaries as well. In this work we show that there is a protocol against malicious adversaries achieving the same OT rate that was obtained against passive adversaries. In order to obtain our results we introduce a novel use of interactive hashing that is suitable for dealing with the case of low erasure probability ($p^* <1/2$).
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| 36,656
|
2101.09606
|
Learning degraded image classification with restoration data fidelity
|
Learning-based methods, especially those using convolutional neural networks (CNNs), continue to show superior performance in computer vision applications, ranging from image classification to restoration. For image classification, most existing works focus on very clean images such as images in the Caltech-256 and ImageNet datasets. However, in most realistic scenarios, the acquired images may suffer from degradation. One important and interesting problem is to combine image classification and restoration tasks to improve the performance of CNN-based classification networks on degraded images. In this report, we explore the influence of degradation types and levels on four widely-used classification networks, and the use of a restoration network to eliminate the degradation's influence. We also propose a novel method leveraging a fidelity map to calibrate the image features obtained by pre-trained classification networks. We empirically demonstrate that our proposed method consistently outperforms the pre-trained networks under all degradation levels and types with additive white Gaussian noise (AWGN), and it even outperforms the re-trained networks for degraded images under low degradation levels. We also show that the proposed method is a model-agnostic approach that benefits different classification networks. Our results reveal that the proposed method is a promising solution to mitigate the effect caused by image degradation.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 216,652
|
1911.12615
|
Data Transmission based on Exact Inverse Periodic Nonlinear Fourier
Transform, Part II: Waveform Design and Experiment
|
The nonlinear Fourier transform has the potential to overcome limits on performance and achievable data rates which arise in modern optical fiber communication systems when nonlinear interference is treated as noise. The periodic nonlinear Fourier transform (PNFT) has been much less investigated compared to its counterpart based on vanishing boundary conditions. In this paper, we design a first experiment based on the PNFT in which information is encoded in the invariant nonlinear main spectrum. To this end, we propose a method to construct a set of periodic waveforms, each having the same fixed period, by employing the exact inverse PNFT algorithm developed in Part I. We demonstrate the feasibility of the transmission scheme in an experiment, in good agreement with simulations, and obtain a bit-error ratio of $10^{-3}$ over a distance of 2000 km. It is shown that the transmission reach is significantly longer than expected from a naive estimate based on group velocity dispersion and cyclic prefix length, which is explained by a dominating solitonic component in the transmitted waveform. Our constellation design can be generalized to an arbitrary number of nonlinear degrees of freedom.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 155,453
|
2309.08580
|
Automated Characterization and Monitoring of Material Shape using
Riemannian Geometry
|
Shape affects both the physical and chemical properties of a material. Characterizing the roughness, convexity, and general geometry of a material can yield information on its catalytic efficiency, solubility, elasticity, porosity, and overall effectiveness in the application of interest. However, material shape can be defined in a multitude of conflicting ways where different aspects of a material's geometry are emphasized over others, leading to bespoke measures of shape that are not easily generalizable. In this paper, we explore the use of Riemannian geometry in the analysis of shape and show that a Riemannian geometric framework for shape analysis is generalizable, computationally scalable, and can be directly integrated into common data analysis methods. In this framework, material shapes are abstracted as points on a Riemannian manifold. This information can be used to construct statistical moments (e.g., means, variances) and perform tasks such as dimensionality reduction and statistical process control. We provide a practical introduction to the mathematics of shape analysis through Riemannian geometry and illustrate its application on a manufactured/mined granular material dataset provided by Covia Corp. We show that the Riemannian framework can be used to automatically extract and quantify the shape of granular materials in a statistically rigorous manner.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 392,228
|
1808.00171
|
Shuffle-Then-Assemble: Learning Object-Agnostic Visual Relationship
Features
|
Due to the fact that it is prohibitively expensive to completely annotate visual relationships, i.e., the (obj1, rel, obj2) triplets, relationship models are inevitably biased to object classes of limited pairwise patterns, leading to poor generalization to rare or unseen object combinations. Therefore, we are interested in learning object-agnostic visual features for more generalizable relationship models. By "agnostic", we mean that the feature is less likely biased to the classes of paired objects. To alleviate the bias, we propose a novel \texttt{Shuffle-Then-Assemble} pre-training strategy. First, we discard all the triplet relationship annotations in an image, leaving two unpaired object domains without obj1-obj2 alignment. Then, our feature learning is to recover possible obj1-obj2 pairs. In particular, we design a cycle of residual transformations between the two domains, to capture shared but not object-specific visual patterns. Extensive experiments on two visual relationship benchmarks show that by using our pre-trained features, naive relationship models can be consistently improved and even outperform other state-of-the-art relationship models. Code has been made available at: \url{https://github.com/yangxuntu/vrd}.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 104,322
|
0809.1618
|
ECOLANG - Communications Language for Ecological Simulations Network
|
This document describes the communication language used in a multiagent system environment for ecological simulations, based on the EcoDynamo simulator application linked with several intelligent agents and visualisation applications, and extends the initial definition of the language. The agents' actions and perceptions are translated into messages exchanged with the simulator application and other agents. The concepts and definitions used follow the BNF notation (Backus et al. 1960) and are inspired by the Coach Unilang language (Reis and Lau 2002).
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| 2,322
|
1901.08763
|
Continuous Analog Channel Estimation Aided Beamforming for Massive MIMO
Systems
|
Analog beamforming greatly reduces the implementation cost of massive antenna transceivers by using only one up/down-conversion chain. However, it incurs a large pilot overhead when used with conventional channel estimation (CE) techniques. This is because these CE techniques involve digital processing, requiring the up/down-conversion chain to be time-multiplexed across the antenna dimensions. This paper introduces a novel CE technique, called continuous analog channel estimation (CACE), that avoids digital processing, enables analog beamforming at the receiver and additionally provides resilience against oscillator phase-noise. By avoiding time-multiplexing of up/down-conversion chains, the CE overhead is reduced significantly and furthermore becomes independent of the number of antenna elements. In CACE, a reference tone is transmitted continuously with the data signals, and the receiver uses the received reference signal as a matched filter for combining the data signals, albeit via analog processing. We propose a receiver architecture for CACE, analyze its performance in the presence of oscillator phase-noise, and derive near-optimal system parameters and power allocation. Transmit beamforming and initial access procedure with CACE are also discussed. Simulations confirm that, in comparison to conventional CE, CACE provides phase-noise resilience and a significant reduction in the CE overhead, while suffering only a small loss in signal-to-interference-plus-noise-ratio.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 119,575
|
1906.04563
|
Latent Channel Networks
|
Latent Euclidean embedding models a given network by representing each node in a Euclidean space, where the probability of two nodes sharing an edge is a function of the distance between the nodes. This implies that for two nodes to share an edge with high probability, they must be relatively close in all dimensions. This constraint may be overly restrictive for describing modern networks, in which having similarities in at least one area may be sufficient for having a high edge probability. We introduce a new model, which we call Latent Channel Networks, which allows for such features of a network. We present an EM algorithm for fitting the model, whose computational complexity is linear in the number of edges and the number of channels, and apply the algorithm to both synthetic and classic network datasets.
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 134,755
|
2305.00723
|
Predictions Based on Pixel Data: Insights from PDEs and Finite
Differences
|
As supported by abundant experimental evidence, neural networks are state-of-the-art for many approximation tasks in high-dimensional spaces. Still, there is a lack of a rigorous theoretical understanding of what they can approximate, at which cost, and at which accuracy. One network architecture of practical use, especially for approximation tasks involving images, is (residual) convolutional networks. However, due to the locality of the linear operators involved in these networks, their analysis is more complicated than that of fully connected neural networks. This paper deals with approximation of time sequences where each observation is a matrix. We show that with relatively small networks, we can represent exactly a class of numerical discretizations of PDEs based on the method of lines. We constructively derive these results by exploiting the connections between discrete convolution and finite difference operators. Our network architecture is inspired by those typically adopted in the approximation of time sequences. We support our theoretical results with numerical experiments simulating the linear advection, heat, and Fisher equations.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 361,438
|
1909.08929
|
Automobile Theft Detection by Clustering Owner Driver Data
|
As automobiles become intelligent, automobile theft methods are evolving intelligently, and automobile theft detection has therefore become a major research challenge. Data-mining, biometrics, and additional authentication methods have been proposed to address automobile theft in previous studies. Among these methods, data-mining can be used to analyze driving characteristics and identify a driver comprehensively. However, it requires a labeled driving dataset to achieve high accuracy, which is impractical for an actual automobile theft detection system because real theft driving data cannot be collected in advance. Hence, we propose a method to detect an automobile theft attempt using only the owner's driving data. We cluster the key features of the owner's driving data using the k-means algorithm. After reconstructing the driving data into one of these clusters, theft is detected using the error from the original driving data. To validate the proposed models, we tested them on actual driving data and obtained 99% accuracy with the best model. This result demonstrates that our proposed method can detect vehicle theft by using only the car owner's driving data.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 146,096
|
2306.01890
|
Mixed-type Distance Shrinkage and Selection for Clustering via Kernel
Metric Learning
|
Distance-based clustering and classification are widely used in various fields to group mixed numeric and categorical data. In many algorithms, a predefined distance measurement is used to cluster data points based on their dissimilarity. While there exist numerous distance-based measures for data with purely numerical attributes, and several ordered and unordered categorical metrics, an efficient and accurate distance for mixed-type data that utilizes the continuous and discrete properties simultaneously is an open problem. Many metrics convert numerical attributes to categorical ones or vice versa. They handle the data points as a single attribute type, or calculate a distance between each attribute separately and add them up. We propose a metric called KDSUM that uses mixed kernels to measure dissimilarity, with cross-validated optimal bandwidth selection. We demonstrate that KDSUM is a shrinkage method from existing mixed-type metrics to a uniform dissimilarity metric, and that it improves clustering accuracy when utilized in existing distance-based clustering algorithms on simulated and real-world datasets containing continuous-only, categorical-only, and mixed-type data.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 370,655
|
2405.04534
|
Tactile-Augmented Radiance Fields
|
We present a scene representation, which we call a tactile-augmented radiance field (TaRF), that brings vision and touch into a shared 3D space. This representation can be used to estimate the visual and tactile signals for a given 3D position within a scene. We capture a scene's TaRF from a collection of photos and sparsely sampled touch probes. Our approach makes use of two insights: (i) common vision-based touch sensors are built on ordinary cameras and thus can be registered to images using methods from multi-view geometry, and (ii) visually and structurally similar regions of a scene share the same tactile features. We use these insights to register touch signals to a captured visual scene, and to train a conditional diffusion model that, provided with an RGB-D image rendered from a neural radiance field, generates its corresponding tactile signal. To evaluate our approach, we collect a dataset of TaRFs. This dataset contains more touch samples than previous real-world datasets, and it provides spatially aligned visual signals for each captured touch signal. We demonstrate the accuracy of our cross-modal generative model and the utility of the captured visual-tactile data on several downstream tasks. Project page: https://dou-yiming.github.io/TaRF
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 452,596
|
2204.08339
|
Migrating Face Swap to Mobile Devices: A lightweight Framework and A
Supervised Training Solution
|
Existing face swap methods rely heavily on large-scale networks for adequate capacity to generate visually plausible results, which inhibits their application on resource-constrained platforms. In this work, we propose MobileFSGAN, a novel lightweight GAN for face swap that can run on mobile devices with far fewer parameters while achieving competitive performance. A lightweight encoder-decoder structure is designed especially for image synthesis tasks; it is only 10.2MB and can run on mobile devices at real-time speed. To tackle the instability of training such a small network, we construct the FSTriplets dataset utilizing facial attribute editing techniques. FSTriplets provides source-target-result training triplets, yielding pixel-level labels and thus, for the first time, making the training process supervised. We also design multi-scale gradient losses for efficient back-propagation, resulting in faster and better convergence. Experimental results show that our model reaches performance comparable to state-of-the-art methods, while significantly reducing the number of network parameters. Codes and the dataset have been released.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 292,050
|
2201.12354
|
Discovering Nonlinear PDEs from Scarce Data with Physics-encoded
Learning
|
There has been growing interest in leveraging experimental measurements to discover the underlying partial differential equations (PDEs) that govern complex physical phenomena. Although past research attempts have achieved great success in data-driven PDE discovery, the robustness of the existing methods cannot be guaranteed when dealing with low-quality measurement data. To overcome this challenge, we propose a novel physics-encoded discrete learning framework for discovering spatiotemporal PDEs from scarce and noisy data. The general idea is to (1) first introduce a novel deep convolutional-recurrent network, which can encode prior physics knowledge (e.g., known PDE terms, assumed PDE structure, initial/boundary conditions, etc.) while remaining flexible in representation capability, to accurately reconstruct high-fidelity data, and (2) perform sparse regression on the reconstructed data to identify the explicit form of the governing PDEs. We validate our method on three nonlinear PDE systems. The effectiveness and superiority of the proposed method over baseline models are demonstrated.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 277,613
|
2006.12433
|
What shapes feature representations? Exploring datasets, architectures,
and training
|
In naturalistic learning problems, a model's input contains a wide range of features, some useful for the task at hand, and others not. Of the useful features, which ones does the model use? Of the task-irrelevant features, which ones does the model represent? Answers to these questions are important for understanding the basis of models' decisions, as well as for building models that learn versatile, adaptable representations useful beyond the original training task. We study these questions using synthetic datasets in which the task-relevance of input features can be controlled directly. We find that when two features redundantly predict the labels, the model preferentially represents one, and its preference reflects what was most linearly decodable from the untrained model. Over training, task-relevant features are enhanced, and task-irrelevant features are partially suppressed. Interestingly, in some cases, an easier, weakly predictive feature can suppress a more strongly predictive, but more difficult one. Additionally, models trained to recognize both easy and hard features learn representations most similar to models that use only the easy feature. Further, easy features lead to more consistent representations across model runs than do hard features. Finally, models have greater representational similarity to an untrained model than to models trained on a different task. Our results highlight the complex processes that determine which features a model represents.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 183,588
|
1307.2991
|
Integrity Verification for Outsourcing Uncertain Frequent Itemset Mining
|
In recent years, due to the wide applications of uncertain data (e.g., noisy data), uncertain frequent itemset (UFI) mining over uncertain databases has attracted much attention; it differs from the corresponding deterministic problem in both its generalized definition and its resolution. As the most costly task in the association rule mining process, it has been shown that outsourcing this task to a service provider (e.g., a third-party cloud) brings several benefits to the data owner, such as cost relief and a lower commitment to storage and computational resources. However, the correctness and integrity of mining results can be corrupted if the service provider suffers random faults or is not honest (e.g., lazy, malicious, etc.). Therefore, in this paper, we focus on the integrity verification issue in the UFI mining problem during the outsourcing process, i.e., how the data owner verifies the mining results. Specifically, we explore and extend the existing work on deterministic FI outsourcing verification to the uncertain scenario. For this purpose, we extend the existing outsourced FI mining work to the uncertain setting w.r.t. the two popular UFI definition criteria and the approximate UFI mining methods. Specifically, we construct and improve the basic/enhanced verification schemes for each of these UFI definitions. After that, we further discuss the scenario of existing approximate UFI mining, where we can see that our technique provides good probabilistic guarantees about the correctness of the verification. Finally, we present comparisons and analysis of the schemes proposed in this paper.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 25,759
|
2007.00493
|
Optimisation of the PointPillars network for 3D object detection in
point clouds
|
In this paper we present our research on the optimisation of a deep neural network for 3D object detection in a point cloud. Techniques like quantisation and pruning available in the Brevitas and PyTorch tools were used. We performed the experiments for the PointPillars network, which offers a reasonable compromise between detection accuracy and calculation complexity. The aim of this work was to propose a variant of the network which we will ultimately implement in an FPGA device. This will allow for real-time LiDAR data processing with low energy consumption. The obtained results indicate that even a significant quantisation from 32-bit floating point to 2-bit integer in the main part of the algorithm results in only a 5%-9% decrease in detection accuracy, while allowing for an almost 16-fold reduction in the size of the model.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 185,134
|
2209.09393
|
Mitigating Representation Bias in Action Recognition: Algorithms and
Benchmarks
|
Deep learning models have achieved excellent recognition results on large-scale video benchmarks. However, they perform poorly when applied to videos with rare scenes or objects, primarily due to the bias of existing video datasets. We tackle this problem from two different angles: algorithm and dataset. From the perspective of algorithms, we propose Spatial-aware Multi-Aspect Debiasing (SMAD), which incorporates both explicit debiasing with multi-aspect adversarial training and implicit debiasing with the spatial actionness reweighting module, to learn a more generic representation invariant to non-action aspects. To neutralize the intrinsic dataset bias, we propose OmniDebias to selectively leverage web data for joint training, which can achieve higher performance with far less web data. To verify the effectiveness, we establish evaluation protocols and perform extensive experiments on both re-distributed splits of existing datasets and a new evaluation dataset focusing on actions with rare scenes. We also show that the debiased representation can generalize better when transferred to other datasets and tasks.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 318,485
|
2101.00203
|
B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic
Meta-Learning
|
There is a growing interest in the learning-to-learn paradigm, also known as meta-learning, where models infer on new tasks using only a few training examples. Recently, meta-learning based methods have been widely used in few-shot classification, regression, reinforcement learning, and domain adaptation. The model-agnostic meta-learning (MAML) algorithm is a well-known algorithm that obtains a model parameter initialization in the meta-training phase. In the meta-test phase, this initialization is rapidly adapted to new tasks using gradient descent. However, meta-learning models are prone to overfitting since there are insufficient training tasks, resulting in over-parameterized models with poor generalization performance on unseen tasks. In this paper, we propose a Bayesian neural network based MAML algorithm, which we refer to as the B-SMALL algorithm. The proposed framework incorporates a sparse variational loss term alongside the loss function of MAML, which uses a sparsifying approximated KL divergence as a regularizer. We demonstrate the performance of B-SMALL using classification and regression tasks, and highlight that training a sparsifying BNN using MAML indeed improves the parameter footprint of the model while performing on par with or even outperforming the MAML approach. We also illustrate the applicability of our approach in distributed sensor networks, where sparsity and meta-learning can be beneficial.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 214,006
|
2305.19879
|
RaSP: Relation-aware Semantic Prior for Weakly Supervised Incremental
Segmentation
|
Class-incremental semantic image segmentation assumes multiple model updates, each enriching the model to segment new categories. This is typically carried out by providing expensive pixel-level annotations to the training algorithm for all new objects, limiting the adoption of such methods in practical applications. Approaches that solely require image-level labels offer an attractive alternative, yet, such coarse annotations lack precise information about the location and boundary of the new objects. In this paper we argue that, since classes represent not just indices but semantic entities, the conceptual relationships between them can provide valuable information that should be leveraged. We propose a weakly supervised approach that exploits such semantic relations to transfer objectness prior from the previously learned classes into the new ones, complementing the supervisory signal from image-level labels. We validate our approach on a number of continual learning tasks, and show how even a simple pairwise interaction between classes can significantly improve the segmentation mask quality of both old and new classes. We show these conclusions still hold for longer and, hence, more realistic sequences of tasks and for a challenging few-shot scenario.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 369,720
|
2404.11310
|
Autonomous aerial perching and unperching using omnidirectional
tiltrotor and switching controller
|
Aerial unperching of multirotors has received little attention, as opposed to perching, which has been investigated to extend operation time. This study presents a new aerial robot capable of both perching and unperching autonomously on/from a ferromagnetic surface during flight, and a switching controller to avoid rotor saturation and mitigate overshoot during the transition between free-flight and perching. To enable stable perching and unperching maneuvers on/from a vertical surface, a lightweight ($\approx$ $1$ \si{kg}), fully actuated tiltrotor that can hover at a $90^\circ$ pitch angle is first developed. We design a perching/unperching module composed of a single servomotor and a magnet, which is then mounted on the tiltrotor. A switching controller including exclusive control modes for transitions between free-flight and perching is proposed. Lastly, we propose a simple yet effective strategy to ensure robust perching in the presence of measurement and control errors and to avoid collisions with the perching site immediately after unperching. We validate the proposed framework in experiments where the tiltrotor successfully performs perching and unperching on/from a vertical surface during flight. We further show the effectiveness of the proposed transition mode in the switching controller through ablation studies, in which large overshoot and even collision with the perching site occur without it. To the best of the authors' knowledge, this work presents the first autonomous aerial unperching framework using a fully actuated tiltrotor.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 447,456
|
1903.01182
|
Complement Objective Training
|
Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although being a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the groundtruth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 123,206
|
2204.12717
|
Dataset for Robust and Accurate Leading Vehicle Velocity Recognition
|
Recognition of the surrounding environment using a camera is an important technology in Advanced Driver-Assistance Systems and Autonomous Driving, and in recent years such recognition is often addressed with machine learning approaches such as deep learning. Machine learning requires datasets for learning and evaluation. To develop robust recognition technology for the real world, in addition to data from normal driving environments, data from environments that are difficult for cameras, such as rainy weather or nighttime, are essential. We have constructed a dataset with which one can benchmark the technology, targeting the velocity recognition of the leading vehicle. This task is an important one for Advanced Driver-Assistance Systems and Autonomous Driving. The dataset is available at https://signate.jp/competitions/657
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 293,581
|
2206.10256
|
Human-in-the-loop Speaker Adaptation for DNN-based Multi-speaker TTS
|
This paper proposes a human-in-the-loop speaker-adaptation method for multi-speaker text-to-speech. With a conventional speaker-adaptation method, a target speaker's embedding vector is extracted from his/her reference speech using a speaker encoder trained on a speaker-discriminative task. However, this method cannot obtain an embedding vector for the target speaker when the reference speech is unavailable. Our method is based on a human-in-the-loop optimization framework, which incorporates a user to explore the speaker-embedding space to find the target speaker's embedding. The proposed method uses a sequential line search algorithm that repeatedly asks a user to select a point on a line segment in the embedding space. To efficiently choose the best speech sample from multiple stimuli, we also developed a system in which a user can switch between multiple speakers' voices for each phoneme while looping an utterance. Experimental results indicate that the proposed method can achieve comparable performance to the conventional one in objective and subjective evaluations even if reference speech is not used as the input of a speaker encoder directly.
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 303,849
|
2105.11455
|
Managing HILP Consequences Using Dynamic Distribution System Asset
Assessment
|
In order to increase the resilience of distribution systems against high-impact low-probability (HILP) events, it is important to prioritize assets damaged by these events so that the lost loads, especially sensitive and important loads, can be recovered faster. For this reason, this paper discusses the prioritization of electricity supply lines for scheduling and prioritizing repair actions. To this end, the economic value of distribution system lines has been considered as a criterion representing the sensitivity of the network to hurricanes. The modeling is based on value, in which the load value, lifetime-based failure probability of the poles, fragility curve, duration of line repair by the maintenance team, and the topology factor have been considered. This is so that the significance of the demand side, the failure extent and accessibility of the lines, the importance of time, and the network configuration are considered. The results provide an order of line priority for fault resolution, in which the topology factor has a larger effect. This modeling has been tested on an IEEE 33-bus network.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 236,708
|
1907.07854
|
Understanding Video Content: Efficient Hero Detection and Recognition
for the Game "Honor of Kings"
|
In order to understand content and automatically extract labels for videos of the game "Honor of Kings", it is necessary to detect and recognize characters (called "heroes") together with their camps in the game video. In this paper, we propose an efficient two-stage algorithm to detect and recognize heroes in game videos. First, we detect all heroes in a video frame based on a blood bar template-matching method, and classify them according to their camps (self/friend/enemy). Then we recognize the name of each hero using one or more deep convolutional neural networks. Our method needs almost no work for labelling training and testing samples in the recognition stage. Experiments show its efficiency and accuracy in the task of hero detection and recognition in game videos.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 138,985
|
2404.01255
|
Gradient Methods for Scalable Multi-value Electricity Network Expansion
Planning
|
We consider multi-value expansion planning (MEP), a general bilevel optimization model in which a planner optimizes arbitrary functions of the dispatch outcome in the presence of a partially controllable, competitive electricity market. The MEP problem can be used to jointly plan various grid assets, such as transmission, generation, and battery storage capacities; examples include identifying grid investments that minimize emissions in the absence of a carbon tax, maximizing the profit of a portfolio of renewable investments and long-term energy contracts, or reducing price inequities between different grid stakeholders. The MEP problem, however, is in general nonconvex, making it difficult to solve exactly for large real-world systems. Therefore, we propose a fast stochastic implicit gradient-based heuristic method that scales well to large networks with many scenarios. We use a strong duality reformulation and the McCormick envelope to provide a lower bound on the performance of our algorithm via convex relaxation. We test the performance of our method on a large model of the U.S. Western Interconnect and demonstrate that it scales linearly with network size and number of scenarios and can be efficiently parallelized on large machines. We find that for medium-sized 16-hour cases, gradient descent on average finds a 5.3x lower objective value in 16.5x less time compared to a traditional reformulation-based approach solved with an interior point method. We conclude with a large example in which we jointly plan transmission, generation, and storage for a 768-hour case on a 100-node system, showing that emissions penalization leads to an additional 40.0% reduction in carbon intensity at an additional cost of $17.1/MWh.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 443,339
|
2307.02276
|
First-Explore, then Exploit: Meta-Learning to Solve Hard
Exploration-Exploitation Trade-Offs
|
Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. taking into account complex domain priors and adapting quickly based on previous exploration). Across episodes, RL agents struggle to perform even simple exploration strategies, for example systematic search that avoids exploring the same location multiple times. This poor exploration limits performance on challenging domains. Meta-RL is a potential solution, as unlike standard RL, meta-RL can learn to explore, and potentially learn highly complex strategies far beyond those of standard RL, strategies such as experimenting in early episodes to learn new skills, or conducting experiments to learn about the current environment. Traditional meta-RL focuses on the problem of learning to optimally balance exploration and exploitation to maximize the cumulative reward of the episode sequence (e.g., aiming to maximize the total wins in a tournament -- while also improving as a player). We identify a new challenge with state-of-the-art cumulative-reward meta-RL methods. When optimal behavior requires exploration that sacrifices immediate reward to enable higher subsequent reward, existing state-of-the-art cumulative-reward meta-RL methods become stuck on the local optimum of failing to explore. Our method, First-Explore, overcomes this limitation by learning two policies: one to solely explore, and one to solely exploit. When exploring requires forgoing early-episode reward, First-Explore significantly outperforms existing cumulative meta-RL methods. By identifying and solving the previously unrecognized problem of forgoing reward in early episodes, First-Explore represents a significant step towards developing meta-RL algorithms capable of human-like exploration on a broader range of domains.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 377,645
|
1911.05020
|
Generative adversarial networks (GAN) based efficient sampling of
chemical space for inverse design of inorganic materials
|
A major challenge in materials design is how to efficiently search the vast chemical design space to find the materials with desired properties. One effective strategy is to develop sampling algorithms that can exploit both explicit chemical knowledge and implicit composition rules embodied in the large materials database. Here, we propose a generative machine learning model (MatGAN) based on a generative adversarial network (GAN) for efficient generation of new hypothetical inorganic materials. Trained with materials from the ICSD database, our GAN model can generate hypothetical materials not existing in the training dataset, reaching a novelty of 92.53% when generating 2 million samples. The percentage of chemically valid (charge neutral and electronegativity balanced) samples out of all generated ones reaches 84.5% by our GAN when trained with materials from ICSD even though no such chemical rules are explicitly enforced in our GAN model, indicating its capability to learn implicit chemical composition rules. Our algorithm could be used to speed up inverse design or computational screening of inorganic materials.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 153,145
|
2501.05749
|
Bridging Dialects: Translating Standard Bangla to Regional Variants
Using Neural Models
|
The Bangla language includes many regional dialects, adding to its cultural richness. The translation of standard Bangla into regional dialects presents a challenge due to significant variations in vocabulary, pronunciation, and sentence structure across regions like Chittagong, Sylhet, Barishal, Noakhali, and Mymensingh. These dialects, though vital to local identities, lack representation in technological applications. This study addresses this gap by translating standard Bangla into these dialects using neural machine translation (NMT) models, including BanglaT5, mT5, and mBART50. The work is motivated by the need to preserve linguistic diversity and improve communication among dialect speakers. The models were fine-tuned using the "Vashantor" dataset, containing 32,500 sentences across various dialects, and evaluated through Character Error Rate (CER) and Word Error Rate (WER) metrics. BanglaT5 demonstrated superior performance with a CER of 12.3% and a WER of 15.7%, highlighting its effectiveness in capturing dialectal nuances. The outcomes of this research contribute to the development of inclusive language technologies that support regional dialects and promote linguistic diversity.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 523,708
|
1410.2045
|
Supervised learning Methods for Bangla Web Document Categorization
|
This paper explores the use of machine learning approaches, or more specifically, four supervised learning methods, namely Decision Tree (C4.5), K-Nearest Neighbour (KNN), Na\"ive Bayes (NB), and Support Vector Machine (SVM), for the categorization of Bangla web documents. This is the task of automatically sorting a set of documents into categories from a predefined set. Whereas a wide range of methods have been applied to English text categorization, relatively few studies have been conducted on Bangla language text categorization. Hence, we attempt to analyze the efficiency of these four methods for the categorization of Bangla documents. For validation, a Bangla corpus from various websites has been developed and used as examples for the experiment. For Bangla, empirical results support that all four methods produce satisfactory performance, with SVM attaining good results in terms of high-dimensional and relatively noisy document feature vectors.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 36,585
|
1608.00218
|
Hyperparameter Transfer Learning through Surrogate Alignment for
Efficient Deep Neural Network Training
|
Recently, several optimization methods have been successfully applied to the hyperparameter optimization of deep neural networks (DNNs). The methods work by modeling the joint distribution of hyperparameter values and corresponding error. Those methods become less practical when applied to modern DNNs whose training may take a few days and thus one cannot collect sufficient observations to accurately model the distribution. To address this challenging issue, we propose a method that learns to transfer optimal hyperparameter values for a small source dataset to hyperparameter values with comparable performance on a dataset of interest. As opposed to existing transfer learning methods, our proposed method does not use hand-designed features. Instead, it uses surrogates to model the hyperparameter-error distributions of the two datasets and trains a neural network to learn the transfer function. Extensive experiments on three CV benchmark datasets clearly demonstrate the efficiency of our method.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| 59,245
|
2311.14823
|
Revisiting Quantum Algorithms for Linear Regressions: Quadratic Speedups
without Data-Dependent Parameters
|
Linear regression is one of the most fundamental linear algebra problems. Given a dense matrix $A \in \mathbb{R}^{n \times d}$ and a vector $b$, the goal is to find $x'$ such that $ \| Ax' - b \|_2^2 \leq (1+\epsilon) \min_{x} \| A x - b \|_2^2 $. The best classical algorithm takes $O(nd) + \mathrm{poly}(d/\epsilon)$ time [Clarkson and Woodruff STOC 2013, Nelson and Nguyen FOCS 2013]. On the other hand, quantum linear regression algorithms can achieve exponential quantum speedups, as shown in [Wang Phys. Rev. A 96, 012335, Kerenidis and Prakash ITCS 2017, Chakraborty, Gily{\'e}n and Jeffery ICALP 2019]. However, the running times of these algorithms depend on some quantum linear algebra-related parameters, such as $\kappa(A)$, the condition number of $A$. In this work, we develop a quantum algorithm that runs in $\widetilde{O}(\epsilon^{-1}\sqrt{n}d^{1.5}) + \mathrm{poly}(d/\epsilon)$ time. It provides a quadratic quantum speedup in $n$ over the classical lower bound without any dependence on data-dependent parameters. In addition, we also show our result can be generalized to multiple regression and ridge linear regression.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 410,267
|
2306.15731
|
Stochastic Gradient Bayesian Optimal Experimental Designs for
Simulation-based Inference
|
Simulation-based inference (SBI) methods tackle complex scientific models with challenging inverse problems. However, SBI models often face a significant hurdle due to their non-differentiable nature, which hampers the use of gradient-based optimization techniques. Bayesian Optimal Experimental Design (BOED) is a powerful approach that aims to make the most efficient use of experimental resources for improved inferences. While stochastic gradient BOED methods have shown promising results in high-dimensional design problems, they have mostly neglected the integration of BOED with SBI due to the difficult non-differentiable property of many SBI simulators. In this work, we establish a crucial connection between ratio-based SBI inference algorithms and stochastic gradient-based variational inference by leveraging mutual information bounds. This connection allows us to extend BOED to SBI applications, enabling the simultaneous optimization of experimental designs and amortized inference functions. We demonstrate our approach on a simple linear model and offer implementation details for practitioners.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 376,127
|
2103.14580
|
Correcting Automated and Manual Speech Transcription Errors using Warped
Language Models
|
Masked language models have revolutionized natural language processing systems in the past few years. A recently introduced generalization of masked language models called warped language models are trained to be more robust to the types of errors that appear in automatic or manual transcriptions of spoken language by exposing the language model to the same types of errors during training. In this work we propose a novel approach that takes advantage of the robustness of warped language models to transcription noise for correcting transcriptions of spoken language. We show that our proposed approach is able to achieve up to 10% reduction in word error rates of both automatic and manual transcriptions of spoken language.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 226,901
|
1912.00953
|
LOGAN: Latent Optimisation for Generative Adversarial Networks
|
Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we improve CS-GAN with natural gradient-based latent optimisation and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet ($128 \times 128$) dataset. Our model achieves an Inception Score (IS) of $148$ and an Fr\'echet Inception Distance (FID) of $3.4$, an improvement of $17\%$ and $32\%$ in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 155,940
|
2410.17532
|
Responsible Multilingual Large Language Models: A Survey of Development,
Applications, and Societal Impact
|
Multilingual Large Language Models (MLLMs) represent a pivotal advancement in democratizing artificial intelligence across linguistic boundaries. While theoretical foundations are well-established, practical implementation guidelines remain scattered. This work bridges this gap by providing a comprehensive end-to-end framework for developing and deploying MLLMs in production environments. We make three distinctive contributions: First, we present an actionable pipeline from data pre-processing through deployment, integrating insights from academic research and industrial applications. Second, using Llama2 as a case study, we provide detailed optimization strategies for enhancing multilingual capabilities, including curriculum learning approaches for balancing high-resource and low-resource languages, tokenization strategies, and effective sampling methods. Third, we offer an interdisciplinary analysis that considers technical, linguistic, and cultural perspectives in MLLM development. Our findings reveal critical challenges in supporting linguistic diversity, with 88.38% of world languages categorized as low-resource, affecting over a billion speakers. We examine practical solutions through real-world applications in customer service, search engines, and machine translation. By synthesizing theoretical frameworks with production-ready implementation strategies, this survey provides essential guidance for practitioners and researchers working to develop more inclusive and effective multilingual AI systems.
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 501,508
|
2208.03431
|
IVT: An End-to-End Instance-guided Video Transformer for 3D Pose
Estimation
|
Video 3D human pose estimation aims to localize the 3D coordinates of human joints from videos. Recent transformer-based approaches focus on capturing the spatiotemporal information from sequential 2D poses, which cannot model the contextual depth feature effectively since the visual depth features are lost in the step of 2D pose estimation. In this paper, we simplify the paradigm into an end-to-end framework, Instance-guided Video Transformer (IVT), which enables learning spatiotemporal contextual depth information from visual features effectively and predicts 3D poses directly from video frames. In particular, we firstly formulate video frames as a series of instance-guided tokens and each token is in charge of predicting the 3D pose of a human instance. These tokens contain body structure information since they are extracted by the guidance of joint offsets from the human center to the corresponding body joints. Then, these tokens are sent into IVT for learning spatiotemporal contextual depth. In addition, we propose a cross-scale instance-guided attention mechanism to handle the variational scales among multiple persons. Finally, the 3D poses of each person are decoded from instance-guided tokens by coordinate regression. Experiments on three widely-used 3D pose estimation benchmarks show that the proposed IVT achieves state-of-the-art performances.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 311,781
|
1209.2139
|
Fused Multiple Graphical Lasso
|
In this paper, we consider the problem of estimating multiple graphical models simultaneously using the fused lasso penalty, which encourages adjacent graphs to share similar structures. A motivating example is the analysis of brain networks of Alzheimer's disease using neuroimaging data. Specifically, we may wish to estimate a brain network for the normal controls (NC), a brain network for the patients with mild cognitive impairment (MCI), and a brain network for Alzheimer's patients (AD). We expect the two brain networks for NC and MCI to share common structures but not to be identical to each other; similarly for the two brain networks for MCI and AD. The proposed formulation can be solved using a second-order method. Our key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which decomposes the large graphs into small subgraphs and allows an efficient estimation of multiple independent (small) subgraphs, dramatically reducing the computational cost. We perform experiments on both synthetic and real data; our results demonstrate the effectiveness and efficiency of the proposed approach.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 18,489
|
2106.02943
|
Learning Routines for Effective Off-Policy Reinforcement Learning
|
The performance of reinforcement learning depends upon designing an appropriate action space, where the effect of each action is measurable, yet, granular enough to permit flexible behavior. So far, this process involved non-trivial user choices in terms of the available actions and their execution frequency. We propose a novel framework for reinforcement learning that effectively lifts such constraints. Within our framework, agents learn effective behavior over a routine space: a new, higher-level action space, where each routine represents a set of 'equivalent' sequences of granular actions with arbitrary length. Our routine space is learned end-to-end to facilitate the accomplishment of underlying off-policy reinforcement learning objectives. We apply our framework to two state-of-the-art off-policy algorithms and show that the resulting agents obtain relevant performance improvements while requiring fewer interactions with the environment per episode, improving computational efficiency.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 239,098
|
1403.5199
|
Obtaining Information about Queries behind Views and Dependencies
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| true
| 31,708
|
1806.00974
|
ALMN: Deep Embedding Learning with Geometrical Virtual Point Generating
|
Deep embedding learning becomes more attractive for discriminative feature learning, but many methods still require hard-class mining, which is computationally complex and performance-sensitive. To this end, we propose Adaptive Large Margin N-Pair loss (ALMN) to address the aforementioned issues. Instead of exploring hard example-mining strategy, we introduce the concept of large margin constraint. This constraint aims at encouraging local-adaptive large angular decision margin among dissimilar samples in multimodal feature space so as to significantly encourage intraclass compactness and interclass separability. And it is mainly achieved by a simple yet novel geometrical Virtual Point Generating (VPG) method, which converts artificially setting a fixed margin into automatically generating a boundary training sample in feature space and is an open question. We demonstrate the effectiveness of our method on several popular datasets for image retrieval and clustering tasks.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 99,454
|
2405.15903
|
UnitNorm: Rethinking Normalization for Transformers in Time Series
|
Normalization techniques are crucial for enhancing Transformer models' performance and stability in time series analysis tasks, yet traditional methods like batch and layer normalization often lead to issues such as token shift, attention shift, and sparse attention. We propose UnitNorm, a novel approach that scales input vectors by their norms and modulates attention patterns, effectively circumventing these challenges. Grounded in existing normalization frameworks, UnitNorm's effectiveness is demonstrated across diverse time series analysis tasks, including forecasting, classification, and anomaly detection, via a rigorous evaluation on 6 state-of-the-art models and 10 datasets. Notably, UnitNorm shows superior performance, especially in scenarios requiring robust attention mechanisms and contextual comprehension, evidenced by significant improvements by up to a 1.46 decrease in MSE for forecasting, and a 4.89% increase in accuracy for classification. This work not only calls for a reevaluation of normalization strategies in time series Transformers but also sets a new direction for enhancing model performance and stability. The source code is available at https://anonymous.4open.science/r/UnitNorm-5B84.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 457,153
|
1802.02379
|
Dynamic Sampling from a Discrete Probability Distribution with a Known
Distribution of Rates
|
In this paper, we consider several efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes N is large. We consider three basic data structures, the Acceptance-Rejection method, the Complete Binary Tree and the Alias method. These can be used as building blocks in a multi-level data structure, where at each of the levels, one of the basic data structures can be used, with the top level selecting a group of events, and the bottom level selecting an element from a group. Depending on assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rate distribution follows certain conditions. We show that for any distribution, by combining a tree structure with the Acceptance-Rejection method, an expected time of sampling and update of $O\left(\log\log{r_{max}}/{r_{min}}\right)$ is possible, where $r_{max}$ is the maximum rate and $r_{min}$ the minimum rate. We also discuss an implementation of a Two Levels Acceptance-Rejection data structure that allows expected constant time for sampling, and amortized constant time for updates, assuming that $r_{max}$ and $r_{min}$ are known and the number of events is sufficiently large. We also present an experimental verification, highlighting the limits given by the constraints of a real-life setting.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 89,764
|
2105.11233
|
Gradient descent in materia through homodyne gradient extraction
|
Deep learning, a multi-layered neural network approach inspired by the brain, has revolutionized machine learning. One of its key enablers has been backpropagation, an algorithm that computes the gradient of a loss function with respect to the weights and biases in the neural network model, in combination with its use in gradient descent. However, the implementation of deep learning in digital computers is intrinsically energy hungry, with energy consumption becoming prohibitively high for many applications. This has stimulated the development of specialized hardware, ranging from neuromorphic CMOS integrated circuits and integrated photonic tensor cores to unconventional, material-based computing systems. The learning process in these material systems, realized, e.g., by artificial evolution, equilibrium propagation or surrogate modelling, is a complicated and time-consuming process. Here, we demonstrate a simple yet efficient and accurate gradient extraction method, based on the principle of homodyne detection, for performing gradient descent on a loss function directly in a physical system without the need for an analytical description. By perturbing the parameters that need to be optimized using sinusoidal waveforms with distinct frequencies, we effectively obtain the gradient information in a highly robust and scalable manner. We illustrate the method in dopant network processing units, but argue that it is applicable in a wide range of physical systems. Homodyne gradient extraction can in principle be fully implemented in materia, facilitating the development of autonomously learning material systems.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| true
| 236,634
|
1810.01406
|
Super-Resolution via Conditional Implicit Maximum Likelihood Estimation
|
Single-image super-resolution (SISR) is a canonical problem with diverse applications. Leading methods like SRGAN produce images that contain various artifacts, such as high-frequency noise, hallucinated colours and shape distortions, which adversely affect the realism of the result. In this paper, we propose an alternative approach based on an extension of the method of Implicit Maximum Likelihood Estimation (IMLE). We demonstrate greater effectiveness at noise reduction and preservation of the original colours and shapes, yielding more realistic super-resolved images.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| true
| 109,390
|
2212.03232
|
Learning the joint distribution of two sequences using little or no
paired data
|
We present a noisy channel generative model of two sequences, for example text and speech, which enables uncovering the association between the two modalities when limited paired data is available. To address the intractability of the exact model under a realistic data setup, we propose a variational inference approximation. To train this variational model with categorical data, we propose a KL encoder loss approach which has connections to the wake-sleep algorithm. Identifying the joint or conditional distributions by only observing unpaired samples from the marginals is only possible under certain conditions in the data distribution, and we discuss under what types of conditional independence assumptions this might be achieved, which guides the architecture design. Experimental results show that even a tiny amount of paired data (5 minutes) is sufficient to learn to relate the two modalities (graphemes and phonemes here) when a massive amount of unpaired data is available, paving the way to adopting this principled approach for all seq2seq models in low-resource data regimes.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 335,038
|
1806.05947
|
Discovering User Groups for Natural Language Generation
|
We present a model which predicts how individual users of a dialog system understand and produce utterances based on user groups. In contrast to previous work, these user groups are not specified beforehand, but learned in training. We evaluate on two referring expression (RE) generation tasks; our experiments show that our model can identify user groups and learn how to most effectively talk to them, and can dynamically assign unseen users to the correct groups as they interact with the system.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 100,590
|
2206.08477
|
Backdoor Attacks on Vision Transformers
|
Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs. Their design is based on a self-attention mechanism that processes images as a sequence of patches, which is quite different compared to CNNs. Hence it is interesting to study if ViTs are vulnerable to backdoor attacks. Backdoor attacks happen when an attacker poisons a small part of the training data for malicious purposes. The model performance is good on clean test images, but the attacker can manipulate the decision of the model by showing the trigger at test time. To the best of our knowledge, we are the first to show that ViTs are vulnerable to backdoor attacks. We also find an intriguing difference between ViTs and CNNs - interpretation algorithms effectively highlight the trigger on test images for ViTs but not for CNNs. Based on this observation, we propose a test-time image blocking defense for ViTs which reduces the attack success rate by a large margin. Code is available here: https://github.com/UCDvision/backdoor_transformer.git
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| true
| false
| false
| false
| false
| false
| 303,155
|
2008.12680
|
Bayesian Neural Networks for Uncertainty Estimation of Imaging
Biomarkers
|
Image segmentation makes it possible to extract quantitative measures from scans that can serve as imaging biomarkers for diseases. However, segmentation quality can vary substantially across scans, and can therefore yield unfaithful estimates in the follow-up statistical analysis of biomarkers. The core problem is that segmentation and biomarker analysis are performed independently. We propose to propagate segmentation uncertainty to the statistical analysis to account for variations in segmentation confidence. To this end, we evaluate four Bayesian neural networks to sample from the posterior distribution and estimate the uncertainty. We then assign confidence measures to the biomarker and propose statistical models for its integration in group analysis and disease classification. Our results for segmenting the liver in patients with diabetes mellitus clearly demonstrate the benefit of integrating biomarker uncertainty in the statistical inference.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 193,648
|
2412.09483
|
Early Detection of At-Risk Students Using Machine Learning
|
This research presents preliminary work to address the challenge of identifying at-risk students using supervised machine learning and three unique data categories: engagement, demographics, and performance data collected from Fall 2023 using Canvas and the California State University, Fullerton dashboard. We aim to tackle the persistent challenges of higher education retention and student dropout rates by screening for at-risk students and building a high-risk identification system. By focusing on previously overlooked behavioral factors alongside traditional metrics, this work aims to address educational gaps, enhance student outcomes, and significantly boost student success across disciplines at the University. Pre-processing steps take place to establish a target variable, anonymize student information, manage missing data, and identify the most significant features. Given the mixed data types in the datasets and the binary classification nature of this study, this work considers several machine learning models, including Support Vector Machines (SVM), Naive Bayes, K-nearest neighbors (KNN), Decision Trees, Logistic Regression, and Random Forest. These models predict at-risk students and identify critical periods of the semester when student performance is most vulnerable. We will use validation techniques such as train test split and k-fold cross-validation to ensure the reliability of the models. Our analysis indicates that all algorithms generate an acceptable outcome for at-risk student predictions, while Naive Bayes performs best overall.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 516,502
|
2202.04185
|
OSM-tree: A Sortedness-Aware Index
|
Indexes facilitate efficient querying when the selection predicate is on an indexed key. As a result, when loading data, if we anticipate future selective (point or range) queries, we typically maintain an index that is gradually populated as new data is ingested. In that respect, indexing can be perceived as the process of adding structure to an incoming, otherwise unsorted, data collection. The process of adding structure comes at a cost, as instead of simply appending incoming data, every new entry is inserted into the index. If the data ingestion order matches the indexed attribute order, the ingestion cost is entirely redundant and can be avoided (e.g., via bulk loading in a B+-tree). However, state-of-the-art index designs do not benefit when data is ingested in an order that is close to being sorted but not fully sorted. In this paper, we study how indexes can benefit from partial data sortedness or near-sortedness, and we propose an ensemble of techniques that combine bulk loading, index appends, variable node fill/split factor, and buffering, to optimize the ingestion cost of a tree index in presence of partial data sortedness. We further augment the proposed design with necessary metadata structures to ensure competitive read performance. We apply the proposed design paradigm on a state-of-the-art B+-tree, and we propose the Ordered Sort-Merge tree (OSM-tree). OSM-tree outperforms the state of the art by up to 8.8x in ingestion performance in the presence of sortedness, while falling back to a B+-tree's ingestion performance when data is scrambled. OSM-tree offers competitive query performance, leading to performance benefits between 28% and 5x for mixed read/write workloads.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| true
| 279,480
|
2404.17528
|
Geometry-aware Reconstruction and Fusion-refined Rendering for
Generalizable Neural Radiance Fields
|
Generalizable NeRF aims to synthesize novel views for unseen scenes. Common practices involve constructing variance-based cost volumes for geometry reconstruction and encoding 3D descriptors for decoding novel views. However, existing methods show limited generalization ability in challenging conditions due to inaccurate geometry, sub-optimal descriptors, and decoding strategies. We address these issues point by point. First, we find the variance-based cost volume exhibits failure patterns as the features of pixels corresponding to the same point can be inconsistent across different views due to occlusions or reflections. We introduce an Adaptive Cost Aggregation (ACA) approach to amplify the contribution of consistent pixel pairs and suppress inconsistent ones. Unlike previous methods that solely fuse 2D features into descriptors, our approach introduces a Spatial-View Aggregator (SVA) to incorporate 3D context into descriptors through spatial and inter-view interaction. When decoding the descriptors, we observe the two existing decoding strategies excel in different areas, which are complementary. A Consistency-Aware Fusion (CAF) strategy is proposed to leverage the advantages of both. We incorporate the above ACA, SVA, and CAF into a coarse-to-fine framework, termed Geometry-aware Reconstruction and Fusion-refined Rendering (GeFu). GeFu attains state-of-the-art performance across multiple datasets. Code is available at https://github.com/TQTQliu/GeFu .
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 449,890
|
2302.05094
|
General, Single-shot, Target-less, and Automatic LiDAR-Camera Extrinsic
Calibration Toolbox
|
This paper presents an open source LiDAR-camera calibration toolbox that is general to LiDAR and camera projection models, requires only one pairing of LiDAR and camera data without a calibration target, and is fully automatic. For automatic initial guess estimation, we employ the SuperGlue image matching pipeline to find 2D-3D correspondences between LiDAR and camera data and estimate the LiDAR-camera transformation via RANSAC. Given the initial guess, we refine the transformation estimate with direct LiDAR-camera registration based on the normalized information distance, a mutual information-based cross-modal distance metric. For a handy calibration process, we also present several assistance capabilities (e.g., dynamic LiDAR data integration and user interface for making 2D-3D correspondence manually). The experimental results show that the proposed toolbox enables calibration of any combination of spinning and non-repetitive scan LiDARs and pinhole and omnidirectional cameras, and shows better calibration accuracy and robustness than those of the state-of-the-art edge-alignment-based calibration method.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 344,930
|