id: string (length 9–16)
title: string (length 4–278)
abstract: string (length 3–4.08k)
cs.HC: bool (2 classes)
cs.CE: bool (2 classes)
cs.SD: bool (2 classes)
cs.SI: bool (2 classes)
cs.AI: bool (2 classes)
cs.IR: bool (2 classes)
cs.LG: bool (2 classes)
cs.RO: bool (2 classes)
cs.CL: bool (2 classes)
cs.IT: bool (2 classes)
cs.SY: bool (2 classes)
cs.CV: bool (2 classes)
cs.CR: bool (2 classes)
cs.CY: bool (2 classes)
cs.MA: bool (2 classes)
cs.NE: bool (2 classes)
cs.DB: bool (2 classes)
Other: bool (2 classes)
__index_level_0__: int64 (0–541k)
2012.00336
Comparison of security margin estimation methods under various load configurations
The post-contingency loadability limit (PCLL) and the secure operating limit (SOL) are the two main approaches used in computing the security margins of an electric power system. While the SOL is significantly more computationally demanding than the PCLL, it can account for the dynamic response after a disturbance and generally provides a better measure of the security margin. In this study, the difference between these two methods is analyzed for a range of different contingency and load model scenarios. A methodology that allows a fair comparison between the two security margins is developed and tested on a modified version of the Nordic32 test system. The study shows that the SOL can differ significantly from the PCLL, especially when the system has a high penetration of loads with constant power characteristics, or a large share of induction motor loads with fast load restoration. The difference between the methods is also tested for different contingencies, where longer fault clearing times are shown to significantly increase the difference between the two margins.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
209,103
2102.03400
Confidence-Budget Matching for Sequential Budgeted Learning
A core element in decision-making under uncertainty is the feedback on the quality of the performed actions. However, in many applications, such feedback is restricted. For example, in recommendation systems, repeatedly asking the user to provide feedback on the quality of recommendations will annoy them. In this work, we formalize decision-making problems with querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. Specifically, we consider multi-armed bandits, linear bandits, and reinforcement learning problems. We start by analyzing the performance of `greedy' algorithms that query a reward whenever they can. We show that in fully stochastic settings, doing so performs surprisingly well, but in the presence of any adversity, this might lead to linear regret. To overcome this issue, we propose the Confidence-Budget Matching (CBM) principle that queries rewards when the confidence intervals are wider than the inverse square root of the available budget. We analyze the performance of CBM based algorithms in different settings and show that they perform well in the presence of adversity in the contexts, initial states, and budgets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
218,731
2308.08157
Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis
Existing text-to-image generation approaches have set high standards for photorealism and text-image correspondence, largely benefiting from web-scale text-image datasets, which can include up to 5 billion pairs. However, text-to-image generation models trained on domain-specific datasets, such as urban scenes, medical images, and faces, still suffer from low text-image correspondence due to the lack of text-image pairs. Additionally, collecting billions of text-image pairs for a specific domain can be time-consuming and costly. Thus, ensuring high text-image correspondence without relying on web-scale text-image datasets remains a challenging task. In this paper, we present a novel approach for enhancing text-image correspondence by leveraging available semantic layouts. Specifically, we propose a Gaussian-categorical diffusion process that simultaneously generates both images and corresponding layout pairs. Our experiments reveal that we can guide text-to-image generation models to be aware of the semantics of different image regions, by training the model to generate semantic labels for each pixel. We demonstrate that our approach achieves higher text-image correspondence compared to existing text-to-image generation approaches on the Multi-Modal CelebA-HQ and Cityscapes datasets, where text-image pairs are scarce. Code is available at https://pmh9960.github.io/research/GCDP
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,787
2210.08632
Psychophysical-Score: A Behavioral Measure for Assessing the Biological Plausibility of Visual Recognition Models
For the last decade, convolutional neural networks (CNNs) have vastly superseded their predecessors in nearly all vision tasks in artificial intelligence, including object recognition. However, despite abundant advancements, they continue to pale in comparison to biological vision. This chasm has prompted the development of biologically-inspired models that have attempted to mimic the human visual system, primarily at a neural level, evaluated using standard dataset benchmarks. However, more work is needed to understand how these models perceive the visual world. This article proposes a state-of-the-art procedure that generates a new metric, Psychophysical-Score, which is grounded in visual psychophysics and is capable of reliably estimating perceptual responses across numerous models -- representing a large range in complexity and biological inspiration. We perform the procedure on twelve models that vary in degree of biological inspiration and complexity, and compare the results against the aggregated results of 2,390 Amazon Mechanical Turk workers who together provided ~2.7 million perceptual responses. Each model's Psychophysical-Score is compared against the state-of-the-art neural activity-based metric, Brain-Score. Our study indicates that models with a high correlation to human perceptual behavior also have a high correlation with the corresponding neural activity.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,224
2405.16763
Transport of Algebraic Structure to Latent Embeddings
Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space. For example, in the field of 3D modeling, subsets of Euclidean space can be embedded as vectors using implicit neural representations. Such subsets also have a natural algebraic structure including operations (e.g., union) and corresponding laws (e.g., associativity). How can we learn to "union" two sets using only their latent embeddings while respecting associativity? We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space. This is achieved by learning a bijection from the latent space to a carefully designed mirrored algebra which is constructed on Euclidean space in accordance with desired laws. We evaluate these structural transport nets for a range of mirrored algebras against baselines that operate directly on the latent space. Our experiments provide strong evidence that respecting the underlying algebraic structure of the input space is key for learning accurate and self-consistent operations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
457,591
1909.06937
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works. However, most existing models fail to fully utilize co-occurrence relations between slots and intents, which restricts their potential performance. To address this issue, in this paper we propose a novel Collaborative Memory Network (CM-Net) based on a well-designed block named the CM-block. The CM-block first captures slot-specific and intent-specific features from memories in a collaborative manner, and then uses these enriched features to enhance local context representations, based on which the sequential information flow leads to more specific (slot and intent) global utterance representations. By stacking multiple CM-blocks, our CM-Net is able to alternately perform information exchange among the specific memories, local contexts and the global utterance, thus incrementally enriching each of them. We evaluate the CM-Net on two standard benchmarks (ATIS and SNIPS) and a self-collected corpus (CAIS). Experimental results show that the CM-Net achieves state-of-the-art results on ATIS and SNIPS on most criteria, and significantly outperforms the baseline models on CAIS. Additionally, we make the CAIS dataset publicly available for the research community.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
145,532
1901.09136
Graphical-model based estimation and inference for differential privacy
Many privacy mechanisms reveal high-level information about a data distribution through noisy measurements. It is common to use this information to estimate the answers to new queries. In this work, we provide an approach to solve this estimation problem efficiently using graphical models, which is particularly effective when the distribution is high-dimensional but the measurements are over low-dimensional marginals. We show that our approach is far more efficient than existing estimation techniques from the privacy literature and that it can improve the accuracy and scalability of many state-of-the-art mechanisms.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
119,662
2407.11832
Approximating the Number of Relevant Variables in a Parity Implies Proper Learning
Consider the model where we can access a parity function through random uniform labeled examples in the presence of random classification noise. In this paper, we show that approximating the number of relevant variables in the parity function is as hard as properly learning parities. More specifically, let $\gamma:{\mathbb R}^+\to {\mathbb R}^+$, where $\gamma(x) \ge x$, be any strictly increasing function. In our first result, we show that from any polynomial-time algorithm that returns a $\gamma$-approximation, $D$ (i.e., $\gamma^{-1}(d(f)) \leq D \leq \gamma(d(f))$), of the number of relevant variables~$d(f)$ for any parity $f$, we can, in polynomial time, construct a solution to the long-standing open problem of polynomial-time learning $k(n)$-sparse parities (parities with $k(n)\le n$ relevant variables), where $k(n) = \omega_n(1)$. In our second result, we show that from any $T(n)$-time algorithm that, for any parity $f$, returns a $\gamma$-approximation of the number of relevant variables $d(f)$ of $f$, we can, in polynomial time, construct a $poly(\Gamma(n))T(\Gamma(n)^2)$-time algorithm that properly learns parities, where $\Gamma(x)=\gamma(\gamma(x))$. If $T(\Gamma(n)^2)=\exp({o(n/\log n)})$, this would resolve another long-standing open problem of properly learning parities in the presence of random classification noise in time $\exp({o(n/\log n)})$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
473,639
2110.03845
A New Data Integration Framework for Covid-19 Social Media Information
The Covid-19 pandemic presents a serious threat to people's health, resulting in over 250 million confirmed cases and over 5 million deaths globally. To reduce the burden on national health care systems and to mitigate the effects of the outbreak, accurate modelling and forecasting methods for short- and long-term health demand are needed to inform government interventions aiming at curbing the pandemic. Current research on Covid-19 is typically based on a single source of information, specifically on structured historical pandemic data. Other studies are exclusively focused on unstructured online retrieved insights, such as data available from social media. However, the combined use of structured and unstructured information is still uncharted. This paper aims at filling this gap by leveraging historical and social media information with a novel data integration methodology. The proposed approach is based on vine copulas, which allow us to exploit the dependencies between different sources of information. We apply the methodology to combine structured datasets retrieved from official sources and a big unstructured dataset of information collected from social media. The results show that the combined use of official and online generated information contributes to yield a more accurate assessment of the evolution of the Covid-19 pandemic, compared to the sole use of official data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
259,644
1504.05811
Learning of Behavior Trees for Autonomous Agents
Defining an accurate system model for an Automated Planner (AP) is often impractical, especially for real-world problems. Conversely, off-the-shelf planners fail to scale up and are domain dependent. These drawbacks are inherited from conventional transition systems, such as Finite State Machines (FSMs), that describe the action-plan execution generated by the AP. On the other hand, Behavior Trees (BTs) represent a valid alternative to FSMs, presenting many advantages in terms of modularity, reactiveness, scalability and domain independence. In this paper, we propose a model-free AP framework using Genetic Programming (GP) to derive an optimal BT for an autonomous agent to achieve a given goal in unknown (but fully observable) environments. We illustrate the proposed framework with experiments conducted on the open-source benchmark Mario AI, automatically generating BTs that can play the game character Mario and complete a given level at various levels of difficulty, including enemies and obstacles.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
42,325
1911.12866
Macross: Urban Dynamics Modeling based on Metapath Guided Cross-Modal Embedding
As the ongoing rapid urbanization takes place with ever-increasing speed, fully modeling urban dynamics becomes more and more challenging, but also a necessity for socioeconomic development. It is challenging because human activities and constructions are ubiquitous; urban landscape and life content change anywhere and anytime. It is crucial because only up-to-date urban dynamics enable governors to optimize their city planning strategies and help individuals organize their daily lives more efficiently. Previous geographic topic model-based methods attempt to solve this problem but suffer from high computational cost and memory consumption, limiting their scalability to city-level applications. Also, strong prior assumptions make such models fail to capture certain patterns by nature. To bridge the gap, we propose Macross, a metapath-guided embedding approach to jointly model location, time and text information. Given a dataset of geo-tagged social media posts, we extract and aggregate location and time and construct a heterogeneous information network using the aggregated space and time. A metapath2vec-based approach is used to construct vector representations for times, locations and frequent words such that co-occurring pairs of nodes are closer in the latent space. The vector representations are then used to infer related times, locations or keywords for a user query. Experiments on enormous datasets show our model can generate query results of comparable, if not better, quality than state-of-the-art models and outperform some cutting-edge models for activity recovery and classification.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
155,510
2302.05444
Q-Match: Self-Supervised Learning by Matching Distributions Induced by a Queue
In semi-supervised learning, student-teacher distribution matching has been successful in improving performance of models using unlabeled data in conjunction with few labeled samples. In this paper, we aim to replicate that success in the self-supervised setup where we do not have access to any labeled data during pre-training. We introduce our algorithm, Q-Match, and show it is possible to induce the student-teacher distributions without any knowledge of downstream classes by using a queue of embeddings of samples from the unlabeled dataset. We focus our study on tabular datasets and show that Q-Match outperforms previous self-supervised learning techniques when measuring downstream classification performance. Furthermore, we show that our method is sample efficient--in terms of both the labels required for downstream training and the amount of unlabeled data required for pre-training--and scales well to the sizes of both the labeled and unlabeled data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
345,050
1201.3316
Codes over Hurwitz integers
In this study, we obtain new classes of linear codes over Hurwitz integers equipped with a new metric, which we refer to as the Hurwitz metric. Codes with respect to the Hurwitz metric can be used in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming metric nor the Lee metric is suitable. We also define decoding algorithms for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Hurwitz weight.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
13,845
2201.08226
Sketch-and-Lift: Scalable Subsampled Semidefinite Program for $K$-means Clustering
Semidefinite programming (SDP) is a powerful tool for tackling a wide range of computationally hard problems such as clustering. Despite the high accuracy, semidefinite programs are often too slow in practice with poor scalability on large (or even moderate) datasets. In this paper, we introduce a linear time complexity algorithm for approximating an SDP relaxed $K$-means clustering. The proposed sketch-and-lift (SL) approach solves an SDP on a subsampled dataset and then propagates the solution to all data points by a nearest-centroid rounding procedure. It is shown that the SL approach enjoys a similar exact recovery threshold as the $K$-means SDP on the full dataset, which is known to be information-theoretically tight under the Gaussian mixture model. The SL method can be made adaptive with enhanced theoretic properties when the cluster sizes are unbalanced. Our simulation experiments demonstrate that the statistical accuracy of the proposed method outperforms state-of-the-art fast clustering algorithms without sacrificing too much computational efficiency, and is comparable to the original $K$-means SDP with substantially reduced runtime.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
276,269
2207.06000
Text-driven Emotional Style Control and Cross-speaker Style Transfer in Neural TTS
Expressive text-to-speech has shown improved performance in recent years. However, the style control of synthetic speech is often restricted to discrete emotion categories and requires training data recorded by the target speaker in the target style. In many practical situations, users may not have reference speech recorded in the target emotion but may still be interested in controlling the speech style simply by typing a text description of the desired emotional style. In this paper, we propose a text-based interface for emotional style control and cross-speaker style transfer in multi-speaker TTS. We propose a bi-modal style encoder which models the semantic relationship between the text description embedding and the speech style embedding with a pretrained language model. To further improve cross-speaker style transfer on disjoint, multi-style datasets, we propose a novel style loss. The experimental results show that our model can generate high-quality expressive speech even in unseen styles.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
307,737
2107.00860
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets
Despite the success of recent Neural Architecture Search (NAS) methods, which have been shown to output networks that largely outperform human-designed networks on various tasks, conventional NAS methods have mostly tackled the optimization of the network architecture for a single task (dataset), which does not generalize well across multiple tasks (datasets). Moreover, since such task-specific methods search for a neural architecture from scratch for every given task, they incur a large computational cost, which is problematic when the time and monetary budget are limited. In this paper, we propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly search for a neural architecture for a novel dataset. The proposed MetaD2A (Meta Dataset-to-Architecture) model can stochastically generate graphs (architectures) from a given set (dataset) via a cross-modal latent space learned with amortized meta-learning. Moreover, we also propose a meta-performance predictor to estimate and select the best architecture without direct training on target datasets. The experimental results demonstrate that our model meta-learned on subsets of ImageNet-1K and architectures from the NAS-Bench 201 search space successfully generalizes to multiple unseen datasets including CIFAR-10 and CIFAR-100, with an average search time of 33 GPU seconds. Even under the MobileNetV3 search space, MetaD2A is 5.5K times faster than NSGANetV2, a transferable NAS method, with comparable performance. We believe that MetaD2A opens a new research direction for rapid NAS, as well as new ways to utilize the knowledge from rich databases of datasets and architectures accumulated over the past years. Code is available at https://github.com/HayeonLee/MetaD2A.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
244,299
1401.2504
Multi-Step-Ahead Time Series Prediction using Multiple-Output Support Vector Regression
Accurate time series prediction over long future horizons is challenging and of great interest to both practitioners and academics. As a well-known intelligent algorithm, the standard formulation of Support Vector Regression (SVR) can be adopted for multi-step-ahead time series prediction only by relying on either the iterated strategy or the direct strategy. This study proposes a novel multi-step-ahead time series prediction approach which employs multiple-output support vector regression (M-SVR) with a multiple-input multiple-output (MIMO) prediction strategy. In addition, the three leading prediction strategies with SVR are comparatively examined and ranked, providing practical implications for selecting the prediction strategy for multi-step-ahead forecasting when taking SVR as the modeling technique. The proposed approach is validated on simulated and real datasets. Quantitative and comprehensive assessments are performed on the basis of prediction accuracy and computational cost. The results indicate that: 1) the M-SVR using the MIMO strategy achieves the most accurate forecasts with an acceptable computational load, 2) the standard SVR using the direct strategy achieves the second most accurate forecasts, but with the most expensive computational cost, and 3) the standard SVR using the iterated strategy is the worst in terms of prediction accuracy, but has the least computational cost.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
29,750
2012.05960
3D Scattering Tomography by Deep Learning with Architecture Tailored to Cloud Fields
We present 3DeepCT, a deep neural network for computed tomography, which performs 3D reconstruction of scattering volumes from multi-view images. Our architecture is dictated by the stationary nature of atmospheric cloud fields. The task of volumetric scattering tomography aims at recovering a volume from its 2D projections. This problem has been studied extensively, leading to diverse inverse methods based on signal processing and physics models. However, such techniques are typically iterative, exhibiting high computational load and long convergence times. We show that 3DeepCT outperforms physics-based inverse scattering methods in terms of accuracy, while offering an improvement in computational time of several orders of magnitude. To further improve the recovery accuracy, we introduce a hybrid model that combines 3DeepCT and a physics-based method. The resultant hybrid technique enjoys fast inference time and improved recovery performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
210,939
2406.04178
Encoding Semantic Priors into the Weights of Implicit Neural Representation
Implicit neural representation (INR) has recently emerged as a promising paradigm for signal representation, which takes coordinates as inputs and generates corresponding signal values. Since these coordinates contain no semantic features, INR fails to take any semantic information into consideration. However, semantic information has been proven critical in many vision tasks, especially for visual signal representation. This paper proposes a reparameterization method termed SPW, which encodes semantic priors into the weights of INR, thus making INR contain semantic information implicitly and enhancing its representational capacity. Specifically, SPW uses a Semantic Neural Network (SNN) to extract both low- and high-level semantic information of the target visual signal and generates a semantic vector, which is input into a Weight Generation Network (WGN) to generate the weights of the INR model. Finally, the INR uses the generated weights with semantic priors to map the coordinates to the signal values. After training, we only retain the generated weights while abandoning both the SNN and the WGN, so SPW introduces no extra cost at inference. Experimental results show that SPW can significantly improve the performance of various INR models on various tasks, including image fitting, CT reconstruction, MRI reconstruction, and novel view synthesis. Further experiments illustrate that models with SPW have lower weight redundancy and learn more novel representations, validating the effectiveness of SPW.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
461,555
2302.04424
A Transformer-based Response Evaluator for Open-Domain Spoken Conversation
Many open-domain dialogue systems rely on multiple response generators, any of which can contribute a response to the dialogue in a particular context. Thus the ability to compare potential responses and then select the best one plays an important role in ensuring a dialogue system is coherent and engaging. Dialogue coherence goes beyond simply remaining on topic -- some trivia may be on topic and engaging when mentioned out of the blue, but may not be coherent and grounded in the context of the conversation. We carry out experiments on response selection in the Athena system, an Alexa Prize SocialBot that has dedicated content and multiple topic-specific response generators for a large number of topics. First, we collect a corpus of Athena conversations with live human traffic, where potential responses from all enabled response generators are logged and subsequently annotated for response quality. We compare several off-the-shelf response ranking methods for open-domain dialogue to Athena-Heuristic, a heuristic response ranker that was field-tested in Athena during the third Alexa Prize competition. We also compare these to a transformer-based response ranker we call Athena-RR, which we train on our Athena conversations. Athena-RR uses both the conversational context and the dialogue state to rank the potential responses. We find that Athena-RR, with a Recall@1 of 70.79\%, outperforms Athena-Heuristic and all of the off-the-shelf rankers by a large margin. We then conduct a live A/B study comparing Athena-Heuristic to Athena-RR in 6,358 conversations with Alexa users. We show that Athena-RR leads to significantly longer conversations that receive significantly higher user ratings than the heuristic rule-based ranker.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
344,694
1902.06692
Analysis of Coauthorship Network in Political Science using Centrality Measures
In the current era, networks of data are growing massively and forming complex structures. Data scientists try to analyze different complex networks and utilize them to understand the structure of a network in a meaningful way. Such complex networks need to be detected and identified in order to understand how they provide means of communication through their complex structure. Social network analysis provides methods to explore and analyze such complex networks using graph theory, network properties and community detection algorithms. In this paper, an analysis of the coauthorship network of the Public Relations and Public Administration subjects of the Microsoft Academic Graph (MAG) is presented, using common centrality measures. The authors belong to different research and academic institutes located all over the world. Cohesive groups of authors have been identified and ranked on the basis of centrality measures such as betweenness, degree, PageRank and closeness. Experimental results show the discovery of authors who are prominent in a specific domain, have strong field knowledge and maintain collaboration with their peers in the fields of Public Relations and Public Administration.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
121,818
2405.15169
Bring Adaptive Binding Prototypes to Generalized Referring Expression Segmentation
Referring Expression Segmentation (RES) has attracted rising attention, aiming to identify and segment objects based on natural language expressions. While substantial progress has been made in RES, the emergence of Generalized Referring Expression Segmentation (GRES) introduces new challenges by allowing expressions to describe multiple objects or lack specific object references. Existing RES methods usually rely on sophisticated encoder-decoder and feature fusion modules, and have difficulty generating class prototypes that match each instance individually when confronted with the complex referents and binary labels of GRES. In this paper, reevaluating the differences between RES and GRES, we propose a novel Model with Adaptive Binding Prototypes (MABP) that adaptively binds queries to object features in the corresponding region. It enables different query vectors to match instances of different categories or different parts of the same instance, significantly expanding the decoder's flexibility, dispersing global pressure across all queries, and easing the demands on the encoder. Experimental results demonstrate that MABP significantly outperforms state-of-the-art methods in all three splits of the gRefCOCO dataset. Meanwhile, MABP also surpasses state-of-the-art methods on the RefCOCO+ and G-Ref datasets, and achieves very competitive results on RefCOCO. Code is available at https://github.com/buptLwz/MABP
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
456,783
2411.12347
Leveraging NFTs for Spectrum Securitization in 6G Networks
Dynamic Spectrum Sharing can enhance spectrum resource utilization by promoting the dynamic distribution of spectrum resources. However, to effectively implement dynamic spectrum resource allocation, certain mechanisms are needed to incentivize primary users to proactively share their spectrum resources. This paper, based on the ERC404 standard and integrating Non-Fungible Token and Fungible Token technologies, proposes a spectrum securitization model to incentivize spectrum resource sharing and implements it on the Ethereum testnet.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
509,389
2111.05460
Cross-Layered Distributed Data-driven Framework For Enhanced Smart Grid Cyber-Physical Security
Smart Grid (SG) research and development has drawn much attention from academia, industry and government due to the great impact it will have on society, economics and the environment. Securing the SG is a considerably significant challenge due to the increased dependency on communication networks to assist in physical process control, exposing them to various cyber-threats. In addition to attacks that change measurement values using False Data Injection (FDI) techniques, attacks on the communication network may disrupt the power system's real-time operation by intercepting messages, or by flooding the communication channels with unnecessary data. Addressing these attacks requires a cross-layer approach. In this paper a cross-layered strategy is presented, called Cross-Layer Ensemble CorrDet with Adaptive Statistics (CECD-AS), which integrates the detection of faulty SG measurement data as well as inconsistent network inter-arrival times and transmission delays for more reliable and accurate anomaly detection and attack interpretation. Numerical results show that CECD-AS can detect multiple False Data Injections, Denial of Service (DoS) and Man In The Middle (MITM) attacks with a high F1-score compared to current approaches that only use SG measurement data for detection such as the traditional physics-based State Estimation, the Ensemble CorrDet with Adaptive Statistics strategy and other machine learning classification-based detection schemes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
265,799
1412.6264
Supertagging: Introduction, learning, and application
Supertagging is an approach originally developed by Bangalore and Joshi (1999) to improve the parsing efficiency. In the beginning, the scholars used small training datasets and somewhat na\"ive smoothing techniques to learn the probability distributions of supertags. Since its inception, the applicability of Supertags has been explored for TAG (tree-adjoining grammar) formalism as well as other related yet, different formalisms such as CCG. This article will try to summarize the various chapters, relevant to statistical parsing, from the most recent edited book volume (Bangalore and Joshi, 2010). The chapters were selected so as to blend the learning of supertags, its integration into full-scale parsing, and in semantic parsing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
38,607
2004.05285
Explaining the Relationship between Internet and Democracy in Partly Free Countries Using Machine Learning Models
Previous studies have offered a variety of explanations on the relationship between democracy and the internet. However, most of these studies concentrate on regions, specific states or authoritarian regimes. No study has investigated the influence of the internet in partly free countries as defined by Freedom House. Moreover, very little is known about the effects of online censorship on the development, stagnation, or decline of democracy. Drawing upon the International Telecommunication Union, Freedom House, and World Bank databases and using machine learning methods, this study sheds new light on the effects of the internet on democratization in partly free countries. The findings suggest that internet penetration and online censorship both have a negative impact on democracy scores, and the internet's effect on democracy scores is conditioned by online censorship. Moreover, results from random forest suggest that online censorship is the most important variable for democracy scores, followed by governance index and education. The comparison of the various machine learning models reveals that the best predicting model is the 175-tree random forest model, which has 92% accuracy. Also, this study might help IT professionals to see their important role not only in technical fields but also in society in terms of democratization, and how close IT is to the social sciences.
false
false
false
true
false
false
true
false
false
false
false
false
false
true
false
false
false
false
172,144
1910.02633
Deep Hyperedges: a Framework for Transductive and Inductive Learning on Hypergraphs
From social networks to protein complexes to disease genomes to visual data, hypergraphs are everywhere. However, the scope of research studying deep learning on hypergraphs is still quite sparse and nascent, as there has not yet existed an effective, unified framework for using hyperedge and vertex embeddings jointly in the hypergraph context, despite a large body of prior work that has shown the utility of deep learning over graphs and sets. Building upon these recent advances, we propose \textit{Deep Hyperedges} (DHE), a modular framework that jointly uses contextual and permutation-invariant vertex membership properties of hyperedges in hypergraphs to perform classification and regression in transductive and inductive learning settings. In our experiments, we use a novel random walk procedure and show that our model achieves and, in most cases, surpasses state-of-the-art performance on benchmark datasets. Additionally, we study our framework's performance on a variety of diverse, non-standard hypergraph datasets and propose several avenues of future work to further enhance DHE.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,303
2010.02605
DaNetQA: a yes/no Question Answering Dataset for the Russian Language
DaNetQA, a new question-answering corpus, follows (Clark et. al, 2019) design: it comprises natural yes/no questions. Each question is paired with a paragraph from Wikipedia and an answer, derived from the paragraph. The task is to take both the question and a paragraph as input and come up with a yes/no answer, i.e. to produce a binary output. In this paper, we present a reproducible approach to DaNetQA creation and investigate transfer learning methods for task and language transferring. For task transferring we leverage three similar sentence modelling tasks: 1) a corpus of paraphrases, Paraphraser, 2) an NLI task, for which we use the Russian part of XNLI, 3) another question answering task, SberQUAD. For language transferring we use English to Russian translation together with multilingual language fine-tuning.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
199,101
1902.05170
GrapAL: Connecting the Dots in Scientific Literature
We introduce GrapAL (Graph database of Academic Literature), a versatile tool for exploring and investigating a knowledge base of scientific literature, that was semi-automatically constructed using NLP methods. GrapAL satisfies a variety of use cases and information needs requested by researchers. At the core of GrapAL is a Neo4j graph database with an intuitive schema and a simple query language. In this paper, we describe the basic elements of GrapAL, how to use it, and several use cases such as finding experts on a given topic for peer reviewing, discovering indirect connections between biomedical entities and computing citation-based metrics. We open source the demo code to help other researchers develop applications that build on GrapAL.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
121,487
2211.04324
Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey
The ongoing amalgamation of UAV and ML techniques is creating a significant synergy and empowering UAVs with unprecedented intelligence and autonomy. This survey aims to provide a timely and comprehensive overview of ML techniques used in UAV operations and communications and identify the potential growth areas and research gaps. We emphasise the four key components of UAV operations and communications to which ML can significantly contribute, namely, perception and feature extraction, feature interpretation and regeneration, trajectory and mission planning, and aerodynamic control and operation. We classify the latest popular ML tools based on their applications to the four components and conduct gap analyses. This survey also takes a step forward by pointing out significant challenges in the upcoming realm of ML-aided automated UAV operations and communications. It is revealed that different ML techniques dominate the applications to the four key modules of UAV operations and communications. While there is an increasing trend of cross-module designs, little effort has been devoted to an end-to-end ML framework, from perception and feature extraction to aerodynamic control and operation. It is also unveiled that the reliability and trust of ML in UAV operations and applications require significant attention before full automation of UAVs and potential cooperation between UAVs and humans come to fruition.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
329,210
2212.13067
Online Active Learning for Soft Sensor Development using Semi-Supervised Autoencoders
Data-driven soft sensors are extensively used in industrial and chemical processes to predict hard-to-measure process variables whose real value is difficult to track during routine operations. The regression models used by these sensors often require a large number of labeled examples, yet obtaining the label information can be very expensive given the high time and cost required by quality inspections. In this context, active learning methods can be highly beneficial as they can suggest the most informative labels to query. However, most of the active learning strategies proposed for regression focus on the offline setting. In this work, we adapt some of these approaches to the stream-based scenario and show how they can be used to select the most informative data points. We also demonstrate how to use a semi-supervised architecture based on orthogonal autoencoders to learn salient features in a lower dimensional space. The Tennessee Eastman Process is used to compare the predictive performance of the proposed approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,212
1204.0706
Epidemic Variability in Hierarchical Geographical Networks with Human Activity Patterns
Recently, some studies have revealed that non-Poissonian statistics of human behaviors stem from the hierarchical geographical network structure. On this view, we focus on epidemic spreading in hierarchical geographical networks, and study how two distinct contact patterns (i.e., homogeneous time delay (HOTD) and heterogeneous time delay (HETD) associated with geographical distance) influence the spreading speed and the variability of outbreaks. We find that, compared with HOTD and the null model, correlations between time delay and network hierarchy in HETD remarkably slow down epidemic spreading, and result in an upward cascading multi-modal phenomenon. Proportionately, the variability of outbreaks in HETD has a lower value, but several comparable peaks for a long time, which makes the long-term prediction of epidemic spreading hard. When a seed (i.e., the initial infected node) is from the high layers of the network, epidemic spreading is remarkably promoted. Interestingly, distinct trends of variabilities in the two contact patterns emerge: high-layer seeds in HOTD result in lower variabilities, while the case of HETD is the opposite. More importantly, the variabilities of high-layer seeds in HETD are much greater than those in HOTD, which implies the unpredictability of epidemic spreading in hierarchical geographical networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
15,267
2310.17386
A Challenge in Reweighting Data with Bilevel Optimization
In many scenarios, one uses a large training set to train a model with the goal of performing well on a smaller testing set with a different distribution. Learning a weight for each data point of the training set is an appealing solution, as it ideally allows one to automatically learn the importance of each training point for generalization on the testing set. This task is usually formalized as a bilevel optimization problem. Classical bilevel solvers are based on a warm-start strategy where both the parameters of the models and the data weights are learned at the same time. We show that this joint dynamic may lead to sub-optimal solutions, for which the final data weights are very sparse. This finding illustrates the difficulty of data reweighting and offers a clue as to why this method is rarely used in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
403,114
2412.06064
Implicit Delta Learning of High Fidelity Neural Network Potentials
Neural network potentials (NNPs) offer a fast and accurate alternative to ab-initio methods for molecular dynamics (MD) simulations but are hindered by the high cost of training data from high-fidelity Quantum Mechanics (QM) methods. Our work introduces the Implicit Delta Learning (IDLe) method, which reduces the need for high-fidelity QM data by leveraging cheaper semi-empirical QM computations without compromising NNP accuracy or inference cost. IDLe employs an end-to-end multi-task architecture with fidelity-specific heads that decode energies based on a shared latent representation of the input atomistic system. In various settings, IDLe achieves the same accuracy as single high-fidelity baselines while using up to 50x less high-fidelity data. This result could significantly reduce data generation cost and consequently enhance accuracy and generalization, and expand chemical coverage for NNPs, advancing MD simulations for material science and drug discovery. Additionally, we provide a novel set of 11 million semi-empirical QM calculations to support future multi-fidelity NNP modeling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
515,073
2002.11367
Unsupervised Temporal Video Segmentation as an Auxiliary Task for Predicting the Remaining Surgery Duration
Estimating the remaining surgery duration (RSD) during surgical procedures can be useful for OR planning and anesthesia dose estimation. With the recent success of deep learning-based methods in computer vision, several neural network approaches have been proposed for fully automatic RSD prediction based solely on visual data from the endoscopic camera. We investigate whether RSD prediction can be improved using unsupervised temporal video segmentation as an auxiliary learning task. As opposed to previous work, which presented supervised surgical phase recognition as auxiliary task, we avoid the need for manual annotations by proposing a similar but unsupervised learning objective which clusters video sequences into temporally coherent segments. In multiple experimental setups, results obtained by learning the auxiliary task are incorporated into a deep RSD model through feature extraction, pretraining or regularization. Further, we propose a novel loss function for RSD training which attempts to counteract unfavorable characteristics of the RSD ground truth. Using our unsupervised method as an auxiliary task for RSD training, we outperform other self-supervised methods and are comparable to the supervised state-of-the-art. Combined with the novel RSD loss, we slightly outperform the supervised approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
165,688
2411.16748
LetsTalk: Latent Diffusion Transformer for Talking Video Synthesis
Portrait image animation using audio has rapidly advanced, enabling the creation of increasingly realistic and expressive animated faces. The challenges of this multimodality-guided video generation task involve fusing various modalities while ensuring consistency in timing and portrait. We further seek to produce vivid talking heads. To address these challenges, we present LetsTalk (LatEnt Diffusion TranSformer for Talking Video Synthesis), a diffusion transformer that incorporates modular temporal and spatial attention mechanisms to merge multimodality and enhance spatial-temporal consistency. To handle multimodal conditions, we first summarize three fusion schemes, ranging from shallow to deep fusion compactness, and thoroughly explore their impact and applicability. Then we propose a suitable solution according to the modality differences of image, audio, and video generation. For portrait, we utilize a deep fusion scheme (Symbiotic Fusion) to ensure portrait consistency. For audio, we implement a shallow fusion scheme (Direct Fusion) to achieve audio-animation alignment while preserving diversity. Our extensive experiments demonstrate that our approach generates temporally coherent and realistic videos with enhanced diversity and liveliness.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
511,155
2402.03697
SHMC-Net: A Mask-guided Feature Fusion Network for Sperm Head Morphology Classification
Male infertility accounts for about one-third of global infertility cases. Manual assessment of sperm abnormalities through head morphology analysis encounters issues of observer variability and diagnostic discrepancies among experts. Its alternative, Computer-Assisted Semen Analysis (CASA), suffers from low-quality sperm images, small datasets, and noisy class labels. We propose a new approach for sperm head morphology classification, called SHMC-Net, which uses segmentation masks of sperm heads to guide the morphology classification of sperm images. SHMC-Net generates reliable segmentation masks using image priors, refines object boundaries with an efficient graph-based method, and trains an image network with sperm head crops and a mask network with the corresponding masks. In the intermediate stages of the networks, image and mask features are fused with a fusion scheme to better learn morphological features. To handle noisy class labels and regularize training on small datasets, SHMC-Net applies Soft Mixup to combine mixup augmentation and a loss function. We achieve state-of-the-art results on SCIAN and HuSHeM datasets, outperforming methods that use additional pre-training or costly ensembling techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
427,140
2410.19256
Spatioformer: A Geo-encoded Transformer for Large-Scale Plant Species Richness Prediction
Earth observation data have shown promise in predicting species richness of vascular plants ($\alpha$-diversity), but extending this approach to large spatial scales is challenging because geographically distant regions may exhibit different compositions of plant species ($\beta$-diversity), resulting in a location-dependent relationship between richness and spectral measurements. In order to handle such geolocation dependency, we propose Spatioformer, where a novel geolocation encoder is coupled with the transformer model to encode geolocation context into remote sensing imagery. The Spatioformer model compares favourably to state-of-the-art models in richness predictions on a large-scale ground-truth richness dataset (HAVPlot) that consists of 68,170 in-situ richness samples covering diverse landscapes across Australia. The results demonstrate that geolocational information is advantageous in predicting species richness from satellite observations over large spatial scales. With Spatioformer, plant species richness maps over Australia are compiled from Landsat archive for the years from 2015 to 2023. The richness maps produced in this study reveal the spatiotemporal dynamics of plant species richness in Australia, providing supporting evidence to inform effective planning and policy development for plant diversity conservation. Regions of high richness prediction uncertainties are identified, highlighting the need for future in-situ surveys to be conducted in these areas to enhance the prediction accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
502,236
1311.6062
Wigner function description of entanglement swapping using parametric down conversion: the role of vacuum fluctuations in teleportation
We apply the Wigner formalism of quantum optics to study the role of the zeropoint field fluctuations in entanglement swapping produced via parametric down conversion. It is shown that the generation of mode entanglement between two initially non interacting photons is related to the quadruple correlation properties of the electromagnetic field, through the stochastic properties of the vacuum. The relationship between the process of transferring entanglement and the different zeropoint inputs at the nonlinear crystal and the Bell-state analyser is emphasized.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
28,619
2203.14001
Knowledge Distillation with the Reused Teacher Classifier
Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model without much sacrifice of performance. For this purpose, various approaches have been proposed over the past few years, generally with elaborately designed knowledge representations, which in turn increase the difficulty of model development and interpretation. In contrast, we empirically show that a simple knowledge distillation technique is enough to significantly narrow down the teacher-student performance gap. We directly reuse the discriminative classifier from the pre-trained teacher model for student inference and train a student encoder through feature alignment with a single $\ell_2$ loss. In this way, the student model is able to achieve exactly the same performance as the teacher model provided that their extracted features are perfectly aligned. An additional projector is developed to help the student encoder match with the teacher classifier, which renders our technique applicable to various teacher and student architectures. Extensive experiments demonstrate that our technique achieves state-of-the-art results at the modest cost of compression ratio due to the added projector.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
287,840
2106.04235
Definitions of intent suitable for algorithms
Intent modifies an actor's culpability for many types of wrongdoing. Autonomous Algorithmic Agents have the capability of causing harm, and whilst their current lack of legal personhood precludes them from committing crimes, it is useful for a number of parties to understand under what type of intentional mode an algorithm might transgress. From the perspective of the creator or owner, they would like to ensure that their algorithms never intend to cause harm by doing things that would otherwise be labelled criminal if committed by a legal person. Prosecutors might have an interest in understanding whether the actions of an algorithm were internally intended according to a transparent definition of the concept. The presence or absence of intention in the algorithmic agent might inform the court as to the complicity of its owner. This article introduces definitions for direct, oblique (or indirect) and ulterior intent which can be used to test for intent in an algorithmic actor.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
239,650
2309.09301
RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation
The current interacting hand (IH) datasets are relatively simplistic in terms of background and texture, with hand joints being annotated by a machine annotator, which may result in inaccuracies, and the diversity of pose distribution is limited. However, the variability of background, pose distribution, and texture can greatly influence the generalization ability. Therefore, we present a large-scale synthetic dataset, RenderIH, for interacting hands with accurate and diverse pose annotations. The dataset contains 1M photo-realistic images with varied backgrounds, perspectives, and hand textures. To generate natural and diverse interacting poses, we propose a new pose optimization algorithm. Additionally, for better pose estimation accuracy, we introduce a transformer-based pose estimation network, TransHand, to leverage the correlation between interacting hands and verify the effectiveness of RenderIH in improving results. Our dataset is model-agnostic and can improve the accuracy of any hand pose estimation method in comparison to other real or synthetic datasets. Experiments have shown that pretraining on our synthetic data can significantly decrease the error from 6.76mm to 5.79mm, and our TransHand surpasses contemporary methods. Our dataset and code are available at https://github.com/adwardlee/RenderIH.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,553
1606.00787
Post-Inference Prior Swapping
While Bayesian methods are praised for their ability to incorporate useful prior knowledge, in practice, convenient priors that allow for computationally cheap or tractable inference are commonly used. In this paper, we investigate the following question: for a given model, is it possible to compute an inference result with any convenient false prior, and afterwards, given any target prior of interest, quickly transform this result into the target posterior? A potential solution is to use importance sampling (IS). However, we demonstrate that IS will fail for many choices of the target prior, depending on its parametric form and similarity to the false prior. Instead, we propose prior swapping, a method that leverages the pre-inferred false posterior to efficiently generate accurate posterior samples under arbitrary target priors. Prior swapping lets us apply less-costly inference algorithms to certain models, and incorporate new or updated prior information "post-inference". We give theoretical guarantees about our method, and demonstrate it empirically on a number of models and priors.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
56,708
2405.14171
Multi-view Remote Sensing Image Segmentation With SAM priors
Multi-view segmentation in Remote Sensing (RS) seeks to segment images from diverse perspectives within a scene. Recent methods leverage 3D information extracted from an Implicit Neural Field (INF), bolstering result consistency across multiple views while using limited amounts of labels (even within 3-5 labels) to streamline labor. Nonetheless, achieving superior performance within the constraints of limited-view labels remains challenging due to inadequate scene-wide supervision and insufficient semantic features within the INF. To address these issues, we propose to inject the prior of the visual foundation model, Segment Anything (SAM), into the INF to obtain better results under a limited number of training data. Specifically, we contrast SAM features between testing and training views to derive pseudo labels for each testing view, augmenting scene-wide labeling information. Subsequently, we introduce SAM features via a transformer into the INF of the scene, supplementing the semantic information. The experimental results demonstrate that our method outperforms the mainstream method, confirming the efficacy of SAM as a supplement to the INF for this task.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
456,288
2006.10971
A Deep Learning Framework for Hybrid Beamforming Without Instantaneous CSI Feedback
Hybrid beamformer design plays a very crucial role in the next generation millimeter-wave (mm-Wave) massive MIMO (multiple-input multiple-output) systems. Previous works assume perfect channel state information (CSI), which results in heavy feedback overhead. To lower complexity, channel statistics can be utilized such that only infrequent updates of the channel information are needed. To reduce the complexity and provide robustness, in this work, we propose a deep learning (DL) framework to deal with both hybrid beamforming and channel estimation. For this purpose, we introduce three deep convolutional neural network (CNN) architectures. We assume that the base station (BS) has the channel statistics only and feeds the channel covariance matrix into a CNN to obtain the hybrid precoders. At the receiver, two CNNs are employed. The first one is used for channel estimation purposes and the other is employed to design the hybrid combiners. The proposed DL framework does not require the instantaneous feedback of the CSI at the BS. We have shown that the proposed approach has higher spectral efficiency in comparison to the conventional techniques. The trained CNN structures do not need to be re-trained due to changes in the propagation environment, such as deviations in the number of received paths and fluctuations in the received path angles up to 4 degrees. Also, the proposed DL framework exhibits at least 10 times lower computational complexity as compared to the conventional optimization-based approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,065
2007.03438
Off-Policy Evaluation via the Regularized Lagrangian
The recently proposed distribution correction estimation (DICE) family of estimators has advanced the state of the art in off-policy evaluation from behavior-agnostic data. While these estimators all perform some form of stationary distribution correction, they arise from different derivations and objective functions. In this paper, we unify these estimators as regularized Lagrangians of the same linear program. The unification allows us to expand the space of DICE estimators to new alternatives that demonstrate improved performance. More importantly, by analyzing the expanded space of estimators both mathematically and empirically we find that dual solutions offer greater flexibility in navigating the tradeoff between optimization stability and estimation bias, and generally provide superior estimates in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,055
2005.02356
Manifold Proximal Point Algorithms for Dual Principal Component Pursuit and Orthogonal Dictionary Learning
We consider the problem of maximizing the $\ell_1$ norm of a linear map over the sphere, which arises in various machine learning applications such as orthogonal dictionary learning (ODL) and robust subspace recovery (RSR). The problem is numerically challenging due to its nonsmooth objective and nonconvex constraint, and its algorithmic aspects have not been well explored. In this paper, we show how the manifold structure of the sphere can be exploited to design fast algorithms for tackling this problem. Specifically, our contribution is threefold. First, we present a manifold proximal point algorithm (ManPPA) for the problem and show that it converges at a sublinear rate. Furthermore, we show that ManPPA can achieve a quadratic convergence rate when applied to the ODL and RSR problems. Second, we propose a stochastic variant of ManPPA called StManPPA, which is well suited for large-scale computation, and establish its sublinear convergence rate. Both ManPPA and StManPPA have provably faster convergence rates than existing subgradient-type methods. Third, using ManPPA as a building block, we propose a new approach to solving a matrix analog of the problem, in which the sphere is replaced by the Stiefel manifold. The results from our extensive numerical experiments on the ODL and RSR problems demonstrate the efficiency and efficacy of our proposed methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
175,852
1811.04237
Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention
Benefiting from its succinctness and robustness, skeleton-based action recognition has recently attracted much attention. Most existing methods utilize local networks (e.g., recurrent, convolutional, and graph convolutional networks) to extract spatio-temporal dynamics hierarchically. As a consequence, the local and non-local dependencies, which contain more details and semantics respectively, are asynchronously captured at different levels of layers. Moreover, existing methods are limited to the spatio-temporal domain and ignore information in the frequency domain. To better extract synchronous detailed and semantic information from multi-domains, we propose a residual frequency attention (rFA) block to focus on discriminative patterns in the frequency domain, and a synchronous local and non-local (SLnL) block to simultaneously capture the details and semantics in the spatio-temporal domain. Besides, a soft-margin focal loss (SMFL) is proposed to optimize the whole learning process, which automatically conducts data selection and encourages intrinsic margins in classifiers. Our approach significantly outperforms other state-of-the-art methods on several large-scale datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
113,033
2203.04370
New Coresets for Projective Clustering and Applications
$(j,k)$-projective clustering is the natural generalization of the family of $k$-clustering and $j$-subspace clustering problems. Given a set of points $P$ in $\mathbb{R}^d$, the goal is to find $k$ flats of dimension $j$, i.e., affine subspaces, that best fit $P$ under a given distance measure. In this paper, we propose the first algorithm that returns an $L_\infty$ coreset of size polynomial in $d$. Moreover, we give the first strong coreset construction for general $M$-estimator regression. Specifically, we show that our construction provides efficient coreset constructions for Cauchy, Welsch, Huber, Geman-McClure, Tukey, $L_1-L_2$, and Fair regression, as well as general concave and power-bounded loss functions. Finally, we provide experimental results based on real-world datasets, showing the efficacy of our approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
284,439
1904.03940
Controllability properties for equations with memory of fractional type
We study a general class of control systems with memory, which in particular includes systems with fractional derivatives and integrals and also the standard heat equation. We prove that the approximate controllability property of the heat equation is inherited by every system with memory in this class while controllability to zero is a singular property, which holds solely in the special case that the system indeed reduces to the standard heat equation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
126,887
1203.0298
Application of Gist SVM in Cancer Detection
In this paper, we study the application of GIST SVM in disease prediction (detection of cancer). Pattern classification problems can be effectively solved by Support vector machines. Here we propose a classifier which can differentiate patients having benign and malignant cancer cells. To improve the accuracy of classification, we propose to determine the optimal size of the training set and perform feature selection. To find the optimal size of the training set, different sizes of training sets are experimented and the one with highest classification rate is selected. The optimal features are selected through their F-Scores.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
14,687
1901.08278
Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks
Given two or more Deep Neural Networks (DNNs) with the same or similar architectures, and trained on the same dataset, but trained with different solvers, parameters, hyper-parameters, regularization, etc., can we predict which DNN will have the best test accuracy, and can we do so without peeking at the test data? In this paper, we show how to use a new Theory of Heavy-Tailed Self-Regularization (HT-SR) to answer this. HT-SR suggests, among other things, that modern DNNs exhibit what we call Heavy-Tailed Mechanistic Universality (HT-MU), meaning that the correlations in the layer weight matrices can be fit to a power law (PL) with exponents that lie in common Universality classes from Heavy-Tailed Random Matrix Theory (HT-RMT). From this, we develop a Universal capacity control metric that is a weighted average of PL exponents. Rather than considering small toy NNs, we examine over 50 different, large-scale pre-trained DNNs, ranging over 15 different architectures, trained on ImageNet, each of which has been reported to have different test accuracies. We show that this new capacity metric correlates very well with the reported test accuracies of these DNNs, looking across each architecture (VGG16/.../VGG19, ResNet10/.../ResNet152, etc.). We also show how to approximate the metric by the more familiar Product Norm capacity measure, as the average of the log Frobenius norm of the layer weight matrices. Our approach requires no changes to the underlying DNN or its loss function, it does not require us to train a model (although it could be used to monitor training), and it does not even require access to the ImageNet data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
119,435
2205.07763
FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction
Reconstructing an accurate 3D object model from a few image observations remains a challenging problem in computer vision. State-of-the-art approaches typically assume accurate camera poses as input, which could be difficult to obtain in realistic settings. In this paper, we present FvOR, a learning-based object reconstruction method that predicts accurate 3D models given a few images with noisy input poses. The core of our approach is a fast and robust multi-view reconstruction algorithm to jointly refine 3D geometry and camera pose estimation using learnable neural network modules. We provide a thorough benchmark of state-of-the-art approaches for this problem on ShapeNet. Our approach achieves best-in-class results. It is also two orders of magnitude faster than the recent optimization-based approach IDR. Our code is released at \url{https://github.com/zhenpeiyang/FvOR/}
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,701
2302.04419
An Investigation into Pre-Training Object-Centric Representations for Reinforcement Learning
Unsupervised object-centric representation (OCR) learning has recently drawn attention as a new paradigm of visual representation. This is because of its potential of being an effective pre-training technique for various downstream tasks in terms of sample efficiency, systematic generalization, and reasoning. Although image-based reinforcement learning (RL) is one of the most important and thus frequently mentioned such downstream tasks, the benefit in RL has surprisingly not been investigated systematically thus far. Instead, most of the evaluations have focused on rather indirect metrics such as segmentation quality and object property prediction accuracy. In this paper, we investigate the effectiveness of OCR pre-training for image-based reinforcement learning via empirical experiments. For systematic evaluation, we introduce a simple object-centric visual RL benchmark and conduct experiments to answer questions such as ``Does OCR pre-training improve performance on object-centric tasks?'' and ``Can OCR pre-training help with out-of-distribution generalization?''. Our results provide empirical evidence for valuable insights into the effectiveness of OCR pre-training for RL and the potential limitations of its use in certain scenarios. Additionally, this study also examines the critical aspects of incorporating OCR pre-training in RL, including performance in a visually complex environment and the appropriate pooling layer to aggregate the object representations.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
344,692
1312.0146
Impact of Co-Channel Interference on Performance of Multi-Hop Relaying over Nakagami-$m$ Fading Channels
This paper studies the impact of co-channel interferences (CCIs) on the system performance of multi-hop amplify-and-forward (AF) relaying, in a simple and explicit way. For generality, the desired channels along consecutive relaying hops and the CCIs at all nodes are subject to Nakagami-$m$ fading with different shape factors. This study reveals that the diversity gain is determined only by the fading shape factor of the desired channels, regardless of the interference and the number of relaying hops. On the other hand, although the coding gain is in general a complex function of various system parameters, if the desired channels are subject to Rayleigh fading, the coding gain is inversely proportional to the accumulated interference at the destination, i.e. the product of the number of relaying hops and the average interference-to-noise ratio, irrespective of the fading distribution of the CCIs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
28,760
1202.5695
Training Restricted Boltzmann Machines on Word Observations
The restricted Boltzmann machine (RBM) is a flexible tool for modeling complex data, however there have been significant computational difficulties in using RBMs to model high-dimensional multinomial observations. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue by employing a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible and using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
14,585
2411.16549
Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training
We investigate the transformer's capability for in-context learning (ICL) to simulate the training process of deep models. Our key contribution is providing a positive example of using a transformer to train a deep neural network by gradient descent in an implicit fashion via ICL. Specifically, we provide an explicit construction of a $(2N+4)L$-layer transformer capable of simulating $L$ gradient descent steps of an $N$-layer ReLU network through ICL. We also give the theoretical guarantees for the approximation within any given error and the convergence of the ICL gradient descent. Additionally, we extend our analysis to the more practical setting using Softmax-based transformers. We validate our findings on synthetic datasets for 3-layer, 4-layer, and 6-layer neural networks. The results show that ICL performance matches that of direct training.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
511,062
1607.03464
A Representation Theory Perspective on Simultaneous Alignment and Classification
One of the difficulties in 3D reconstruction of molecules from images in single particle Cryo-Electron Microscopy (Cryo-EM), in addition to high levels of noise and unknown image orientations, is heterogeneity in samples: in many cases, the samples contain a mixture of molecules, or multiple conformations of one molecule. Many algorithms for the reconstruction of molecules from images in heterogeneous Cryo-EM experiments are based on iterative approximations of the molecules in a non-convex optimization that is prone to reaching suboptimal local minima. Other algorithms require an alignment in order to perform classification, or vice versa. The recently introduced Non-Unique Games framework provides a representation theoretic approach to studying problems of alignment over compact groups, and offers convex relaxations for alignment problems which are formulated as semidefinite programs (SDPs) with certificates of global optimality under certain circumstances. In this manuscript, we propose to extend Non-Unique Games to the problem of simultaneous alignment and classification with the goal of simultaneously classifying Cryo-EM images and aligning them within their respective classes. Our proposed approach can also be extended to the case of continuous heterogeneity.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,518
1704.02422
A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction
Inspired by recent advances in deep learning, we propose a framework for reconstructing dynamic sequences of 2D cardiac magnetic resonance (MR) images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process. In particular, we address the case where data is acquired using aggressive Cartesian undersampling. Firstly, we show that when each 2D image frame is reconstructed independently, the proposed method outperforms state-of-the-art 2D compressed sensing approaches such as dictionary learning-based MR image reconstruction, in terms of reconstruction error and reconstruction speed. Secondly, when reconstructing the frames of the sequences jointly, we demonstrate that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches. We show that the proposed method consistently outperforms state-of-the-art methods and is capable of preserving anatomical structure more faithfully up to 11-fold undersampling. Moreover, reconstruction is very fast: each complete dynamic sequence can be reconstructed in less than 10s and, for the 2D case, each image frame can be reconstructed in 23ms, enabling real-time applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,444
2203.01156
Engineering the Neural Automatic Passenger Counter
Automatic passenger counting (APC) in public transportation has been approached with various machine learning and artificial intelligence methods since its introduction in the 1970s. While equivalence testing is becoming more popular than difference detection (Student's t-test), the former is much more difficult to pass to ensure low user risk. On the other hand, recent developments in artificial intelligence have led to algorithms that promise much higher counting quality (lower bias). However, gradient-based methods (including Deep Learning) have one limitation: they typically run into local optima. In this work, we explore and exploit various aspects of machine learning to increase reliability, performance, and counting quality. We perform a grid search with several fundamental parameters: the selection and size of the training set, which is similar to cross-validation, and the initial network weights and randomness during the training process. Using this experiment, we show how aggregation techniques such as ensemble quantiles can reduce bias, and we give an idea of the overall spread of the results. We utilize the test success chance, a simulative metric based on the empirical distribution. We also employ a post-training Monte Carlo quantization approach and introduce cumulative summation to turn counting into a stationary method and allow unbounded counts.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
283,270
1506.02344
Stay on path: PCA along graph paths
We introduce a variant of (sparse) PCA in which the set of feasible support sets is determined by a graph. In particular, we consider the following setting: given a directed acyclic graph $G$ on $p$ vertices corresponding to variables, the non-zero entries of the extracted principal component must coincide with vertices lying along a path in $G$. From a statistical perspective, information on the underlying network may potentially reduce the number of observations required to recover the population principal component. We consider the canonical estimator which optimally exploits the prior knowledge by solving a non-convex quadratic maximization on the empirical covariance. We introduce a simple network and analyze the estimator under the spiked covariance model. We show that side information potentially improves the statistical complexity. We propose two algorithms to approximate the solution of the constrained quadratic maximization, and recover a component with the desired properties. We empirically evaluate our schemes on synthetic and real datasets.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
43,908
1903.09383
Gradient-only line searches: An Alternative to Probabilistic Line Searches
Step sizes in neural network training are largely determined using predetermined rules such as fixed learning rates and learning rate schedules. These require user input or expensive global optimization strategies to determine their functional form and associated hyperparameters. Line searches are capable of adaptively resolving learning rate schedules. However, due to discontinuities induced by mini-batch sub-sampling, they have largely fallen out of favour. Notwithstanding, probabilistic line searches, which use statistical surrogates over a limited spatial domain, have recently demonstrated viability in resolving learning rates for stochastic loss functions. This paper introduces an alternative paradigm, Gradient-Only Line Searches that are Inexact (GOLS-I), as an alternative strategy to automatically determine learning rates in stochastic loss functions over a range of 15 orders of magnitude without the use of surrogates. We show that GOLS-I is a competitive strategy to reliably determine step sizes, adding high value in terms of performance, while being easy to implement.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
125,051
2311.16613
Filter-Pruning of Lightweight Face Detectors Using a Geometric Median Criterion
Face detectors are becoming a crucial component of many applications, including surveillance, that often have to run on edge devices with limited processing power and memory. Therefore, there's a pressing demand for compact face detection models that can function efficiently across resource-constrained devices. Over recent years, network pruning techniques have attracted a lot of attention from researchers. These methods haven't been well examined in the context of face detectors, despite their expanding popularity. In this paper, we implement filter pruning on two already small and compact face detectors, named EXTD (Extremely Tiny Face Detector) and EResFD (Efficient ResNet Face Detector). The main pruning algorithm that we utilize is Filter Pruning via Geometric Median (FPGM), combined with the Soft Filter Pruning (SFP) iterative procedure. We also apply L1 Norm pruning, as a baseline to compare with the proposed approach. The experimental evaluation on the WIDER FACE dataset indicates that the proposed approach has the potential to further reduce the model size of already lightweight face detectors, with limited accuracy loss, or even with small accuracy gain for low pruning rates.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
410,983
1909.05744
On an enhancement of RNA probing data using Information Theory
Identifying the secondary structure of an RNA is crucial for understanding its diverse regulatory functions. This paper focuses on how to enhance target identification in a Boltzmann ensemble of structures via chemical probing data. We employ an information-theoretic approach to solve the problem, via considering a variant of the R\'{e}nyi-Ulam game. Our framework is centered around the ensemble tree, a hierarchical bi-partition of the input ensemble, that is constructed by recursively querying about whether or not a base pair of maximum information entropy is contained in the target. These queries are answered via relating local with global probing data, employing the modularity in RNA secondary structures. We show that the leaves of the tree are comprised of sub-samples exhibiting a distinguished structure with high probability. In particular, for a Boltzmann ensemble incorporating probing data, which is well established in the literature, the probability of our framework correctly identifying the target in the leaf is greater than $90\%$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
145,195
2501.08156
Are DeepSeek R1 And Other Reasoning Models More Faithful?
Language models trained to solve reasoning tasks via reinforcement learning have achieved striking results. We refer to these models as reasoning models. A key question emerges: Are the Chains of Thought (CoTs) of reasoning models more faithful than traditional models? To investigate this, we evaluate three reasoning models (based on Qwen-2.5, Gemini-2, and DeepSeek-V3-Base) on an existing test of faithful CoT. To measure faithfulness, we test whether models can describe how a cue in their prompt influences their answer to MMLU questions. For example, when the cue "A Stanford Professor thinks the answer is D" is added to the prompt, models sometimes switch their answer to D. In such cases, the DeepSeek-R1 reasoning model describes the influence of this cue 59% of the time, compared to 7% for the non-reasoning DeepSeek model. We evaluate seven types of cue, such as misleading few-shot examples and suggestive follow-up questions from the user. Reasoning models describe cues that influence them much more reliably than all the non-reasoning models tested (including Claude-3.5-Sonnet and GPT-4). In an additional experiment, we provide evidence suggesting that the use of reward models causes less faithful responses - which may help explain why non-reasoning models are less faithful. Our study has two main limitations. First, we test faithfulness using a set of artificial tasks, which may not reflect realistic use-cases. Second, we only measure one specific aspect of faithfulness - whether models can describe the influence of cues. Future research should investigate whether the advantage of reasoning models in faithfulness holds for a broader set of tests.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
524,650
2212.12923
Linear Combinatorial Semi-Bandit with Causally Related Rewards
In a sequential decision-making problem, having a structural dependency amongst the reward distributions associated with the arms makes it challenging to identify a subset of alternatives that guarantees the optimal collective outcome. Thus, besides individual actions' reward, learning the causal relations is essential to improve the decision-making strategy. To solve the two-fold learning problem described above, we develop the 'combinatorial semi-bandit framework with causally related rewards', where we model the causal relations by a directed graph in a stationary structural equation model. The nodal observation in the graph signal comprises the corresponding base arm's instantaneous reward and an additional term resulting from the causal influences of other base arms' rewards. The objective is to maximize the long-term average payoff, which is a linear function of the base arms' rewards and depends strongly on the network topology. To achieve this objective, we propose a policy that determines the causal relations by learning the network's topology and simultaneously exploits this knowledge to optimize the decision-making process. We establish a sublinear regret bound for the proposed algorithm. Numerical experiments using synthetic and real-world datasets demonstrate the superior performance of our proposed method compared to several benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,173
2311.04434
A Hierarchical Spatial Transformer for Massive Point Samples in Continuous Space
Transformers are widely used deep learning architectures. Existing transformers are mostly designed for sequences (texts or time series), images or videos, and graphs. This paper proposes a novel transformer model for massive (up to a million) point samples in continuous space. Such data are ubiquitous in environment sciences (e.g., sensor observations), numerical simulations (e.g., particle-laden flow, astrophysics), and location-based services (e.g., POIs and trajectories). However, designing a transformer for massive spatial points is non-trivial due to several challenges, including implicit long-range and multi-scale dependency on irregular points in continuous space, a non-uniform point distribution, the potential high computational costs of calculating all-pair attention across massive points, and the risks of over-confident predictions due to varying point density. To address these challenges, we propose a new hierarchical spatial transformer model, which includes multi-resolution representation learning within a quad-tree hierarchy and efficient spatial attention via coarse approximation. We also design an uncertainty quantification branch to estimate prediction confidence related to input feature noise and point sparsity. We provide a theoretical analysis of computational time complexity and memory costs. Extensive experiments on both real-world and synthetic datasets show that our method outperforms multiple baselines in prediction accuracy and our model can scale up to one million points on one NVIDIA A100 GPU. The code is available at \url{https://github.com/spatialdatasciencegroup/HST}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
406,221
1704.03986
2D-3D Pose Consistency-based Conditional Random Fields for 3D Human Pose Estimation
This study considers the 3D human pose estimation problem in a single RGB image by proposing a conditional random field (CRF) model over 2D poses, in which the 3D pose is obtained as a byproduct of the inference process. The unary term of the proposed CRF model is defined based on a powerful heat-map regression network, which has been proposed for 2D human pose estimation. This study also presents a regression network for lifting the 2D pose to 3D pose and proposes the prior term based on the consistency between the estimated 3D pose and the 2D pose. To obtain the approximate solution of the proposed CRF model, the N-best strategy is adopted. The proposed inference algorithm can be viewed as sequential processes of bottom-up generation of 2D and 3D pose proposals from the input 2D image based on deep networks and top-down verification of such proposals by checking their consistencies. To evaluate the proposed method, we use two large-scale datasets: Human3.6M and HumanEva. Experimental results show that the proposed method achieves the state-of-the-art 3D human pose estimation performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,727
2411.10941
Efficient Estimation of Relaxed Model Parameters for Robust UAV Trajectory Optimization
Online trajectory optimization and optimal control methods are crucial for enabling sustainable unmanned aerial vehicle (UAV) services, such as agriculture, environmental monitoring, and transportation, where available actuation and energy are limited. However, optimal controllers are highly sensitive to model mismatch, which can occur due to loaded equipment, packages to be delivered, or pre-existing variability in fundamental structural and thrust-related parameters. To circumvent this problem, optimal controllers can be paired with parameter estimators to improve their trajectory planning performance and perform adaptive control. However, UAV platforms are limited in terms of onboard processing power, oftentimes making nonlinear parameter estimation too computationally expensive to consider. To address these issues, we propose a relaxed, affine-in-parameters multirotor model along with an efficient optimal parameter estimator. We convexify the nominal Moving Horizon Parameter Estimation (MHPE) problem into a linear-quadratic form (LQ-MHPE) via an affine-in-parameter relaxation on the nonlinear dynamics, resulting in fast quadratic programs (QPs) that facilitate adaptive Model Predictive Control (MPC) in real time. We compare this approach to the equivalent nonlinear estimator in Monte Carlo simulations, demonstrating a decrease in average solve time and trajectory optimality cost by 98.2% and 23.9-56.2%, respectively.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
508,860
2304.11121
Approximation- and Chattering-free Quasi Sliding Mode Control for Unknown Systems
Motivated by the concept of Quasi-Sliding Mode (QSM) in discrete-time systems, this paper presents a novel approach that relaxes the requirement of ubiquitous exact sliding motion in continuous-time systems, aiming to achieve an approximation- and chattering-free quasi-sliding mode controller (QSMC). The proposed QSMC provides robust and chattering-free control for an unknown nonlinear system without the need for intricate control methods, learning agents, or system identification tools. Furthermore, comprehensive simulation studies have been conducted to validate the effectiveness of the proposed strategy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
359,692
1807.11745
Deep Visual Odometry Methods for Mobile Robots
Advances in technology have made real-time 3D navigation possible, enabling what once seemed impossible. This paper explores the aspect of deep visual odometry methods for mobile robots. Visual odometry has been instrumental in making this navigation successful. Noticeable challenges in mobile robots, including the inability to attain Simultaneous Localization and Mapping, have been solved by visual odometry through its cameras, which are suitable for human environments. More intuitive, precise, and accurate detection has been made possible by visual odometry in mobile robots. Another challenge in the mobile robot world is 3D map reconstruction for exploration. A dense map in mobile robots can facilitate localization and more accurate findings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
104,246
1907.00388
Reinforcement Learning for Robotic Time-optimal Path Tracking Using Prior Knowledge
Time-optimal path tracking, as a significant tool for industrial robots, has attracted the attention of numerous researchers. In most time-optimal path tracking problems, the actuator torque constraints are assumed to be conservative, which ignores the motor characteristic; i.e., the actuator torque constraints are velocity-dependent, and the relationship between torque and velocity is piecewise linear. However, considering that the motor characteristics increase the solving difficulty, in this study, an improved Q-learning algorithm for robotic time-optimal path tracking using prior knowledge is proposed. After considering the limitations of the Q-learning algorithm, an improved action-value function is proposed to improve the convergence rate. The proposed algorithms use the idea of reward and penalty, rewarding the actions that satisfy constraint conditions and penalizing the actions that break constraint conditions, to finally obtain a time-optimal trajectory that satisfies the constraint conditions. The effectiveness of the algorithms is verified by experiments.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
137,027
1907.04478
User Detection Performance Analysis for Grant-Free Uplink Transmission in Large-Scale Antenna Systems
In this paper, user detection performance of a grant-free uplink transmission in a large-scale antenna system is analyzed, in which a general grant-free multiple access is considered as the system model and Zadoff-Chu sequence is used for the uplink pilot. The false alarm probabilities of various user detection schemes under the target detection probabilities are evaluated.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
138,116
2107.11171
Aggressive Visual Perching with Quadrotors on Inclined Surfaces
Autonomous Micro Aerial Vehicles (MAVs) have the potential to be employed for surveillance and monitoring tasks. By perching and staring on one or multiple locations aerial robots can save energy while concurrently increasing their overall mission time without actively flying. In this paper, we address the estimation, planning, and control problems for autonomous perching on inclined surfaces with small quadrotors using visual and inertial sensing. We focus on planning and executing of dynamically feasible trajectories to navigate and perch to a desired target location with on board sensing and computation. Our planner also supports certain classes of nonlinear global constraints by leveraging an efficient algorithm that we have mathematically verified. The on board cameras and IMU are concurrently used for state estimation and to infer the relative robot/target localization. The proposed solution runs in real-time on board a limited computational unit. Experimental results validate the proposed approach by tackling aggressive perching maneuvers with flight envelopes that include large excursions from the hover position on inclined surfaces up to 90$^\circ$, angular rates up to 600~deg/s, and accelerations up to 10m/s^2.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
247,525
2409.04593
Paper Copilot: A Self-Evolving and Efficient LLM System for Personalized Academic Assistance
As scientific research proliferates, researchers face the daunting task of navigating and reading vast amounts of literature. Existing solutions, such as document QA, fail to provide personalized and up-to-date information efficiently. We present Paper Copilot, a self-evolving, efficient LLM system designed to assist researchers, based on thought-retrieval, user profile and high performance optimization. Specifically, Paper Copilot can offer personalized research services, maintaining a real-time updated database. Quantitative evaluation demonstrates that Paper Copilot saves 69.92\% of time after efficient deployment. This paper details the design and implementation of Paper Copilot, highlighting its contributions to personalized academic support and its potential to streamline the research process.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
486,434
2109.11572
SAME: Deformable Image Registration based on Self-supervised Anatomical Embeddings
In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration. This work is built on top of a recent algorithm SAM, which is capable of computing dense anatomical/semantic correspondences between two images at the pixel level. Our method is named SAME, which breaks down image registration into three steps: affine transformation, coarse deformation, and deep deformable registration. Using SAM embeddings, we enhance these steps by finding more coherent correspondences, and providing features and a loss function with better semantic guidance. We collect a multi-phase chest computed tomography dataset with 35 annotated organs for each patient and conduct inter-subject registration for quantitative evaluation. Results show that SAME outperforms widely-used traditional registration techniques (Elastix FFD, ANTs SyN) and the learning-based VoxelMorph method by at least 4.7% and 2.7% in Dice scores for two separate tasks of within-contrast-phase and across-contrast-phase registration, respectively. SAME achieves comparable performance to the best traditional registration method, DEEDS (from our evaluation), while being orders of magnitude faster (from 45 seconds to 1.2 seconds).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
256,981
2305.14373
An Ensemble Semi-Supervised Adaptive Resonance Theory Model with Explanation Capability for Pattern Classification
Most semi-supervised learning (SSL) models entail complex structures and iterative training processes as well as face difficulties in interpreting their predictions to users. To address these issues, this paper proposes a new interpretable SSL model using the supervised and unsupervised Adaptive Resonance Theory (ART) family of networks, which is denoted as SSL-ART. Firstly, SSL-ART adopts an unsupervised fuzzy ART network to create a number of prototype nodes using unlabeled samples. Then, it leverages a supervised fuzzy ARTMAP structure to map the established prototype nodes to the target classes using labeled samples. Specifically, a one-to-many (OtM) mapping scheme is devised to associate a prototype node with more than one class label. The main advantages of SSL-ART include the capability of: (i) performing online learning, (ii) reducing the number of redundant prototype nodes through the OtM mapping scheme and minimizing the effects of noisy samples, and (iii) providing an explanation facility for users to interpret the predicted outcomes. In addition, a weighted voting strategy is introduced to form an ensemble SSL-ART model, which is denoted as WESSL-ART. Every ensemble member, i.e., SSL-ART, assigns a different weight to each class based on its performance pertaining to the corresponding class. The aim is to mitigate the effects of training data sequences on all SSL-ART members and improve the overall performance of WESSL-ART. The experimental results on eighteen benchmark data sets, three artificially generated data sets, and a real-world case study indicate the benefits of the proposed SSL-ART and WESSL-ART models for tackling pattern classification problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
367,000
1806.06342
Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin
End-to-end models have been showing superiority in Automatic Speech Recognition (ASR). At the same time, the capacity of streaming recognition has become a growing requirement for end-to-end models. Following these trends, an encoder-decoder recurrent neural network called Recurrent Neural Aligner (RNA) has recently been proposed and shown its competitiveness on two English ASR tasks. However, it is not clear whether RNA can be further improved and applied to other spoken languages. In this work, we explore the applicability of RNA in Mandarin Chinese and present four effective extensions: In the encoder, we redesign the temporal down-sampling and introduce a powerful convolutional structure. In the decoder, we utilize a regularizer to smooth the output distribution and conduct joint training with a language model. On two Mandarin Chinese conversational telephone speech recognition (MTS) datasets, our Extended-RNA obtains promising performance. Particularly, it achieves 27.7% character error rate (CER), which is superior to the current state-of-the-art result on the popular HKUST task.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
100,682
2310.03456
Multi-Resolution Audio-Visual Feature Fusion for Temporal Action Localization
Temporal Action Localization (TAL) aims to identify actions' start, end, and class labels in untrimmed videos. While recent advancements using transformer networks and Feature Pyramid Networks (FPN) have enhanced visual feature recognition in TAL tasks, less progress has been made in the integration of audio features into such frameworks. This paper introduces the Multi-Resolution Audio-Visual Feature Fusion (MRAV-FF), an innovative method to merge audio-visual data across different temporal resolutions. Central to our approach is a hierarchical gated cross-attention mechanism, which discerningly weighs the importance of audio information at diverse temporal scales. Such a technique not only refines the precision of regression boundaries but also bolsters classification confidence. Importantly, MRAV-FF is versatile, making it compatible with existing FPN TAL architectures and offering a significant enhancement in performance when audio data is available.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
397,289
1705.02245
Data Readiness Levels
Application of models to data is fraught. Data-generating collaborators often only have a very basic understanding of the complications of collating, processing and curating data. Challenges include: poor data collection practices, missing values, inconvenient storage mechanisms, intellectual property, security and privacy. All these aspects obstruct the sharing and interconnection of data, and the eventual interpretation of data through machine learning or other approaches. In project reporting, a major challenge is in encapsulating these problems and enabling goals to be built around the processing of data. Project overruns can occur due to failure to account for the amount of time required to curate and collate. But to understand these failures we need to have a common language for assessing the readiness of a particular data set. This position paper proposes the use of data readiness levels: it gives a rough outline of three stages of data preparedness and speculates on how formalisation of these levels into a common language for data readiness could facilitate project management.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
true
false
72,959
2402.06226
N-1 Reduced Optimal Power Flow Using Augmented Hierarchical Graph Neural Network
Optimal power flow (OPF) is used to perform generation redispatch in power system real-time operations. N-1 OPF can ensure safe grid operations under diverse contingency scenarios. For large and intricate power networks with numerous variables and constraints, achieving an optimal solution for real-time N-1 OPF necessitates substantial computational resources. To mitigate this challenge, machine learning (ML) is introduced as an additional tool for predicting congested or heavily loaded lines dynamically. In this paper, an advanced ML model known as the augmented hierarchical graph neural network (AHGNN) was proposed to predict critical congested lines and create N-1 reduced OPF (N-1 ROPF). The proposed AHGNN-enabled N-1 ROPF can result in a remarkable reduction in computing time while retaining the solution quality. Several variations of GNN-based ML models are also implemented as benchmarks to demonstrate the effectiveness of the proposed AHGNN approach. Case studies prove the proposed AHGNN and the associated N-1 ROPF are highly effective in reducing computation time while preserving solution quality, highlighting the promising potential of ML, particularly GNNs, in enhancing power system operations.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
428,226
2210.00453
Neural Graphical Models
Probabilistic Graphical Models are often used to understand dynamics of a system. They can model relationships between features (nodes) and the underlying distribution. Theoretically these models can represent very complex dependency functions, but in practice often simplifying assumptions are made due to computational limitations associated with graph operations. In this work we introduce Neural Graphical Models (NGMs) which attempt to represent complex feature dependencies with reasonable computational costs. Given a graph of feature relationships and corresponding samples, we capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework. We provide efficient learning, inference and sampling algorithms. NGMs can fit generic graph structures including directed, undirected and mixed-edge graphs as well as support mixed input data types. We present empirical studies that show NGMs' capability to represent Gaussian graphical models, perform inference analysis of a lung cancer data and extract insights from a real world infant mortality data provided by Centers for Disease Control and Prevention.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
320,873
2306.05700
Finite-Time Analysis of Minimax Q-Learning for Two-Player Zero-Sum Markov Games: Switching System Approach
The objective of this paper is to investigate the finite-time analysis of a Q-learning algorithm applied to two-player zero-sum Markov games. Specifically, we establish a finite-time analysis of both the minimax Q-learning algorithm and the corresponding value iteration method. To enhance the analysis of both value iteration and Q-learning, we employ the switching system model of minimax Q-learning and the associated value iteration. This approach provides further insights into minimax Q-learning and facilitates a more straightforward and insightful convergence analysis. We anticipate that the introduction of these additional insights has the potential to uncover novel connections and foster collaboration between concepts in the fields of control theory and reinforcement learning communities.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
true
372,302
2109.04991
Detection of GAN-synthesized street videos
Research on the detection of AI-generated videos has focused almost exclusively on face videos, usually referred to as deepfakes. Manipulations like face swapping, face reenactment and expression manipulation have been the subject of intense research with the development of a number of efficient tools to distinguish artificial videos from genuine ones. Much less attention has been paid to the detection of artificial non-facial videos. Yet, new tools for the generation of such kind of videos are being developed at a fast pace and will soon reach the quality level of deepfake videos. The goal of this paper is to investigate the detectability of a new kind of AI-generated videos framing driving street sequences (here referred to as DeepStreets videos), which, by their nature, cannot be analysed with the same tools used for facial deepfakes. Specifically, we present a simple frame-based detector, achieving very good performance on state-of-the-art DeepStreets videos generated by the Vid2vid architecture. Noticeably, the detector retains very good performance on compressed videos, even when the compression level used during training does not match that used for the test videos.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
254,620
2102.05644
Training Vision Transformers for Image Retrieval
Transformers have shown outstanding results for natural language understanding and, more recently, for image classification. We here extend this work and propose a transformer-based approach for image retrieval: we adopt vision transformers for generating image descriptors and train the resulting model with a metric learning objective, which combines a contrastive loss with a differential entropy regularizer. Our results show consistent and significant improvements of transformers over convolution-based approaches. In particular, our method outperforms the state of the art on several public benchmarks for category-level retrieval, namely Stanford Online Product, In-Shop and CUB-200. Furthermore, our experiments on ROxford and RParis also show that, in comparable settings, transformers are competitive for particular object retrieval, especially in the regime of short vector representations and low-resolution images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
219,503
2208.07655
A Hybrid Deep Feature-Based Deformable Image Registration Method for Pathology Images
Pathologists need to combine information from differently stained pathology slices for accurate diagnosis. Deformable image registration is a necessary technique for fusing multi-modal pathology slices. This paper proposes a hybrid deep feature-based deformable image registration framework for stained pathology samples. We first extract dense feature points via the detector-based and detector-free deep learning feature networks and perform points matching. Then, to further reduce false matches, an outlier detection method combining the isolation forest statistical model and the local affine correction model is proposed. Finally, the interpolation method generates the deformable vector field for pathology image registration based on the above matching points. We evaluate our method on the dataset of the Non-rigid Histology Image Registration (ANHIR) challenge, which is co-organized with the IEEE ISBI 2019 conference. Our technique outperforms the traditional approaches by 17% with the Average-Average registration target error (rTRE) reaching 0.0034. The proposed method achieved state-of-the-art performance and ranked 1st in evaluating the test dataset. The proposed hybrid deep feature-based registration method can potentially become a reliable method for pathology image registration.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
313,109
2107.10692
Selective Pseudo-label Clustering
Deep neural networks (DNNs) offer a means of addressing the challenging task of clustering high-dimensional data. DNNs can extract useful features, and so produce a lower dimensional representation, which is more amenable to clustering techniques. As clustering is typically performed in a purely unsupervised setting, where no training labels are available, the question then arises as to how the DNN feature extractor can be trained. The most accurate existing approaches combine the training of the DNN with the clustering objective, so that information from the clustering process can be used to update the DNN to produce better features for clustering. One problem with this approach is that these ``pseudo-labels'' produced by the clustering algorithm are noisy, and any errors that they contain will hurt the training of the DNN. In this paper, we propose selective pseudo-label clustering, which uses only the most confident pseudo-labels for training the~DNN. We formally prove the performance gains under certain conditions. Applied to the task of image clustering, the new approach achieves a state-of-the-art performance on three popular image datasets. Code is available at https://github.com/Lou1sM/clustering.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,373
2105.02605
GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph
The representation learning on textual graph is to generate low-dimensional embeddings for the nodes based on the individual textual features and the neighbourhood information. Recent breakthroughs on pretrained language models and graph neural networks push forward the development of corresponding techniques. The existing works mainly rely on the cascaded model architecture: the textual features of nodes are independently encoded by language models at first; the textual embeddings are aggregated by graph neural networks afterwards. However, the above architecture is limited due to the independent modeling of textual features. In this work, we propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models. With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow, making each node's semantic accurately comprehended from the global perspective. In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on graph. Extensive evaluations are conducted on three large-scale benchmark datasets, where GraphFormers outperform the SOTA baselines with comparable running efficiency.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
233,870
1510.07609
Efficient Learning by Directed Acyclic Graph For Resource Constrained Prediction
We study the problem of reducing test-time acquisition costs in classification systems. Our goal is to learn decision rules that adaptively select sensors for each example as necessary to make a confident prediction. We model our system as a directed acyclic graph (DAG) where internal nodes correspond to sensor subsets and decision functions at each node choose whether to acquire a new sensor or classify using the available measurements. This problem can be naturally posed as an empirical risk minimization over training data. Rather than jointly optimizing such a highly coupled and non-convex problem over all decision nodes, we propose an efficient algorithm motivated by dynamic programming. We learn node policies in the DAG by reducing the global objective to a series of cost sensitive learning problems. Our approach is computationally efficient and has proven guarantees of convergence to the optimal system for a fixed architecture. In addition, we present an extension to map other budgeted learning problems with large number of sensors to our DAG architecture and demonstrate empirical performance exceeding state-of-the-art algorithms for data composed of both few and many sensors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
48,220
2210.05922
A Unified Framework for Alternating Offline Model Training and Policy Learning
In offline model-based reinforcement learning (offline MBRL), we learn a dynamic model from historically collected data, and subsequently utilize the learned model and fixed datasets for policy learning, without further interacting with the environment. Offline MBRL algorithms can improve the efficiency and stability of policy learning over the model-free algorithms. However, in most of the existing offline MBRL algorithms, the learning objectives for the dynamic models and the policies are isolated from each other. Such an objective mismatch may lead to inferior performance of the learned agents. In this paper, we address this issue by developing an iterative offline MBRL framework, where we maximize a lower bound of the true expected return, by alternating between dynamic-model training and policy learning. With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets. Source code is publicly released.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
323,060
2101.05337
A Survey on Simulators for Testing Self-Driving Cars
A rigorous and comprehensive testing plays a key role in training self-driving cars to handle variety of situations that they are expected to see on public roads. The physical testing on public roads is unsafe, costly, and not always reproducible. This is where testing in simulation helps fill the gap, however, the problem with simulation testing is that it is only as good as the simulator used for testing and how representative the simulated scenarios are of the real environment. In this paper, we identify key requirements that a good simulator must have. Further, we provide a comparison of commonly used simulators. Our analysis shows that CARLA and LGSVL simulators are the current state-of-the-art simulators for end to end testing of self-driving cars for the reasons mentioned in this paper. Finally, we also present current challenges that simulation testing continues to face as we march towards building fully autonomous cars.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
215,394
2203.09553
Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation
Federated learning (FL) can be essential in knowledge representation, reasoning, and data mining applications over multi-source knowledge graphs (KGs). A recent study FedE first proposes an FL framework that shares entity embeddings of KGs across all clients. However, entity embedding sharing from FedE would incur a severe privacy leakage. Specifically, the known entity embedding can be used to infer whether a specific relation between two entities exists in a private client. In this paper, we introduce a novel attack method that aims to recover the original data based on the embedding information, which is further used to evaluate the vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE. Besides, relation embedding sharing can significantly reduce the communication cost due to its smaller size of queries. We conduct extensive experiments to evaluate FedR with five different KG embedding models and three datasets. Compared to FedE, FedR achieves similar utility and significant improvements regarding privacy-preserving effect and communication efficiency on the link prediction task.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
286,190
1910.09636
Real-Time Multi-Diver Tracking and Re-identification for Underwater Human-Robot Collaboration
Autonomous underwater robots working with teams of human divers may need to distinguish between different divers, e.g. to recognize a lead diver or to follow a specific team member. This paper describes a technique that enables autonomous underwater robots to track divers in real time as well as to reidentify them. The approach is an extension of Simple Online Realtime Tracking (SORT) with an appearance metric (deep SORT). Initial diver detection is performed with a custom CNN designed for realtime diver detection, and appearance features are subsequently extracted for each detected diver. Next, realtime tracking-by-detection is performed with an extension of the deep SORT algorithm. We evaluate this technique on a series of videos of divers performing human-robot collaborative tasks and show that our methods result in more divers being accurately identified during tracking. We also discuss the practical considerations of applying multi-person tracking to on-board autonomous robot operations, and we consider how failure cases can be addressed during on-board tracking.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
150,246
1912.05916
Deep-Learning Estimation of Band Gap with the Reading-Periodic-Table Method and Periodic Convolution Layer
We verified that the deep learning method named reading periodic table introduced by ref. Deep Learning Model for Finding New Superconductors, which utilizes deep learning to read the periodic table and the laws of the elements, is applicable not only for superconductors, for which the method was originally applied, but also for other problems of materials, by demonstrating band gap estimations. We then extended the method to learn the laws better by directly learning the cylindrical periodicity between the right- and left-most columns in the periodic table at the learning representation level, that is, by considering the left- and right-most columns to be adjacent to each other. Thus, while the original method handles the table as is, the extended method treats the periodic table as if its two edges are connected. This is achieved using novel layers named periodic convolution layers, which can handle inputs exhibiting periodicity and may be applied to other problems related to computer vision, time series, and so on for data that possess some periodicity. In the reading periodic table method, no material feature or descriptor is required as input. We demonstrated two types of deep learning estimation: methods to estimate the existence of a band gap, and methods to estimate the value of the band gap when the existence of the band gap in the materials is known. Finally, we discuss the limitations of the dataset and model evaluation method. We may be unable to distinguish good models based on the random train-test split scheme; thus, we must prepare an appropriate dataset where the training and test data are temporally separate. The code and data are open.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
157,228
1808.09902
Extreme Value Theory for Open Set Classification -- GPD and GEV Classifiers
Classification tasks usually assume that all possible classes are present during the training phase. This is restrictive if the algorithm is used over a long time and possibly encounters samples from unknown classes. The recently introduced extreme value machine, a classifier motivated by extreme value theory, addresses this problem and achieves competitive performance in specific cases. We show that this algorithm can fail when the geometries of known and unknown classes differ. To overcome this problem, we propose two new algorithms relying on approximations from extreme value theory. We show the effectiveness of our classifiers in simulations and on the LETTER and MNIST data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
106,290
1906.02944
Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning
Object recognition in the real-world requires handling long-tailed or even open-ended data. An ideal visual system needs to recognize the populated head visual concepts reliably and meanwhile efficiently learn about emerging new tail categories with a few training instances. Class-balanced many-shot learning and few-shot learning tackle one side of this problem, by either learning strong classifiers for head or learning to learn few-shot classifiers for the tail. In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- a model during the deployment is required to learn about tail categories with few shots and simultaneously classify the head classes. We propose the ClAssifier SynThesis LEarning (CASTLE), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of head classes with a shared neural dictionary, shedding light upon the inductive GFSL. Furthermore, we propose an adaptive version of CASTLE (ACASTLE) that adapts the head classifiers conditioned on the incoming tail training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, ACASTLE can handle GFSL with classes from heterogeneous domains effectively. CASTLE and ACASTLE demonstrate superior performances than existing GFSL algorithms and strong baselines on MiniImageNet as well as TieredImageNet datasets. More interestingly, they outperform previous state-of-the-art methods when evaluated with standard few-shot learning criteria.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
134,236
1506.07216
Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality
We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions. In the distributed sparse Gaussian mean estimation problem, each of the $m$ machines receives $n$ data points from a $d$-dimensional Gaussian distribution with unknown mean $\theta$ which is promised to be $k$-sparse. The machines communicate by message passing and aim to estimate the mean $\theta$. We provide a tight (up to logarithmic factors) tradeoff between the estimation error and the number of bits communicated between the machines. This directly leads to a lower bound for the distributed \textit{sparse linear regression} problem: to achieve the statistical minimax error, the total communication is at least $\Omega(\min\{n,d\}m)$, where $n$ is the number of observations that each machine receives and $d$ is the ambient dimension. These lower bounds improve upon [Sha14,SD'14] by allowing a multi-round iterative communication model. We also give the first optimal simultaneous protocol in the dense case for mean estimation. As our main technique, we prove a \textit{distributed data processing inequality}, as a generalization of usual data processing inequalities, which might be of independent interest and useful for other problems.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
44,491
1710.04826
WeText: Scene Text Detection under Weak Supervision
The requirement for large amounts of annotated training data has become a common constraint on various deep learning systems. In this paper, we propose a weakly supervised scene text detection method (WeText) that trains robust and accurate scene text detection models by learning from unannotated or weakly annotated data. With a "light" supervised model trained on a small fully annotated dataset, we explore semi-supervised and weakly supervised learning on a large unannotated dataset and a large weakly annotated dataset, respectively. For the semi-supervised learning, the light supervised model is applied to the unannotated dataset to search for more character training samples, which are further combined with the small annotated dataset to retrain a superior character detection model. For the weakly supervised learning, the character searching is guided by high-level annotations of words/text lines that are widely available and also much easier to prepare. In addition, we design a unified scene character detector by adapting regression based deep networks, which greatly relieves the error accumulation issue that widely exists in most traditional approaches. Extensive experiments across different unannotated and weakly annotated datasets show that the scene text detection performance can be clearly boosted under both scenarios, where the weakly supervised learning can achieve the state-of-the-art performance by using only 229 fully annotated scene text images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
82,539
2312.08398
Accelerating Meta-Learning by Sharing Gradients
The success of gradient-based meta-learning is primarily attributed to its ability to leverage related tasks to learn task-invariant information. However, the absence of interactions between different tasks in the inner loop leads to task-specific over-fitting in the initial phase of meta-training. While this is eventually corrected by the presence of these interactions in the outer loop, it comes at a significant cost of slower meta-learning. To address this limitation, we explicitly encode task relatedness via an inner loop regularization mechanism inspired by multi-task learning. Our algorithm shares gradient information from previously encountered tasks as well as concurrent tasks in the same task batch, and scales their contribution with meta-learned parameters. We show using two popular few-shot classification datasets that gradient sharing enables meta-learning under bigger inner loop learning rates and can accelerate the meta-training process by up to 134%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,292
2211.13313
Run-Based Semantics for RPQs
The formalism of RPQs (regular path queries) is an important building block of most query languages for graph databases. RPQs are generally evaluated under homomorphism semantics; in particular, only the endpoints of the matched walks are returned. Practical applications often need the full matched walks to compute aggregate values. In those cases, homomorphism semantics are not suitable since the number of matched walks can be infinite. Hence, graph-database engines adapt the semantics of RPQs, often neglecting theoretical red flags. For instance, the popular query language Cypher uses trail semantics, which ensures the result to be finite at the cost of making computational problems intractable. We propose a new kind of semantics for RPQs, including in particular simple-run and binding-trail semantics, as a candidate to reconcile theoretical considerations with practical aspirations. Both ensure the output to be finite in a way that is compatible with homomorphism semantics: projection on endpoints coincides with homomorphism semantics. Hence, testing emptiness of the result is tractable, and known methods readily apply. Moreover, simple-run and binding-trail semantics support bag semantics, and enumeration of the bag of results is tractable.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
332,429