Dataset schema (one record per paper):

  id — string (length 9 to 16)
  title — string (length 4 to 278)
  abstract — string (length 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other — bool (2 classes each; multi-label category flags)
  __index_level_0__ — int64 (0 to 541k)

Each record below lists the id, title, and abstract, followed by a labels line giving the category flags that are true (all other flags are false) and the record's __index_level_0__.
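To work with records like those below programmatically, one way is to collapse the 18 boolean category columns into a per-paper label list. The following is a minimal sketch assuming the split has been exported to a local Parquet file named arxiv_labels.parquet and that pandas is available; the file name and loading path are assumptions, not part of this dataset card.

```python
# Minimal sketch (assumption: a local Parquet export named
# "arxiv_labels.parquet"; adjust the path/loader to however you
# actually obtain the data).
import pandas as pd

# The 18 category columns, in the order given by the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

df = pd.read_parquet("arxiv_labels.parquet")  # hypothetical file name

# Collapse the boolean flags into a list of label names per paper.
df["labels"] = df[CATEGORY_COLUMNS].apply(
    lambda row: [col for col, flag in row.items() if flag], axis=1
)

print(df[["id", "title", "labels"]].head())
```

With the flags collapsed this way, the multi-label targets can be filtered directly or binarized again with, e.g., scikit-learn's MultiLabelBinarizer.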
2002.00718
Modeling the Background for Incremental Learning in Semantic Segmentation
Despite their effectiveness in a wide range of tasks, deep architectures suffer from some important limitations. In particular, they are vulnerable to catastrophic forgetting, i.e. they perform poorly when they are required to update their model as new classes become available but the original training set is not retained. This paper addresses this problem in the context of semantic segmentation. Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e. pixels that do not belong to any other class) exhibit a semantic distribution shift. In this work we revisit classical incremental learning methods, proposing a new distillation-based framework which explicitly accounts for this shift. Furthermore, we introduce a novel strategy to initialize the classifier's parameters, thus preventing biased predictions toward the background class. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC 2012 and ADE20K datasets, significantly outperforming state-of-the-art incremental learning methods.
labels: cs.CV
__index_level_0__: 162,434
2205.08833
Speckle Image Restoration without Clean Data
Speckle noise is an inherent disturbance in coherent imaging systems such as digital holography, synthetic aperture radar, optical coherence tomography, or ultrasound systems. These systems usually produce only a single observation per view angle of the object of interest, making it difficult to leverage statistics across observations. We propose a novel image restoration algorithm that can perform speckle noise removal without clean data and does not require multiple noisy observations of the same view angle. Our proposed method can also be applied when the noise distribution is not known a priori. We demonstrate that our method is especially well-suited for spectral images, first validating it on a synthetic dataset and then applying it to real-world digital holography samples. The results are superior in both quantitative measurement and visual inspection compared to several widely applied baselines. Our method even shows promising results across different speckle noise strengths, all without the need for clean data.
labels: cs.CV
__index_level_0__: 297,073
2107.07071
A Combinatorial Interpretation for the Shor-Laflamme Weight Enumerators of CWS Codes
We show that one of the Shor-Laflamme weight enumerators of a codeword stabilized quantum code may be interpreted as the distance enumerator of an associated classical code.
labels: cs.IT
__index_level_0__: 246,301
1301.2959
New elements for a network (including brain) general theory during learning period
This study deals with the evolution of so-called 'intelligent' networks (insect societies without a leader, cells of an organism, the brain, ...) during their learning period. First we briefly summarize Version 2 (published in French), whose main characteristics are: 1) A network connected to its environment is considered as immersed in an information field created by this environment, which thereby dictates the learning constraints to the network. 2) The formalism draws its inspiration from that of quantum field theory (principle of stationary action, gauge fields, invariance under symmetry transformations, ...). 3) We obtain Lagrange equations whose solutions describe the network's evolution over the whole learning period. 4) Then, continuing in the same formal spirit, we suggest other lines of study capable of advancing knowledge in this area. In a second part, after recalling the points to be improved, we present Version 5, which we believe brings relevant improvements. Indeed: 5) We consider weighted averages of the variables, which introduces probabilities. 6) We define two observables (L, the average information flux, and A, the activity of the network) that could be measured and thus compared with experimental results. 7) We find that L, the weighted average of information flows, is an invariant. 8) Finally, we propose two expressions for the conactance, from which we deduce the corresponding Lagrange equations that must be solved to determine the evolution of the weighted averages under consideration. At the present stage, however, we think progress can only come from carrying out experiments (see projects like the Human Brain Project) and from discovering invariants and symmetries that would allow us, as in physics, to classify networks and, above all, to better understand the connections between them. Indeed, and this is one of the research directions we propose, the underlying problem is to understand how, after their learning period, several networks can connect together to produce, in the case of the brain for instance, what we call mental states.
labels: cs.NE
__index_level_0__: 21,058
2405.18305
Volt-PF Control Mode for Distribution Feeder Voltage Management Under High Penetration of Distributed Energy Resources
Volt-VAr control is a popular method for mitigating overvoltage violations caused by high penetration of distributed energy resources (DERs) in distribution feeders. An inherent limitation of volt-VAr control is that the reactive power (Q) absorbed/injected by the DER is determined based only on the terminal voltage, without considering the active power (P) generated by the DER. This leads to an inequitable burden of Q support, in the sense that those DERs generating lower P, and hence contributing less to overvoltage issues, may be required to provide more than their share of Q support. The resulting power factor (PF) of these DERs is required to vary over a wide range, which many current DERs do not support. A new control scheme, namely volt-PF control, is proposed here, where the Q support is inherently a function of both the voltage and the P from the DER, which alleviates the above concerns while limiting the PF variation to a narrow range of 0.9 to 1. The proposed scheme is validated through extensive static and dynamic simulations on a real, large (8000+ nodes) feeder with very high penetration (>200%) of DERs. The implementation of the new scheme in new and existing commercial hardware inverters is described.
labels: cs.SY
__index_level_0__: 458,365
2205.14987
A Continuous Time Framework for Discrete Denoising Models
We provide the first complete continuous time framework for denoising diffusion models of discrete data. This is achieved by formulating the forward noising process and corresponding reverse time generative process as Continuous Time Markov Chains (CTMCs). The model can be efficiently trained using a continuous time version of the ELBO. We simulate the high dimensional CTMC using techniques developed in chemical physics and exploit our continuous time framework to derive high performance samplers that we show can outperform discrete time methods for discrete data. The continuous time treatment also enables us to derive a novel theoretical result bounding the error between the generated sample distribution and the true data distribution.
labels: cs.LG
__index_level_0__: 299,564
1204.6583
A Conjugate Property between Loss Functions and Uncertainty Sets in Classification Problems
In binary classification problems, two main approaches have been proposed: the loss function approach and the uncertainty set approach. The loss function approach underlies major learning algorithms such as the support vector machine (SVM) and boosting methods. The loss function represents the penalty incurred by the decision function on the training samples, and the learning algorithm minimizes its empirical mean to obtain the classifier. Against the backdrop of developments in mathematical programming, learning algorithms based on loss functions are now widely applied to real-world data analysis, and their statistical properties are well understood thanks to a large body of theoretical work. On the other hand, the uncertainty set approach is used in the hard-margin SVM, the mini-max probability machine (MPM), and the maximum margin MPM. In this approach, an uncertainty set is first defined for each binary label based on the training samples; the best separating hyperplane between the two uncertainty sets is then employed as the decision function. This can be regarded as an extension of the maximum-margin approach. The uncertainty set approach has been studied as an application of robust optimization in the field of mathematical programming, but the statistical properties of learning algorithms with uncertainty sets have not been studied intensively. In this paper, we consider the relation between these two approaches. We point out that the uncertainty set can be described using the level set of the conjugate of the loss function, and based on this relation we study the statistical properties of learning algorithms using uncertainty sets.
labels: cs.LG
__index_level_0__: 15,733
2011.07960
Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle
Syntax is fundamental to our thinking about language. Failing to capture the structure of input language could lead to generalization problems and over-parametrization. In the present work, we propose a new syntax-aware language model: Syntactic Ordered Memory (SOM). The model explicitly models the structure with an incremental parser and maintains the conditional probability setting of a standard language model (left-to-right). To train the incremental parser and avoid exposure bias, we also propose a novel dynamic oracle, so that SOM is more robust to wrong parsing decisions. Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests, while using fewer parameters than other models.
labels: cs.LG, cs.CL
__index_level_0__: 206,724
1008.2186
RDFViewS: A Storage Tuning Wizard for RDF Applications
In recent years, the significant growth of RDF data used in numerous applications has made its efficient and scalable manipulation an important issue. In this paper, we present RDFViewS, a system capable of choosing the most suitable views to materialize, in order to minimize the query response time for a specific SPARQL query workload, while taking into account the view maintenance cost and storage space constraints. Our system employs practical algorithms and heuristics to navigate through the search space of potential view configurations, and exploits the possibly available semantic information - expressed via an RDF Schema - to ensure the completeness of the query evaluation.
labels: cs.AI, cs.DB
__index_level_0__: 7,265
2006.14032
Compositional Explanations of Neurons
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior. Compared to prior work that uses atomic labels as explanations, analyzing neurons compositionally allows us to more precisely and expressively characterize their behavior. We use this procedure to answer several questions on interpretability in models for vision and natural language processing. First, we examine the kinds of abstractions learned by neurons. In image classification, we find that many neurons learn highly abstract but semantically coherent visual concepts, while other polysemantic neurons detect multiple unrelated features; in natural language inference (NLI), neurons learn shallow lexical heuristics from dataset biases. Second, we see whether compositional explanations give us insight into model performance: vision neurons that detect human-interpretable concepts are positively correlated with task performance, while NLI neurons that fire for shallow heuristics are negatively correlated with task performance. Finally, we show how compositional explanations provide an accessible way for end users to produce simple "copy-paste" adversarial examples that change model behavior in predictable ways.
labels: cs.AI, cs.LG, cs.CL, cs.CV
__index_level_0__: 184,100
2410.15986
A quantitative Robbins-Siegmund theorem
The Robbins-Siegmund theorem is one of the most important results in stochastic optimization, where it is widely used to prove the convergence of stochastic algorithms. We provide a quantitative version of the theorem, establishing a bound on how far one needs to look in order to locate a region of metastability in the sense of Tao. Our proof involves a metastable analogue of Doob's theorem for $L_1$-supermartingales along with a series of technical lemmas that make precise how quantitative information propagates through sums and products of stochastic processes. In this way, our paper establishes a general methodology for finding metastable bounds for stochastic processes that can be reduced to supermartingales, and therefore for obtaining quantitative convergence information across a broad class of stochastic algorithms whose convergence proof relies on some variation of the Robbins-Siegmund theorem. We conclude by discussing how our general quantitative result might be used in practice.
labels: cs.LG
__index_level_0__: 500,827
1707.05587
Graph learning under sparsity priors
Graph signals offer a very generic and natural representation for data that lives on networks or irregular structures. The actual data structure is however often unknown a priori but can sometimes be estimated from the knowledge of the application domain. If this is not possible, the data structure has to be inferred from the mere signal observations. This is exactly the problem that we address in this paper, under the assumption that the graph signals can be represented as a sparse linear combination of a few atoms of a structured graph dictionary. The dictionary is constructed on polynomials of the graph Laplacian, which can sparsely represent a general class of graph signals composed of localized patterns on the graph. We formulate a graph learning problem, whose solution provides an ideal fit between the signal observations and the sparse graph signal model. As the problem is non-convex, we propose to solve it by alternating between a signal sparse coding and a graph update step. We provide experimental results that outline the good graph recovery performance of our method, which generally compares favourably to other recent network inference algorithms.
labels: cs.SI, cs.LG
__index_level_0__: 77,261
1909.08994
Scalable Deep Unsupervised Clustering with Concrete GMVAEs
Discrete random variables are natural components of probabilistic clustering models. A number of VAE variants with discrete latent variables have been developed. Training such methods requires marginalizing over the discrete latent variables, causing training time complexity to be linear in the number of clusters. By applying a continuous relaxation to the discrete variables in these methods, we can reduce the training time complexity to constant in the number of clusters used. We demonstrate that in practice, for one such method, the Gaussian Mixture VAE, the use of a continuous relaxation has no negative effect on the quality of the clustering but provides a substantial reduction in training time, reducing training time on CIFAR-100 with 20 clusters from 47 hours to less than 6 hours.
labels: cs.LG
__index_level_0__: 146,115
1304.1492
Map Learning with Indistinguishable Locations
Nearly all spatial reasoning problems involve uncertainty of one sort or another. Uncertainty arises due to the inaccuracies of sensors used in measuring distances and angles. We refer to this as directional uncertainty. Uncertainty also arises in combining spatial information when one location is mistakenly identified with another. We refer to this as recognition uncertainty. Most problems in constructing spatial representations (maps) for the purpose of navigation involve both directional and recognition uncertainty. In this paper, we show that a particular class of spatial reasoning problems involving the construction of representations of large-scale space can be solved efficiently even in the presence of directional and recognition uncertainty. We pay particular attention to the problems that arise due to recognition uncertainty.
labels: cs.AI
__index_level_0__: 23,525
2210.15470
DAGKT: Difficulty and Attempts Boosted Graph-based Knowledge Tracing
In the field of intelligent education, knowledge tracing (KT) has attracted increasing attention; it estimates and traces students' mastery of knowledge concepts in order to provide high-quality education. In KT, there are natural graph structures among questions and knowledge concepts, so some studies have explored graph neural networks (GNNs) to improve the performance of KT models that do not use graph structure. However, most of them ignore both the questions' difficulties and students' attempts at questions. In practice, questions covering the same knowledge concepts can have different difficulties, and students' different attempts also reflect different levels of knowledge mastery. In this paper, we propose a difficulty and attempts boosted graph-based KT (DAGKT) model that uses the rich information in students' records. Moreover, a novel method is designed to establish the question similarity relationship, inspired by the F1 score. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed DAGKT.
labels: cs.LG, cs.CY
__index_level_0__: 326,968
2303.17619
Gaze-based Attention Recognition for Human-Robot Collaboration
Attention (and distraction) recognition is a key factor in improving human-robot collaboration. We present an assembly scenario where a human operator and a cobot collaborate equally to piece together a gearbox. The setup provides multiple opportunities for the cobot to adapt its behavior depending on the operator's attention, which can improve the collaboration experience and reduce psychological strain. As a first step, we recognize the areas in the workspace that the human operator is paying attention to, and consequently, detect when the operator is distracted. We propose a novel deep-learning approach to develop an attention recognition model. First, we train a convolutional neural network to estimate the gaze direction using a publicly available image dataset. Then, we use transfer learning with a small dataset to map the gaze direction onto pre-defined areas of interest. Models trained using this approach performed very well in leave-one-subject-out evaluation on the small dataset. We performed an additional validation of our models using the video snippets collected from participants working as an operator in the presented assembly scenario. Although the recall for the Distracted class was lower in this case, the models performed well in recognizing the areas the operator paid attention to. To the best of our knowledge, this is the first work that validated an attention recognition model using data from a setting that mimics industrial human-robot collaboration. Our findings highlight the need for validation of attention recognition solutions in such full-fledged, non-guided scenarios.
labels: cs.HC, cs.AI, cs.RO, cs.CV
__index_level_0__: 355,288
2312.10787
Learning Discrete-Time Major-Minor Mean Field Games
Recent techniques based on Mean Field Games (MFGs) allow the scalable analysis of multi-player games with many similar, rational agents. However, standard MFGs remain limited to homogeneous players that weakly influence each other, and cannot model major players that strongly influence other players, severely limiting the class of problems that can be handled. We propose a novel discrete time version of major-minor MFGs (M3FGs), along with a learning algorithm based on fictitious play and partitioning the probability simplex. Importantly, M3FGs generalize MFGs with common noise and can handle not only random exogenous environment states but also major players. A key challenge is that the mean field is stochastic and not deterministic as in standard MFGs. Our theoretical investigation verifies both the M3FG model and its algorithmic solution, showing firstly the well-posedness of the M3FG model starting from a finite game of interest, and secondly convergence and approximation guarantees of the fictitious play algorithm. Then, we empirically verify the obtained theoretical results, ablating some of the theoretical assumptions made, and show successful equilibrium learning in three example problems. Overall, we establish a learning framework for a novel and broad class of tractable games.
labels: cs.LG, cs.MA, Other
__index_level_0__: 416,313
2405.03252
A Universal List Decoding Algorithm with Application to Decoding of Polar Codes
This paper is concerned with the guessing codeword decoding (GCD) of linear block codes. Compared with guessing noise decoding (GND), which is only efficient for high-rate codes, GCD is efficient not only for high-rate codes but also for low-rate codes. We prove that GCD typically requires fewer queries than GND. Compared with ordered statistics decoding (OSD), GCD does not require online Gaussian elimination (GE). In addition to limiting the maximum number of searches, we suggest limiting the radius of searches in terms of soft weights or tolerated performance loss to further reduce the decoding complexity, resulting in the so-called truncated GCD. The performance gap between the truncated GCD and optimal decoding can be upper bounded approximately by the saddlepoint approach or other numerical approaches. The derived upper bound captures the relationship between the performance and the decoding parameters, enabling us to balance performance and complexity by optimizing the decoding parameters of the truncated GCD. We also introduce a parallel implementation of the (truncated) GCD algorithm to reduce decoding latency without compromising performance. Another contribution of this paper is the application of GCD to polar codes. We propose a multiple-bit-wise decoding algorithm over a pruned tree for polar codes, referred to as the successive-cancellation list (SCL) decoding algorithm by GCD. First, we present a strategy for pruning the conventional polar decoding tree based on complexity analysis rather than specific bit patterns. Then we apply the GCD algorithm in parallel, aided by early stopping criteria, to the leaves of the pruned tree. Simulation results show that, without any performance loss as justified by analysis, the proposed decoding algorithm can significantly reduce the decoding latency of polar codes.
labels: cs.IT
__index_level_0__: 452,115
2401.14688
Taiyi-Diffusion-XL: Advancing Bilingual Text-to-Image Generation with Large Vision-Language Model Support
Recent advancements in text-to-image models have significantly enhanced image generation capabilities, yet a notable gap persists among open-source models in bilingual or Chinese language support. To address this need, we present Taiyi-Diffusion-XL, a new Chinese and English bilingual text-to-image model developed by extending the capabilities of CLIP and Stable-Diffusion-XL through a process of bilingual continuous pre-training. This approach includes the efficient expansion of vocabulary by integrating the most frequently used Chinese characters into CLIP's tokenizer and embedding layers, coupled with an absolute position encoding expansion. Additionally, we enrich text prompts with a large vision-language model, leading to better image captions and higher visual quality. These enhancements are subsequently applied to downstream text-to-image models. Our empirical results indicate that the developed CLIP model excels in bilingual image-text retrieval. Furthermore, the bilingual image generation capabilities of Taiyi-Diffusion-XL surpass previous models. This research leads to the development and open-sourcing of the Taiyi-Diffusion-XL model, representing a notable advancement in the field of image generation, particularly for Chinese language applications. This contribution is a step forward in addressing the need for more diverse language support in multimodal research. The model and demonstration are made publicly available at https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-XL-3.5B/, fostering further research and collaboration in this domain.
labels: cs.CL
__index_level_0__: 424,190
1403.4735
On the Automorphisms of Order 15 for a Binary Self-Dual [96, 48, 20] Code
The structure of binary self-dual codes invariant under the action of a cyclic group of order $pq$ for odd primes $p\neq q$ is considered. As an application we prove the nonexistence of an extremal self-dual $[96, 48, 20]$ code with an automorphism of order $15$, which closes a gap in "On extremal self-dual codes of length 96," IEEE Trans. Inf. Theory, vol. 57, pp. 6820-6823, 2011.
labels: cs.IT
__index_level_0__: 31,673
1401.0282
Design of a GIS-based Assistant Software Agent for the Incident Commander to Coordinate Emergency Response Operations
Problem: This paper addresses the design of an intelligent software system for the IC (incident commander) of a team in order to coordinate actions of agents (field units or robots) in the domain of emergency/crisis response operations. Objective: This paper proposes GICoordinator. It is a GIS-based assistant software agent that assists and collaborates with the human planner in strategic planning and macro task assignment for centralized multi-agent coordination. Method: Our approach to designing GICoordinator was to analyze the problem, design a complete data model, design an architecture for GICoordinator, specify the required capabilities of human and system in coordination problem solving, specify development tools, and deploy. Result: The result was an architecture/design of GICoordinator that contains system requirements. Findings: GICoordinator efficiently integrates geoinformatics with artificial intelligence techniques in order to provide a spatial intelligent coordinator system for an IC to efficiently coordinate and control agents by making macro/strategic decisions. The results define a framework for future work to develop this system.
labels: cs.AI, cs.MA
__index_level_0__: 29,545
2203.14912
Advanced Skills through Multiple Adversarial Motion Priors in Reinforcement Learning
In recent years, reinforcement learning (RL) has shown outstanding performance for locomotion control of highly articulated robotic systems. Such approaches typically involve tedious reward function tuning to achieve the desired motion style. Imitation learning approaches such as adversarial motion priors aim to reduce this problem by encouraging a pre-defined motion style. In this work, we present an approach to augment the concept of adversarial motion prior-based RL to allow for multiple, discretely switchable styles. We show that multiple styles and skills can be learned simultaneously without notable performance differences, even in combination with motion data-free skills. Our approach is validated in several real-world experiments with a wheeled-legged quadruped robot showing skills learned from existing RL controllers and trajectory optimization, such as ducking and walking, and novel skills such as switching between a quadrupedal and humanoid configuration. For the latter skill, the robot is required to stand up, navigate on two wheels, and sit down. Instead of tuning the sit-down motion, we verify that a reverse playback of the stand-up movement helps the robot discover feasible sit-down behaviors and avoids tedious reward function tuning.
labels: cs.AI, cs.LG, cs.RO
__index_level_0__: 288,172
2104.10307
Analyzing the Effect of Persistent Asset Switches on a Class of Hybrid-Inspired Optimization Algorithms
Convex optimization challenges are currently pervasive in many science and engineering domains. In many applications of convex optimization, such as those involving multi-agent systems and resource allocation, the objective function can persistently switch during the execution of an optimization algorithm. Motivated by such applications, we analyze the effect of persistently switching objectives in continuous-time optimization algorithms. In particular, we take advantage of existing robust stability results for switched systems with distinct equilibria and extend these results to systems described by differential inclusions, making the results applicable to recent optimization algorithms that employ differential inclusions for improving efficiency and/or robustness. Within the framework of hybrid systems theory, we provide an accurate characterization, in terms of Omega-limit sets, of the set to which the optimization dynamics converge. Finally, by considering the switching signal to be constrained in its average dwell time, we establish semi-global practical asymptotic stability of these sets with respect to the dwell-time parameter.
labels: cs.SY
__index_level_0__: 231,525
1903.01148
Asymmetric Single Magnitude Four Error Correcting Codes
The limited-magnitude asymmetric error model is well suited for flash memory. In this paper, we consider the construction of asymmetric codes correcting a single error over $\mathbb{Z}_{2^{k}r}$, based on the so-called $B_{1}[4](2^{k}r)$ sets. In fact, we reduce the construction of a maximal-size $B_{1}[4](2^{k}r)$ set for $k\geq3$ to the construction of a maximal-size $B_{1}[4](2^{k-3}r)$ set. Finally, we give an explicit formula for a maximal-size $B_{1}[4](4r)$ set and some lower bounds on the size of a maximal $B_{1}[4](2r)$ set. By computer search up to $q\leq 10^{6}$, we conjecture that those lower bounds are tight.
labels: cs.IT
__index_level_0__: 123,199
2312.04609
Short-term prediction of construction waste transport activities using AI-Truck
Construction waste hauling trucks (or 'slag trucks') are among the most commonly seen heavy-duty diesel vehicles in urban streets; they not only produce significant carbon, NO$_{\textbf{x}}$ and PM$_{\textbf{2.5}}$ emissions but are also a major source of on-road and on-site fugitive dust. Slag trucks are subject to a series of spatial and temporal access restrictions by local traffic and environmental policies. This paper addresses the practical problem of predicting levels of slag truck activity at a city scale during heavy pollution episodes, such that environmental law enforcement units can take timely and proactive measures against localized truck aggregation. A deep ensemble learning framework (coined AI-Truck) is designed, which employs a soft vote integrator that utilizes Bi-LSTM, TCN, STGCN, and PDFormer as base classifiers. AI-Truck employs a combination of downsampling and weighted loss to address sample imbalance, and utilizes truck trajectories to extract more accurate and effective geographic features. The framework was deployed for truck activity prediction at a resolution of 1km$\times$1km$\times$0.5h in a 255 km$^{\textbf{2}}$ area in Chengdu, China. As a classifier, AI-Truck achieves a macro F1 of 0.747 in predicting levels of slag truck activity for a 0.5-h prediction length, and enables personnel to spot high-activity locations 1.5 hrs ahead with over 80\% accuracy.
labels: cs.AI, cs.LG
__index_level_0__: 413,757
2203.16069
A unified analysis framework for iterative parallel-in-time algorithms
Parallel-in-time integration has been the focus of intensive research efforts over the past two decades due to the advent of massively parallel computer architectures and the scaling limits of purely spatial parallelization. Various iterative parallel-in-time (PinT) algorithms have been proposed, like Parareal, PFASST, MGRIT, and Space-Time Multi-Grid (STMG). These methods have been described using different notations, and the convergence estimates that are available are difficult to compare. We describe Parareal, PFASST, MGRIT and STMG for the Dahlquist model problem using a common notation and give precise convergence estimates using generating functions. This allows us, for the first time, to directly compare their convergence. We prove that all four methods eventually converge super-linearly, and also compare them numerically. The generating function framework provides further opportunities to explore and analyze existing and new methods.
labels: cs.CE, Other
__index_level_0__: 288,642
2501.07334
Anonymization of Documents for Law Enforcement with Machine Learning
The steadily increasing utilization of data-driven methods and approaches in areas that handle sensitive personal information such as in law enforcement mandates an ever increasing effort in these institutions to comply with data protection guidelines. In this work, we present a system for automatically anonymizing images of scanned documents, reducing manual effort while ensuring data protection compliance. Our method considers the viability of further forensic processing after anonymization by minimizing automatically redacted areas by combining automatic detection of sensitive regions with knowledge from a manually anonymized reference document. Using a self-supervised image model for instance retrieval of the reference document, our approach requires only one anonymized example to efficiently redact all documents of the same type, significantly reducing processing time. We show that our approach outperforms both a purely automatic redaction system and also a naive copy-paste scheme of the reference anonymization to other documents on a hand-crafted dataset of ground truth redactions.
labels: cs.AI, cs.CV
__index_level_0__: 524,362
1304.2714
Higher Order Probabilities
A number of writers have supposed that for the full specification of belief, higher order probabilities are required. Some have even supposed that there may be an unending sequence of higher order probabilities of probabilities of probabilities.... In the present paper we show that higher order probabilities can always be replaced by the marginal distributions of joint probability distributions. We consider both the case in which higher order probabilities are of the same sort as lower order probabilities and that in which higher order probabilities are distinct in character, as when lower order probabilities are construed as frequencies and higher order probabilities are construed as subjective degrees of belief. In neither case do higher order probabilities appear to offer any advantages, either conceptually or computationally.
labels: cs.AI
__index_level_0__: 23,723
2402.10517
Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
Recently, considerable efforts have been directed towards compressing Large Language Models (LLMs), which showcase groundbreaking capabilities across diverse applications but entail significant deployment costs due to their large sizes. Meanwhile, much less attention has been given to mitigating the costs associated with deploying multiple LLMs of varying sizes despite its practical significance. Thus, this paper introduces \emph{any-precision LLM}, extending the concept of any-precision DNN to LLMs. Addressing challenges in any-precision LLM, we propose a lightweight method for any-precision quantization of LLMs, leveraging a post-training quantization framework, and develop a specialized software engine for its efficient serving. As a result, our solution significantly reduces the high costs of deploying multiple, different-sized LLMs by overlaying LLMs quantized to varying bit-widths, such as 3, 4, ..., $n$ bits, into a memory footprint comparable to a single $n$-bit LLM. All the supported LLMs with varying bit-widths demonstrate state-of-the-art model quality and inference throughput, proving itself to be a compelling option for deployment of multiple, different-sized LLMs. Our code is open-sourced and available online.
labels: cs.LG
__index_level_0__: 430,000
2402.14144
Extending identifiability results from isolated networks to embedded networks
This paper deals with the design of Excitation and Measurement Patterns (EMPs) for the identification of dynamical networks, when the objective is to identify only a subnetwork embedded in a larger network. Recent results have shown how to construct EMPs that guarantee identifiability for a range of networks with specific graph topologies, such as trees, loops, or Directed Acyclic Graphs (DAGs). However, an EMP that is valid for the identification of a subnetwork taken in isolation may no longer be valid when that subnetwork is embedded in a larger network. Our main contribution is to exhibit conditions under which it does remain valid, and to propose ways to enhance such EMP when these conditions are not satisfied.
labels: cs.SY
__index_level_0__: 431,544
2008.02186
A Novel Method For Designing Transferable Soft Sensors And Its Application
In this paper, a new approach is proposed for designing transferable soft sensors. Soft sensing is one of the significant applications of data-driven methods in the condition monitoring of plants. While hard sensors can be easily used in various plants, soft sensors are confined to the specific plant they are designed for and cannot be used in a new plant, or even under new working conditions in the same plant. In this paper, a solution is proposed for this underlying obstacle in data-driven condition monitoring systems. Data-driven methods suffer from the fact that the distribution of the data on which the models are constructed may not be the same as the distribution of the data to which the models will be applied, which ultimately leads to a decline in model accuracy. We propose a new transfer learning (TL) based regression method, called Domain Adversarial Neural Network Regression (DANN-R), and employ it for designing transferable soft sensors. We used data collected from the SCADA system of an industrial power plant to comprehensively investigate the functionality of the proposed method. The results reveal that the proposed transferable soft sensor can successfully adapt to new plants.
labels: cs.LG, cs.SY
__index_level_0__: 190,545
2211.11174
Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning
In AI, generalization refers to a model's ability to perform well on out-of-distribution data related to the given task, beyond the data it was trained on. For an AI agent to excel, it must also possess the continual learning capability, whereby an agent incrementally learns to perform a sequence of tasks without forgetting the previously acquired knowledge needed to solve the old tasks. Intuitively, generalization within a task allows the model to learn underlying features that can readily be applied to novel tasks, facilitating quicker learning and enhanced performance in subsequent tasks within a continual learning framework. Conversely, continual learning methods often include mechanisms to mitigate catastrophic forgetting, ensuring that knowledge from earlier tasks is retained. This preservation of knowledge over tasks plays a role in enhancing generalization for the ongoing task at hand. Despite the intuitive appeal of this interplay, the existing literature on continual learning and generalization has proceeded separately. In a preliminary effort to promote studies that bridge both fields, we first present empirical evidence showing that each of these fields has a mutually positive effect on the other. Next, building upon this finding, we introduce a simple and effective technique known as Shape-Texture Consistency Regularization (STCR), which caters to continual learning. STCR learns both shape and texture representations for each task, consequently enhancing generalization and thereby mitigating forgetting. Remarkably, extensive experiments validate that our STCR can be seamlessly integrated with existing continual learning methods, surpassing these methods, in isolation or combined with established generalization techniques, by a large margin. Our data and source code will be made publicly available upon publication.
labels: cs.CV
__index_level_0__: 331,612
2009.02997
Predicting Requests in Large-Scale Online P2P Ridesharing
Peer-to-peer ridesharing (P2P-RS) enables people to arrange one-time rides with their own private cars, without the involvement of professional drivers. It is a prominent collective intelligence application producing significant benefits both for individuals (reduced costs) and for the entire community (reduced pollution and traffic), as we showed in a recent publication where we proposed an online approximate solution algorithm for large-scale P2P-RS. In this paper we tackle the fundamental question of assessing the benefit of predicting ridesharing requests in the context of P2P-RS optimisation. Results on a public real-world dataset show that, by employing a perfect predictor, the total reward can be improved by 5.27% with a forecast horizon of 1 minute. On the other hand, a vanilla long short-term memory neural network cannot improve upon a baseline predictor that simply replicates the previous day's requests, whilst achieving almost double the accuracy.
labels: cs.AI, cs.MA
__index_level_0__: 194,716
2404.00658
KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation
This paper presents a novel Kinematics and Trajectory Prior Knowledge-Enhanced Transformer (KTPFormer), which overcomes the weakness in existing transformer-based methods for 3D human pose estimation that the Q, K, and V vectors in their self-attention mechanisms are all derived from simple linear mappings. We propose two prior attention modules, namely Kinematics Prior Attention (KPA) and Trajectory Prior Attention (TPA), to take advantage of the known anatomical structure of the human body and motion trajectory information, facilitating effective learning of global dependencies and features in the multi-head self-attention. KPA models kinematic relationships in the human body by constructing a topology of kinematics, while TPA builds a trajectory topology to learn the information of joint motion trajectories across frames. Yielding Q, K, V vectors with prior knowledge, the two modules enable KTPFormer to model both spatial and temporal correlations simultaneously. Extensive experiments on three benchmarks (Human3.6M, MPI-INF-3DHP and HumanEva) show that KTPFormer achieves superior performance in comparison to state-of-the-art methods. More importantly, our KPA and TPA modules have lightweight plug-and-play designs and can be integrated into various transformer-based networks (e.g., diffusion-based) to improve performance with only a very small increase in computational overhead. The code is available at: https://github.com/JihuaPeng/KTPFormer.
labels: cs.CV
__index_level_0__: 443,049
2010.13471
Deep reinforced learning enables solving rich discrete-choice life cycle models to analyze social security reforms
Discrete-choice life cycle models of labor supply can be used to estimate how social security reforms influence the employment rate. In a life cycle model, the optimal employment choices during the life course of an individual must be solved. Mostly, life cycle models have been solved with dynamic programming, which is not feasible when the state space is large, as is often the case in a realistic life cycle model. Solving a complex life cycle model requires the use of approximate methods, such as reinforcement learning algorithms. We compare how well the deep reinforcement learning algorithm ACKTR and dynamic programming solve a relatively simple life cycle model. To analyze the results, we use a selection of statistics and also compare the resulting optimal employment choices at various states. The statistics demonstrate that ACKTR yields almost as good results as dynamic programming. Qualitatively, dynamic programming yields more spiked aggregate employment profiles than ACKTR. The results obtained with ACKTR provide a good, yet not perfect, approximation to the results of dynamic programming. In addition to the baseline case, we analyze two social security reforms: (1) an increase of the retirement age, and (2) universal basic income. Our results suggest that reinforcement learning algorithms can be of significant value in developing social security reforms.
labels: cs.LG
__index_level_0__: 203,141
2207.13835
Impactful Robots: Evaluating Visual and Audio Warnings to Help Users Brace for Impact in Human Robot Interaction
Wearable robotic devices have potential to assist and protect their users. Toward design of a Smart Helmet, this article examines the effectiveness of audio and visual warnings to help participants brace for impacts. A user study examines different warnings and impacts applied to users while running. Perturbation forces scaled to user mass are applied from different directions and user displacement is measured to characterize effectiveness of the warning. This is accomplished using the TreadPort Active Wind Tunnel adapted to deliver forward, rearward, right, or left perturbation forces at precise moments during the locomotor cycle. The article presents an overview of the system and demonstrates the ability to precisely deliver consistent warnings and perturbations during gait. User study results highlight effectiveness of visual and audio warnings to help users brace for impact, resulting in guidelines that will inform future human-robot warning systems.
labels: cs.RO
__index_level_0__: 310,414
2406.03486
BIPED: Pedagogically Informed Tutoring System for ESL Education
Large Language Models (LLMs) have a great potential to serve as readily available and cost-efficient Conversational Intelligent Tutoring Systems (CITS) for teaching L2 learners of English. Existing CITS, however, are designed to teach only simple concepts or lack the pedagogical depth necessary to address diverse learning strategies. To develop a more pedagogically informed CITS capable of teaching complex concepts, we construct a BIlingual PEDagogically-informed Tutoring Dataset (BIPED) of one-on-one, human-to-human English tutoring interactions. Through post-hoc analysis of the tutoring interactions, we come up with a lexicon of dialogue acts (34 tutor acts and 9 student acts), which we use to further annotate the collected dataset. Based on a two-step framework of first predicting the appropriate tutor act then generating the corresponding response, we implemented two CITS models using GPT-4 and SOLAR-KO, respectively. We experimentally demonstrate that the implemented models not only replicate the style of human teachers but also employ diverse and contextually appropriate pedagogical strategies.
labels: cs.CL
__index_level_0__: 461,261
2501.06843
Leveraging the Global Research Infrastructure to Characterize the Impact of National Science Foundation Research
The Global Research Infrastructure (GRI) is made up of the repositories and organizations that provide persistent identifiers (PIDs) and metadata for many kinds of research objects and connect these objects to funders, research institutions, researchers, and one another using PIDs. The INFORMATE Project has combined three data sources to focus on understanding how the global research infrastructure might help the US National Science Foundation (NSF) and other federal agencies identify and characterize the impact of their support. In this paper we present INFORMATE observations of three data systems. The NSF Award database represents NSF funding, while the NSF Public Access Repository (PAR) and CHORUS, as a proxy for the GRI, represent two different views of the results of that funding. We compare the first at the level of awards and the second two at the level of published research articles. Our findings demonstrate that CHORUS datasets include significantly more NSF awards and more related papers than does PAR. Our findings also suggest that time plays a significant role in the inclusion of award metadata across the sources analyzed. Data in those sources travel very different journeys, each presenting different obstacles to metadata completeness and suggesting necessary actions on the parts of authors and publishers to ensure that publication and funding metadata are captured. We discuss these actions, as well as the implications our findings have for emergent technologies such as artificial intelligence and natural language processing.
labels: cs.SI, Other
__index_level_0__: 524,156
2004.08526
Effect of Text Color on Word Embeddings
In natural scenes and documents, we can find the correlation between a text and its color. For instance, the word, "hot", is often printed in red, while "cold" is often in blue. This correlation can be thought of as a feature that represents the semantic difference between the words. Based on this observation, we propose the idea of using text color for word embeddings. While text-only word embeddings (e.g. word2vec) have been extremely successful, they often represent antonyms as similar since they are often interchangeable in sentences. In this paper, we try two tasks to verify the usefulness of text color in understanding the meanings of words, especially in identifying synonyms and antonyms. First, we quantify the color distribution of words from the book cover images and analyze the correlation between the color and meaning of the word. Second, we try to retrain word embeddings with the color distribution of words as a constraint. By observing the changes in the word embeddings of synonyms and antonyms before and after re-training, we aim to understand the kind of words that have positive or negative effects in their word embeddings when incorporating text color information.
labels: cs.LG, cs.CL, cs.CV
__index_level_0__: 173,084
2401.15889
Sliced Wasserstein with Random-Path Projecting Directions
Slicing distribution selection has been used as an effective technique to improve the performance of parameter estimators based on minimizing sliced Wasserstein distance in applications. Previous works either utilize expensive optimization to select the slicing distribution or use slicing distributions that require expensive sampling methods. In this work, we propose an optimization-free slicing distribution that provides a fast sampling for the Monte Carlo estimation of expectation. In particular, we introduce the random-path projecting direction (RPD) which is constructed by leveraging the normalized difference between two random vectors following the two input measures. From the RPD, we derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW). We then discuss the topological, statistical, and computational properties of RPSW and IWRPSW. Finally, we showcase the favorable performance of RPSW and IWRPSW in gradient flow and the training of denoising diffusion generative models on images.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 424,632
2308.09909
Never Explore Repeatedly in Multi-Agent Reinforcement Learning
In the realm of multi-agent reinforcement learning, intrinsic motivations have emerged as a pivotal tool for exploration. While the computation of many intrinsic rewards relies on estimating variational posteriors using neural network approximators, a notable challenge has surfaced due to the limited expressive capability of these neural statistics approximators. We pinpoint this challenge as the "revisitation" issue, where agents recurrently explore confined areas of the task space. To combat this, we propose a dynamic reward scaling approach. This method is crafted to stabilize the significant fluctuations in intrinsic rewards in previously explored areas and promote broader exploration, effectively curbing the revisitation phenomenon. Our experimental findings underscore the efficacy of our approach, showcasing enhanced performance in demanding environments like Google Research Football and StarCraft II micromanagement tasks, especially in sparse reward settings.
labels: cs.AI, cs.LG, cs.MA
__index_level_0__: 386,480
2209.13325
Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models
Transformer architecture has become the fundamental element of widespread natural language processing (NLP) models. With the trend toward large NLP models, the increasing memory and computation costs hinder their efficient deployment on resource-limited devices. Therefore, transformer quantization attracts wide research interest. Recent work recognizes that structured outliers are the critical bottleneck for quantization performance. However, their proposed methods increase the computation overhead and still leave the outliers there. To fundamentally address this problem, this paper delves into the inherent inducement and importance of the outliers. We discover that $\boldsymbol \gamma$ in LayerNorm (LN) acts as a sinful amplifier for the outliers, and the importance of outliers varies greatly, where some outliers provided by a few tokens cover a large area but can be clipped sharply without negative impacts. Motivated by these findings, we propose an outlier suppression framework including two components: Gamma Migration and Token-Wise Clipping. Gamma Migration migrates the outlier amplifier to subsequent modules in an equivalent transformation, contributing to a more quantization-friendly model without any extra burden. Token-Wise Clipping takes advantage of the large variance of token range and designs a token-wise coarse-to-fine pipeline, obtaining a clipping range with minimal final quantization loss in an efficient way. This framework effectively suppresses the outliers and can be used in a plug-and-play mode. Extensive experiments prove that our framework surpasses the existing works and, for the first time, pushes the 6-bit post-training BERT quantization to the full-precision (FP) level. Our code is available at https://github.com/wimh966/outlier_suppression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
319,849
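The Gamma Migration step admits a short sketch. The following is a hedged illustration, assuming the LayerNorm output feeds exactly one linear layer (real transformer blocks also route it through residual branches, which the paper handles); the equivalence check at the end verifies that folding gamma into the next layer leaves the function unchanged.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def migrate_gamma(ln: nn.LayerNorm, linear: nn.Linear):
    """Fold LN's affine parameters into the following linear layer:
    Linear(g * n + b) == (W diag(g)) n + (W b + c)."""
    g, b = ln.weight.clone(), ln.bias.clone()
    if linear.bias is not None:
        linear.bias.add_(linear.weight @ b)   # shift bias with the ORIGINAL W
    linear.weight.mul_(g.unsqueeze(0))        # then scale W's input columns
    ln.weight.fill_(1.0)                      # LN output is now outlier-free
    ln.bias.zero_()

ln, fc = nn.LayerNorm(8), nn.Linear(8, 4)
with torch.no_grad():                         # give LN non-trivial parameters
    ln.weight.copy_(torch.rand(8) + 0.5)
    ln.bias.copy_(torch.randn(8) * 0.1)
x = torch.randn(2, 8)
ref = fc(ln(x))
migrate_gamma(ln, fc)
assert torch.allclose(ref, fc(ln(x)), atol=1e-5)
print("outputs identical after migration")
```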
2204.02488
Discovering and forecasting extreme events via active learning in neural operators
Extreme events in society and nature, such as pandemic spikes, rogue waves, or structural failures, can have catastrophic consequences. Characterizing extremes is difficult, as they occur rarely, arise from seemingly benign conditions, and belong to complex and often unknown infinite-dimensional systems. Such challenges render attempts at characterizing them moot. We address each of these difficulties by combining novel training schemes in Bayesian experimental design (BED) with an ensemble of deep neural operators (DNOs). This model-agnostic framework pairs a BED scheme that actively selects data for quantifying extreme events with an ensemble of DNOs that approximate infinite-dimensional nonlinear operators. We find that not only does this framework clearly beat Gaussian processes (GPs), but also that 1) shallow ensembles of just two members perform best; 2) extremes are uncovered regardless of the state of the initial data (i.e. with or without extremes); 3) our method eliminates "double-descent" phenomena; 4) the use of batches of suboptimal acquisition points compared to step-by-step global optima does not hinder BED performance; and 5) Monte Carlo acquisition outperforms standard optimizers in high dimensions. Together these conclusions form the foundation of an AI-assisted experimental infrastructure that can efficiently infer and pinpoint critical situations across many domains, from physical to societal systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
289,962
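A toy sketch of the active-sampling loop the record above describes, with a two-member ensemble as in finding 1). The acquisition function here (ensemble variance inflated in extreme-output regions) is a simplified stand-in for the paper's likelihood-weighted criteria, and the one-dimensional test function is hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(5 * x) / (1 + 25 * x**2)      # toy map with a sharp peak
pool = rng.uniform(-1, 1, size=(2000, 1))          # candidate inputs
X = rng.uniform(-1, 1, size=(8, 1))
y = f(X).ravel()

for _ in range(8):
    # Shallow two-member ensemble; disagreement serves as uncertainty.
    ens = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=s).fit(X, y) for s in (0, 1)]
    preds = np.stack([m.predict(pool) for m in ens])    # (2, n_pool)
    mu, var = preds.mean(0), preds.var(0)
    acq = var * (1 + np.abs(mu))     # favour uncertain AND extreme regions
    x_new = pool[np.argmax(acq)][None]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())

print(len(X), "samples; largest |y| found:", np.abs(y).max())
```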
2211.08971
Energy Reconstruction in Analysis of Cherenkov Telescopes Images in TAIGA Experiment Using Deep Learning Methods
Imaging Atmospheric Cherenkov Telescopes (IACTs) of the TAIGA astrophysical complex make it possible to observe high-energy gamma radiation, helping to study many astrophysical objects and processes. TAIGA-IACT enables us to select gamma quanta from the total cosmic radiation flux and recover their primary parameters, such as energy and direction of arrival. The traditional method of processing the resulting images is image parameterization, the so-called Hillas parameters method. At present, Machine Learning methods, in particular Deep Learning methods, are actively used for IACT image processing. This paper presents the analysis of simulated Monte Carlo images by several Deep Learning methods for a single telescope (mono mode) and multiple IACT telescopes (stereo mode). The quality of energy reconstruction was estimated and the energy spectra were analyzed using several types of neural networks. The results obtained with the developed methods were also compared with the results obtained by traditional methods based on the Hillas parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
330,825
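A generic sketch of CNN-based energy regression on IACT camera images; the small architecture below is illustrative, not the TAIGA pipeline, and it assumes single-channel images with log-energy as the regression target (a common convention for energy reconstruction).

```python
import torch
import torch.nn as nn

class EnergyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                 # x: (B, 1, H, W) camera image
        return self.head(self.features(x)).squeeze(-1)   # predicted log E

model = EnergyCNN()
imgs = torch.randn(8, 1, 40, 40)          # stand-in for cleaned camera images
log_e = model(imgs)                        # train with MSE vs. true log-energy
print(log_e.shape)
```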
2106.04229
BIGDML: Towards Exact Machine Learning Force Fields for Materials
Machine-learning force fields (MLFF) should be accurate, computationally and data efficient, and applicable to molecules, materials, and interfaces thereof. Currently, MLFFs often introduce tradeoffs that restrict their practical applicability to small subsets of chemical space or require exhaustive datasets for training. Here, we introduce the Bravais-Inspired Gradient-Domain Machine Learning (BIGDML) approach and demonstrate its ability to construct reliable force fields using a training set with just 10-200 geometries for materials including pristine and defect-containing 2D and 3D semiconductors and metals, as well as chemisorbed and physisorbed atomic and molecular adsorbates on surfaces. The BIGDML model employs the full relevant symmetry group for a given material, does not assume artificial atom types or localization of atomic interactions, and exhibits high data efficiency and state-of-the-art energy accuracies (errors substantially below 1 meV per atom) for an extended set of materials. Extensive path-integral molecular dynamics carried out with BIGDML models demonstrate the counterintuitive localization of benzene-graphene dynamics induced by nuclear quantum effects and allow us to rationalize the Arrhenius behavior of the hydrogen diffusion coefficient in a Pd crystal over a wide range of temperatures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
239,647
2204.10206
Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics
Question answering-based summarization evaluation metrics must automatically determine whether the QA model's prediction is correct or not, a task known as answer verification. In this work, we benchmark the lexical answer verification methods which have been used by current QA-based metrics as well as two more sophisticated text comparison methods, BERTScore and LERC. We find that LERC outperforms the other methods in some settings while remaining statistically indistinguishable from lexical overlap in others. However, our experiments reveal that improved verification performance does not necessarily translate to overall QA-based metric quality: In some scenarios, using a worse verification method -- or using none at all -- has comparable performance to using the best verification method, a result that we attribute to properties of the datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
292,706
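For reference, the lexical answer-verification baseline benchmarked above is typically SQuAD-style token-level F1 between the QA model's prediction and the reference answer. A minimal sketch, with the 0.5 decision threshold as an assumed example:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 overlap, the standard lexical verification score."""
    p, r = prediction.lower().split(), reference.lower().split()
    common = Counter(p) & Counter(r)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

# A verifier then thresholds the score, e.g. "correct" if F1 >= 0.5.
print(token_f1("the Eiffel Tower", "Eiffel Tower"))   # 0.8
```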
1802.02840
Neural Network Renormalization Group
We present a variational renormalization group (RG) approach using a deep generative model based on normalizing flows. The model performs hierarchical change-of-variables transformations from the physical space to a latent space with reduced mutual information. Conversely, the neural net directly maps independent Gaussian noises to physical configurations following the inverse RG flow. The model has an exact and tractable likelihood, which allows unbiased training and direct access to the renormalized energy function of the latent variables. To train the model, we employ probability density distillation for the bare energy function of the physical problem, in which the training loss provides a variational upper bound of the physical free energy. We demonstrate practical usage of the approach by identifying mutually independent collective variables of the Ising model and performing accelerated hybrid Monte Carlo sampling in the latent space. Lastly, we comment on the connection of the present approach to the wavelet formulation of RG and the modern pursuit of information preserving RG.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
89,846
1712.01694
Fuzzy-Based Dialectical Non-Supervised Image Classification and Clustering
The materialist dialectical method is a philosophical investigative method to analyze aspects of reality. These aspects are viewed as complex processes composed of basic units named poles, which interact with each other. Dialectics experienced considerable progress in the 19th century with Hegel's dialectics and, in the 20th century, with the works of Marx, Engels, and Gramsci in Philosophy and Economics. The movement of poles through their contradictions is viewed as a dynamic process with intertwined phases of evolution and revolutionary crisis. In order to build a computational process based on dialectics, the interaction between poles can be modeled using fuzzy membership functions. Based on this assumption, we introduce the Objective Dialectical Classifier (ODC), a non-supervised map for classification based on materialist dialectics and designed as an extension of the fuzzy c-means classifier. As a case study, we used ODC to classify 181 magnetic resonance synthetic multispectral images composed of proton density, $T_1$- and $T_2$-weighted synthetic brain images. Comparing ODC to k-means, fuzzy c-means, and Kohonen's self-organized maps, using image fidelity indexes as estimates of quantization distortion, we showed that ODC can reach almost the same quantization performance as optimal non-supervised classifiers such as Kohonen's self-organized maps.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
true
false
true
86,155
2404.12400
Efflex: Efficient and Flexible Pipeline for Spatio-Temporal Trajectory Graph Modeling and Representation Learning
In the landscape of spatio-temporal data analytics, effective trajectory representation learning is paramount. To bridge the gap of learning accurate representations with efficient and flexible mechanisms, we introduce Efflex, a comprehensive pipeline for transformative graph modeling and representation learning of large-volume spatio-temporal trajectories. Efflex pioneers the incorporation of a multi-scale k-nearest neighbors (KNN) algorithm with feature fusion for graph construction, marking a leap in dimensionality reduction techniques by preserving essential data features. Moreover, the groundbreaking graph construction mechanism and the high-performance lightweight GCN increase embedding extraction speed by up to 36 times. We further offer Efflex in two versions, Efflex-L for scenarios demanding high accuracy, and Efflex-B for environments requiring swift data processing. Comprehensive experimentation with the Porto and Geolife datasets validates our approach, positioning Efflex as the state-of-the-art in the domain. Such enhancements in speed and accuracy highlight the versatility of Efflex, underscoring its wide-ranging potential for deployment in time-sensitive and computationally constrained applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
447,875
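A hedged sketch of the multi-scale KNN graph construction idea in the record above: build neighbor graphs at several k values and fuse them, so an edge's weight reflects its multi-scale support. Efflex's actual fusion and feature-weighting details differ, and the scales below are illustrative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def multi_scale_knn(X, ks=(5, 10, 20)):
    """Fuse KNN graphs built at several scales: an edge's weight is the
    fraction of scales that retain it (multi-scale support)."""
    fused = sum(kneighbors_graph(X, k, mode="connectivity") for k in ks)
    return fused / len(ks)

X = np.random.rand(200, 3)         # stand-in for trajectory feature vectors
A = multi_scale_knn(X)
print(A.shape, A.nnz)
```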
1802.02254
Trajectory-driven Influential Billboard Placement
In this paper we propose and study the problem of trajectory-driven influential billboard placement: given a set of billboards $U$ (each with a location and a cost), a database of trajectories $\mathcal{T}$ and a budget $L$, find a set of billboards within the budget to influence the largest number of trajectories. One core challenge is to identify and reduce the overlap of the influence from different billboards on the same trajectories, while taking the budget constraint into consideration. We show that this problem is NP-hard and present an enumeration-based algorithm with a $(1-1/e)$ approximation ratio. However, the enumeration can be very costly when $|U|$ is large. By exploiting the locality property of billboards' influence, we propose a partition-based framework, PartSel. PartSel partitions $U$ into a set of small clusters, computes the locally influential billboards for each cluster, and merges them to generate the global solution. Since the local solutions can be obtained much more efficiently than the global one, PartSel reduces the computation cost greatly while achieving a non-trivial approximation ratio guarantee. We then propose a LazyProbe method to further prune billboards with low marginal influence, while achieving the same approximation ratio as PartSel. Experiments on real datasets verify the efficiency and effectiveness of our methods.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
false
89,743
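The overlap-aware selection problem above admits a compact illustration. Below is the plain cost-effective greedy heuristic, a sketch only: the paper's enumeration-based algorithm, PartSel, and LazyProbe add the $(1-1/e)$ guarantee and scalability that this simple loop lacks.

```python
def greedy_placement(billboards, budget):
    """billboards: dict id -> (cost, set of influenced trajectory ids)."""
    chosen, covered, spent = [], set(), 0.0
    remaining = dict(billboards)
    while remaining:
        # Pick the billboard with the best marginal influence per unit cost.
        best, best_ratio = None, 0.0
        for b, (cost, trajs) in remaining.items():
            if spent + cost > budget:
                continue
            gain = len(trajs - covered)       # overlap-aware marginal gain
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = b, gain / cost
        if best is None:                      # nothing affordable and useful
            break
        cost, trajs = remaining.pop(best)
        chosen.append(best)
        covered |= trajs
        spent += cost
    return chosen, len(covered)

bbs = {"b1": (3, {1, 2, 3}), "b2": (2, {3, 4}), "b3": (4, {1, 2, 3, 4, 5})}
print(greedy_placement(bbs, budget=5))        # (['b3'], 5)
```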
2004.02458
Optimal Correlators for Detection and Estimation in Optical Receivers
Motivated by modern applications of light detection and ranging (LIDAR), we study the model of an optical receiver based on an avalanche photo-diode (APD), followed by electronic circuitry for detection of reflected optical signals and estimation of their delay. This model is known to be quite complicated, as it consists of at least three different types of noise: thermal noise, shot noise, and multiplicative noise (excess noise) that stems from the random gain associated with the photo-multiplication of the APD. Consequently, the derivation of the optimal likelihood ratio test (LRT) associated with signal detection is a non-trivial task, which has no apparent exact closed-form solution. We consider instead a class of relatively simple detectors that are based on correlating the noisy received signal with a given deterministic waveform, and our purpose is to characterize the optimal waveform in the sense of the best trade-off between the false-alarm (FA) error exponent and the missed-detection (MD) error exponent. In the same spirit, we also study the problem of estimating the delay on the basis of maximizing the correlation between the received signal and a time-shifted waveform, as a function of this time shift. We characterize the optimal correlator waveform that minimizes the mean square error (MSE) in the regime of high signal-to-noise ratio (SNR). The optimal correlator waveforms for detection and for estimation turn out to be different, but their limiting behavior is the same: when the thermal Gaussian noise is dominant, the optimal correlator waveform becomes proportional to the clean signal, but when the thermal noise is negligible compared to the other noises, it becomes a logarithmic function of the clean signal, as expected.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
171,236
2311.15030
A Learning Quasi-stiffness Control Framework of a Powered Trans-femoral Prosthesis for Adaptive Speed and Incline Walking
Impedance-based control represents a prevalent strategy in powered trans-femoral prostheses because of its ability to reproduce natural walking. However, most existing studies have developed impedance-based prosthesis controllers for specific tasks, while creating a task-adaptive controller for variable-task walking continues to be a significant challenge. This article proposes a task-adaptive quasi-stiffness control framework for powered prostheses that generalizes across various walking tasks, comprising a torque-angle relationship reconstruction part and a quasi-stiffness controller design part. A Gaussian Process Regression model is introduced to predict the target features of human joint angle and torque in a new task. Subsequently, Kernel Movement Primitives are employed to reconstruct the torque-angle relationship of the new task from multiple human reference trajectories and estimated target features. Based on the torque-angle relationship of the new task, a quasi-stiffness control approach is designed for a powered prosthesis. Finally, the proposed framework is validated through practical examples, including varying-speed and varying-incline walking tasks. Notably, the proposed framework not only aligns with but frequently surpasses the performance of a benchmark finite state machine impedance controller without necessitating manual impedance tuning, and it has the potential to extend to variable walking tasks in daily life for trans-femoral amputees.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
410,354
2501.11196
Enhancing Brain Tumor Segmentation Using Channel Attention and Transfer learning
Accurate and efficient segmentation of brain tumors is critical for diagnosis, treatment planning, and monitoring in clinical practice. In this study, we present an enhanced ResUNet architecture for automatic brain tumor segmentation, integrating an EfficientNetB0 encoder, a channel attention mechanism, and an Atrous Spatial Pyramid Pooling (ASPP) module. The EfficientNetB0 encoder leverages pre-trained features to improve feature extraction efficiency, while the channel attention mechanism enhances the model's focus on tumor-relevant features. ASPP enables multiscale contextual learning, crucial for handling tumors of varying sizes and shapes. The proposed model was evaluated on two benchmark datasets: TCGA LGG and BraTS 2020. Experimental results demonstrate that our method consistently outperforms the baseline ResUNet and its EfficientNet variant, achieving Dice coefficients of 0.903 and 0.851 and HD95 scores of 9.43 and 3.54 for whole tumor and tumor core regions on the BraTS 2020 dataset, respectively. Compared with state-of-the-art methods, our approach shows competitive performance, particularly in whole tumor and tumor core segmentation. These results indicate that combining a powerful encoder with attention mechanisms and ASPP can significantly enhance brain tumor segmentation performance. The proposed approach holds promise for further optimization and application in other medical image segmentation tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,825
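A hedged sketch of one common realization of channel attention, a squeeze-and-excitation style block; the paper's exact placement inside the ResUNet and the reduction ratio below are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel re-weighting."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),           # squeeze
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, H, W)
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)   # per-channel weights
        return x * w                          # excite: re-weight feature maps

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)       # torch.Size([2, 64, 32, 32])
```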
cs/0606118
Adapting a general parser to a sublanguage
In this paper, we propose a method to adapt a general parser (Link Parser) to sublanguages, focusing on the parsing of texts in biology. Our main proposal is the use of terminology (identification and analysis of terms) in order to reduce the complexity of the text to be parsed. Several other strategies are explored and finally combined, among them text normalization, lexicon and morpho-guessing module extensions, and grammar rule adaptation. We compare the parsing results before and after these adaptations.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
539,546
1301.7368
Irrelevance and Independence Relations in Quasi-Bayesian Networks
This paper analyzes irrelevance and independence relations in graphical models associated with convex sets of probability distributions (called Quasi-Bayesian networks). The basic question in Quasi-Bayesian networks is, How can irrelevance/independence relations in Quasi-Bayesian networks be detected, enforced and exploited? This paper addresses these questions through Walley's definitions of irrelevance and independence. Novel algorithms and results are presented for inferences with the so-called natural extensions using fractional linear programming, and the properties of the so-called type-1 extensions are clarified through a new generalization of d-separation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
21,602
2404.18820
Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior
Image compression at extremely low bitrates (below 0.1 bits per pixel (bpp)) is a significant challenge due to substantial information loss. In this work, we propose a novel two-stage extreme image compression framework that exploits the powerful generative capability of pre-trained diffusion models to achieve realistic image reconstruction at extremely low bitrates. In the first stage, we treat the latent representation of images in the diffusion space as guidance, employing a VAE-based compression approach to compress images and initially decode the compressed information into content variables. The second stage leverages pre-trained stable diffusion to reconstruct images under the guidance of content variables. Specifically, we introduce a small control module to inject content information while keeping the stable diffusion model fixed to maintain its generative capability. Furthermore, we design a space alignment loss to force the content variables to align with the diffusion space and provide the necessary constraints for optimization. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches in terms of visual performance at extremely low bitrates. The source code and trained models are available at https://github.com/huai-chang/DiffEIC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
450,400
1212.6592
Social Teaching: Being Informative vs. Being Right in Sequential Decision Making
We show that it can be suboptimal for Bayesian decision-making agents employing social learning to use correct prior probabilities as their initial beliefs. We consider sequential Bayesian binary hypothesis testing where each individual agent makes a binary decision based on an initial belief, a private signal, and the decisions of all earlier-acting agents---with the actions of precedent agents causing updates of the initial belief. Each agent acts to minimize Bayes risk, with all agents sharing the same Bayes costs for Type I (false alarm) and Type II (missed detection) errors. The effect of the set of initial beliefs on the decision-making performance of the last agent is studied. The last agent makes the best decision when the initial beliefs are inaccurate. When the private signals are described by Gaussian likelihoods, the optimal initial beliefs are not haphazard but rather follow a systematic pattern: the earlier-acting agents should act as if the prior probability is larger than it is in reality when the true prior probability is small, and vice versa. We interpret this as being open minded toward the unlikely hypothesis. The early-acting agents face a trade-off between making a correct decision and being maximally informative to the later-acting agents.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
20,655
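The setting in the record above simulates compactly. Below is a sketch, assuming unit-variance Gaussian likelihoods and equal Bayes costs: each agent thresholds its private signal given the current public belief, and observers update that belief from the action alone. The priors compared at the end are illustrative.

```python
import numpy as np
from math import erf, sqrt, log

MU0, MU1, SIGMA = 0.0, 1.0, 1.0
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF

def run_chain(n_agents, initial_belief, true_h, rng):
    """initial_belief: the (possibly deliberately wrong) initial P(H=1)."""
    q = initial_belief
    for _ in range(n_agents):
        q = min(max(q, 1e-6), 1 - 1e-6)               # numerical safety
        s = rng.normal(MU1 if true_h else MU0, SIGMA)  # private signal
        # Signal threshold for deciding d = 1 at public belief q.
        s_star = (MU0 + MU1) / 2 + SIGMA**2 / (MU1 - MU0) * log((1 - q) / q)
        d = int(s > s_star)
        p1 = 1.0 - Phi((s_star - MU1) / SIGMA)        # P(d=1 | H=1)
        p0 = 1.0 - Phi((s_star - MU0) / SIGMA)        # P(d=1 | H=0)
        like1, like0 = (p1, p0) if d else (1 - p1, 1 - p0)
        q = q * like1 / (q * like1 + (1 - q) * like0)  # public Bayes update
    return q

TRUE_PRIOR = 0.2
rng = np.random.default_rng(0)
for used_prior in (TRUE_PRIOR, 0.35):   # correct prior vs. an inflated one
    correct = []
    for _ in range(1000):
        h = int(rng.random() < TRUE_PRIOR)
        q = run_chain(10, used_prior, h, rng)
        correct.append(int(q > 0.5) == h)
    print(used_prior, np.mean(correct))
```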
2209.07947
Omni-Dimensional Dynamic Convolution
Learning a single static convolutional kernel in each convolutional layer is the common training paradigm of modern Convolutional Neural Networks (CNNs). Instead, recent research in dynamic convolution shows that learning a linear combination of $n$ convolutional kernels weighted with their input-dependent attentions can significantly improve the accuracy of light-weight CNNs, while maintaining efficient inference. However, we observe that existing works endow convolutional kernels with the dynamic property through one dimension (regarding the convolutional kernel number) of the kernel space, but the other three dimensions (regarding the spatial size, the input channel number and the output channel number for each convolutional kernel) are overlooked. Inspired by this, we present Omni-dimensional Dynamic Convolution (ODConv), a more generalized yet elegant dynamic convolution design, to advance this line of research. ODConv leverages a novel multi-dimensional attention mechanism with a parallel strategy to learn complementary attentions for convolutional kernels along all four dimensions of the kernel space at any convolutional layer. As a drop-in replacement of regular convolutions, ODConv can be plugged into many CNN architectures. Extensive experiments on the ImageNet and MS-COCO datasets show that ODConv brings solid accuracy boosts for various prevailing CNN backbones including both light-weight and large ones, e.g., 3.77%~5.71%|1.86%~3.72% absolute top-1 improvements to MobileNetV2|ResNet family on the ImageNet dataset. Intriguingly, thanks to its improved feature learning ability, ODConv with even one single kernel can compete with or outperform existing dynamic convolution counterparts with multiple kernels, substantially reducing extra parameters. Furthermore, ODConv is also superior to other attention modules for modulating the output features or the convolutional weights.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
317,953
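A simplified sketch of dynamic convolution along just the kernel-number dimension, the one that prior works use; ODConv's contribution is extending such attention to all four kernel dimensions, which this illustration omits. The grouped-convolution reshape is a standard trick for applying per-sample kernels in a batch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Attention over the kernel-number dimension only (CondConv/DyConv style)."""
    def __init__(self, in_ch, out_ch, k=3, n_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(n_kernels, out_ch, in_ch, k, k) * 0.02)
        self.att = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(in_ch, n_kernels))
        self.pad = k // 2

    def forward(self, x):                                 # x: (B, C_in, H, W)
        B = x.size(0)
        a = F.softmax(self.att(x), dim=1)                 # (B, K) attention
        w = torch.einsum("bk,koihw->boihw", a, self.weight)  # per-sample kernel
        w = w.reshape(-1, *self.weight.shape[2:])         # (B*C_out, C_in, k, k)
        out = F.conv2d(x.reshape(1, -1, *x.shape[2:]), w,
                       padding=self.pad, groups=B)        # batch via groups
        return out.reshape(B, -1, *out.shape[2:])

x = torch.randn(2, 8, 16, 16)
print(DynamicConv2d(8, 16)(x).shape)                      # (2, 16, 16, 16)
```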
2006.03481
Providing reliability in Recommender Systems through Bernoulli Matrix Factorization
Beyond accuracy, quality measures are gaining importance in modern recommender systems, with reliability being one of the most important indicators in the context of collaborative filtering. This paper proposes Bernoulli Matrix Factorization (BeMF), a matrix factorization model that provides both prediction values and reliability values. BeMF is a very innovative approach from several perspectives: a) it acts on model-based collaborative filtering rather than on memory-based filtering, b) it does not use external methods or extended architectures to provide reliability, as existing solutions do, c) it is based on a classification-based model instead of traditional regression-based models, and d) the matrix factorization formalism is supported by the Bernoulli distribution to exploit the binary nature of the designed classification model. The experimental results show that the more reliable a prediction is, the less liable it is to be wrong: recommendation quality improves after the most reliable predictions are selected. State-of-the-art quality measures for reliability have been tested, which shows that BeMF outperforms previous baseline methods and models.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
180,324
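A simplified sketch of the BeMF idea described above: fit one logistic (Bernoulli) factorization per rating value, then report the most probable value as the prediction and its normalized probability as the reliability. Hyperparameters, the SGD loop, and the tiny demo matrix are all illustrative.

```python
import numpy as np

def bemf(R, scores=(1, 2, 3, 4, 5), k=4, lr=0.05, epochs=200, seed=0):
    """R: user-item rating matrix with np.nan marking unobserved entries."""
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    obs = np.argwhere(~np.isnan(R))
    U = {s: rng.normal(0, 0.1, (n_u, k)) for s in scores}
    V = {s: rng.normal(0, 0.1, (n_i, k)) for s in scores}
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for u, i in obs:
            for s in scores:
                y = float(R[u, i] == s)          # binary target per score
                g = sig(U[s][u] @ V[s][i]) - y   # logistic-loss gradient
                Uu = U[s][u].copy()
                U[s][u] -= lr * g * V[s][i]
                V[s][i] -= lr * g * Uu
    def predict(u, i):
        probs = np.array([sig(U[s][u] @ V[s][i]) for s in scores])
        probs /= probs.sum()                     # normalize across scores
        j = int(probs.argmax())
        return scores[j], probs[j]               # (prediction, reliability)
    return predict

R = np.full((6, 6), np.nan)
R[np.diag_indices(6)] = [1, 2, 3, 4, 5, 5]
predict = bemf(R)
print(predict(0, 0))
```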
2410.07611
Parallel Digital Twin-driven Deep Reinforcement Learning for User Association and Load Balancing in Dynamic Wireless Networks
Optimization of user association in a densely deployed heterogeneous cellular network is usually challenging and even more complicated due to the dynamic nature of user mobility and fluctuation in user counts. While deep reinforcement learning (DRL) emerges as a promising solution, its application in practice is hindered by high trial-and-error costs in the real world and unsatisfactory physical network performance during training. In addition, existing DRL-based user association methods are usually only applicable to scenarios with a fixed number of users due to convergence and compatibility challenges. In this paper, we propose a parallel digital twin (DT)-driven DRL method for user association and load balancing in networks with dynamic user counts, distributions, and mobility patterns. Our method employs a distributed DRL strategy to handle varying user numbers and exploits a refined neural network structure for faster convergence. To address these DRL training-related challenges, we devise a high-fidelity DT construction technique, featuring a zero-shot generative user mobility model, named Map2Traj, based on a diffusion model. Map2Traj estimates user trajectory patterns and spatial distributions solely from street maps. Armed with this DT environment, DRL agents can be trained without the need for interactions with the physical network. To enhance the generalization ability of DRL models for dynamic scenarios, a parallel DT framework is further established to alleviate the strong correlation and non-stationarity of single-environment training and improve the training efficiency. Numerical results show that the proposed parallel DT-driven DRL method achieves performance closely comparable to real-environment training, and even outperforms models trained in a single real-world environment, with nearly 20% gain in terms of cell-edge user performance.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
496,712
1804.06958
A-CCNN: adaptive ccnn for density estimation and crowd counting
Crowd counting, which estimates the number of people in a crowd using vision-based computer techniques, has attracted much interest in the research community. Although many attempts have been reported, real-world problems, such as huge variation in subjects' sizes in images and serious occlusion among people, make it still a challenging problem. In this paper, we propose an Adaptive Counting Convolutional Neural Network (A-CCNN) that considers the scale variation of objects in a frame adaptively so as to improve the accuracy of counting. Our method takes advantage of contextual information to provide more accurate and adaptive density maps and crowd counting in a scene. Extensive experimental evaluation is conducted on different benchmark datasets for object counting and shows that the proposed approach is effective and outperforms state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
95,419
2404.17484
Sparse Reconstruction of Optical Doppler Tomography Based on State Space Model
Optical Doppler Tomography (ODT) is a blood flow imaging technique popularly used in bioengineering applications. The fundamental unit of ODT is the 1D frequency response along the A-line (depth), named raw A-scan. A 2D ODT image (B-scan) is obtained by first sensing raw A-scans along the B-line (width), and then constructing the B-scan from these raw A-scans via magnitude-phase analysis and post-processing. To obtain a high-resolution B-scan with a precise flow map, densely sampled A-scans are required in current methods, causing both computational and storage burdens. To address this issue, in this paper we propose a novel sparse reconstruction framework with four main sequential steps: 1) early magnitude-phase fusion that encourages rich interaction of the complementary information in magnitude and phase, 2) State Space Model (SSM)-based representation learning, inspired by recent successes in Mamba and VMamba, to naturally capture both the intra-A-scan sequential information and between-A-scan interactions, 3) an Inception-based Feedforward Network module (IncFFN) to further boost the SSM-module, and 4) a B-line Pixel Shuffle (BPS) layer to effectively reconstruct the final results. In the experiments on real-world animal data, our method shows clear effectiveness in reconstruction accuracy. As the first application of SSM for image reconstruction tasks, we expect our work to inspire related explorations in not only efficient ODT imaging techniques but also generic image enhancement.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
449,871
2210.08971
APGKT: Exploiting Associative Path on Skills Graph for Knowledge Tracing
Knowledge tracing (KT) is a fundamental task in educational data mining that mainly focuses on students' dynamic cognitive states of skills. The question-answering process of students can be regarded as a thinking process that considers the following two problems. One problem is which skills are needed to answer the question, and the other is how to use these skills in order. If a student wants to answer a question correctly, the student should not only master the set of skills involved in the question but also think and obtain the associative path on the skills graph. The nodes in the associative path refer to the skills needed and the path shows the order of using them. The associative path is referred to as the skill mode. Thus, obtaining the skill modes is the key to answering questions successfully. However, most existing KT models only focus on a set of skills, without considering the skill modes. We propose a KT model, called APGKT, that exploits skill modes. Specifically, we extract the subgraph topology of the skills involved in the question and combine the difficulty level of the skills to obtain the skill modes via encoding; then, through multi-layer recurrent neural networks, we obtain a student's higher-order cognitive states of skills, which is used to predict the student's future answering performance. Experiments on five benchmark datasets validate the effectiveness of the proposed model.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
324,368
0804.3599
Respect My Authority! HITS Without Hyperlinks, Utilizing Cluster-Based Language Models
We present an approach to improving the precision of an initial document ranking wherein we utilize cluster information within a graph-based framework. The main idea is to perform re-ranking based on centrality within bipartite graphs of documents (on one side) and clusters (on the other side), on the premise that these are mutually reinforcing entities. Links between entities are created via consideration of language models induced from them. We find that our cluster-document graphs give rise to much better retrieval performance than previously proposed document-only graphs do. For example, authority-based re-ranking of documents via a HITS-style cluster-based approach outperforms a previously-proposed PageRank-inspired algorithm applied to solely-document graphs. Moreover, we also show that computing authority scores for clusters constitutes an effective method for identifying clusters containing a large percentage of relevant documents.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
1,620
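The HITS-style computation on a bipartite document-cluster graph reduces to a short power iteration; in the paper the edge weights come from induced language models, which this sketch takes as given.

```python
import numpy as np

def bipartite_hits(W, iters=50):
    """W[d, c]: doc-to-cluster edge weight.
    Returns (document authority scores, cluster hub scores)."""
    n_docs, n_clus = W.shape
    auth, hub = np.ones(n_docs), np.ones(n_clus)
    for _ in range(iters):
        auth = W @ hub                 # docs endorsed by strong clusters
        auth /= np.linalg.norm(auth)
        hub = W.T @ auth               # clusters pointing at strong docs
        hub /= np.linalg.norm(hub)
    return auth, hub

W = np.array([[1.0, 0.2],              # 3 documents, 2 clusters
              [0.9, 0.1],
              [0.1, 1.0]])
auth, hub = bipartite_hits(W)
print(np.argsort(-auth))               # authority-based re-ranked doc order
```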
2212.07635
Let's consider more general nonlinear approaches to study teleconnections of climate variables
The recent work by Rieger et al. (2021) is concerned with the problem of extracting features from spatio-temporal geophysical signals. The authors introduce the complex rotated MCA (xMCA) to deal with lagged effects and non-orthogonality of the feature representation. This method essentially (1) transforms the signals to a complex plane with the Hilbert transform; (2) applies an oblique (Varimax and Promax) rotation to remove the orthogonality constraint; and (3) performs the eigendecomposition in this complex space (Horel et al., 1984). We argue that this method is essentially a particular case of the method called rotated complex kernel principal component analysis (ROCK-PCA) introduced in (Bueso et al., 2019, 2020), where we proposed the same approach: first transform the data to the complex plane with the Hilbert transform and then apply the Varimax rotation, with the only difference that the eigendecomposition is performed in the dual (kernel) Hilbert space. The latter allows us to generalize the xMCA solution by extracting nonlinear (curvilinear) features when nonlinear kernel functions are used. Hence, the solution of xMCA boils down to ROCK-PCA when the inner product is computed in the input data space instead of in the high-dimensional (possibly infinite) kernel Hilbert space to which data has been mapped. In this short correspondence we provide a theoretical proof that xMCA is a special case of ROCK-PCA and provide quantitative evidence that more expressive and informative features can be extracted when working with kernels; results of the decomposition of global sea surface temperature (SST) fields are shown to illustrate the capabilities of ROCK-PCA to cope with nonlinear processes, unlike xMCA.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
336,473
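The shared linear core of xMCA and ROCK-PCA sketches easily: Hilbert-transform the field along time, then eigendecompose the complex covariance. The sketch below omits the kernel mapping and the Varimax/Promax rotation that distinguish the full methods, and the synthetic field is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def complex_pca(X, n_modes=3):
    """X: (time, space) anomaly field."""
    Xc = hilbert(X, axis=0)                    # analytic (complex) signal
    C = Xc.conj().T @ Xc / (len(Xc) - 1)       # complex covariance over space
    vals, vecs = np.linalg.eigh(C)             # Hermitian eigendecomposition
    order = np.argsort(vals)[::-1][:n_modes]
    modes = vecs[:, order]                     # complex spatial patterns
    pcs = Xc @ modes                           # complex principal components
    return vals[order], modes, pcs

t = np.linspace(0, 20, 400)[:, None]
X = np.sin(t + np.linspace(0, np.pi, 50)) + 0.1 * np.random.randn(400, 50)
vals, modes, pcs = complex_pca(X)
print(vals)                                    # leading explained variances
```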
2309.14957
Context-Aware Generative Models for Prediction of Aircraft Ground Tracks
Trajectory prediction (TP) plays an important role in supporting the decision-making of Air Traffic Controllers (ATCOs). Traditional TP methods are deterministic and physics-based, with parameters that are calibrated using aircraft surveillance data harvested across the world. These models are, therefore, agnostic to the intentions of the pilots and ATCOs, which can have a significant effect on the observed trajectory, particularly in the lateral plane. This work proposes a generative method for lateral TP, using probabilistic machine learning to model the effect of the epistemic uncertainty arising from the unknown effect of pilot behaviour and ATCO intentions. The models are trained to be specific to a particular sector, allowing local procedures such as coordinated entry and exit points to be modelled. A dataset comprising a week's worth of aircraft surveillance data, passing through a busy sector of the United Kingdom's upper airspace, was used to train and test the models. Specifically, a piecewise linear model was used as a functional, low-dimensional representation of the ground tracks, with its control points determined by a generative model conditioned on partial context. It was found that, of the investigated models, a Bayesian Neural Network using the Laplace approximation was able to generate the most plausible trajectories in order to emulate the flow of traffic through the sector.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
394,798
2207.09217
Contextual Similarity is More Valuable than Character Similarity: An Empirical Study for Chinese Spell Checking
The Chinese Spell Checking (CSC) task aims to detect and correct Chinese spelling errors. Recently, related research has focused on introducing character similarity from a confusion set to enhance CSC models, ignoring the context of characters, which contains richer information. To make better use of contextual information, we propose a simple yet effective Curriculum Learning (CL) framework for the CSC task. With the help of our model-agnostic CL framework, existing CSC models are trained from easy to difficult, just as humans learn Chinese characters, and achieve further performance improvements. Extensive experiments and detailed analyses on widely used SIGHAN datasets show that our method outperforms previous state-of-the-art methods. More instructively, our study empirically suggests that contextual similarity is more valuable than character similarity for the CSC task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
308,838
1305.0606
Results from a Practical Deployment of the MyZone Decentralized P2P Social Network
This paper presents MyZone, a private online social network for relatively small, closely-knit communities. MyZone has three important distinguishing features. First, users keep the ownership of their data and have complete control over maintaining their privacy. Second, MyZone is free from any possibility of content censorship and is highly resilient to any single point of disconnection. Finally, MyZone minimizes deployment cost by minimizing its computation, storage and network bandwidth requirements. It incorporates both a P2P architecture and a centralized architecture in its design ensuring high availability, security and privacy. A prototype of MyZone was deployed over a period of 40 days with a membership of more than one hundred users. The paper provides a detailed evaluation of the results obtained from this deployment.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
true
24,363
2104.11067
K\"unstliche Intelligenz, quo vadis?
This paper outlines the state of the art in AI. It then describes basic machine learning and knowledge processing techniques. Based on this, some possibilities and limitations of future AI developments are discussed.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
231,807
2005.08898
Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite nonconvexity, the recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices. The goal of this paper is to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as pre-conditioned or diagonally-scaled gradient descent, where the pre-conditioners are adaptive and iteration-varying with a minimal computational overhead. With tailored variants for low-rank matrix sensing, robust principal component analysis and matrix completion, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties over a wide range of low-rank matrix estimation tasks.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
177,763
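The ScaledGD update is simple to state in code. Below is a minimal sketch for the fully observed factorization problem (the paper's sensing, robust PCA, and completion variants change only the residual term); the whole trick is right-multiplying the gradients by the inverse Gram matrices, making the rate condition-number independent. Step size and iteration count are illustrative.

```python
import numpy as np

def scaled_gd(M, r, eta=0.5, iters=150, seed=0):
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X, Y = rng.normal(size=(m, r)), rng.normal(size=(n, r))
    for _ in range(iters):
        R = X @ Y.T - M                               # residual
        # Scaled (preconditioned) steps: gradient times inverse Gram matrix.
        X_new = X - eta * R @ Y @ np.linalg.inv(Y.T @ Y)
        Y = Y - eta * R.T @ X @ np.linalg.inv(X.T @ X)
        X = X_new
    return X, Y

rng = np.random.default_rng(1)
# Ill-conditioned rank-3 ground truth (condition number 100).
M = rng.normal(size=(50, 3)) @ np.diag([100.0, 10.0, 1.0]) @ rng.normal(size=(3, 40))
X, Y = scaled_gd(M, r=3)
print(np.linalg.norm(X @ Y.T - M) / np.linalg.norm(M))   # relative error
```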
2501.09948
AI Explainability for Power Electronics: From a Lipschitz Continuity Perspective
Lifecycle management of power converters continues to thrive with emerging artificial intelligence (AI) solutions, yet AI mathematical explainability remains unexplored in power electronics (PE) community. The lack of theoretical rigor challenges adoption in mission-critical applications. Therefore, this letter proposes a generic framework to evaluate mathematical explainability, highlighting inference stability and training convergence from a Lipschitz continuity perspective. Inference stability governs consistent outputs under input perturbations, essential for robust real-time control and fault diagnosis. Training convergence guarantees stable learning dynamics, facilitating accurate modeling in PE contexts. Additionally, a Lipschitz-aware learning rate selection strategy is introduced to accelerate convergence while mitigating overshoots and oscillations. The feasibility of the proposed Lipschitz-oriented framework is demonstrated by validating the mathematical explainability of a state-of-the-art physics-in-architecture neural network, and substantiated through empirical case studies on dual-active-bridge converters. This letter serves as a clarion call for the PE community to embrace mathematical explainability, heralding a transformative era of trustworthy and explainable AI solutions that potentially redefine the future of power electronics.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
525,332
2109.02022
Recommending Researchers in Machine Learning based on Author-Topic Model
The aim of this paper is to uncover the researchers in machine learning using the author-topic model (ATM). We collect 16,855 scientific papers from six top journals in the field of machine learning published from 1997 to 2016 and analyze them using ATM. The dataset is broken down into 4 intervals to identify the top researchers and find similar researchers using their similarity score. The similarity score is calculated using the Hellinger distance. The researchers are plotted using t-SNE, which reduces the dimensionality of the data while keeping the same distance between the points. The analysis in our study helps upcoming researchers find the top researchers in their areas of interest.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
253,608
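The similarity computation mentioned above is compact enough to sketch: the Hellinger distance between two authors' topic distributions, with 1 minus the distance as a similarity score. The toy distributions are illustrative.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

def similarity(p, q):
    return 1.0 - hellinger(p, q)   # 1 means identical topic profiles

a = [0.7, 0.2, 0.1]                 # author A's topic distribution from ATM
b = [0.6, 0.3, 0.1]
print(similarity(a, b))
```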
2404.02438
From Narratives to Numbers: Valid Inference Using Language Model Predictions from Verbal Autopsy Narratives
In settings where most deaths occur outside the healthcare system, verbal autopsies (VAs) are a common tool to monitor trends in causes of death (COD). VAs are interviews with a surviving caregiver or relative that are used to predict the decedent's COD. Turning VAs into actionable insights for researchers and policymakers requires two steps (i) predicting likely COD using the VA interview and (ii) performing inference with predicted CODs (e.g. modeling the breakdown of causes by demographic factors using a sample of deaths). In this paper, we develop a method for valid inference using outcomes (in our case COD) predicted from free-form text using state-of-the-art NLP techniques. This method, which we call multiPPI++, extends recent work in "prediction-powered inference" to multinomial classification. We leverage a suite of NLP techniques for COD prediction and, through empirical analysis of VA data, demonstrate the effectiveness of our approach in handling transportability issues. multiPPI++ recovers ground truth estimates, regardless of which NLP model produced predictions and regardless of whether they were produced by a more accurate predictor like GPT-4-32k or a less accurate predictor like KNN. Our findings demonstrate the practical importance of inference correction for public health decision-making and suggest that if inference tasks are the end goal, having a small amount of contextually relevant, high quality labeled data is essential regardless of the NLP algorithm.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
443,855
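A sketch of the prediction-powered step for class proportions: combine the plug-in estimate from predicted CODs on the large unlabeled set with a bias correction from the small labeled set. This is the basic PPI estimator with unit weighting; multiPPI++ additionally tunes that weighting, which is omitted here, and the simulated predictions are illustrative.

```python
import numpy as np

def pp_proportions(pred_unlabeled, pred_labeled, y_labeled, n_classes):
    """Prediction-powered estimate of class proportions."""
    onehot = lambda y: np.eye(n_classes)[y]
    plug_in = onehot(pred_unlabeled).mean(axis=0)   # from predictions alone
    bias = (onehot(pred_labeled) - onehot(y_labeled)).mean(axis=0)
    return plug_in - bias                           # corrected estimate

rng = np.random.default_rng(0)
y_l = rng.integers(0, 3, 200)                       # gold CODs, labeled set
pred_l = np.where(rng.random(200) < 0.8, y_l, rng.integers(0, 3, 200))
pred_u = rng.integers(0, 3, 5000)                   # NLP predictions, unlabeled
print(pp_proportions(pred_u, pred_l, y_l, n_classes=3))
```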
2110.06192
Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes
We study the problem of robotic stacking with objects of complex geometry. We propose a challenging and diverse set of such objects that was carefully designed to require strategies beyond a simple "pick-and-place" solution. Our method is a reinforcement learning (RL) approach combined with vision-based interactive policy distillation and simulation-to-reality transfer. Our learned policies can efficiently handle multiple object combinations in the real world and exhibit a large variety of stacking skills. In a large experimental study, we investigate what choices matter for learning such general vision-based agents in simulation, and what affects optimal transfer to the real robot. We then leverage data collected by such policies and improve upon them with offline RL. A video and a blog post of our work are provided as supplementary material.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
260,529
1908.06381
Long-Duration Fully Autonomous Operation of Rotorcraft Unmanned Aerial Systems for Remote-Sensing Data Acquisition
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators and the ability of vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with a duration of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights respectively during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To our best knowledge this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
false
false
false
false
false
false
false
true
false
false
true
true
false
false
false
false
false
false
142,003
2409.01691
When 3D Partial Points Meets SAM: Tooth Point Cloud Segmentation with Sparse Labels
Tooth point cloud segmentation is a fundamental task in many orthodontic applications. Current research mainly focuses on fully supervised learning which demands expensive and tedious manual point-wise annotation. Although recent weakly-supervised alternatives are proposed to use weak labels for 3D segmentation and achieve promising results, they tend to fail when the labels are extremely sparse. Inspired by the powerful promptable segmentation capability of the Segment Anything Model (SAM), we propose a framework named SAMTooth that leverages such capacity to complement the extremely sparse supervision. To automatically generate appropriate point prompts for SAM, we propose a novel Confidence-aware Prompt Generation strategy, where coarse category predictions are aggregated with confidence-aware filtering. Furthermore, to fully exploit the structural and shape clues in SAM's outputs for assisting the 3D feature learning, we advance a Mask-guided Representation Learning that re-projects the generated tooth masks of SAM into 3D space and constrains these points of different teeth to possess distinguished representations. To demonstrate the effectiveness of the framework, we conduct experiments on the public dataset and surprisingly find that with only 0.1% of annotations (one point per tooth), our method can surpass recent weakly supervised methods by a large margin, and the performance is even comparable to the recent fully-supervised methods, showcasing the significant potential of applying SAM to 3D perception tasks with sparse labels. Code is available at https://github.com/CUHK-AIM-Group/SAMTooth.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
485,443
2402.14658
OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
The introduction of large language models has significantly advanced code generation. However, open-source models often lack the execution capabilities and iterative refinement of advanced systems like the GPT-4 Code Interpreter. To address this, we introduce OpenCodeInterpreter, a family of open-source code systems designed for generating, executing, and iteratively refining code. Supported by Code-Feedback, a dataset featuring 68K multi-turn interactions, OpenCodeInterpreter integrates execution and human feedback for dynamic code refinement. Our comprehensive evaluation of OpenCodeInterpreter across key benchmarks such as HumanEval, MBPP, and their enhanced versions from EvalPlus reveals its exceptional performance. Notably, OpenCodeInterpreter-33B achieves an accuracy of 83.2 (76.4) on the average (and plus versions) of HumanEval and MBPP, closely rivaling GPT-4's 84.2 (76.2) and further elevates to 91.6 (84.6) with synthesized human feedback from GPT-4. OpenCodeInterpreter bridges the gap between open-source code generation models and proprietary systems like GPT-4 Code Interpreter.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
431,779
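The execute-and-refine loop at the heart of such systems can be sketched generically. Below, `generate` is a placeholder for any code LLM call (it is not OpenCodeInterpreter's API), and the retry budget and timeout are illustrative assumptions.

```python
import subprocess
import sys
import tempfile

def generate(prompt: str) -> str:          # hypothetical model call
    raise NotImplementedError("plug in your code LLM here")

def solve(task: str, max_rounds: int = 3) -> str:
    """Generate code, execute it, and feed failures back for refinement."""
    prompt, code = task, ""
    for _ in range(max_rounds):
        code = generate(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(code)
        run = subprocess.run([sys.executable, f.name],
                             capture_output=True, text=True, timeout=30)
        if run.returncode == 0:
            return code                     # executed cleanly; stop refining
        # Feed the traceback back to the model for the next round.
        prompt = (f"{task}\n\nYour previous code:\n{code}\n\n"
                  f"It failed with:\n{run.stderr}\nPlease fix it.")
    return code
```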
2110.03262
Situated Dialogue Learning through Procedural Environment Generation
We teach goal-driven agents to interactively act and speak in situated environments by training on generated curriculums. Our agents operate in LIGHT (Urbanek et al. 2019) -- a large-scale crowd-sourced fantasy text adventure game wherein an agent perceives and interacts with the world through textual natural language. Goals in this environment take the form of character-based quests, consisting of personas and motivations. We augment LIGHT by learning to procedurally generate additional novel textual worlds and quests to create a curriculum of steadily increasing difficulty for training agents to achieve such goals. In particular, we measure curriculum difficulty in terms of the rarity of the quest in the original training distribution -- an easier environment is one that is more likely to have been found in the unaugmented dataset. An ablation study shows that this method of learning from the tail of a distribution results in significantly higher generalization abilities as measured by zero-shot performance on never-before-seen quests.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
259,427
0801.1060
On the Period of a Periodic-Finite-Type Shift
Periodic-finite-type shifts (PFT's) form a class of sofic shifts that strictly contains the class of shifts of finite type (SFT's). In this paper, we investigate how the notion of "period" inherent in the definition of a PFT causes it to differ from an SFT, and how the period influences the properties of a PFT.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,132
2411.10156
Mitigating Sycophancy in Decoder-Only Transformer Architectures: Synthetic Data Intervention
To address the sycophancy problem caused by reinforcement learning from human feedback in large language models, this research applies synthetic data intervention technology to the decoder-only transformer architecture. Based on the research gaps in the existing literature, the researcher designed an experimental process to reduce the tendency of models to cater to users by generating diversified data, and used GPT-4o as an experimental tool for verification. The experiment used 100 true-or-false questions and compared the performance of the model trained with synthetic data intervention against the original untrained model on multiple indicators. The results show that the model trained with synthetic data intervention improves on both accuracy rate and sycophancy rate, demonstrating significant effectiveness in reducing sycophancy phenomena.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
508,522
1811.01057
Semidefinite relaxations for certifying robustness to adversarial examples
Despite their impressive performance on diverse tasks, neural networks fail catastrophically in the presence of adversarial inputs---imperceptibly but adversarially perturbed versions of natural inputs. We have witnessed an arms race between defenders who attempt to train robust networks and attackers who try to construct adversarial examples. One promise of ending the arms race is developing certified defenses, ones which are provably robust against all attackers in some family. These certified defenses are based on convex relaxations which construct an upper bound on the worst case loss over all attackers in the family. Previous relaxations are loose on networks that are not trained against the respective relaxation. In this paper, we propose a new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks. We show that our proposed relaxation is tighter than previous relaxations and produces meaningful robustness guarantees on three different "foreign networks" whose training objectives are agnostic to our proposed relaxation.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
112,255
2008.06464
Multi-Agent Deep Reinforcement Learning enabled Computation Resource Allocation in a Vehicular Cloud Network
In this paper, we investigate the computational resource allocation problem in a distributed ad-hoc vehicular network with no centralized infrastructure support. To support the ever-increasing computational needs in such a vehicular network, the distributed virtual cloud network (VCN) is formed, based on which a computational resource sharing scheme through offloading among nearby vehicles is proposed. In view of the time-varying computational resources in the VCN, the statistical distribution characteristics of the computational resources are analyzed in detail. Thereby, a resource-aware combinatorial optimization objective mechanism is proposed. To alleviate the non-stationary environment caused by the typically multi-agent environment in the VCN, we adopt a centralized training and decentralized execution framework. In addition, we model the objective optimization problem as a Markov game and propose a multi-agent deep deterministic policy gradient (MADDPG) based DRL algorithm to solve it. Interestingly, to overcome the dilemma of lacking a real central control unit in the VCN, the allocation is actually completed on the vehicles in a distributed manner. Simulation results are presented to demonstrate our scheme's effectiveness.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
191,812
2211.07647
An Interpretable Neuron Embedding for Static Knowledge Distillation
Although deep neural networks perform well on various tasks, their poor interpretability is often criticized. In this paper, we propose a new interpretable neural network method that embeds neurons into a semantic space to extract their intrinsic global semantics. In contrast to previous methods that probe latent knowledge inside the model, the proposed semantic vectors externalize latent knowledge as static knowledge, which is easy to exploit. Specifically, we assume that neurons with similar activations carry similar semantic information. Semantic vectors are then optimized by continuously aligning activation similarity with semantic-vector similarity during the training of the neural network. Visualizing the semantic vectors allows for a qualitative explanation of the neural network. Moreover, we assess the static knowledge quantitatively through knowledge distillation tasks. Visualization experiments show that the semantic vectors describe neuron activation semantics well. Without sample-by-sample guidance from the teacher model, static knowledge distillation exhibits comparable or even superior performance to existing relation-based knowledge distillation methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
330,322
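A hedged sketch of the core alignment idea in the record above: treat pairwise activation similarity between neurons as a target, and optimize per-neuron semantic vectors so that their cosine similarities match it. The loss and dimensions below are assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

N_NEURONS, N_SAMPLES, EMB_DIM = 32, 200, 16

# Rows: per-neuron activation profiles over a batch of inputs (toy data here).
activations = torch.randn(N_NEURONS, N_SAMPLES)
act_sim = F.cosine_similarity(activations.unsqueeze(1),
                              activations.unsqueeze(0), dim=-1)  # (N, N)

emb = torch.randn(N_NEURONS, EMB_DIM, requires_grad=True)  # semantic vectors
opt = torch.optim.Adam([emb], lr=1e-2)
for step in range(200):
    e = F.normalize(emb, dim=-1)
    emb_sim = e @ e.t()                      # cosine similarity of embeddings
    loss = F.mse_loss(emb_sim, act_sim)      # align the two similarity matrices
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```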
2309.15670
MONOVAB : An Annotated Corpus for Bangla Multi-label Emotion Detection
In recent years, Sentiment Analysis (SA) and Emotion Recognition (ER) have become increasingly popular for the Bangla language, the seventh most spoken language in the world. However, the language is structurally complicated, which makes accurate emotion extraction arduous. Several distinct approaches, such as extracting positive and negative sentiments as well as multiclass emotions, have been pursued in this field of study. Nevertheless, the extraction of multiple sentiments, i.e., identifying several feelings in a single piece of text, remains an almost untouched area in this language. This study therefore presents a thorough method for constructing an annotated corpus from data scraped from Facebook to bridge this gap. To make the annotation more fruitful, a context-based approach was used. Bidirectional Encoder Representations from Transformers (BERT), a well-known transformer methodology, showed the best results among all implemented methods. Finally, a web application was developed to demonstrate the performance of the pre-trained top-performing model (BERT) for multi-label ER in Bangla.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
395,065
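The multi-label setup described in the record above reduces to a sigmoid-per-label head trained with binary cross-entropy, rather than a softmax over mutually exclusive classes. The sketch below uses a stub encoder in place of BERT to stay self-contained; the label count and decision threshold are assumptions.

```python
import torch
import torch.nn as nn

N_LABELS, HIDDEN = 6, 128   # e.g., six emotion labels; assumed, not from the paper

encoder = nn.Sequential(nn.Linear(300, HIDDEN), nn.ReLU())  # stand-in for BERT
head = nn.Linear(HIDDEN, N_LABELS)                          # one logit per emotion
criterion = nn.BCEWithLogitsLoss()                          # multi-label objective

x = torch.randn(4, 300)                         # toy sentence features
y = torch.randint(0, 2, (4, N_LABELS)).float()  # several labels may be 1 at once
logits = head(encoder(x))
loss = criterion(logits, y)
loss.backward()
preds = (torch.sigmoid(logits) > 0.5).int()     # independent per-label decisions
print(loss.item(), preds.shape)
```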
2411.19943
Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability
Mathematical reasoning tasks pose significant challenges for large language models (LLMs) because they require precise logical deduction and sequence analysis. In this work, we introduce the concept of critical tokens -- elements within reasoning trajectories that significantly influence incorrect outcomes. We present a novel framework for identifying these tokens through rollout sampling and demonstrate their substantial divergence from traditional error tokens. Through extensive experiments on datasets such as GSM8K and MATH500, we show that identifying and replacing critical tokens significantly improves model accuracy. We propose an efficient methodology for pinpointing these tokens in large-scale datasets using contrastive estimation and extend this framework to enhance model training processes with direct preference optimization (DPO). Experimental results on GSM8K and MATH500 benchmarks with the widely used models Llama-3 (8B and 70B) and Deepseek-math (7B) demonstrate the effectiveness of the proposed approach, cDPO. Our results underscore the potential of leveraging critical tokens to reduce errors in reasoning tasks, advancing the development of AI systems capable of robust logical deduction. Our code, annotated datasets, and trained models are available at https://github.com/chenzhiling9954/Critical-Tokens-Matter to support and encourage future research in this promising field.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
512,438
2205.08467
Application of Graph Based Features in Computer Aided Diagnosis for Histopathological Image Classification of Gastric Cancer
The gold standard for gastric cancer detection is gastric histopathological image analysis, but existing histopathological detection and diagnosis methods have certain drawbacks. In this paper, building on the study of computer-aided diagnosis systems, graph-based features are applied to gastric cancer histopathology microscopic image analysis, and a classifier is used to distinguish gastric cancer cells from benign cells. First, image segmentation is performed; after the region is found, cell nuclei are extracted using the k-means method, the minimum spanning tree (MST) is drawn, and graph-based features of the MST are extracted. The graph-based features are then fed into the classifier for classification. In this study, different methods are compared at each stage: Level-Set, Otsu thresholding, watershed, SegNet, U-Net, and Trans-U-Net segmentation in the tissue segmentation stage; graph-based features, Red-Green-Blue features, Grey-Level Co-occurrence Matrix features, Histogram of Oriented Gradient features, and Local Binary Pattern features in the feature extraction stage; and Radial Basis Function (RBF) Support Vector Machine (SVM), linear SVM, Artificial Neural Network, Random Forest, k-Nearest Neighbor, VGG16, and Inception-V3 in the classifier stage. It is found that using U-Net to segment tissue areas, then extracting graph-based features, and finally using an RBF SVM classifier gives the best results, with an accuracy of 94.29%.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
296,952
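A minimal sketch of the middle stage of the winning pipeline described above: build an MST over cell-nuclei centroids, derive simple graph statistics, and feed them to an RBF SVM. The centroids, feature choices, and labels below are toy assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.svm import SVC

def mst_features(centroids):
    """Graph-based features from the MST over cell-nuclei centroids."""
    dist = squareform(pdist(centroids))          # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist).toarray()  # sparse MST as a dense matrix
    edges = mst[mst > 0]
    degrees = (mst + mst.T > 0).sum(axis=0)      # node degrees in the MST
    return [edges.mean(), edges.std(), edges.max(), degrees.mean()]

rng = np.random.default_rng(0)
X = np.array([mst_features(rng.random((30, 2))) for _ in range(40)])
y = rng.integers(0, 2, 40)          # toy benign/malignant labels
clf = SVC(kernel="rbf").fit(X, y)   # RBF SVM, as in the reported best pipeline
print(clf.score(X, y))
```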
1807.08415
Clustering of Driving Encounter Scenarios Using Connected Vehicle Trajectories
Multi-vehicle interaction behavior classification and analysis offer in-depth knowledge for efficient decision-making in autonomous vehicles. This paper aims to cluster a wide range of driving encounter scenarios based only on multi-vehicle GPS trajectories. Towards this end, we propose a generic unsupervised learning framework comprising two layers: a feature representation layer and a clustering layer. In the feature representation layer, we combine deep autoencoders with a distance-based measure to map the sequential observations of driving encounters into a computationally tractable space that allows quantifying the spatiotemporal interaction characteristics of two vehicles. A clustering algorithm is then applied to the extracted representations to gather homogeneous driving encounters into groups. The proposed framework is evaluated using 2,568 naturalistic driving encounters. Experimental results demonstrate that the framework, incorporated with unsupervised learning, can cluster multi-trajectory data into distinct groups. These clustering results could benefit decision-making policy analysis and design for autonomous vehicles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
103,535
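A hedged sketch of the two-layer framework described above: an autoencoder maps driving-encounter trajectories to fixed-size vectors, and a clustering algorithm groups those codes. The flat architecture and sizes below are toy stand-ins, not the paper's model.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

SEQ_LEN, FEAT, CODE = 50, 4, 8   # 2 vehicles x (x, y) per step; assumed sizes

class TrajAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(SEQ_LEN * FEAT, 64),
                                 nn.ReLU(), nn.Linear(64, CODE))
        self.dec = nn.Sequential(nn.Linear(CODE, 64), nn.ReLU(),
                                 nn.Linear(64, SEQ_LEN * FEAT))
    def forward(self, x):
        z = self.enc(x)                       # fixed-size encounter representation
        return self.dec(z).view_as(x), z

model = TrajAE()
x = torch.randn(128, SEQ_LEN, FEAT)           # toy encounter trajectories
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)       # representation-layer objective
loss.backward()
labels = KMeans(n_clusters=5, n_init=10).fit_predict(z.detach().numpy())
print(labels[:10])                            # clustering-layer output
```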
2206.03003
Transformer-based Personalized Attention Mechanism for Medical Images with Clinical Records
In medical image diagnosis, identifying the attention region, i.e., the region of interest for which the diagnosis is made, is an important task. Various methods have been developed to automatically identify target regions from given medical images. However, in actual medical practice, the diagnosis is made based not only on the images but also on a variety of clinical records. This means that pathologists examine medical images with some prior knowledge of the patients and that the attention regions may change depending on the clinical records. In this study, we propose a method called the Personalized Attention Mechanism (PersAM), by which the attention regions in medical images are adaptively changed according to the clinical records. The primary idea of the PersAM method is to encode the relationships between the medical images and clinical records using a variant of Transformer architecture. To demonstrate the effectiveness of the PersAM method, we applied it to a large-scale digital pathology problem of identifying the subtypes of 842 malignant lymphoma patients based on their gigapixel whole slide images and clinical records.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
301,111
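A minimal sketch of the conditioning mechanism described above: clinical-record embeddings attend over image patch tokens, so the attention map (and thus the attended region) changes with the record. The dimensions and single attention layer are assumptions, not the PersAM architecture itself.

```python
import torch
import torch.nn as nn

D = 64
attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)

patches = torch.randn(2, 196, D)   # image patch tokens (e.g., a 14x14 grid)
record = torch.randn(2, 1, D)      # embedded clinical record as the query
attended, weights = attn(query=record, key=patches, value=patches)
# 'weights' is a per-patch attention map that shifts with the clinical record,
# which is the personalized-attention behavior sketched in the abstract.
print(attended.shape, weights.shape)  # (2, 1, 64), (2, 1, 196)
```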
2202.11170
Multi-fidelity reinforcement learning framework for shape optimization
Deep reinforcement learning (DRL) is a promising outer-loop intelligence paradigm that can deploy problem-solving strategies for complex tasks. Consequently, DRL has been utilized for several scientific applications, specifically in cases where classical optimization or control methods are limited. One key limitation of conventional DRL methods is their episode-hungry nature, which proves to be a bottleneck for tasks that involve costly evaluations of a numerical forward model. In this article, we address this limitation of DRL by introducing a controlled transfer learning framework that leverages a multi-fidelity simulation setting. Our strategy is deployed for an airfoil shape optimization problem at high Reynolds numbers, where the framework learns an optimal policy for generating efficient airfoil shapes by gathering knowledge from multi-fidelity environments, reducing computational costs by over 30%. Furthermore, our formulation promotes policy exploration and generalization to new environments, thereby preventing over-fitting to data from a single fidelity. Our results demonstrate this framework's applicability to other scientific DRL scenarios where multi-fidelity environments can be used for policy learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,787
2208.11283
A Hierarchical Interactive Network for Joint Span-based Aspect-Sentiment Analysis
Recently, span-based methods have achieved encouraging performance for joint aspect-sentiment analysis: they first extract aspects (aspect extraction) by detecting aspect boundaries and then classify the span-level sentiments (sentiment classification). However, most existing approaches either sequentially extract task-specific features, leading to insufficient feature interaction, or encode aspect and sentiment features in parallel, so that the feature representations of the two tasks are largely independent of each other except for input sharing. Both designs ignore the internal correlations between aspect extraction and sentiment classification. To solve this problem, we propose a novel hierarchical interactive network (HI-ASA) that models two-way interactions between the two tasks through two steps: shallow-level interaction and deep-level interaction. First, we utilize a cross-stitch mechanism to selectively combine the different task-specific features as input, ensuring proper two-way interaction. Second, a mutual information technique is applied to mutually constrain the learning of the two tasks in the output layer, so that each task can encode features of the other via backpropagation. Extensive experiments on three real-world datasets demonstrate HI-ASA's superiority over baselines.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
314,368
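The cross-stitch mechanism mentioned in the record above (in the sense of Misra et al.'s cross-stitch units) mixes two task-specific feature streams with a small learnable matrix. A minimal sketch, with tensor sizes assumed:

```python
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    """Learnable 2x2 mixing of aspect-task and sentiment-task features."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))  # near-identity init
    def forward(self, feat_a, feat_b):
        mixed_a = self.alpha[0, 0] * feat_a + self.alpha[0, 1] * feat_b
        mixed_b = self.alpha[1, 0] * feat_a + self.alpha[1, 1] * feat_b
        return mixed_a, mixed_b

stitch = CrossStitch()
aspect_feat = torch.randn(8, 32, 128)       # (batch, tokens, hidden)
sentiment_feat = torch.randn(8, 32, 128)
a, b = stitch(aspect_feat, sentiment_feat)  # shallow-level two-way interaction
print(a.shape, b.shape)
```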
2111.10342
GRecX: An Efficient and Unified Benchmark for GNN-based Recommendation
In this paper, we present GRecX, an open-source TensorFlow framework for benchmarking GNN-based recommendation models in an efficient and unified way. GRecX consists of core libraries for building GNN-based recommendation benchmarks, as well as implementations of popular GNN-based recommendation models. The core libraries provide essential components for building efficient and unified benchmarks, including FastMetrics (efficient metric computation), VectorSearch (efficient similarity search for dense vectors), BatchEval (efficient mini-batch evaluation), and DataManager (unified dataset management). In particular, to provide a unified benchmark for the fair comparison of different complex GNN-based recommendation models, we design a new metric, GRMF-X, and integrate it into the FastMetrics component. Built on the TensorFlow GNN library tf_geometric, GRecX carefully implements a variety of popular GNN-based recommendation models, reproducing the performance reported in the literature with implementations that are usually more efficient and user-friendly. We conduct experiments with GRecX, and the results show that it allows users to train and benchmark GNN-based recommendation baselines in an efficient and unified way. The source code of GRecX is available at https://github.com/maenzhier/GRecX.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
267,291
2006.08040
Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs
We develop a new approach to obtaining high probability regret bounds for online learning with bandit feedback against an adaptive adversary. While existing approaches all require carefully constructing optimistic and biased loss estimators, our approach uses standard unbiased estimators and relies on a simple increasing learning rate schedule, together with the help of logarithmically homogeneous self-concordant barriers and a strengthened Freedman's inequality. Besides its simplicity, our approach enjoys several advantages. First, the obtained high-probability regret bounds are data-dependent and could be much smaller than the worst-case bounds, which resolves an open problem asked by Neu (2015). Second, resolving another open problem of Bartlett et al. (2008) and Abernethy and Rakhlin (2009), our approach leads to the first general and efficient algorithm with a high-probability regret bound for adversarial linear bandits, while previous methods are either inefficient or only applicable to specific action sets. Finally, our approach can also be applied to learning adversarial Markov Decision Processes and provides the first algorithm with a high-probability small-loss bound for this problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
182,047
1106.1017
MMSE of "Bad" Codes
We examine codes, over the additive Gaussian noise channel, designed for reliable communication at some specific signal-to-noise ratio (SNR) and constrained by the permitted minimum mean-square error (MMSE) at lower SNRs. The maximum possible rate is below point-to-point capacity, and hence these are non-optimal codes (alternatively referred to as "bad" codes). We show that the maximum possible rate is the one attained by superposition codebooks. Moreover, the MMSE and mutual information behavior as a function of SNR, for any code attaining the maximum rate under the MMSE constraint, is known for all SNR. We also provide a lower bound on the MMSE for finite length codes, as a function of the error probability of the code.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,737
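For context (a standard fact this line of work builds on, stated as a reminder rather than as the paper's result): the I-MMSE relation of Guo, Shamai, and Verdú ties mutual information to MMSE over the Gaussian channel, which is what lets an MMSE constraint at low SNR translate into a rate penalty.

```latex
% I-MMSE relation for the Gaussian channel Y = sqrt(snr) X + N with N ~ N(0, 1):
\[
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\!\left(X;\, \sqrt{\mathsf{snr}}\,X + N\right)
  \;=\; \tfrac{1}{2}\,\mathrm{mmse}\!\left(X \mid \sqrt{\mathsf{snr}}\,X + N\right),
\]
% so capping the MMSE at SNRs below the design point caps the growth of the
% mutual information, and hence the achievable rate, of the code.
```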
1609.09430
CNN Architectures for Large-Scale Audio Classification
Convolutional Neural Networks (CNNs) have proven very effective in image classification and show promise for audio. We use various CNN architectures to classify the soundtracks of a dataset of 70M training videos (5.24 million hours) with 30,871 video-level labels. We examine fully connected Deep Neural Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We investigate varying the size of both training set and label vocabulary, finding that analogs of the CNNs used in image classification do well on our audio classification task, and larger training and label sets help up to a point. A model using embeddings from these classifiers does much better than raw features on the Audio Set [5] Acoustic Event Detection (AED) classification task.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
61,715
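A minimal sketch of the kind of spectrogram-CNN pipeline the record above evaluates at scale; the input shape, the miniature VGG-style network, and the class count below are illustrative stand-ins, not the paper's architectures.

```python
import torch
import torch.nn as nn

# Toy stand-in for a batch of log-mel spectrogram patches: (batch, 1, mels, frames)
x = torch.randn(8, 1, 64, 96)

cnn = nn.Sequential(                      # a miniature VGG-style audio classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 527),                   # e.g., one logit per audio event class
)
logits = cnn(x)
probs = torch.sigmoid(logits)             # multi-label audio tagging
print(probs.shape)  # torch.Size([8, 527])
```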
2202.06369
Incremental user embedding modeling for personalized text classification
Individual user profiles and interaction histories play a significant role in providing customized experiences in real-world applications such as chatbots, social media, retail, and education. Adaptive user representation learning that utilizes personalized user information has become increasingly challenging due to ever-growing history data. In this work, we propose an incremental user embedding modeling approach in which embeddings of a user's recent interaction histories are dynamically integrated into the accumulated history vectors via a transformer encoder. This modeling paradigm allows us to create generalized user representations consecutively and also alleviates the challenges of data management. We demonstrate the effectiveness of this approach by applying it to a personalized multi-class classification task based on the Reddit dataset, achieving 9% and 30% relative improvements in prediction accuracy over a baseline system in two experimental settings through appropriate comment history encoding and task modeling.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
280,195
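A hedged sketch of the incremental update described above: a transformer encoder consumes the accumulated user vector together with embeddings of the most recent interactions and emits a new accumulated vector, so the full history never has to be revisited. Sizes, layer counts, and the position-0 readout are assumptions.

```python
import torch
import torch.nn as nn

D = 64
layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

def update_user_vector(accumulated, recent_interactions):
    """Fold recent-history embeddings into the running user representation."""
    seq = torch.cat([accumulated.unsqueeze(1), recent_interactions], dim=1)
    out = encoder(seq)
    return out[:, 0]          # position 0 becomes the new accumulated vector

user_vec = torch.zeros(1, D)                 # fresh user
for _ in range(3):                           # three incremental sessions
    recent = torch.randn(1, 5, D)            # 5 new interaction embeddings
    user_vec = update_user_vector(user_vec, recent)
print(user_vec.shape)  # torch.Size([1, 64])
```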
2011.09776
Improving Bayesian Network Structure Learning in the Presence of Measurement Error
Structure learning algorithms that learn the graph of a Bayesian network from observational data often do so by assuming the data correctly reflect the true distribution of the variables. However, this assumption does not hold in the presence of measurement error, which can lead to spurious edges. This is one of the reasons why the synthetic performance of these algorithms often overestimates real-world performance. This paper describes an algorithm that can be added as an additional learning phase at the end of any structure learning algorithm, and serves as a correction learning phase that removes potential false positive edges. The results show that the proposed correction algorithm successfully improves the graphical score of four well-established structure learning algorithms spanning different classes of learning in the presence of measurement error.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
207,302
1112.0371
Zigzag Codes: MDS Array Codes with Optimal Rebuilding
MDS array codes are widely used in storage systems to protect data against erasures. We address the \emph{rebuilding ratio} problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild \emph{exactly} the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct then the rebuilding ratio is 1 (access all the remaining information). However, the interesting and more practical case is when the number of erasures is smaller than the erasure correcting capability of the code. For example, consider an MDS code that can correct two erasures: What is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4, however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of $r$-erasure correcting MDS array codes that has optimal rebuilding ratio of $\frac{e}{r}$ in the case of $e$ erasures, $1 \le e \le r$. Our array codes have efficient encoding and decoding algorithms (for the case $r=2$ they use a finite field of size 3) and an optimal update property.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
13,288
2105.10922
OntoED: Low-resource Event Detection with Ontology Embedding
Event Detection (ED) aims to identify event trigger words in a given text and classify them into event types. Most current ED methods rely heavily on training instances and almost ignore the correlations between event types. Hence, they tend to suffer from data scarcity and fail to handle new, unseen event types. To address these problems, we formulate ED as a process of event ontology population: linking event instances to pre-defined event types in an event ontology. We propose a novel ED framework, OntoED, with ontology embedding: we enrich the event ontology with linkages among event types and thereby induce more event-event correlations. Based on the event ontology, OntoED can leverage and propagate correlation knowledge, particularly from data-rich to data-poor event types. Furthermore, OntoED can be applied to new, unseen event types by establishing linkages to existing ones. Experiments indicate that OntoED outperforms previous ED approaches and is more robust, especially in data-scarce scenarios.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
236,545
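A minimal sketch of the population step described above: score an event-instance embedding against ontology event-type embeddings and link the instance to the nearest type, which also accommodates unseen types once their embeddings are added. The type names and embeddings below are toy data, not OntoED itself.

```python
import torch
import torch.nn.functional as F

TYPES = ["Attack", "Transport", "Meet", "Elect"]        # toy event ontology
type_emb = F.normalize(torch.randn(len(TYPES), 32), dim=-1)
instance_emb = F.normalize(torch.randn(1, 32), dim=-1)  # encoded trigger mention

scores = instance_emb @ type_emb.t()        # cosine similarity to each type
pred = TYPES[scores.argmax().item()]        # link instance to nearest event type
# A new, unseen type is handled by appending its embedding to type_emb.
print(pred, scores)
```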
2205.01089
ComPhy: Compositional Physical Reasoning of Objects and Events from Videos
Objects' motions in nature are governed by complex interactions and their properties. While some properties, such as shape and material, can be identified from an object's visual appearance, others, like mass and electric charge, are not directly visible. The compositionality between visible and hidden properties poses unique challenges for AI models reasoning about the physical world, whereas humans can effortlessly infer such properties from limited observations. Existing studies on video reasoning mainly focus on visually observable elements such as object appearance, movement, and contact interaction. In this paper, we take an initial step towards highlighting the importance of inferring hidden physical properties not directly observable from visual appearance, by introducing the Compositional Physical Reasoning (ComPhy) dataset. For a given set of objects, ComPhy includes a few videos of them moving and interacting under different initial conditions. A model is evaluated on its ability to unravel compositional hidden properties, such as mass and charge, and to use this knowledge to answer a set of questions posed about one of the videos. Evaluation of several state-of-the-art video reasoning models on ComPhy shows unsatisfactory performance, as they fail to capture these hidden properties. We further propose an oracle neural-symbolic framework named Compositional Physics Learner (CPL), combining visual perception, physical property learning, dynamic prediction, and symbolic execution into a unified framework. CPL can effectively identify objects' physical properties from their interactions and predict their dynamics to answer questions.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
294,478
1910.00294
When and Why is Document-level Context Useful in Neural Machine Translation?
Document-level context has received much attention as a way to compensate for the limitations of neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of a document-level model's improvements on general test sets, clarifying the limits of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for context modeling and that very long context is not helpful for NMT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
147,631