Dataset schema:
- id: string (9 to 16 chars)
- title: string (4 to 278 chars)
- abstract: string (3 to 4.08k chars)
- 18 boolean label columns (2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)

In each record below, only the label columns that are true are listed; all other label columns are false.
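For readers working with this dump programmatically, the sketch below shows one way to recover the active category labels from a record. It is a minimal sketch assuming the rows are available as plain Python dicts keyed by the column names above (e.g., after loading with pandas or the datasets library); the LABEL_COLUMNS list and the active_labels helper are illustrative names introduced here, not part of the dataset itself.

```python
# Minimal sketch: decode the 18 boolean label columns of one record.
# Assumption: rows arrive as plain dicts keyed by the schema's column names.

# The 18 boolean label columns, in the order they appear in the schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list[str]:
    """Return the names of the label columns that are True for one record."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Example, using the first record below (strings abridged):
row = {
    "id": "1406.2293",
    "title": "Using Gossips to Spread Information: ...",
    "abstract": "Is it possible to identify individuals ...",
    "cs.SI": True,  # every other label column in this record is False
    "__index_level_0__": 33734,
}
print(active_labels(row))  # -> ['cs.SI']
```

Note that records can carry more than one true label (e.g., 1912.02605 below is both cs.LG and cs.CV), so the label columns should be treated as a multi-label target rather than a single class.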
1406.2293
Using Gossips to Spread Information: Theory and Evidence from a Randomized Controlled Trial
Is it possible to identify individuals who are highly central in a community without gathering any network information, simply by asking a few people? If we use people's nominees as seeds for a diffusion process, will it be successful? We explore these questions theoretically, via surveys, and via field experiments. We show via a model of information flow how members of a community can, just by tracking gossip about others, identify highly central individuals in their network. Asking villagers in rural Indian villages to name good seeds for diffusion, we find that they accurately nominate those who are central according to a measure tailored for diffusion - not just those with many friends or in powerful positions. Finally, we run a randomized field experiment in 213 other villages that tests how effective it is to use such nominations as seeds for a diffusion process. Relative to random seeds or those with high social status, hitting at least one seed nominated by villagers leads to more than a 65% increase in the spread of information.
Labels: cs.SI | __index_level_0__: 33,734

1912.02605
Towards Understanding Residual and Dilated Dense Neural Networks via Convolutional Sparse Coding
Convolutional neural networks (CNNs) and their variants have led to many state-of-the-art results in various fields. However, a clear theoretical understanding of them is still lacking. Recently, multi-layer convolutional sparse coding (ML-CSC) has been proposed and proved to be equivalent to such simply stacked networks (plain networks). We argue that three factors in each layer of ML-CSC, namely the initialization, the dictionary design, and the number of iterations, greatly affect its performance. Inspired by these considerations, we propose two novel multi-layer models: the residual convolutional sparse coding model (Res-CSC) and the mixed-scale dense convolutional sparse coding model (MSD-CSC), which are closely related to the residual neural network (ResNet) and the mixed-scale (dilated) dense neural network (MSDNet), respectively. Mathematically, we derive the shortcut connection in ResNet as a special case of a new forward propagation rule on ML-CSC. By analyzing MSD-CSC, we find a theoretical interpretation of the dilated convolution and dense connection in MSDNet, which gives a clear mathematical understanding of them. We implement the iterative soft-thresholding algorithm (ISTA) and its fast version to solve Res-CSC and MSD-CSC, both of which can employ the unfolding operation for further improvements. Finally, extensive numerical experiments and comparisons with competing methods on three typical datasets demonstrate their effectiveness.
Labels: cs.LG, cs.CV | __index_level_0__: 156,390

2301.13340
Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive Learning
Hard negative mining has proven effective in enhancing self-supervised contrastive learning (CL) on diverse data types, including graph CL (GCL). Existing hardness-aware CL methods typically treat the negative instances most similar to the anchor instance as hard negatives, which helps improve CL performance, especially on image data. On graph data, however, this approach often fails to identify true hard negatives and instead produces many false negatives, mainly because the learned graph representations are not sufficiently discriminative, owing to oversmoothing and/or non-independent and identically distributed (non-i.i.d.) issues in graph data. To tackle this problem, this article proposes a novel approach that builds a discriminative model on collective affinity information (i.e., two sets of pairwise affinities between the negative instances and the anchor instance) to mine hard negatives in GCL. In particular, the proposed approach evaluates how confident/uncertain the discriminative model is about the affinity of each negative instance to an anchor instance to determine its hardness weight relative to the anchor instance. This uncertainty information is then incorporated into existing GCL loss functions via a weighting term to enhance their performance. The enhanced GCL is theoretically grounded: the resulting GCL loss is equivalent to a triplet loss with an adaptive margin that is exponentially proportional to the learned uncertainty of each negative instance. Extensive experiments on ten graph datasets show that our approach 1) consistently enhances different state-of-the-art (SOTA) GCL methods in both graph and node classification tasks and 2) significantly improves their robustness against adversarial attacks. Code is available at https://github.com/mala-lab/AUGCL.
Labels: cs.SI, cs.LG | __index_level_0__: 342,864

1806.10206
Deep Feature Factorization For Concept Discovery
We propose Deep Feature Factorization (DFF), a method capable of localizing similar semantic concepts within an image or a set of images. We use DFF to gain insight into a deep convolutional neural network's learned features, where we detect hierarchical cluster structures in feature space. This is visualized as heat maps, which highlight semantically matching regions across a set of images, revealing what the network `perceives' as similar. DFF can also be used to perform co-segmentation and co-localization, and we report state-of-the-art results on these tasks.
Labels: cs.LG, cs.CV | __index_level_0__: 101,505

1801.06889
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
Deep learning has recently seen rapid development and received significant attention due to its state-of-the-art performance on previously-thought hard problems. However, because of the internal complexity and nonlinear structure of deep neural networks, the underlying decision making processes for why these models are achieving such performance are challenging and sometimes mystifying to interpret. As deep learning spreads across domains, it is of paramount importance that we equip users of deep learning with tools for understanding when a model works correctly, when it fails, and ultimately how to improve its performance. Standardized toolkits for building neural networks have helped democratize deep learning; visual analytics systems have now been developed to support model explanation, interpretation, debugging, and improvement. We present a survey of the role of visual analytics in deep learning research, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How (Why, Who, What, How, When, and Where). We conclude by highlighting research directions and open research problems. This survey helps researchers and practitioners in both visual analytics and deep learning to quickly learn key aspects of this young and rapidly growing body of research, whose impact spans a diverse range of domains.
Labels: cs.HC, cs.AI, cs.LG | __index_level_0__: 88,697

2110.10568
Inference Graphs for CNN Interpretation
Convolutional neural networks (CNNs) have achieved superior accuracy in many visual related tasks. However, the inference process through intermediate layers is opaque, making it difficult to interpret such networks or develop trust in their operation. We propose to model the network hidden layers activity using probabilistic models. The activity patterns in layers of interest are modeled as Gaussian mixture models, and transition probabilities between clusters in consecutive modeled layers are estimated. Based on maximum-likelihood considerations, nodes and paths relevant for network prediction are chosen, connected, and visualized as an inference graph. We show that such graphs are useful for understanding the general inference process of a class, as well as explaining decisions the network makes regarding specific images.
Labels: cs.CV | __index_level_0__: 262,199

2312.09532
Grounding for Artificial Intelligence
A core function of intelligence is grounding, the process of connecting natural language and abstract knowledge to the internal representation of the real world in an intelligent being, e.g., a human. Human cognition is grounded in our sensorimotor experiences in the external world and subjective feelings in our internal world. We use languages to communicate with each other, and the languages are grounded in our shared sensorimotor experiences and feelings. Without this shared grounding, it would be impossible for us to understand each other, because all natural languages are highly abstract and can only describe a tiny portion of what has happened or is happening in the real world. Although grounding at high or abstract levels has been studied in different fields and applications, to our knowledge, limited systematic work has been done at fine-grained levels. With the rapid progress of large language models (LLMs), it is imperative that we have a sound understanding of grounding in order to move to the next level of intelligence. It is also believed that grounding is necessary for Artificial General Intelligence (AGI). This paper makes an attempt to systematically study this problem.
Labels: cs.AI | __index_level_0__: 415,769

2409.08285
DIC2CAE: Calculating the stress intensity factors (KI-III) from 2D and stereo displacement fields
Integrating experimental data into simulations is crucial for predicting material behaviour, especially in fracture mechanics. Digital Image Correlation (DIC) provides precise displacement measurements, essential for evaluating strain energy release rates and stress intensity factors (SIF) around cracks. Translating DIC data into CAE software like ABAQUS has been challenging. DIC2CAE, a MATLAB-based tool, automates this conversion, enabling accurate simulations. It uses the J-integral method to calculate SIFs and handles complex scenarios without needing specimen geometry or applied loads. DIC2CAE enhances fracture mechanics simulations' reliability, accelerating materials research and development.
Labels: cs.CE | __index_level_0__: 487,849

2407.02549
Diffusion Models for Tabular Data Imputation and Synthetic Data Generation
Data imputation and data generation have important applications in many domains, such as healthcare and finance, where incomplete or missing data can hinder accurate analysis and decision-making. Diffusion models have emerged as powerful generative models capable of capturing complex data distributions across various data modalities such as image, audio, and time series data. Recently, they have also been adapted to generate tabular data. In this paper, we propose a diffusion model for tabular data that introduces three key enhancements: (1) a conditioning attention mechanism, (2) an encoder-decoder transformer as the denoising network, and (3) dynamic masking. The conditioning attention mechanism is designed to improve the model's ability to capture the relationship between the condition and synthetic data. The transformer layers help model interactions within the condition (encoder) or synthetic data (decoder), while dynamic masking enables our model to efficiently handle both missing data imputation and synthetic data generation tasks within a unified framework. We conduct a comprehensive evaluation by comparing the performance of diffusion models with transformer conditioning against state-of-the-art techniques, such as Variational Autoencoders, Generative Adversarial Networks and Diffusion Models, on benchmark datasets. Our evaluation focuses on the assessment of the generated samples with respect to three important criteria, namely: (1) Machine Learning efficiency, (2) statistical similarity, and (3) privacy risk mitigation. For the task of data imputation, we consider the efficiency of the generated samples across different levels of missing features.
Labels: cs.LG | __index_level_0__: 469,791

2402.16759
The Door and Drawer Reset Mechanisms: Automated Mechanisms for Testing and Data Collection
Robotic manipulation in human environments is a challenging problem for researchers and industry alike. In particular, opening doors/drawers can be challenging for robots, as the size, shape, actuation, and required force are variable. Because of this, it can be difficult to collect large real-world datasets and to benchmark different control algorithms on the same hardware. In this paper we present two automated testbeds, the Door Reset Mechanism (DORM) and Drawer Reset Mechanism (DWRM), for the purpose of real-world testing and data collection. These devices are low-cost and sensorized, operate with customized variable resistance, and come with open-source software. Additionally, we provide a dataset of over 600 grasps using the DORM and DWRM. We use this dataset to highlight how much variability can exist even for the same trial on the same hardware. This data can also serve as a source for real-world noise in simulation environments.
Labels: cs.RO | __index_level_0__: 432,681

1911.07960
The αμ Search Algorithm for the Game of Bridge
αμ is an anytime heuristic search algorithm for incomplete information games that assumes perfect information for the opponents. αμ addresses the strategy fusion and non-locality problems encountered by Perfect Information Monte Carlo sampling. In this paper, αμ is applied to the game of Bridge.
Labels: cs.AI | __index_level_0__: 154,030

1403.4175
Approximate Dynamic Programming based on Projection onto the (min,+) subsemimodule
We develop a new Approximate Dynamic Programming (ADP) method for infinite horizon discounted reward Markov Decision Processes (MDP) based on projection onto a subsemimodule. We approximate the value function in terms of a $(\min,+)$ linear combination of a set of basis functions whose $(\min,+)$ linear span constitutes a subsemimodule. The projection operator is closely related to the Fenchel transform. Our approximate solution obeys the $(\min,+)$ Projected Bellman Equation (MPPBE) which is different from the conventional Projected Bellman Equation (PBE). We show that the approximation error is bounded in its $L_\infty$-norm. We develop a Min-Plus Approximate Dynamic Programming (MPADP) algorithm to compute the solution to the MPPBE. We also present the proof of convergence of the MPADP algorithm and apply it to two problems, a grid-world problem in the discrete domain and mountain car in the continuous domain.
Labels: cs.SY | __index_level_0__: 31,629

2108.10392
A generalized stacked reinforcement learning method for sampled systems
A common setting of reinforcement learning (RL) is a Markov decision process (MDP) in which the environment is a stochastic discrete-time dynamical system. Whereas MDPs are suitable for applications such as video games or puzzles, physical systems are continuous in time. RL is generally implemented in a digital format, where updates of the value (or cost) and policy are performed at discrete moments in time. The agent-environment loop then amounts to a sampled system, of which sample-and-hold is a specific case. In this paper, we propose and benchmark two RL methods suitable for sampled systems. Specifically, we hybridize model-predictive control (MPC) with critics learning the optimal Q- and value (or cost-to-go) functions. Optimality is analyzed and a performance comparison is conducted in an experimental case study with a mobile robot.
Labels: cs.RO, cs.SY | __index_level_0__: 251,881

2310.01571
Contraction Properties of the Global Workspace Primitive
To push forward the important emerging research field surrounding multi-area recurrent neural networks (RNNs), we expand theoretically and empirically on the provably stable RNNs of RNNs introduced by Kozachkov et al. in "RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks". We prove relaxed stability conditions for salient special cases of this architecture, most notably for a global workspace modular structure. We then demonstrate empirical success for Global Workspace Sparse Combo Nets with a small number of trainable parameters, not only through strong overall test performance but also greater resilience to removal of individual subnetworks. These empirical results for the global workspace inter-area topology are contingent on stability preservation, highlighting the relevance of our theoretical work for enabling modular RNN success. Further, by exploring sparsity in the connectivity structure between different subnetwork modules more broadly, we improve the state of the art performance for stable RNNs on benchmark sequence processing tasks, thus underscoring the general utility of specialized graph structures for multi-area RNNs.
Labels: cs.LG, cs.SY | __index_level_0__: 396,474

2412.11100
DynamicScaler: Seamless and Scalable Video Generation for Panoramic Scenes
The increasing demand for immersive AR/VR applications and spatial intelligence has heightened the need to generate high-quality scene-level and 360° panoramic video. However, most video diffusion models are constrained by limited resolution and aspect ratio, which restricts their applicability to scene-level dynamic content synthesis. In this work, we propose DynamicScaler, which addresses these challenges by enabling spatially scalable and panoramic dynamic scene synthesis that preserves coherence across panoramic scenes of arbitrary size. Specifically, we introduce an Offset Shifting Denoiser, which enables efficient, synchronous, and coherent denoising of panoramic dynamic scenes via a fixed-resolution diffusion model with a seamlessly rotating window, ensuring smooth boundary transitions and consistency across the entire panoramic space while accommodating varying resolutions and aspect ratios. Additionally, we employ a Global Motion Guidance mechanism to ensure both local detail fidelity and global motion continuity. Extensive experiments demonstrate that our method achieves superior content and motion quality in panoramic scene-level video generation, offering a training-free, efficient, and scalable solution for immersive dynamic scene creation with constant VRAM consumption regardless of the output video resolution. Our project page is available at https://dynamic-scaler.pages.dev/.
Labels: cs.CV | __index_level_0__: 517,257

1811.12695
An Efficient Image Retrieval Based on Fusion of Low-Level Visual Features
Due to the increase in the number of image archives, Content-Based Image Retrieval (CBIR) has gained attention from the computer vision research community. An image's visual contents are represented as numerical values in a feature space, which together form the image's feature vector. Images belonging to different classes may contain common visuals and shapes, which can bring the computed feature vectors of two images from separate classes close together. For this reason, appropriate features must be selected for feature extraction and image representation, as this choice directly affects the performance of the image retrieval system. The commonly used visual features are image spatial layout, color, texture, and shape. Feature spaces are combined to achieve a discriminating ability that is not possible when the features are used separately. Therefore, in this paper we explore low-level feature combinations based on color and shape features. We selected color moments and the color histogram to represent color, while shape is represented using invariant moments. We selected this combination because these features are reported to be intuitive, compact, and robust for image representation. We evaluated the performance of our proposed research using the Corel, Coil, and Ground Truth (GT) image datasets. We evaluated the proposed low-level feature fusion by calculating the precision, the recall, and the time required for feature extraction. The precision, recall, and feature extraction values obtained from the proposed low-level feature fusion outperform the existing research on CBIR.
Labels: cs.CV | __index_level_0__: 115,074

1901.00049
SiCloPe: Silhouette-Based Clothed People
We introduce a new silhouette-based representation for modeling clothed human bodies using deep generative models. Our method can reconstruct a complete and textured 3D model of a person wearing clothes from a single input picture. Inspired by the visual hull algorithm, our implicit representation uses 2D silhouettes and 3D joints of a body pose to describe the immense shape complexity and variations of clothed people. Given a segmented 2D silhouette of a person and its inferred 3D joints from the input picture, we first synthesize consistent silhouettes from novel view points around the subject. The synthesized silhouettes which are the most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction. We then infer the texture of the subject's back view using the frontal image and segmentation mask as input to a conditional generative adversarial network. Our experiments demonstrate that our silhouette-based model is an effective representation and the appearance of the back view can be predicted reliably using an image-to-image translation network. While classic methods based on parametric models often fail for single-view images of subjects with challenging clothing, our approach can still produce successful results, which are comparable to those obtained from multi-view input.
Labels: cs.CV | __index_level_0__: 117,674

2305.01653
Physics-Informed and Data-Driven Discovery of Governing Equations for Complex Phenomena in Heterogeneous Media
The rapid evolution of sensor technology, advances in instrumentation, and progress in devising data-acquisition software and hardware are providing vast amounts of data for various complex phenomena, ranging from those in the atmospheric environment, to large-scale porous formations, to biological systems. The tremendous increase in the speed of scientific computing has also made it possible to emulate diverse high-dimensional, multiscale and multiphysics phenomena that contain elements of stochasticity, and to generate large volumes of numerical data for them in heterogeneous systems. The difficulty, however, is that the governing equations for such phenomena are often not known. A prime example is flow, transport, and deformation processes in macroscopically-heterogeneous materials and geomedia. In other cases, the governing equations are only partially known, in the sense that they either contain various coefficients that must be evaluated based on data, or they require constitutive relations, such as the relationship between the stress tensor and the velocity gradients for non-Newtonian fluids in the momentum conservation equation, in order to be useful for modeling. Several classes of approaches are emerging to address such problems based on machine learning, symbolic regression, the Mori-Zwanzig projection operator formulation, sparse identification of nonlinear dynamics, data assimilation, and stochastic optimization and analysis, or a combination of two or more of such approaches. This Perspective describes the latest developments in this highly important area and discusses possible future directions.
Labels: cs.CE | __index_level_0__: 361,756

1904.12274
Robust subspace clustering by Cauchy loss function
Subspace clustering is the problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches follow the model of spectral-clustering-based methods. These methods focus on learning the representation matrix to construct a suitable similarity matrix, and overlook the influence of the noise term on subspace clustering. However, real data are always contaminated by noise, and the noise usually has a complicated statistical distribution. To alleviate this problem, we propose in this paper a subspace clustering method based on the Cauchy loss function (CLF). In particular, it uses the CLF to penalize the noise term, suppressing the large noise mixed into real data. This is because the CLF's influence function has an upper bound, which can alleviate the influence of a single sample, especially a sample with large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real datasets reveal that our proposed method outperforms several representative clustering methods.
Labels: cs.CV | __index_level_0__: 129,065

2401.11929
Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis, characterized by extensive input sequences, as opposed to the shorter spans typical of traditional approaches. While longer sequences inherently offer richer information for enhanced predictive precision, prevailing studies often respond by escalating model complexity. These intricate models can inflate into millions of parameters, resulting in prohibitive parameter scales. Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation while achieving uniformly superior and robust results across various datasets. Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks, using over 99% fewer parameters than the majority of competing methods. Through this work, we aim to unleash the power of a restricted set of parameters by capitalizing on domain characteristics: a timely reminder that in the realm of LTSF, bigger is not invariably better.
Labels: cs.LG | __index_level_0__: 423,206

1201.3851
Combinatorial Modelling and Learning with Prediction Markets
Combining models in appropriate ways to achieve high performance is common in machine learning today. Although a large number of combinatorial models have been created, little attention has been paid to the commonalities among different models and their connections. A general modelling technique is thus worth studying to understand model combination deeply and shed light on creating new models. Prediction markets show promise of becoming such a generic, flexible combinatorial model. By reviewing several popular combinatorial models and prediction market models, this paper aims to show how market models can generalise different combinatorial structures and how they implement these popular combinatorial models under specific conditions. Moreover, we will see that, among the different market models, Storkey's Machine Learning Markets provide more fundamental, generic modelling mechanisms than the others, and hold significant appeal for both theoretical study and application.
Labels: cs.AI, Other | __index_level_0__: 13,877

1511.03981
Disconnected, fragmented, or united? A trans-disciplinary review of network science
For decades, the study of networks has been divided between the efforts of social scientists and natural scientists, two groups of scholars who often do not see eye to eye. In this review, I present an effort to mutually translate the work conducted by scholars from both of these academic fronts, hoping to continue to unify what has become a diverging body of literature. I argue that social and natural scientists fail to see eye to eye because they have diverging academic goals. Social scientists focus on explaining how context-specific social and economic mechanisms drive the structure of networks and on how networks shape social and economic outcomes. By contrast, natural scientists focus primarily on modeling network characteristics that are independent of context, since their focus is to identify universal characteristics of systems instead of context-specific mechanisms. In the following pages I discuss the differences between these two literatures by summarizing the parallel theories advanced to explain link formation and the applications used by scholars in each field to justify their approach to network science. I conclude by providing an outlook on how these literatures can be further unified.
Labels: cs.SI | __index_level_0__: 48,825

1906.08928
Learning Reward Functions by Integrating Human Demonstrations and Preferences
Our goal is to accurately and efficiently learn reward functions for autonomous robots. Current approaches to this problem include inverse reinforcement learning (IRL), which uses expert demonstrations, and preference-based learning, which iteratively queries the user for her preferences between trajectories. In robotics however, IRL often struggles because it is difficult to get high-quality demonstrations; conversely, preference-based learning is very inefficient since it attempts to learn a continuous, high-dimensional function from binary feedback. We propose a new framework for reward learning, DemPref, that uses both demonstrations and preference queries to learn a reward function. Specifically, we (1) use the demonstrations to learn a coarse prior over the space of reward functions, to reduce the effective size of the space from which queries are generated; and (2) use the demonstrations to ground the (active) query generation process, to improve the quality of the generated queries. Our method alleviates the efficiency issues faced by standard preference-based learning methods and does not exclusively depend on (possibly low-quality) demonstrations. In numerical experiments, we find that DemPref is significantly more efficient than a standard active preference-based learning method. In a user study, we compare our method to a standard IRL method; we find that users rated the robot trained with DemPref as being more successful at learning their desired behavior, and preferred to use the DemPref system (over IRL) to train the robot.
Labels: cs.AI, cs.RO | __index_level_0__: 136,016

2004.02249
CondenseUNet: A Memory-Efficient Condensely-Connected Architecture for Bi-ventricular Blood Pool and Myocardium Segmentation
With the advent of Cardiac Cine Magnetic Resonance (CMR) Imaging, there has been a paradigm shift in medical technology, thanks to its capability of imaging different structures within the heart without ionizing radiation. However, it is very challenging to conduct pre-operative planning of minimally invasive cardiac procedures without accurate segmentation and identification of the left ventricle (LV), right ventricle (RV) blood-pool, and LV-myocardium. Manual segmentation of those structures, nevertheless, is time-consuming and often prone to error and biased outcomes. Hence, automatic and computationally efficient segmentation techniques are paramount. In this work, we propose a novel memory-efficient Convolutional Neural Network (CNN) architecture as a modification of both CondenseNet and DenseNet for ventricular blood-pool segmentation by introducing a bottleneck block and an upsampling path. Our experiments show that the proposed architecture runs on the Automated Cardiac Diagnosis Challenge (ACDC) dataset using half (50%) the memory requirement of DenseNet and one-twelfth (~8%) of the memory requirement of U-Net, while still maintaining excellent accuracy of cardiac segmentation. We validated the framework on the ACDC dataset featuring one healthy and four pathology groups whose heart images were acquired throughout the cardiac cycle, and achieved mean Dice scores of 96.78% (LV blood-pool), 93.46% (RV blood-pool) and 90.1% (LV-Myocardium). These results are promising and promote the proposed methods as a competitive tool for cardiac image segmentation and clinical parameter estimation that has the potential to provide fast and accurate results, as needed for pre-procedural planning and/or pre-operative applications.
Labels: cs.LG, cs.CV | __index_level_0__: 171,172

2006.08085
Optimal Complexity in Decentralized Training
Decentralization is a promising method of scaling up parallel machine learning systems. In this paper, we provide a tight lower bound on the iteration complexity for such methods in a stochastic non-convex setting. Our lower bound reveals a theoretical gap in known convergence rates of many existing decentralized training algorithms, such as D-PSGD. We prove by construction that this lower bound is tight and achievable. Motivated by our insights, we further propose DeTAG, a practical gossip-style decentralized algorithm that achieves the lower bound with only a logarithmic gap. Empirically, we compare DeTAG with other decentralized algorithms on image classification tasks, and we show DeTAG enjoys faster convergence compared to baselines, especially on unshuffled data and in sparse networks.
Labels: cs.LG | __index_level_0__: 182,058

2203.14169
AutoTS: Automatic Time Series Forecasting Model Design Based on Two-Stage Pruning
Automatic Time Series Forecasting (TSF) model design, which aims to help users efficiently design suitable forecasting models for given time series data scenarios, is a novel research topic that urgently needs to be solved. In this paper, we propose the AutoTS algorithm, which utilizes existing design skills and efficient search methods to effectively solve this problem. In AutoTS, we extract effective design experience from existing TSF works. We allow the effective combination of design experience from different sources, so as to create an effective search space containing a variety of TSF models to support different TSF tasks. Considering the huge search space, we propose a two-stage pruning strategy to reduce the search difficulty and improve the search efficiency. In addition, we introduce a knowledge graph to reveal associations between module options. We make full use of this relational information to learn higher-level features of each module option, so as to further improve the search quality. Extensive experimental results show that AutoTS is well-suited for the TSF area. It is more efficient than existing neural architecture search algorithms and can quickly design powerful TSF models that outperform manually designed ones.
Labels: cs.LG | __index_level_0__: 287,906

1908.09362
LightMC: A Dynamic and Efficient Multiclass Decomposition Algorithm
Multiclass decomposition splits a multiclass classification problem into a series of independent binary learners and recomposes them by combining their outputs to reconstruct the multiclass classification results. Three widely-used realizations of such decomposition methods are One-Versus-All (OVA), One-Versus-One (OVO), and Error-Correcting-Output-Code (ECOC). While OVA and OVO are quite simple, both assume all classes are orthogonal, which neglects the latent correlation between classes in the real world. ECOC-based decomposition methods, on the other hand, are preferable due to their integration of the correlation among classes. However, the performance of existing ECOC-based methods highly depends on the design of the coding matrix and decoding strategy. Unfortunately, it is quite uncertain and time-consuming to discover an effective coding matrix with an appropriate decoding strategy. To address this problem, we propose LightMC, an efficient dynamic multiclass decomposition algorithm. Instead of using a fixed coding matrix and decoding strategy, LightMC uses a differentiable decoding strategy, which enables it to dynamically optimize the coding matrix and decoding strategy, toward increasing the overall accuracy of multiclass classification, via back propagation jointly with the training of base learners in an iterative way. Empirical experimental results on several public large-scale multiclass classification datasets have demonstrated the effectiveness of LightMC in terms of both good accuracy and high efficiency.
Labels: cs.AI, cs.LG | __index_level_0__: 142,832

1904.13268
Handwritten Chinese Font Generation with Collaborative Stroke Refinement
Automatic character generation is an appealing solution for new typeface design, especially for Chinese typefaces, which include over 3700 of the most commonly-used characters. This task has two main pain points: (i) handwritten characters are usually associated with thin strokes carrying little information and complex structures that are error-prone during deformation; (ii) thousands of characters with various shapes need to be synthesized from a few manually designed characters. To solve these issues, we propose a novel convolutional-neural-network-based model with three main techniques: collaborative stroke refinement, which uses a collaborative training strategy to recover missing or broken strokes; online zoom-augmentation, which takes advantage of the content-reuse phenomenon to reduce the size of the training set; and adaptive pre-deformation, which standardizes and aligns the characters. The proposed model needs only 750 paired training samples; no pre-trained network, extra dataset resources, or labels are needed. Experimental results show that the proposed method significantly outperforms the state-of-the-art methods under the practical restrictions of handwritten font synthesis.
Labels: cs.CV | __index_level_0__: 129,344

2006.04306
On smooth or 0/1 designs of the fixed-mesh element-based topology optimization
Traditional element-based topology optimization based on material penalization typically aims at a 0/1 design. Our numerical experiments reveal that the compliance of a smooth design is overestimated when the material properties of boundary intermediate elements under fixed-mesh finite element analysis are interpolated with a material penalization model. This paper proposes a floating projection topology optimization (FPTO) method for seeking a smooth design using the ersatz material model or a 0/1 design using a material penalization model. The proposed floating projection constraint, combined with the upper and lower bounds, heuristically simulates the 0/1 constraints on the design variables in the original discrete optimization problem. Numerical examples demonstrate the capability of the proposed element-based topology optimization approach in obtaining 0/1 or smooth designs for 2D and 3D compliance minimization problems. The proposed approach can be easily implemented within the framework of fixed-mesh finite element analysis and provides an alternative way to form explicit structural topologies, especially when the ersatz material model is adopted.
Labels: cs.CE, Other | __index_level_0__: 180,645

2412.07282
HARP: Hesitation-Aware Reframing in Transformer Inference Pass
This paper aims to improve the performance of large language models by addressing the variable computational demands of inference steps, where some tokens require more computational resources than others. We present HARP, a simple modification to the "off-the-shelf" Transformer forward pass. Drawing on hesitation and the framing effect in decision-making, HARP selectively applies additional computation when the model encounters uncertainty during token generation. Our method mimics human cognitive processes by pausing at difficult decision points and reframing inputs for a different perspective. Unlike other approaches, HARP is model-agnostic, training-free, and easy to implement. We thoroughly evaluate our method across various downstream tasks and model sizes, demonstrating performance improvements of up to +5.16%. Notably, HARP achieves these gains while keeping inference twice as fast as beam search. Simple yet delivering significant gains, HARP offers a practical solution for enhancing the performance of Transformer-based language models with minimal computational impact.
Labels: cs.AI, cs.LG, cs.CL | __index_level_0__: 515,609

1812.10576
Deconfounding Reinforcement Learning in Observational Settings
We propose a general formulation for addressing reinforcement learning (RL) problems in settings with observational data. That is, we consider the problem of learning good policies solely from historical data in which unobserved factors (confounders) affect both observed actions and rewards. Our formulation allows us to extend a representative RL algorithm, the Actor-Critic method, to its deconfounding variant, with the methodology for this extension being easily applied to other RL algorithms. In addition to this, we develop a new benchmark for evaluating deconfounding RL algorithms by modifying the OpenAI Gym environments and the MNIST dataset. Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data. To the best of our knowledge, this is the first time that confounders are taken into consideration for addressing full RL problems with observational data. Code is available at https://github.com/CausalRL/DRL.
Labels: cs.LG | __index_level_0__: 117,399

2101.00850
Low Light Image Enhancement via Global and Local Context Modeling
Images captured under low-light conditions exhibit poor visibility and lack contrast and color vividness. Compared to conventional approaches, deep convolutional neural networks (CNNs) perform well at enhancing images. However, being solely reliant on confined fixed primitives to model dependencies, existing data-driven deep models do not exploit contexts at various spatial scales for low-light image enhancement. These contexts can be crucial for several image enhancement tasks, e.g., local and global contrast, brightness, and color corrections, which require cues from both local and global spatial extents. To this end, we introduce a context-aware deep network for low-light image enhancement. First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain. Second, it introduces a dense residual block that captures local context with a relatively large receptive field. We evaluate the proposed approach using three challenging datasets: MIT-Adobe FiveK, LoL, and SID. On all these datasets, our method performs favorably against the state of the art in terms of standard image fidelity metrics. In particular, compared to the best-performing method on the MIT-Adobe FiveK dataset, our algorithm improves PSNR from 23.04 dB to 24.45 dB.
Labels: cs.CV | __index_level_0__: 214,223

2403.13741
Hyper Strategy Logic
Strategy logic (SL) is a powerful temporal logic that enables strategic reasoning in multi-agent systems. SL supports explicit (first-order) quantification over strategies and provides a logical framework to express many important properties such as Nash equilibria, dominant strategies, etc. While in SL the same strategy can be used in multiple strategy profiles, each such profile is evaluated w.r.t. a path-property, i.e., a property that considers the single path resulting from a particular strategic interaction. In this paper, we present Hyper Strategy Logic (HyperSL), a strategy logic where the outcome of multiple strategy profiles can be compared w.r.t. a hyperproperty, i.e., a property that relates multiple paths. We show that HyperSL can capture important properties that cannot be expressed in SL, including non-interference, quantitative Nash equilibria, optimal adversarial planning, and reasoning under imperfect information. On the algorithmic side, we identify an expressive fragment of HyperSL with decidable model checking and present a model-checking algorithm. We contribute a prototype implementation of our algorithm and report on encouraging experimental results.
Labels: cs.AI, cs.MA, Other | __index_level_0__: 439,756

2007.12087
Hide-and-Seek Privacy Challenge
The clinical time-series setting poses a unique combination of challenges to data modeling and sharing. Due to the high dimensionality of clinical time series, adequate de-identification to preserve privacy while retaining data utility is difficult to achieve using common de-identification techniques. An innovative approach to this problem is synthetic data generation. From a technical perspective, a good generative model for time-series data should preserve temporal dynamics, in the sense that new sequences respect the original relationships between high-dimensional variables across time. From the privacy perspective, the model should prevent patient re-identification by limiting vulnerability to membership inference attacks. The NeurIPS 2020 Hide-and-Seek Privacy Challenge is a novel two-tracked competition to simultaneously accelerate progress in tackling both problems. In our head-to-head format, participants in the synthetic data generation track (i.e. "hiders") and the patient re-identification track (i.e. "seekers") are directly pitted against each other by way of a new, high-quality intensive care time-series dataset: the AmsterdamUMCdb dataset. Ultimately, we seek to advance generative techniques for dense and high-dimensional temporal data streams that are (1) clinically meaningful in terms of fidelity and predictivity, as well as (2) capable of minimizing membership privacy risks in terms of the concrete notion of patient re-identification.
Labels: cs.LG, cs.CR, cs.CY | __index_level_0__: 188,726

2411.02854
SpiDR: A Reconfigurable Digital Compute-in-Memory Spiking Neural Network Accelerator for Event-based Perception
Spiking Neural Networks (SNNs), with their inherent recurrence, offer an efficient method for processing the asynchronous temporal data generated by Dynamic Vision Sensors (DVS), making them well-suited for event-based vision applications. However, existing SNN accelerators suffer from limitations in adaptability to diverse neuron models, bit precisions and network sizes, inefficient membrane potential (Vmem) handling, and limited sparse optimizations. In response to these challenges, we propose SpiDR, a scalable and reconfigurable digital compute-in-memory (CIM) SNN accelerator with a set of key features: 1) It uses in-memory computations and reconfigurable operating modes to minimize data movement associated with weight and Vmem data structures while efficiently adapting to different workloads. 2) It supports multiple weight/Vmem bit precision values, enabling a trade-off between accuracy and energy efficiency and enhancing adaptability to diverse application demands. 3) A zero-skipping mechanism for sparse inputs significantly reduces energy usage by leveraging the inherent sparsity of spikes without introducing high overheads for low sparsity. 4) Finally, the asynchronous handshaking mechanism maintains the computational efficiency of the pipeline for variable execution times of different computation units. We fabricated SpiDR in 65 nm Taiwan Semiconductor Manufacturing Company (TSMC) low-power (LP) technology. It demonstrates competitive performance (scaled to the same technology node) against other digital SNN accelerators proposed in the recent literature and supports advanced reconfigurability. It achieves up to 5 TOPS/W energy efficiency at 95% input sparsity with 4-bit weights and 7-bit Vmem precision.
Labels: cs.LG, cs.NE, Other | __index_level_0__: 505,694

2205.10893
Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers
In theorem proving, the task of selecting useful premises from a large library to unlock the proof of a given conjecture is crucially important. This presents a challenge for all theorem provers, especially the ones based on language models, due to their relative inability to reason over huge volumes of premises in text form. This paper introduces Thor, a framework integrating language models and automated theorem provers to overcome this difficulty. In Thor, a class of methods called hammers that leverage the power of automated theorem provers are used for premise selection, while all other tasks are designated to language models. Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems neither language models nor automated theorem provers are able to solve on their own. Furthermore, with a significantly smaller computational budget, Thor can achieve a success rate on the MiniF2F dataset that is on par with the best existing methods. Thor can be instantiated for the majority of popular interactive theorem provers via a straightforward protocol we provide.
Labels: cs.AI | __index_level_0__: 297,915

1802.04065
Bitcoin Volatility Forecasting with a Glimpse into Buy and Sell Orders
In this paper, we study the ability to make short-term predictions of the Bitcoin exchange price fluctuations against the United States dollar. We use realized volatility data collected from one of the largest Bitcoin digital trading offices in 2016 and 2017, as well as order information. Experiments are performed to evaluate a variety of statistical and machine learning approaches.
Labels: cs.LG | __index_level_0__: 90,135

1506.07191
Construction of power flow feasibility sets
We develop a new approach for the construction of convex, analytically simple regions where the AC power flow equations are guaranteed to have a feasible solution. Construction of these regions is based on efficient semidefinite programming techniques accelerated via sparsity-exploiting algorithms. The resulting regions have a simple geometric shape in the space of power injections (polytope or ellipsoid) and can be efficiently used for assessing system security in the presence of uncertainty. The efficiency and tightness of the approach are validated on a number of test networks.
Labels: cs.SY | __index_level_0__: 44,484

1711.08856
Critical Learning Periods in Deep Neural Networks
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
Labels: cs.LG | __index_level_0__: 85,283

2305.12099
Soft Actor-Critic Learning-Based Joint Computing, Pushing, and Caching Framework in MEC Networks
To support future 6G mobile applications, the mobile edge computing (MEC) network needs to be jointly optimized for computing, pushing, and caching to reduce transmission load and computation cost. To achieve this, we propose a framework based on deep reinforcement learning that enables the dynamic orchestration of these three activities for the MEC network. The framework can implicitly predict user future requests using deep networks and push or cache the appropriate content to enhance performance. To address the curse of dimensionality resulting from considering three activities collectively, we adopt the soft actor-critic reinforcement learning in continuous space and design the action quantization and correction specifically to fit the discrete optimization problem. We conduct simulations in a single-user single-server MEC network setting and demonstrate that the proposed framework effectively decreases both transmission load and computing cost under various configurations of cache size and tolerable service delay.
Labels: cs.IT | __index_level_0__: 365,837

cs/0307063
An Alternative to RDF-Based Languages for the Representation and Processing of Ontologies in the Semantic Web
This paper describes an approach to the representation and processing of ontologies in the Semantic Web, based on the ICMAUS theory of computation and AI. This approach has strengths that complement those of languages based on the Resource Description Framework (RDF) such as RDF Schema and DAML+OIL. The main benefits of the ICMAUS approach are simplicity and comprehensibility in the representation of ontologies, an ability to cope with errors and uncertainties in knowledge, and a versatile reasoning system with capabilities in the kinds of probabilistic reasoning that seem to be required in the Semantic Web.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
537,939
1711.09398
Novel Adaptive Genetic Algorithm Sample Consensus
Random sample consensus (RANSAC) is a successful algorithm in model fitting applications. It is vital to have a strong exploration phase when there is an enormous number of outliers within the dataset. Achieving a proper model is guaranteed by the pure exploration strategy of RANSAC. However, finding the optimum result requires exploitation. GASAC is an evolutionary paradigm that adds exploitation capability to the algorithm. Although GASAC improves on the results of RANSAC, it has a fixed strategy for balancing between exploration and exploitation. In this paper, a new paradigm is proposed based on a genetic algorithm with an adaptive strategy. We utilize an adaptive genetic operator to select high-fitness individuals as parents and mutate low-fitness ones. In the mutation phase, a training method is used to gradually learn which gene is the best replacement for the mutated gene. The proposed method adaptively balances exploration and exploitation by learning about genes. During the final iterations, the algorithm draws on this information to improve the final results. The proposed method is extensively evaluated on two sets of experiments. In all tests, our method outperformed the other methods in terms of both the number of inliers found and the speed of the algorithm.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
85,395
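A hedged sketch of the adaptive genetic operator described above: high-fitness individuals act as parents, low-fitness ones are mutated, and a score table gradually learns which genes (data indices) make good replacements. The population encoding, fitness handling, and score update are illustrative assumptions, not the paper's exact procedure:

```python
# Hedged sketch of one adaptive GA step for sample-consensus model fitting.
# pop: (P, G) array of data indices; fitness would be the inlier count of
# the model fit to each individual's sampled genes.
import numpy as np

rng = np.random.default_rng(0)

def adaptive_step(pop, fitness, gene_scores, n_data, mut_rate=0.5):
    order = np.argsort(fitness)                 # ascending fitness
    elite, weak = order[len(order)//2:], order[:len(order)//2]
    new_pop = pop.copy()
    for i in weak:
        # Crossover among high-fitness parents.
        p1, p2 = rng.choice(elite, 2, replace=False)
        cut = rng.integers(1, pop.shape[1])
        new_pop[i, :cut], new_pop[i, cut:] = pop[p1, :cut], pop[p2, cut:]
        # Mutate low-fitness individuals, biased toward genes that
        # historically appeared in high-fitness individuals.
        if rng.random() < mut_rate:
            j = rng.integers(pop.shape[1])
            probs = gene_scores / gene_scores.sum()
            new_pop[i, j] = rng.choice(n_data, p=probs)
    # Update the learned gene scores from the elite individuals.
    for i in elite:
        gene_scores[pop[i]] += fitness[i]
    return new_pop, gene_scores
```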
2107.07853
A Causal Perspective on Meaningful and Robust Algorithmic Recourse
Algorithmic recourse explanations inform stakeholders on how to act to revert unfavorable predictions. However, ML models generally do not predict well under interventional distributions. Thus, an action that changes the prediction in the desired way may not lead to an improvement in the underlying target. Such recourse is neither meaningful nor robust to model refits. Extending the work of Karimi et al. (2021), we propose meaningful algorithmic recourse (MAR) that only recommends actions that improve both prediction and target. We justify this selection constraint by highlighting the differences between model audit and meaningful, actionable recourse explanations. Additionally, we introduce a relaxation of MAR called effective algorithmic recourse (EAR), which, under certain assumptions, yields meaningful recourse by only allowing interventions on causes of the target.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
246,549
1805.01389
A stabilized mixed discontinuous Galerkin formulation for double porosity/permeability model
Modeling flow through porous media with multiple pore-networks has now become an active area of research due to recent technological endeavors like geological carbon sequestration and recovery of hydrocarbons from tight rock formations. Herein, we consider the double porosity/permeability (DPP) model, which describes the flow of a single-phase incompressible fluid through a porous medium exhibiting two dominant pore-networks with a possibility of mass transfer across them. We present a stable mixed discontinuous Galerkin (DG) formulation for the DPP model. The formulation enjoys several attractive features. These include: (i) Equal-order interpolation for all the field variables (which is computationally the most convenient) is stable under the proposed formulation. (ii) The stabilization terms are residual-based, and the stabilization parameters do not contain any mesh-dependent parameters. (iii) The formulation is theoretically shown to be consistent, stable, and hence convergent. (iv) The formulation supports non-conforming discretizations and distorted meshes. (v) The DG formulation has improved element-wise (local) mass balance compared to the corresponding continuous formulation. (vi) The proposed formulation can capture physical instabilities in coupled flow and transport problems under the DPP model.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
96,651
2309.09212
RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for Evaluating Robotics Computing System Performance
We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to evaluate robotics computing performance across a diverse range of hardware platforms using ROS 2 as its common baseline. The suite encompasses ROS 2 packages covering the full robotics pipeline and integrates two distinct benchmarking approaches: black-box testing, which measures performance by eliminating upper layers and replacing them with a test application, and grey-box testing, an application-specific measure that observes internal system states with minimal interference. Our benchmarking framework provides ready-to-use tools and is easily adaptable for the assessment of custom ROS 2 computational graphs. Drawing from the knowledge of leading robot architects and system architecture experts, RobotPerf establishes a standardized approach to robotics benchmarking. As an open-source initiative, RobotPerf remains committed to evolving with community input to advance the future of hardware-accelerated robotics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
392,515
2309.16599
Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation
Zero-shot translation (ZST), which is generally based on a multilingual neural machine translation model, aims to translate between unseen language pairs in training data. The common practice to guide the zero-shot language mapping during inference is to deliberately insert the source and target language IDs, e.g., <EN> for English and <DE> for German. Recent studies have shown that language IDs sometimes fail to navigate the ZST task, causing it to suffer from the off-target problem (non-target-language words appear in the generated translation) and, therefore, making it difficult to apply current multilingual translation models to a broad range of zero-shot language scenarios. To understand when and why the navigation capabilities of language IDs are weakened, we compare two extreme decoder input cases in the ZST directions: Off-Target (OFF) and On-Target (ON) cases. By contrastively visualizing the contextual word representations (CWRs) of these cases with teacher forcing, we show that 1) the CWRs of different languages are effectively distributed in separate regions when the sentence and ID are matched (ON setting), and 2) if the sentence and ID are unmatched (OFF setting), the CWRs of different languages are chaotically distributed. Our analyses suggest that although they work well in ideal ON settings, language IDs become fragile and lose their navigation ability when faced with off-target tokens, which commonly exist during inference but are rare in training scenarios. In response, we employ unlikelihood tuning on the negative (OFF) samples to minimize their probability such that the language IDs can discriminate between the on- and off-target tokens during training. Experiments spanning 40 ZST directions show that our method reduces the off-target ratio by -48.0% on average, leading to a +9.1 BLEU improvement with only an extra +0.3% tuning cost.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
395,412
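The unlikelihood tuning mentioned above penalizes the probability of off-target tokens. A minimal sketch of such a combined objective, assuming per-position logits and a boolean mask marking the negative (OFF) tokens; this is an illustration of the generic unlikelihood loss, not necessarily the paper's exact formulation:

```python
# Hedged sketch of unlikelihood tuning on negative (OFF) samples: the usual
# likelihood loss is applied to on-target tokens, while off-target tokens
# are pushed down via -log(1 - p(token)).
import torch

def unlikelihood_loss(logits, targets, is_negative):
    """logits: (T, V); targets: (T,); is_negative: (T,) bool mask."""
    log_probs = torch.log_softmax(logits, dim=-1)
    tok_logp = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Likelihood term for positive (on-target) tokens.
    pos = -tok_logp[~is_negative].sum()
    # Unlikelihood term for negative (off-target) tokens.
    p = tok_logp[is_negative].exp().clamp(max=1 - 1e-6)
    neg = -torch.log1p(-p).sum()
    return (pos + neg) / targets.numel()
```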
2203.09737
Semi-Supervised Learning with Mutual Distillation for Monocular Depth Estimation
We propose a semi-supervised learning framework for monocular depth estimation. Compared to existing semi-supervised learning methods, which inherit limitations of both sparse supervised and unsupervised loss functions, we achieve the complementary advantages of both loss functions by building two separate network branches for each loss and distilling each other through the mutual distillation loss function. We also propose applying different data augmentations to each branch, which improves robustness. We conduct experiments to demonstrate the effectiveness of our framework over the latest methods and provide extensive ablation studies.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
286,268
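A minimal sketch of the mutual distillation idea above, assuming two branch predictions of the same resolution and externally computed supervised and unsupervised task losses; the L1 distillation form and the stop-gradient placement are assumptions, not the paper's exact losses:

```python
# Hedged sketch of mutual distillation between two depth branches: one
# trained with a (sparse) supervised loss, one with an unsupervised loss,
# each additionally matched to the other's detached prediction.
import torch
import torch.nn.functional as F

def mutual_distillation_loss(depth_sup, depth_unsup, sup_loss, unsup_loss,
                             lam=1.0):
    """depth_*: (B, 1, H, W) predictions from the two branches."""
    distill_a = F.l1_loss(depth_sup, depth_unsup.detach())
    distill_b = F.l1_loss(depth_unsup, depth_sup.detach())
    return sup_loss + unsup_loss + lam * (distill_a + distill_b)
```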
2409.14820
Past Meets Present: Creating Historical Analogy with Large Language Models
Historical analogies, which compare known past events with contemporary but unfamiliar events, help people make decisions and understand the world. However, research in applied history suggests that people have difficulty finding appropriate analogies, and previous studies in the AI community have also overlooked historical analogies. To fill this gap, in this paper we focus on the historical analogy acquisition task, which aims to acquire analogous historical events for a given event. We explore retrieval and generation methods for acquiring historical analogies based on different large language models (LLMs). Furthermore, we propose a self-reflection method to mitigate hallucinations and stereotypes when LLMs generate historical analogies. Through human evaluations and our specially designed automatic multi-dimensional assessment, we find that LLMs generally have good potential for historical analogies, and that the performance of the models can be further improved by using our self-reflection method.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
490,643
2411.16794
Phase-Informed Tool Segmentation for Manual Small-Incision Cataract Surgery
Cataract surgery is the most common surgical procedure globally, with a disproportionately higher burden in developing countries. While automated surgical video analysis has been explored in general surgery, its application to ophthalmic procedures remains limited. Existing works primarily focus on Phaco cataract surgery, an expensive technique not accessible in regions where cataract treatment is most needed. In contrast, Manual Small-Incision Cataract Surgery (MSICS) is the preferred low-cost, faster alternative in high-volume settings and for challenging cases. However, no dataset exists for MSICS. To address this gap, we introduce Sankara-MSICS, the first comprehensive dataset containing 53 surgical videos annotated for 18 surgical phases and 3,527 frames with 13 surgical tools at the pixel level. We benchmark this dataset on state-of-the-art models and present ToolSeg, a novel framework that enhances tool segmentation by introducing a phase-conditional decoder and a simple yet effective semi-supervised setup leveraging pseudo-labels from foundation models. Our approach significantly improves segmentation performance, achieving a $23.77\%$ to $38.10\%$ increase in mean Dice scores, with a notable boost for tools that are less prevalent and small. Furthermore, we demonstrate that ToolSeg generalizes to other surgical settings, showcasing its effectiveness on the CaDIS dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
511,188
2003.05224
3-Survivor: A Rough Terrain Negotiable Teleoperated Mobile Rescue Robot with Passive Control Mechanism
This paper presents the design and integration of 3-Survivor, a rough-terrain-negotiable teleoperated mobile rescue and service robot. 3-Survivor is an improved version of two previously studied surveillance robots named Sigma-3 and Alpha-N. In 3-Survivor, a modified double-track caterpillar mechanism is incorporated in the body design. A passive adjustment in the body balance enables the front and rear body to operate in excellent synchronization. Instead of using an actuator, a reconfigurable dynamic method is constructed with a 6-DOF arm. This dynamic method is configured with the planar and spatial mechanisms, the rotation matrix, motion control of rotation using inverse kinematics, and control of the manipulator's power consumption using angular momentum. The robot is remotely controlled using a handheld radio-frequency (RF) transmitter. 3-Survivor is equipped with a Raspberry Pi 12 MP camera, which is used for live streaming of robot operations. Object detection algorithms are run on the live video stream. The object detection method is built using a Faster R-CNN with the VGGNet-16 CNN architecture. The entire operation of the robot is monitored through a web control window. The control portal provides a brief scenario of the environment to run, control and steer the robot for more precise operation. A very impressive 88.25 percent accuracy is acquired from this module in a rescue operation. Along with the object detection module (ODM), the sensor system of the robot provides information on hazardous terrain. The feasibility of 3-Survivor is tested and presented through different experiments throughout the paper.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
167,809
1408.2289
Physical Computing With No Clock to Implement the Gaussian Pyramid of SIFT Algorithm
Physical computing is a technology that exploits the nature of electronic devices and circuit topology to carry out computing tasks. In this paper, we propose an active circuit network to implement a multi-scale Gaussian filter, also called a Gaussian pyramid, in image preprocessing. In recent years, various methods have been tried to accelerate this key stage of image feature extraction algorithms. Compared with existing technologies such as GPU parallel computing and FPGA acceleration, physical computing has a great advantage in processing speed as well as power consumption. We have verified that the processing time to implement the Gaussian pyramid of the SIFT algorithm is on the nanosecond level using the physical computing technology, while other existing methods all need at least hundreds of milliseconds. With an estimate of the stray capacitance of the circuit, the power consumption is around 670pJ to filter a 256x256 image. To the best of our knowledge, this is the fastest processing technology for accelerating the SIFT algorithm, and it is also a rather energy-efficient method, thanks to the proposed physical computing technology.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
35,282
2409.13096
Fast decision tree learning solves hard coding-theoretic problems
We connect the problem of properly PAC learning decision trees to the parameterized Nearest Codeword Problem ($k$-NCP). Despite significant effort by the respective communities, algorithmic progress on both problems has been stuck: the fastest known algorithm for the former runs in quasipolynomial time (Ehrenfeucht and Haussler 1989) and the best known approximation ratio for the latter is $O(n/\log n)$ (Berman and Karpinski 2002; Alon, Panigrahy, and Yekhanin 2009). Research on both problems has thus far proceeded independently with no known connections. We show that $\textit{any}$ improvement of Ehrenfeucht and Haussler's algorithm will yield $O(\log n)$-approximation algorithms for $k$-NCP, an exponential improvement of the current state of the art. This can be interpreted either as a new avenue for designing algorithms for $k$-NCP, or as one for establishing the optimality of Ehrenfeucht and Haussler's algorithm. Furthermore, our reduction, along with existing inapproximability results for $k$-NCP, already rules out polynomial-time algorithms for properly learning decision trees. A notable aspect of our hardness results is that they hold even in the setting of $\textit{weak}$ learning, whereas prior ones were limited to the setting of strong learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
489,847
1707.09926
A Framework for Super-Resolution of Scalable Video via Sparse Reconstruction of Residual Frames
This paper introduces a framework for super-resolution of scalable video based on compressive sensing and sparse representation of residual frames in reconnaissance and surveillance applications. We exploit efficient compressive sampling and sparse reconstruction algorithms to super-resolve the video sequence with respect to different compression rates. We use the sparsity of residual information in residual frames as the key point in devising our framework. Moreover, a controlling factor, namely the compressibility threshold, is defined to control the complexity-performance trade-off. Numerical experiments confirm the efficiency of the proposed framework in terms of the compression rate as well as the quality of the reconstructed video sequence in terms of the PSNR measure. The framework leads to a more efficient compression rate and higher video quality compared to other state-of-the-art algorithms, considering performance-complexity trade-offs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
78,113
1909.11498
Non-imaging single-pixel sensing with optimized binary modulation
The conventional high-level sensing techniques require high-fidelity images as input to extract target features, which are produced by either complex imaging hardware or high-complexity reconstruction algorithms. In this letter, we propose single-pixel sensing (SPS) that performs high-level sensing directly from the coupled measurements of a single-pixel detector, without the conventional image acquisition and reconstruction process. The technique consists of three steps: binary light modulation that can be physically implemented at $\sim$22kHz, single-pixel coupled detection offering a wide working spectrum and a high signal-to-noise ratio, and end-to-end deep-learning-based sensing that reduces both hardware and software complexity. Besides, the binary modulation is trained and optimized together with the sensing network, which ensures the fewest required measurements and optimal sensing accuracy. The effectiveness of SPS is demonstrated on the classification task of the handwritten MNIST dataset, where 96.68% classification accuracy at $\sim$1kHz is achieved. The reported single-pixel sensing technique is a novel framework for highly efficient machine intelligence.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
146,826
1509.05172
Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis
We consider the off-policy evaluation problem in Markov decision processes with function approximation. We propose a generalization of the recently introduced \emph{emphatic temporal differences} (ETD) algorithm \citep{SuttonMW15}, which encompasses the original ETD($\lambda$), as well as several other off-policy evaluation algorithms as special cases. We call this framework \ETD, where our introduced parameter $\beta$ controls the decay rate of an importance-sampling term. We study conditions under which the projected fixed-point equation underlying \ETD\ involves a contraction operator, allowing us to present the first asymptotic error bounds (bias) for \ETD. Our results show that the original ETD algorithm always involves a contraction operator, and its bias is bounded. Moreover, by controlling $\beta$, our proposed generalization allows trading-off bias for variance reduction, thereby achieving a lower total error.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
47,013
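A hedged sketch of a single generalized ETD update with linear function approximation, following the abstract's description: the parameter beta replaces the usual discount in the follow-on (importance-sampling) trace, and beta = gamma recovers the original ETD(lambda). Interest is fixed to 1 and all quantities are illustrative:

```python
# Hedged sketch of one ETD(lambda, beta)-style off-policy evaluation update
# with linear function approximation. rho = pi(a|s) / mu(a|s) is the
# importance-sampling ratio; F is the follow-on trace whose decay rate is
# controlled by beta, as described in the abstract above.
import numpy as np

def etd_update(theta, e, F, phi, phi_next, r, rho, rho_prev,
               alpha=0.01, gamma=0.99, lam=0.9, beta=0.95):
    F = beta * rho_prev * F + 1.0                  # follow-on trace (interest = 1)
    M = lam * 1.0 + (1.0 - lam) * F                # emphasis
    e = rho * (gamma * lam * e + M * phi)          # eligibility trace
    delta = r + gamma * theta @ phi_next - theta @ phi
    theta = theta + alpha * delta * e
    return theta, e, F
```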
2411.15824
Variable-size Symmetry-based Graph Fourier Transforms for image compression
Modern compression systems use linear transformations in their encoding and decoding processes, with transforms providing compact signal representations. While multiple data-dependent transforms for image/video coding can adapt to diverse statistical characteristics, assembling large datasets to learn each transform is challenging. Also, the resulting transforms typically lack fast implementation, leading to significant computational costs. Thus, despite many papers proposing new transform families, the most recent compression standards predominantly use traditional separable sinusoidal transforms. This paper proposes integrating a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes into a coding framework, focusing on the extension from our previously introduced 8x8 SBGFTs to the general case of NxN grids. SBGFTs are non-separable transforms that achieve sparse signal representation while maintaining low computational complexity thanks to their symmetry properties. Their design is based on our proposed algorithm, which generates symmetric graphs on the grid by adding specific symmetrical connections between nodes and does not require any data-dependent adaptation. Furthermore, for video intra-frame coding, we exploit the correlations between optimal graphs and prediction modes to reduce the cardinality of the transform sets, thus proposing a low-complexity framework. Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection (MTS) used in the latest VVC intra-coding, providing a bit rate saving percentage of 6.23%, with only a marginal increase in average complexity. A MATLAB implementation of the proposed algorithm is available online at [1].
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
510,790
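For orientation, a minimal sketch of a generic graph Fourier transform on an N x N pixel grid: the transform basis is the eigenbasis of the graph Laplacian. The SBGFT family above differs by adding specific symmetrical connections to obtain sparse, low-complexity transforms; only the generic pipeline on a plain 4-connected grid is shown here:

```python
# Hedged sketch of a graph Fourier transform (GFT) for an n x n image block.
import numpy as np

def grid_laplacian(n):
    """Combinatorial Laplacian of an n x n 4-connected grid graph."""
    N = n * n
    A = np.zeros((N, N))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n: A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < n: A[i, i + n] = A[i + n, i] = 1
    return np.diag(A.sum(1)) - A

def gft(block):
    n = block.shape[0]
    _, U = np.linalg.eigh(grid_laplacian(n))   # GFT basis (orthonormal)
    return U.T @ block.reshape(-1)             # transform coefficients

coeffs = gft(np.random.rand(8, 8))             # 8x8 block -> 64 coefficients
```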
2408.09275
Design and Control of Modular Soft-Rigid Hybrid Manipulators with Self-Contact
Soft robotics focuses on designing robots with highly deformable materials, allowing them to adapt and operate safely and reliably in unstructured and variable environments. While soft robots offer increased compliance over rigid body robots, their payloads are limited, and they consume significant energy when operating against gravity in terrestrial environments. To address the carrying capacity limitation, we introduce a novel class of soft-rigid hybrid robot manipulators (SRH) that incorporates both soft continuum modules and rigid joints in a serial configuration. The SRH manipulators can seamlessly transition between being compliant and delicate to rigid and strong, achieving this through dynamic shape modulation and employing self-contact among rigid components to effectively form solid structures. We discuss the design and fabrication of SRH robots, and present a class of novel control algorithms for SRH systems. We propose a configuration space PD+ shape controller and a Cartesian impedance controller, both of which are provably stable, endowing the soft robot with the necessary low-level capabilities. We validate the controllers on SRH hardware and demonstrate the robot performing several tasks. Our results highlight the potential for the soft-rigid hybrid paradigm to produce robots that are both physically safe and effective at task performance.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
481,363
2408.08729
ConcateNet: Dialogue Separation Using Local And Global Feature Concatenation
Dialogue separation involves isolating a dialogue signal from a mixture, such as a movie or a TV program. This can be a necessary step to enable dialogue enhancement for broadcast-related applications. In this paper, ConcateNet for dialogue separation is proposed, which is based on a novel approach for processing local and global features aimed at better generalization for out-of-domain signals. ConcateNet is trained using a noise reduction-focused, publicly available dataset and evaluated using three datasets: two noise reduction-focused datasets (in-domain), which show competitive performance for ConcateNet, and a broadcast-focused dataset (out-of-domain), which verifies the better generalization performance for the proposed architecture compared to considered state-of-the-art noise-reduction methods.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
481,127
2406.17363
Leveraging Synthetic Audio Data for End-to-End Low-Resource Speech Translation
This paper describes our system submission to the International Conference on Spoken Language Translation (IWSLT 2024) for Irish-to-English speech translation. We built end-to-end systems based on Whisper, and employed a number of data augmentation techniques, such as speech back-translation and noise augmentation. We investigate the effect of using synthetic audio data and discuss several methods for enriching signal diversity.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
467,543
2103.08317
Boosted Genetic Algorithm using Machine Learning for traffic control optimization
Traffic control optimization is a challenging task for various traffic centers around the world, and the majority of existing approaches focus only on developing adaptive methods under normal (recurrent) traffic conditions. Optimizing the control plans when severe incidents occur still remains an open problem, especially when a high number of lanes or entire intersections are affected. This paper aims at tackling this problem and presents a novel methodology for optimizing the traffic signal timings in signalized urban intersections under non-recurrent traffic incidents. With the purpose of producing fast and reliable decisions, we combine fast-running Machine Learning (ML) algorithms and reliable Genetic Algorithms (GA) into a single optimization framework. As a benchmark, we first start with deploying a typical GA algorithm by considering the phase duration as the decision variable and the objective function to minimize the total travel time in the network. We fine-tune the GA for crossover, mutation, and fitness calculation, and obtain the optimal parameters. Secondly, we train various machine learning regression models to predict the total travel time of the studied traffic network, and select the best-performing regressor, which we further hyper-tune to find the optimal training parameters. Lastly, we propose a new algorithm, BGA-ML, which combines the GA and the extreme-gradient decision tree (the best-performing regressor) in a single optimization framework. Comparison and results show that the new BGA-ML is much faster than the original GA algorithm and can be successfully applied under non-recurrent incident conditions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
224,872
2112.08217
Probabilistic Forecasting with Generative Networks via Scoring Rule Minimization
Probabilistic forecasting relies on past observations to provide a probability distribution for a future outcome, which is often evaluated against the realization using a scoring rule. Here, we perform probabilistic forecasting with generative neural networks, which parametrize distributions on high-dimensional spaces by transforming draws from a latent variable. Generative networks are typically trained in an adversarial framework. In contrast, we propose to train generative networks to minimize a predictive-sequential (or prequential) scoring rule on a recorded temporal sequence of the phenomenon of interest, which is appealing as it corresponds to the way forecasting systems are routinely evaluated. Adversarial-free minimization is possible for some scoring rules; hence, our framework avoids the cumbersome hyperparameter tuning and uncertainty underestimation due to unstable adversarial training, thus unlocking reliable use of generative networks in probabilistic forecasting. Further, we prove consistency of the minimizer of our objective with dependent data, while adversarial training assumes independence. We perform simulation studies on two chaotic dynamical models and a benchmark data set of global weather observations; for this last example, we define scoring rules for spatial data by drawing from the relevant literature. Our method outperforms state-of-the-art adversarial approaches, especially in probabilistic calibration, while requiring less hyperparameter tuning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
271,731
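A minimal sketch of adversarial-free training with one proper scoring rule, the energy score; the prequential temporal setup and the paper's other rules are omitted, and the generator, latent dimension, and observation are placeholders:

```python
# Hedged sketch: training a generative network by minimizing the energy
# score, one proper scoring rule compatible with this adversarial-free
# framework; not the paper's full prequential objective.
import torch

def energy_score(samples, y, beta=1.0):
    """samples: (m, d) draws from the forecast; y: (d,) realization.
    Unbiased estimate of E||X - y||^beta - 0.5 * E||X - X'||^beta."""
    m = samples.shape[0]
    term1 = (samples - y).norm(dim=1).pow(beta).mean()
    pd = torch.cdist(samples, samples).pow(beta)
    term2 = pd.sum() / (m * (m - 1))             # off-diagonal mean
    return term1 - 0.5 * term2

# Training step (sketch): draw m latents, push them through the generator,
# and minimize the score against the observed outcome. generator,
# latent_dim, and y_obs are placeholders.
# z = torch.randn(m, latent_dim)
# loss = energy_score(generator(z), y_obs)
# loss.backward(); optimizer.step()
```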
1909.01602
SQuAP-Ont: an Ontology of Software Quality Relational Factors from Financial Systems
Quality, architecture, and process are considered the keystones of software engineering. ISO defines them in three separate standards. However, their interaction has scarcely been studied so far. The SQuAP model (Software Quality, Architecture, Process) describes twenty-eight main factors that impact on software quality in banking systems, and each factor is described as a relation among some characteristics from the three ISO standards. Hence, SQuAP makes such relations emerge rigorously, although informally. In this paper, we present SQuAP-Ont, an OWL ontology designed by following a well-established methodology based on the re-use of Ontology Design Patterns (i.e. ODPs). SQuAP-Ont formalises the relations emerging from SQuAP to represent and reason via Linked Data about software engineering in a three-dimensional model consisting of quality, architecture, and process ISO characteristics.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
143,953
1212.3139
Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles
Using a corpus of 17,000+ financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP and DOWN verbs used to describe movements of indices, stocks and shares. In Study 1 participants identified antonyms of these verbs in a free-response task and a matching task from which the most commonly identified antonyms were compiled. In Study 2, we determined whether the argument-distributions for the verbs in these antonym-pairs were sufficiently similar to predict the most frequently-identified antonym. Cosine similarity correlates moderately with the proportions of antonym-pairs identified by people (r = 0.31). More impressively, 87% of the time the most frequently-identified antonym is either the first- or second-most similar pair in the set of alternatives. The implications of these results for distributional approaches to determining metaphoric knowledge are discussed.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
20,369
1301.6190
Blahut-Arimoto Algorithm and Code Design for Action-Dependent Source Coding Problems
The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient algorithm for numerical computation of the rate-distortion-cost function for this problem is proposed, and a convergence proof is provided. Moreover, a two-stage code design based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the lower bound dictated by the rate-distortion-cost function.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
21,390
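For reference, a minimal sketch of the classical Blahut-Arimoto iteration for a rate-distortion function, which the algorithm above extends with action-dependent cost terms (omitted here):

```python
# Hedged sketch of classical Blahut-Arimoto for rate-distortion; the
# action-dependent extension of the paper above is not included.
import numpy as np

def blahut_arimoto(p_x, d, s, n_iter=200):
    """p_x: (nx,) source distribution; d: (nx, nxh) distortion matrix;
    s > 0: Lagrange multiplier trading rate against distortion."""
    nxh = d.shape[1]
    q = np.full(nxh, 1.0 / nxh)                     # output marginal
    for _ in range(n_iter):
        cond = q * np.exp(-s * d)                   # unnormalized q(xh | x)
        cond /= cond.sum(axis=1, keepdims=True)
        q = p_x @ cond                              # updated marginal
    rate = np.sum(p_x[:, None] * cond * np.log2(cond / q))
    dist = np.sum(p_x[:, None] * cond * d)
    return rate, dist

# Example: binary uniform source with Hamming distortion.
R, D = blahut_arimoto(np.array([0.5, 0.5]),
                      np.array([[0., 1.], [1., 0.]]), s=2.0)
```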
1908.07599
Learning document embeddings along with their uncertainties
The majority of text modelling techniques yield only point estimates of document embeddings and fail to capture the uncertainty of these estimates. These uncertainties give a notion of how well the embeddings represent a document. We present Bayesian subspace multinomial model (Bayesian SMM), a generative log-linear model that learns to represent documents in the form of Gaussian distributions, thereby encoding the uncertainty in its co-variance. Additionally, in the proposed Bayesian SMM, we address a commonly encountered problem of intractability that appears during variational inference in mixed-logit models. We also present a generative Gaussian linear classifier for topic identification that exploits the uncertainty in document embeddings. Our intrinsic evaluation using the perplexity measure shows that the proposed Bayesian SMM fits the data better as compared to the state-of-the-art neural variational document model on Fisher speech and 20Newsgroups text corpora. Our topic identification experiments show that the proposed systems are robust to over-fitting on unseen test data. The topic ID results show that the proposed model outperforms state-of-the-art unsupervised topic models and achieves comparable results to the state-of-the-art fully supervised discriminative models.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
142,321
2310.17119
FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge
Detecting factual errors in textual information, whether generated by large language models (LLM) or curated by humans, is crucial for making informed decisions. LLMs' inability to attribute their claims to external knowledge and their tendency to hallucinate makes it difficult to rely on their responses. Humans, too, are prone to factual errors in their writing. Since manual detection and correction of factual errors is labor-intensive, developing an automatic approach can greatly reduce human effort. We present FLEEK, a prototype tool that automatically extracts factual claims from text, gathers evidence from external knowledge sources, evaluates the factuality of each claim, and suggests revisions for identified errors using the collected evidence. Initial empirical evaluation on fact error detection (77-85\% F1) shows the potential of FLEEK. A video demo of FLEEK can be found at https://youtu.be/NapJFUlkPdQ.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
402,993
2403.13653
Learning User Embeddings from Human Gaze for Personalised Saliency Prediction
Reusable embeddings of user behaviour have shown significant performance improvements for the personalised saliency prediction task. However, prior works require explicit user characteristics and preferences as input, which are often difficult to obtain. We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps generated from a small amount of user-specific eye tracking data. At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users. Evaluations on two public saliency datasets show that the generated embeddings have high discriminative power, are effective at refining universal saliency maps to the individual users, and generalise well across users and images. Finally, based on our model's ability to encode individual user characteristics, our work points towards other applications that can benefit from reusable embeddings of gaze behaviour.
true
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
439,717
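A minimal sketch of a Siamese contrastive objective of the kind described above, assuming precomputed embeddings of two (image, saliency map) pairs and a binary same-user indicator; the margin-based form is an assumption, not necessarily the paper's exact loss:

```python
# Hedged sketch of a contrastive loss for learning user embeddings with a
# Siamese encoder: embeddings from the same user are pulled together,
# embeddings from different users pushed apart. The encoder is a placeholder.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_user, margin=1.0):
    """z1, z2: (B, D) embeddings of two pairs; same_user: (B,) in {0, 1}."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_user * dist.pow(2)
    neg = (1 - same_user) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()
```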
1703.09772
Particle Filtering for PLCA model with Application to Music Transcription
Automatic Music Transcription (AMT) consists in automatically estimating the notes in an audio recording, through three attributes: onset time, duration and pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular for this task. PLCA is a spectrogram factorization method, able to model a magnitude spectrogram as a linear combination of spectral vectors from a dictionary. Such methods use the Expectation-Maximization (EM) algorithm to estimate the parameters of the acoustic model. This algorithm presents well-known inherent drawbacks (local convergence, initialization dependency), making EM-based systems limited in their applications to AMT, particularly with regard to the mathematical form and number of priors. To overcome such limits, we propose in this paper to employ a different estimation framework based on Particle Filtering (PF), which consists in sampling the posterior distribution over larger parameter ranges. This framework proves to be more robust in parameter estimation, and more flexible and unifying in the integration of prior knowledge into the system. Note-level transcription accuracies of 61.8 $\%$ and 59.5 $\%$ were achieved on evaluation sound datasets of two different instrument repertoires, including the classical piano (from the MAPS dataset) and the marovany zither, and direct comparisons to previous PLCA-based approaches are provided. Steps for further development are also outlined.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
70,793
1802.06613
Before Name-calling: Dynamics and Triggers of Ad Hominem Fallacies in Web Argumentation
Arguing without committing a fallacy is one of the main requirements of an ideal debate. But even when debating rules are strictly enforced and fallacious arguments punished, arguers often lapse into attacking the opponent by an ad hominem argument. As existing research lacks solid empirical investigation of the typology of ad hominem arguments as well as their potential causes, this paper fills this gap by (1) performing several large-scale annotation studies, (2) experimenting with various neural architectures and validating our working hypotheses, such as controversy or reasonableness, and (3) providing linguistic insights into triggers of ad hominem using explainable neural network architectures.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
90,715
2308.13395
A Gr\"obner Approach to Dual-Containing Cyclic Left Module $(\theta,\delta)$-Codes over Finite Commutative Frobenius Rings
For a skew polynomial ring $R=A[X;\theta,\delta]$ where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$ and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$ where $g$ is a left and right divisor of $f$ in $R$. In this paper, we derive a parity check matrix when $A$ is a finite commutative Frobenius ring using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ which are free $B$-modules where the restriction of $\delta$ and $\theta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test if the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with non-trivial endomorphism and the Galois ring of characteristic $4$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
387,902
1409.4391
Direct Sum Theorem for Bounded Round Quantum Communication Complexity
We prove a direct sum theorem for bounded round entanglement-assisted quantum communication complexity. To do so, we use the fully quantum definition for information cost and complexity that we recently introduced, and use both the fact that information is a lower bound on the communication, and the fact that a direct sum property holds for quantum information complexity. We then give a protocol for compressing a single copy of a protocol down to its quantum information cost, up to terms depending on the number of rounds and the allowed increase in error. Two important tools to derive this protocol are a smooth conditional min-entropy bound for a one-shot quantum state redistribution protocol, and the quantum substate theorem of Jain, Radhakrishnan and Sen (FOCS'02) to transform this bound into a von Neumann conditional entropy bound. This result further establishes the newly introduced notions of quantum information cost and complexity as the correct quantum generalisations of the classical ones in the standard communication complexity setting. Finding such a quantum generalization of information complexity was one of the open problems recently raised by Braverman (STOC'12).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
36,069
2312.11097
Change points detection in crime-related time series: an on-line fuzzy approach based on a shape space representation
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change points detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series with the use of a meaningful representation and a fuzzy inference system. Change points detection is based on a shape space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method and then on a financial data set to test its general applicability. A comparison to a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational costs. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not related to data mining.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
416,443
2311.01995
From Discrete to Continuous Binary Best-Response Dynamics: Discrete Fluctuations Almost Surely Vanish with Population Size
In binary decision-making, individuals often opt for either a common or a rare action. In the framework of evolutionary game theory, the best-response update rule can be used to model this dichotomy. Those who prefer a common action are called \emph{coordinators}, and those who prefer a rare one are called \emph{anticoordinators}. A finite mixed population of the two types may undergo perpetual fluctuations, the characterization of which appears to be challenging. It is particularly unknown whether the fluctuations persist as the population size grows. To fill this gap, we approximate the discrete finite-population dynamics of coordinators and anticoordinators with the associated mean dynamics in the form of semicontinuous differential inclusions. We show that the family of the state sequences of the discrete dynamics for increasing population sizes forms a generalized stochastic approximation process for the differential inclusion. On the other hand, we show that the differential inclusions always converge to an equilibrium. This implies that the reported perpetual fluctuations in the finite discrete dynamics of coordinators and anticoordinators almost surely vanish with population size. The results encourage first analyzing the often simpler semicontinuous mean dynamics of the discrete population dynamics, as the semicontinuous dynamics partly reveal the asymptotic behaviour of the discrete dynamics.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
405,257
1605.09497
Interdependent Scheduling Games
We propose a model of interdependent scheduling games in which each player controls a set of services that they schedule independently. A player is free to schedule his own services at any time; however, each of these services only begins to accrue reward for the player when all predecessor services, which may or may not be controlled by the same player, have been activated. This model, where players have interdependent services, is motivated by the problems faced in planning and coordinating large-scale infrastructures, e.g., restoring electricity and gas to residents after a natural disaster or providing medical care in a crisis when different agencies are responsible for the delivery of staff, equipment, and medicine. We undertake a game-theoretic analysis of this setting and in particular consider the issues of welfare maximization, computing best responses, Nash dynamics, and existence and computation of Nash equilibria.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
false
true
56,572
2206.00979
Multi-scale Wasserstein Shortest-path Graph Kernels for Graph Classification
Graph kernels are conventional methods for computing graph similarities. However, the existing R-convolution graph kernels cannot resolve two challenges: 1) comparing graphs at multiple different scales, and 2) considering the distributions of substructures when computing the kernel matrix. These two challenges limit their performance. To mitigate both challenges, we propose a novel graph kernel called the Multi-scale Wasserstein Shortest-Path graph kernel (MWSP), at the heart of which is the multi-scale shortest-path node feature map, each element of which denotes the number of occurrences of the shortest path around a node. The shortest path is represented by the concatenation of all the labels of nodes in it. Since the shortest-path node feature map can only compare graphs at local scales, we incorporate into it the multiple different scales of the graph structure, which are captured by the truncated BFS trees of different depths rooted at each node in a graph. We use the Wasserstein distance to compute the similarity between the multi-scale shortest-path node feature maps of two graphs, considering the distributions of shortest paths. We empirically validate MWSP on various benchmark graph datasets and demonstrate that it achieves state-of-the-art performance on most datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
300,318
2409.18394
An Augmented Reality Interface for Teleoperating Robot Manipulators: Reducing Demonstrator Task Load through Digital Twin Control
Acquiring high-quality demonstration data is essential for the success of data-driven methods, such as imitation learning. Existing platforms for providing demonstrations for manipulation tasks often impose significant physical and mental demands on the demonstrator, require additional hardware systems, or necessitate specialized domain knowledge. In this work, we present a novel augmented reality (AR) interface for teleoperating robotic manipulators, emphasizing the demonstrator's experience, particularly in the context of performing complex tasks that require precision and accuracy. This interface, designed for the Microsoft HoloLens 2, leverages the adaptable nature of mixed reality (MR), enabling users to control a physical robot through digital twin surrogates. We assess the effectiveness of our approach across three complex manipulation tasks and compare its performance against OPEN TEACH, a recent virtual reality (VR) teleoperation system, as well as two traditional control methods: kinesthetic teaching and a 3D SpaceMouse for end-effector control. Our findings show that our method performs comparably to the VR approach and demonstrates the potential for AR in data collection. Additionally, we conduct a pilot study to evaluate the usability and task load associated with each method. Results indicate that our AR-based system achieves higher usability scores than the VR benchmark and significantly reduces mental demand, physical effort, and frustration experienced by users. An accompanying video can be found at https://youtu.be/w-M58ohPgrA.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
492,232
2406.10741
Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare
The process of identifying human emotion and affective states from speech is known as speech emotion recognition (SER). This is based on the observation that tone and pitch in the voice frequently convey underlying emotion. Speech recognition includes the ability to recognize emotions, which is becoming increasingly popular and in high demand. With the help of appropriate factors (such as modalities, emotions, intensities, repetitions, etc.) found in the data, my research seeks to use the Convolutional Neural Network (CNN) to distinguish emotions from audio recordings and label them in accordance with the range of different emotions. I have developed a machine learning model to identify emotions from supplied audio files with the aid of machine learning methods. The evaluation is mostly focused on precision, recall, and F1 score, which are common machine learning metrics. To properly set up and train the machine learning framework, the main objective is to investigate the influence and cross-relation of all input and output parameters. To improve the ability to recognize intentions, a key condition for communication, I have evaluated emotions from voice using my specialized machine learning algorithm, which could address a speaker's emotional state in digital healthcare applications, bridging the gap between human and artificial intelligence (AI).
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
464,535
2407.04130
Towards Automating Text Annotation: A Case Study on Semantic Proximity Annotation using GPT-4
This paper explores using GPT-3.5 and GPT-4 to automate the data annotation process with automatic prompting techniques. The main aim of this paper is to reuse human annotation guidelines along with some annotated data to design automatic prompts for LLMs, focusing on the semantic proximity annotation task. Automatic prompts are compared to customized prompts. We further implement the prompting strategies into an open-source text annotation tool, enabling easy online use via the OpenAI API. Our study reveals the crucial role of accurate prompt design and suggests that prompting GPT-4 with human-like instructions is not straightforwardly possible for the semantic proximity task. We show that small modifications to the human guidelines already improve the performance, suggesting possible ways for future research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
470,440
1609.01152
Well-Posedness and Output Regulation for Implicit Time-Varying Evolution Variational Inequalities
A class of evolution variational inequalities (EVIs), which comprises ordinary differential equations (ODEs) coupled with variational inequalities (VIs) associated with time-varying set-valued mappings, is proposed in this paper. We first study the conditions for existence and uniqueness of solutions. The central idea behind the proof is to rewrite the system dynamics as a differential inclusion which can be decomposed into a single-valued Lipschitz map and a time-dependent maximal monotone operator. Regularity assumptions on the set-valued mapping determine the regularity of the resulting solutions. Complementarity systems with time-dependence are studied as a particular case. We then use this result to study the problem of designing state feedback control laws for output regulation in systems described by EVIs. The derivation of control laws for output regulation is based on the use of the internal model principle, and two cases are treated: first, a static feedback control law is derived when full state feedback is available; in the second case, only the error to be regulated is assumed to be available for measurement and a dynamic compensator is designed. As applications, we demonstrate how a control input resulting from the solution of a variational inequality regulates the output of the system while maintaining polyhedral state constraints. Another application is seen in designing control inputs for regulation in power converters.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
60,561
2404.11826
AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence
As the integration of large language models into daily life is on the rise, there is a clear gap in benchmarks for advising on subjective and personal dilemmas. To address this, we introduce AdvisorQA, the first benchmark developed to assess LLMs' capability in offering advice for deeply personalized concerns, utilizing the LifeProTips subreddit forum. This forum features a dynamic interaction where users post advice-seeking questions, receiving an average of 8.9 pieces of advice per query, with 164.2 upvotes from hundreds of users, embodying a collective intelligence framework. We have therefore compiled a benchmark encompassing daily-life questions, diverse corresponding responses, and majority-vote ranking to train our helpfulness metric. Baseline experiments validate the efficacy of AdvisorQA through our helpfulness metric, GPT-4, and human evaluation, analyzing phenomena beyond the trade-off between helpfulness and harmlessness. AdvisorQA marks a significant leap in enhancing QA systems for providing personalized, empathetic advice, showcasing LLMs' improved understanding of human subjectivity.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
447,628
1505.00870
An $O(n\log(n))$ Algorithm for Projecting Onto the Ordered Weighted $\ell_1$ Norm Ball
The ordered weighted $\ell_1$ (OWL) norm is a newly developed generalization of the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) norm. This norm has desirable statistical properties and can be used to perform simultaneous clustering and regression. In this paper, we show how to compute the projection of an $n$-dimensional vector onto the OWL norm ball in $O(n\log(n))$ operations. In addition, we illustrate the performance of our algorithm on a synthetic regression test.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
42,782
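For intuition, a hedged sketch of the closely related proximal operator of the OWL norm, which relies on the same $O(n\log(n))$ sort-plus-isotonic-regression (pool-adjacent-violators) machinery as the ball projection studied above; this is not the paper's projection routine itself:

```python
# Hedged sketch: proximal operator of the OWL norm via sorting + PAV.
import numpy as np

def pav_decreasing(y):
    """Pool-adjacent-violators for a non-increasing fit (linear time)."""
    vals, wts = [], []
    for v in y:
        vals.append(v); wts.append(1.0)
        # Merge blocks while the sequence violates non-increasing order.
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:] = [m]; wts[-2:] = [w]
    return np.repeat(vals, np.array(wts, dtype=int))

def prox_owl(v, w):
    """w: non-increasing, nonnegative weights with len(w) == len(v)."""
    order = np.argsort(-np.abs(v))
    z = np.abs(v)[order] - w                     # shrink sorted magnitudes
    z = np.maximum(pav_decreasing(z), 0.0)       # restore monotonicity, clip
    out = np.zeros_like(v)
    out[order] = z                               # undo the sort
    return np.sign(v) * out
```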
2502.01456
Process Reinforcement through Implicit Rewards
Dense process rewards have proven a more effective alternative to the sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs since their fine-grained rewards have the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment, this potential remains largely unrealized. This can be primarily attributed to the challenges of training process reward models (PRMs) online, where collecting high-quality process labels is prohibitively expensive, making them particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward model training phase that existing approaches require, substantially reducing the development overhead. We demonstrate PRIME's effectiveness on competition-level math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, our resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
529,862
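A hedged sketch of the implicit process reward idea: under an implicit-PRM parameterization, per-token process rewards can be read off as scaled log-likelihood ratios between the learned model and a reference model. The tensors and the value of beta are placeholders, and this omits PRIME's online PRM update and advantage estimation:

```python
# Hedged sketch of implicit process rewards: r_t is proportional to the
# log-likelihood ratio between the implicit PRM and a reference model at
# each token of a rollout. Inputs are per-token log-probs of the same
# sequence under both models (placeholders, not PRIME's actual pipeline).
import torch

def implicit_process_rewards(logp_theta, logp_ref, beta=0.05):
    """logp_theta, logp_ref: (T,) per-token log-probs; returns (T,) dense
    process rewards usable for credit assignment in RL."""
    return beta * (logp_theta - logp_ref)
```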
2202.03103
Combining Deep Learning and Reasoning for Address Detection in Unstructured Text Documents
Extracting information from unstructured text documents is a demanding task, since these documents can have a broad variety of different layouts and a non-trivial reading order, as is the case for multi-column documents or nested tables. Additionally, many business documents are received in paper form, meaning that the textual contents need to be digitized before further analysis. Nonetheless, automatic detection and capturing of crucial document information like the sender address would boost many companies' processing efficiency. In this work we propose a hybrid approach that combines deep learning with reasoning for finding and extracting addresses from unstructured text documents. We use a visual deep learning model to detect the boundaries of possible address regions on the scanned document images and validate these results by analyzing the contained text using domain knowledge represented as a rule-based system.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
279,078
1403.2654
Flying Insect Classification with Inexpensive Sensors
The ability to use inexpensive, noninvasive sensors to accurately classify flying insects would have significant implications for entomological research, and allow for the development of many useful applications in vector control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts on this task. To date, however, none of this research has had a lasting impact. In this work, we explain this lack of progress, attributing the stagnation to several factors, including the use of acoustic sensing devices, the over-reliance on the single feature of wingbeat frequency, and the attempts to learn complex models with relatively little data. In contrast, we show that pseudo-acoustic optical sensors can produce vastly superior data, that we can exploit additional features, both intrinsic and extrinsic to the insect's flight behavior, and that a Bayesian classification approach allows us to efficiently learn classification models that are very robust to over-fitting. We demonstrate our findings with large scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
31,498
2202.13234
Safe Exploration for Efficient Policy Evaluation and Comparison
High-quality data plays a central role in ensuring the accuracy of policy evaluation. This paper initiates the study of efficient and safe data collection for bandit policy evaluation. We formulate the problem and investigate several representative variants of it. For each variant, we analyze its statistical properties, derive the corresponding exploration policy, and design an efficient algorithm for computing it. Both theoretical analysis and experiments support the usefulness of the proposed methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
282,533
2010.13962
Task-Aware Neural Architecture Search
The design of handcrafted neural networks requires a lot of time and resources. Recent techniques in Neural Architecture Search (NAS) have proven to be competitive or better than traditional handcrafted design, although they require domain knowledge and have generally used limited search spaces. In this paper, we propose a novel framework for neural architecture search, utilizing a dictionary of models of base tasks and the similarity between the target task and the atoms of the dictionary; hence, generating an adaptive search space based on the base models of the dictionary. By introducing a gradient-based search algorithm, we can evaluate and discover the best architecture in the search space without fully training the networks. The experimental results show the efficacy of our proposed task-aware approach.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
203,299
1907.10226
Movement science needs different pose tracking algorithms
Over the last decade, computer science has made progress towards extracting body pose from single camera photographs or videos. This promises to enable movement science to detect disease, quantify movement performance, and take the science out of the lab into the real world. However, current pose tracking algorithms fall short of the needs of movement science; the types of movement data that matter are poorly estimated. For instance, the metrics currently used for evaluating pose tracking algorithms use noisy hand-labeled ground truth data and do not prioritize precision of relevant variables like three-dimensional position, velocity, acceleration, and forces which are crucial for movement science. Here, we introduce the scientific disciplines that use movement data, the types of data they need, and discuss the changes needed to make pose tracking truly transformative for movement science.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
139,565
1001.3122
Erasure entropies and Gibbs measures
Recently Verdu and Weissman introduced erasure entropies, which are meant to measure the information carried by one or more symbols given all of the remaining symbols in the realization of the random process or field. A natural relation to Gibbs measures has also been observed. In this short note we study this relation further, review a few earlier contributions from statistical mechanics, and provide the formula for the erasure entropy of a Gibbs measure in terms of the corresponding potential. For some 2-dimensional Ising models, for which Verdu and Weissman suggested a numerical procedure, we show how to obtain an exact formula for the erasure entropy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,435
2302.11824
MossFormer: Pushing the Performance Limit of Monaural Speech Separation using Gated Single-Head Transformer with Convolution-Augmented Joint Self-Attentions
Transformer based models have provided significant performance improvements in monaural speech separation. However, there is still a performance gap compared to a recently proposed upper bound. The major limitation of the current dual-path Transformer models is the inefficient modelling of long-range elemental interactions and local feature patterns. In this work, we achieve the upper bound by proposing a gated single-head transformer architecture with convolution-augmented joint self-attentions, named \textit{MossFormer} (\textit{Mo}naural \textit{s}peech \textit{s}eparation Trans\textit{Former}). To effectively solve the indirect elemental interactions across chunks in the dual-path architecture, MossFormer employs a joint local and global self-attention architecture that simultaneously performs a full-computation self-attention on local chunks and a linearised low-cost self-attention over the full sequence. The joint attention enables MossFormer to model full-sequence elemental interactions directly. In addition, we employ a powerful attentive gating mechanism with simplified single-head self-attentions. Besides the attentive long-range modelling, we also augment MossFormer with convolutions for the position-wise local pattern modelling. As a consequence, MossFormer significantly outperforms the previous models and achieves the state-of-the-art results on WSJ0-2/3mix and WHAM!/WHAMR! benchmarks. Our model achieves the SI-SDRi upper bound of 21.2 dB on WSJ0-3mix and only 0.3 dB below the upper bound of 23.1 dB on WSJ0-2mix.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
347,328
2106.02016
Semantic-WER: A Unified Metric for the Evaluation of ASR Transcript for End Usability
Recent advances in supervised, semi-supervised and self-supervised deep learning algorithms have shown significant improvement in the performance of automatic speech recognition (ASR) systems. The state-of-the-art systems have achieved a word error rate (WER) less than 5%. However, in the past, researchers have argued the non-suitability of the WER metric for the evaluation of ASR systems for downstream tasks such as spoken language understanding (SLU) and information retrieval. The reason is that the WER works at the surface level and does not include any syntactic and semantic knowledge. The current work proposes Semantic-WER (SWER), a metric to evaluate ASR transcripts for downstream applications in general. The SWER can be easily customized for any downstream task.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
238,697
1604.01566
Achievable Rates for Gaussian Degraded Relay Channels with Non-Vanishing Error Probabilities
This paper revisits the Gaussian degraded relay channel, where the link that carries information from the source to the destination is a physically degraded version of the link that carries information from the source to the relay. The source and the relay are subject to expected power constraints. The $\varepsilon$-capacity of the channel is characterized and it is strictly larger than the capacity for $\varepsilon>0$, which implies that the channel does not possess the strong converse property. The proof of the achievability part is based on several key ideas: block Markov coding which is used in the classical decode-forward strategy, power control for Gaussian channels under expected power constraints, and a careful scaling between the block size and the total number of block uses. The converse part is proved by first establishing two non-asymptotic lower bounds on the error probability, which are derived from the type-II errors of some binary hypothesis tests. Subsequently, each lower bound is simplified by conditioning on an event related to the power of some linear combination of the codewords transmitted by the source and the relay. Lower and upper bounds on the second-order term of the optimal coding rate in terms of blocklength and error probability are also obtained.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
54,213
2410.14268
MoDification: Mixture of Depths Made Easy
Long-context efficiency has recently become a trending topic in serving large language models (LLMs), and mixture of depths (MoD) has been proposed as a perfect fit to bring down both latency and memory. In this paper, however, we discover that MoD can barely transform existing LLMs without costly training over an extensive number of tokens. To enable the transformation of any LLM into a MoD one, we show that the top-k operator in MoD should be promoted to a threshold-p operator, and that refinements to the architecture and data should also be crafted accordingly. All these designs form our method, termed MoDification. Through a comprehensive set of experiments covering model scales from 3B to 70B, we show that MoDification strikes an excellent balance between efficiency and effectiveness. MoDification can achieve up to ~1.2x speedup in latency and ~1.8x reduction in memory compared to the original LLMs, especially in long-context applications.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
499,964
2306.03988
Learn the Force We Can: Enabling Sparse Motion Control in Multi-Object Video Generation
We propose a novel unsupervised method to autoregressively generate videos from a single frame and a sparse motion input. Our trained model can generate unseen realistic object-to-object interactions. Although our model has never been given the explicit segmentation and motion of each object in the scene during training, it is able to implicitly separate their dynamics and extents. Key components in our method are the randomized conditioning scheme, the encoding of the input motion control, and the randomized and sparse sampling to enable generalization to out-of-distribution but realistic correlations. Our model, which we call YODA, therefore has the ability to move objects without physically touching them. Through extensive qualitative and quantitative evaluations on several datasets, we show that YODA is on par with or better than state-of-the-art video generation prior work in terms of both controllability and video quality.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
371,557
2002.04205
Fine-grained Uncertainty Modeling in Neural Networks
Existing uncertainty modeling approaches try to detect an out-of-distribution point from the in-distribution dataset. We extend this argument to detect finer-grained uncertainty that distinguishes between (a) certain points, (b) uncertain points that lie within the data distribution, and (c) out-of-distribution points. Our method corrects overconfident NN decisions, detects outlier points and learns to say ``I don't know'' when uncertain about a critical point between the top two predictions. In addition, we provide a mechanism to quantify class distributions overlap in the decision manifold and investigate its implications in model interpretability. Our method is two-step: in the first step, the proposed method builds a class distribution using Kernel Activation Vectors (kav) extracted from the Network. In the second step, the algorithm determines the confidence of a test point by a hierarchical decision rule based on the chi-squared distribution of squared Mahalanobis distances. Our method sits on top of a given Neural Network, requires a single scan of training data to estimate class distribution statistics, and is highly scalable to deep networks and wide pre-softmax layers. As a positive side effect, our method helps to prevent adversarial attacks without requiring any additional training. It is directly achieved when the Softmax layer is substituted by our robust uncertainty layer at the evaluation phase.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
163,543
2311.10782
A BERT based Ensemble Approach for Sentiment Classification of Customer Reviews and its Application to Nudge Marketing in e-Commerce
According to the literature, product reviews are an important source of information for customers to support their buying decisions. Product reviews improve customer trust and loyalty. Reviews help customers understand what other customers think about a particular product and help drive purchase decisions. Therefore, for an e-commerce platform it is important to understand the sentiments in customer reviews to understand its products and services, and doing so also allows it to potentially create positive consumer interactions as well as long-lasting relationships. Reviews also provide innovative ways to market the products for an e-commerce company. One such approach is nudge marketing. Nudge marketing is a subtle way for an e-commerce company to help its customers make better decisions without hesitation.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
408,654
2304.01235
How Graph Structure and Label Dependencies Contribute to Node Classification in a Large Network of Documents
We introduce a new dataset named WikiVitals which contains a large graph of 48k mutually referring Wikipedia articles classified into 32 categories and connected by 2.3M edges. Our aim is to rigorously evaluate the contributions of three distinct sources of information to the label prediction in a semi-supervised node classification setting, namely the content of the articles, their connections with each other and the correlations among their labels. We perform this evaluation using a Graph Markov Neural Network, which provides a theoretically principled model for this task, and we conduct a detailed evaluation of the contributions of each source of information using a clear separation of model selection and model assessment. One interesting observation is that including the effect of label dependencies is more relevant for sparse train sets than it is for dense train sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
355,992
2004.13177
PowerModelsRestoration.jl: An Open-Source Framework for Exploring Power Network Restoration Algorithms
With the escalating frequency of extreme grid disturbances, such as natural disasters, comes an increasing need for efficient recovery plans. Algorithms for optimal power restoration play an important role in developing such plans, but also give rise to challenging mixed-integer nonlinear optimization problems, where tractable solution methods are not yet available. To assist in research on such solution methods, this work proposes PowerModelsRestoration, a flexible, open-source software framework for rapidly designing and testing power restoration algorithms. PowerModelsRestoration constructs a mathematical modeling layer for formalizing core restoration tasks that can be combined to develop complex workflows and high performance heuristics. The efficacy of the proposed framework is demonstrated by proof-of-concept studies on three established cases from the literature, focusing on single-phase positive sequence network models. The results demonstrate that PowerModelsRestoration reproduces the established literature, and for the first time provide an analysis of restoration with nonlinear power flow models, which have not been previously considered.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
174,469
1504.00481
Data Dissemination Problem in Wireless Networks
In this work, we formulate and study a data dissemination problem, which can be viewed as a generalization of the index coding problem and of the data exchange problem to networks with an arbitrary topology. We define $r$-solvable networks, in which data dissemination can be achieved in $r > 0$ communication rounds. We show that the optimum number of transmissions for any one-round communication scheme is given by the minimum rank of a certain constrained family of matrices. For a special case of this problem, called the bipartite data dissemination problem, we present lower and upper graph-theoretic bounds on the optimum number of transmissions. For general $r$-solvable networks, we derive an upper bound on the minimum number of transmissions in any scheme with $\geq r$ rounds. We experimentally compare the obtained upper bound to a simple lower bound.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
41,708
2212.12616
Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms
Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results found that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was also the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
338,074
2010.03821
Clustering Analysis of Interactive Learning Activities Based on Improved BIRCH Algorithm
Group tendency is a research branch of computer-assisted learning. The construction of good learning behavior is of great significance to learners' learning processes and outcomes, and is a key basis for data-driven educational decision-making. Clustering analysis is an effective method for the study of group tendency. Therefore, it is necessary to obtain a big-data set of online learning behaviors spanning multiple periods and courses, and to describe learning behavior as multi-dimensional learning interaction activities. First, on the basis of data initialization and standardization, we locate the classification conditions of the data and realize the differentiation and integration of learning behaviors, forming multiple data subsets to be clustered. Second, according to the topological relevance and dependence between learning interaction activities, we design an improved BIRCH clustering algorithm based on a random-walk strategy, which realizes the retrieval and evaluation of key learning interaction activities. Third, through the calculation and comparison of several performance indexes, the improved algorithm shows clear advantages in clustering learning interaction activities, and the clustering process and results are feasible and reliable. The conclusions of this study can serve as a reference and be generalized, and they have practical significance for research on educational big data and the practical application of learning analytics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
199,542