Dataset schema:
- id: string (lengths 9–16)
- title: string (lengths 4–278)
- abstract: string (lengths 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0–541k)

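Given the schema above, a row can be post-processed by collapsing the 18 one-hot boolean label columns back into a list of category names. The sketch below assumes rows are plain dicts keyed by column name; the example row is hypothetical, not taken from the dataset.

```python
# Collapse the 18 boolean label columns of a row into a list of category
# names, preserving the column order given in the schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def row_labels(row: dict) -> list:
    """Return the category names whose boolean flag is set in the row."""
    return [col for col in LABEL_COLUMNS if row.get(col, False)]

# Hypothetical example row with two flags set.
example_row = {"id": "1206.6847", "cs.AI": True, "cs.LG": True}
print(row_labels(example_row))  # ['cs.AI', 'cs.LG']
```

Missing flags are treated as false, so partial rows degrade gracefully.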
arXiv id: 1206.6847
Title: Identifying the Relevant Nodes Without Learning the Model
Abstract: We propose a method to identify all the nodes that are relevant to compute all the conditional probability distributions for a given set of nodes. Our method is simple, efficient, consistent, and does not require learning a Bayesian network first. Therefore, our method can be applied to high-dimensional databases, e.g. gene expression databases.
Labels: cs.AI, cs.LG
__index_level_0__: 17,073

arXiv id: 2305.12452
Title: Advancing Referring Expression Segmentation Beyond Single Image
Abstract: Referring Expression Segmentation (RES) is a widely explored multi-modal task, which endeavors to segment the pre-existing object within a single image with a given linguistic expression. However, in broader real-world scenarios, it is not always possible to determine if the described object exists in a specific image. Typically, we have a collection of images, some of which may contain the described objects. The current RES setting curbs its practicality in such situations. To overcome this limitation, we propose a more realistic and general setting, named Group-wise Referring Expression Segmentation (GRES), which expands RES to a collection of related images, allowing the described objects to be present in a subset of input images. To support this new setting, we introduce an elaborately compiled dataset named Grouped Referring Dataset (GRD), containing complete group-wise annotations of target objects described by given expressions. We also present a baseline method named Grouped Referring Segmenter (GRSer), which explicitly captures the language-vision and intra-group vision-vision interactions to achieve state-of-the-art results on the proposed GRES and related tasks, such as Co-Salient Object Detection and RES. Our dataset and code will be publicly released at https://github.com/yixuan730/group-res.
Labels: cs.CV
__index_level_0__: 366,008

arXiv id: 2206.07898
Title: Multimodal Dialogue State Tracking
Abstract: Designed for tracking user goals in dialogues, a dialogue state tracker is an essential component in a dialogue system. However, the research of dialogue state tracking has largely been limited to unimodality, in which slots and slot values are limited by knowledge domains (e.g. restaurant domain with slots of restaurant name and price range) and are defined by specific database schema. In this paper, we propose to extend the definition of dialogue state tracking to multimodality. Specifically, we introduce a novel dialogue state tracking task to track the information of visual objects that are mentioned in video-grounded dialogues. Each new dialogue utterance may introduce a new video segment, new visual objects, or new object attributes, and a state tracker is required to update these information slots accordingly. We created a new synthetic benchmark and designed a novel baseline, Video-Dialogue Transformer Network (VDTN), for this task. VDTN combines both object-level features and segment-level features and learns contextual dependencies between videos and dialogues to generate multimodal dialogue states. We optimized VDTN for a state generation task as well as a self-supervised video understanding task which recovers video segment or object representations. Finally, we trained VDTN to use the decoded states in a response prediction task. Together with comprehensive ablation and qualitative analysis, we discovered interesting insights towards building more capable multimodal dialogue systems.
Labels: cs.AI, cs.LG, cs.CL, cs.CV
__index_level_0__: 302,929

arXiv id: 2409.11235
Title: SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking
Abstract: Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories not in the training set. Currently, the best-performing methods are mainly based on pure appearance matching. Due to the complexity of motion patterns in the large-vocabulary scenarios and unstable classification of the novel objects, the motion and semantic cues are either ignored or applied based on heuristics in the final matching steps by existing methods. In this paper, we present a unified framework SLAck that jointly considers semantics, location, and appearance priors in the early steps of association and learns how to integrate all valuable information through a lightweight spatial and temporal object graph. Our method eliminates complex post-processing heuristics for fusing different cues and boosts the association performance significantly for large-scale open-vocabulary tracking. Without bells and whistles, we outperform previous state-of-the-art methods for novel-class tracking on the open-vocabulary MOT and TAO TETA benchmarks. Our code is available at \href{https://github.com/siyuanliii/SLAck}{github.com/siyuanliii/SLAck}.
Labels: cs.CV
__index_level_0__: 489,066

arXiv id: 2111.10933
Title: Decentralized Upper Confidence Bound Algorithms for Homogeneous Multi-Agent Multi-Armed Bandits
Abstract: This paper studies a decentralized homogeneous multi-armed bandit problem in a multi-agent network. The problem is simultaneously solved by $N$ agents assuming they face a common set of $M$ arms and share the same arms' reward distributions. Each agent can receive information only from its neighbors, where the neighbor relationships among the agents are described by a fixed graph. Two fully decentralized upper confidence bound (UCB) algorithms are proposed for undirected graphs, respectively based on the classic algorithm and the state-of-the-art Kullback-Leibler upper confidence bound (KL-UCB) algorithm. The proposed decentralized UCB1 and KL-UCB algorithms permit each agent in the network to achieve a better logarithmic asymptotic regret than their single-agent counterparts, provided that the agent has at least one neighbor; and the more neighbors an agent has, the better regret it will have, meaning that the whole is more than the sum of its parts. The same algorithm design framework is also extended to directed graphs through the design of a variant of the decentralized UCB1 algorithm, which outperforms the single-agent UCB1 algorithm.
Labels: cs.LG, cs.SY
__index_level_0__: 267,491

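For context on the abstract above, the single-agent baseline it compares against is the classic UCB1 rule: play each arm once, then repeatedly pick the arm maximizing empirical mean plus an exploration bonus of sqrt(2 ln t / n_pulls). The sketch below is that standard single-agent algorithm, not the paper's decentralized variant; the Bernoulli arm means are hypothetical.

```python
import math
import random

def ucb1(pull, n_arms: int, horizon: int, seed: int = 0) -> list:
    """Classic single-agent UCB1; returns how often each arm was pulled."""
    random.seed(seed)
    counts = [0] * n_arms          # pulls per arm
    sums = [0.0] * n_arms          # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1            # initialization: try every arm once
        else:
            # empirical mean + exploration bonus sqrt(2 ln t / n_pulls)
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Hypothetical Bernoulli arms; the best arm (mean 0.8) should dominate.
means = [0.2, 0.5, 0.8]
counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
              n_arms=3, horizon=2000)
```

Over 2000 rounds the exploration bonus shrinks and pulls concentrate on the highest-mean arm, which is the logarithmic-regret behavior the abstract refers to.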
arXiv id: 1804.09820
Title: A Nonlinear Spectral Method for Core--Periphery Detection in Networks
Abstract: We derive and analyse a new iterative algorithm for detecting network core--periphery structure. Using techniques in nonlinear Perron-Frobenius theory, we prove global convergence to the unique solution of a relaxed version of a natural discrete optimization problem. On sparse networks, the cost of each iteration scales linearly with the number of nodes, making the algorithm feasible for large-scale problems. We give an alternative interpretation of the algorithm from the perspective of maximum likelihood reordering of a new logistic core--periphery random graph model. This viewpoint also gives a new basis for quantitatively judging a core--periphery detection algorithm. We illustrate the algorithm on a range of synthetic and real networks, and show that it offers advantages over the current state-of-the-art.
Labels: cs.SI
__index_level_0__: 96,044

arXiv id: 2208.03712
Title: TPM: Transition Probability Matrix -- Graph Structural Feature based Embedding
Abstract: In this work, Transition Probability Matrix (TPM) is proposed as a new method for extracting the features of nodes in the graph. The proposed method uses random walks to capture the connectivity structure of a node's close neighborhood. The information obtained from random walks is converted to anonymous walks to extract the topological features of nodes. In the embedding process of nodes, anonymous walks are used since they capture the topological similarities of connectivities better than random walks. Therefore, the obtained embedding vectors have richer information about the underlying connectivity structure. The method is applied to node classification and link prediction tasks. The performance of the proposed algorithm is superior to the state-of-the-art algorithms in the recent literature. Moreover, the extracted information about the connectivity structure of similar networks is used for link prediction and node classification tasks for a completely new graph.
Labels: cs.SI, cs.AI, cs.LG
__index_level_0__: 311,877

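The random-walk-to-anonymous-walk conversion mentioned in the abstract above has a standard definition: each node in the walk is replaced by the index of its first occurrence, so walks over different nodes with the same revisit pattern become identical. A minimal sketch (using 0-based indices; some formulations use 1-based):

```python
# Convert a random walk (a sequence of node ids) into an anonymous walk by
# mapping each node to the order in which it first appears in the walk.
def anonymize(walk):
    first_seen = {}   # node id -> index of first occurrence
    out = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        out.append(first_seen[node])
    return out

print(anonymize(["a", "b", "a", "c", "b"]))  # [0, 1, 0, 2, 1]
print(anonymize(["x", "y", "x", "z", "y"]))  # same pattern: [0, 1, 0, 2, 1]
```

The two walks visit disjoint node sets but share one anonymous walk, which is why anonymous walks capture pure connectivity structure rather than node identity.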
arXiv id: 2402.01741
Title: Development and Testing of a Novel Large Language Model-Based Clinical Decision Support Systems for Medication Safety in 12 Clinical Specialties
Abstract: Importance: We introduce a novel Retrieval Augmented Generation (RAG)-Large Language Model (LLM) framework as a Clinical Decision Support System (CDSS) to support safe medication prescription. Objective: To evaluate the efficacy of LLM-based CDSS in correctly identifying medication errors in different patient case vignettes from diverse medical and surgical sub-disciplines, against a human expert panel derived ground truth. We compared performance under two different CDSS practical healthcare integration modalities: LLM-based CDSS alone (fully autonomous mode) vs. junior pharmacist + LLM-based CDSS (co-pilot, assistive mode). Design, Setting, and Participants: Utilizing a RAG model with state-of-the-art medically-related LLMs (GPT-4, Gemini Pro 1.0, and Med-PaLM 2), this study used 61 prescribing error scenarios embedded into 23 complex clinical vignettes across 12 different medical and surgical specialties. A multidisciplinary expert panel assessed these cases for Drug-Related Problems (DRPs) using the PCNE classification and graded severity / potential for harm using the revised NCC MERP medication error index. Results: RAG-LLM performed better than LLM alone. When employed in co-pilot mode, accuracy, recall, and F1 scores were optimized, indicating effectiveness in identifying moderate to severe DRPs. The accuracy of DRP detection with RAG-LLM improved in several categories but at the expense of lower precision. Conclusions: This study established that a RAG-LLM based CDSS significantly boosts the accuracy of medication error identification when used alongside junior pharmacists (co-pilot), with notable improvements in detecting severe DRPs. This study also illuminates the comparative performance of current state-of-the-art LLMs in RAG-based CDSS systems.
Labels: cs.AI, cs.CL
__index_level_0__: 426,188

arXiv id: 2112.02998
Title: A PubMedBERT-based Classifier with Data Augmentation Strategy for Detecting Medication Mentions in Tweets
Abstract: As a major social media platform, Twitter publishes a large volume of user-generated text (tweets) on a daily basis. Mining such data can be used to address important social, public health, and emergency management issues that are infeasible through other means. An essential step in many text mining pipelines is named entity recognition (NER), which presents some special challenges for tweet data. Among them are nonstandard expressions, extremely imbalanced classes, and a lack of context information. Track 3 of the BioCreative challenge VII (BC7) was organized to evaluate methods for detecting medication mentions in tweets. In this paper, we report our work on BC7 track 3, where we explored a PubMedBERT-based classifier trained with a combination of multiple data augmentation approaches. Our method achieved an F1 score of 0.762, which is substantially higher than the mean of all submissions (0.696).
Labels: cs.LG, cs.CL
__index_level_0__: 270,040

arXiv id: 1908.01994
Title: Comprehensive Fuzzy Turing Machines, An Evolution to the Concept of Finite State Machine Control
Abstract: The Turing machine is an abstract concept of a computing device which introduced new models for computation. The idea of fuzzy algorithms defined by Zadeh and Lee was followed by the introduction of the Fuzzy Turing Machine (FTM) to create a platform for a new fuzzy computation model. Then, in his investigations of its computational power, Wiedermann showed that the FTM is able to solve undecidable problems. His suggested FTM structure, which highly resembles the original definition, was lately one of the most well-known classical definitions of the FTM. To improve some of its weaknesses and vague points, which will be discussed extensively in this paper, we develop a more complete definition for fuzzy Turing machines. Our proposed definition of the FTM, which encompasses the conventional definition, is motivated by the definition of General Fuzzy Automata (GFA) introduced by Doostfatemeh and Kremer. As it improved the conventional definition of fuzzy automata, especially the problems of membership assignment and multi-membership resolution, we also improve the same aspects of the FTM through the definition of the Comprehensive Fuzzy Turing Machine (CFTM). In addition, we address some possible vaguenesses in the FTM that were not the subject of focus in fuzzy automata. As an example, we investigate the issues of multi-path and multi-direction, which are possible in the case of nondeterminism. Finally, we show the simplicity, applicability and computational efficiency of the CFTM through an explanatory example.
Labels: cs.SY
__index_level_0__: 140,905

arXiv id: 2412.16083
Title: Differentially Private Federated Learning of Diffusion Models for Synthetic Tabular Data Generation
Abstract: The increasing demand for privacy-preserving data analytics in finance necessitates solutions for synthetic data generation that rigorously uphold privacy standards. We introduce the DP-Fed-FinDiff framework, a novel integration of Differential Privacy, Federated Learning and Denoising Diffusion Probabilistic Models designed to generate high-fidelity synthetic tabular data. This framework ensures compliance with stringent privacy regulations while maintaining data utility. We demonstrate the effectiveness of DP-Fed-FinDiff on multiple real-world financial datasets, achieving significant improvements in privacy guarantees without compromising data quality. Our empirical evaluations reveal the optimal trade-offs between privacy budgets, client configurations, and federated optimization strategies. The results affirm the potential of DP-Fed-FinDiff to enable secure data sharing and robust analytics in highly regulated domains, paving the way for further advances in federated learning and privacy-preserving data synthesis.
Labels: cs.LG
__index_level_0__: 519,349

arXiv id: 2409.05503
Title: Fast Computation for the Forest Matrix of an Evolving Graph
Abstract: The forest matrix plays a crucial role in network science, opinion dynamics, and machine learning, offering deep insights into the structure of and dynamics on networks. In this paper, we study the problem of querying entries of the forest matrix in evolving graphs, which more accurately represent the dynamic nature of real-world networks compared to static graphs. To address the unique challenges posed by evolving graphs, we first introduce two approximation algorithms, \textsc{SFQ} and \textsc{SFQPlus}, for static graphs. \textsc{SFQ} employs a probabilistic interpretation of the forest matrix, while \textsc{SFQPlus} incorporates a novel variance reduction technique and is theoretically proven to offer enhanced accuracy. Based on these two algorithms, we further devise two dynamic algorithms centered around efficiently maintaining a list of spanning converging forests. This approach ensures $O(1)$ runtime complexity for updates, including edge additions and deletions, as well as for querying matrix elements, and provides an unbiased estimation of forest matrix entries. Finally, through extensive experiments on various real-world networks, we demonstrate the efficiency and effectiveness of our algorithms. Particularly, our algorithms are scalable to massive graphs with more than forty million nodes.
Labels: cs.SI
__index_level_0__: 486,799

arXiv id: 1910.01923
Title: Layout-Graph Reasoning for Fashion Landmark Detection
Abstract: Detecting dense landmarks for diverse clothes, as a fundamental technique for clothes analysis, has attracted increasing research attention due to its huge application potential. However, due to the lack of modeling underlying semantic layout constraints among landmarks, prior works often detect ambiguous and structure-inconsistent landmarks of multiple overlapped clothes on one person. In this paper, we propose to seamlessly enforce structural layout relationships among landmarks on the intermediate representations via multiple stacked layout-graph reasoning layers. We define the layout-graph as a hierarchical structure including a root node, body-part nodes (e.g. upper body, lower body), coarse clothes-part nodes (e.g. collar, sleeve) and leaf landmark nodes (e.g. left-collar, right-collar). Each Layout-Graph Reasoning (LGR) layer aims to map feature representations into structural graph nodes via a Map-to-Node module, performs reasoning over structural graph nodes to achieve global layout coherency via a layout-graph reasoning module, and then maps graph nodes back to enhance feature representations via a Node-to-Map module. The layout-graph reasoning module integrates a graph clustering operation to generate representations of intermediate nodes (bottom-up inference) and then a graph deconvolution operation (top-down inference) over the whole graph. Extensive experiments on two public fashion landmark datasets demonstrate the superiority of our model. Furthermore, to advance the fine-grained fashion landmark research for supporting more comprehensive clothes generation and attribute recognition, we contribute the first Fine-grained Fashion Landmark Dataset (FFLD) containing 200k images annotated with at most 32 key-points for 13 clothes types.
Labels: cs.CV
__index_level_0__: 148,087

arXiv id: 1705.08317
Title: A Cloud-based Service for Real-Time Performance Evaluation of NoSQL Databases
Abstract: We have created a cloud-based service that allows the end users to run tests on multiple different databases to find which databases are most suitable for their project. From our research, we could not find another application that enables the user to test several databases to gauge the difference between them. This application allows the user to choose which type of test to perform and which databases to target. The application also displays the results of different tests that were run by other users previously. There is also a map to show the location where all the tests are run to give the user an estimate of the location. Unlike the orthodox static tests and reports conducted to evaluate NoSQL databases, we have created a web application to run and analyze these tests in real time. This web application evaluates the performance of several NoSQL databases. The databases covered are MongoDB, DynamoDB, CouchDB, and Firebase. The web service is accessible from: nosqldb.nextproject.ca.
Labels: cs.DB
__index_level_0__: 73,997

arXiv id: 1211.2367
Title: IS-LABEL: an Independent-Set based Labeling Scheme for Point-to-Point Distance Querying on Large Graphs
Abstract: We study the problem of computing shortest path or distance between two query vertices in a graph, which has numerous important applications. Quite a number of indexes have been proposed to answer such distance queries. However, all of these indexes can only process graphs of size barely up to 1 million vertices, which is rather small in view of many of the fast-growing real-world graphs today such as social networks and Web graphs. We propose an efficient index, which is a novel labeling scheme based on the independent set of a graph. We show that our method can handle graphs of size three orders of magnitude larger than those existing indexes.
Labels: cs.DB
__index_level_0__: 19,675

arXiv id: 2305.11255
Title: Reasoning Implicit Sentiment with Chain-of-Thought Prompting
Abstract: While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 in the supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50% F1 in the zero-shot setting. Our code is open at https://github.com/scofield7419/THOR-ISA.
Labels: cs.CL
__index_level_0__: 365,446

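The three-step prompting principle described in the abstract above can be pictured as building each hop's prompt from the sentence plus the answers produced by earlier hops (aspect, then opinion, then polarity). The template wording and the `next_prompt` helper below are hypothetical illustrations of that chaining, not the paper's exact prompts.

```python
# Illustrative three-hop prompt builder: hop 0 asks for the aspect, hop 1
# replays that answer and asks for the opinion, hop 2 asks for the polarity.
def next_prompt(context: str, prior_answers: list, target: str) -> str:
    """Build the prompt for hop len(prior_answers) from the answers so far."""
    questions = [
        f"Which specific aspect of {target} is the sentence talking about?",
        "What underlying opinion toward that aspect does the sentence imply?",
        f"Given the aspect and opinion, what is the sentiment polarity toward "
        f"{target}: positive, negative, or neutral?",
    ]
    hop = len(prior_answers)              # 0, 1, or 2
    prompt = f'Sentence: "{context}"\n'
    for q, a in zip(questions, prior_answers):
        prompt += f"{q} {a}\n"            # replay earlier hops and answers
    return prompt + questions[hop]

sentence = "The room was so quiet I could hear my heartbeat."
p1 = next_prompt(sentence, [], "the hotel")
p2 = next_prompt(sentence, ["the ambience of the rooms"], "the hotel")
```

Each successive prompt extends the previous one, so the model's intermediate conclusions stay in context for the final polarity decision.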
arXiv id: 1905.11664
Title: OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks
Abstract: Channel pruning can significantly accelerate and compress deep neural networks. Many channel pruning works utilize structured sparsity regularization to zero out all the weights in some channels and automatically obtain a structure-sparse network in the training stage. However, these methods apply structured sparsity regularization on each layer separately, where the correlations between consecutive layers are omitted. In this paper, we first combine one out-channel in the current layer and the corresponding in-channel in the next layer as a regularization group, namely out-in-channel. Our proposed Out-In-Channel Sparsity Regularization (OICSR) considers correlations between successive layers to further retain the predictive power of the compact network. Training with OICSR thoroughly transfers discriminative features into a fraction of out-in-channels. Correspondingly, OICSR measures channel importance based on statistics computed from two consecutive layers, not an individual layer. Finally, a global greedy pruning algorithm is designed to remove redundant out-in-channels in an iterative way. Our method is comprehensively evaluated with various CNN architectures including CifarNet, AlexNet, ResNet, DenseNet and PreActSeNet on CIFAR-10, CIFAR-100 and ImageNet-1K datasets. Notably, on ImageNet-1K, we reduce 37.2% FLOPs on ResNet-50 while outperforming the original model by 0.22% top-1 accuracy.
Labels: cs.CV
__index_level_0__: 132,503

arXiv id: 2407.12637
Title: Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients
Abstract: Network quantization generally converts full-precision weights and/or activations into low-bit fixed-point values in order to accelerate an inference process. Recent approaches to network quantization further discretize the gradients into low-bit fixed-point values, enabling efficient training. They typically set a quantization interval using a min-max range of the gradients or adjust the interval such that the quantization error for entire gradients is minimized. In this paper, we analyze the quantization error of gradients for the low-bit fixed-point training, and show that lowering the error for large-magnitude gradients boosts the quantization performance significantly. Based on this, we derive an upper bound of quantization error for the large gradients in terms of the quantization interval, and obtain an optimal condition for the interval minimizing the quantization error for large gradients. We also introduce an interval update algorithm that adjusts the quantization interval adaptively to maintain a small quantization error for large gradients. Experimental results demonstrate the effectiveness of our quantization method for various combinations of network architectures and bit-widths on various tasks, including image classification, object detection, and super-resolution.
Labels: cs.CV
__index_level_0__: 474,013

arXiv id: 2207.06032
Title: Abnormality Detection and Localization Schemes using Molecular Communication Systems: A Survey
Abstract: Abnormality detection and localization (ADL) have been studied widely in wireless sensor networks (WSNs) literature, where the sensors use electromagnetic waves for communication. Molecular communication (MC) has been introduced as an alternative approach for ADL in particular areas such as healthcare, being able to tackle the shortcomings of conventional WSNs, such as invasiveness, bio-incompatibility, and high energy consumption. In this paper, we introduce a general framework for MC-based ADL, which consists of multiple tiers for sensing the abnormality and communication between different agents, including the sensors, the fusion center (FC), the gateway (GW), and the external node (e.g., a local cloud), and describe each tier and the agents in this framework. We classify and explain different abnormality recognition methods, the functional units of the sensors, and different sensor features. Further, we describe different types of interfaces required for converting the internal and external signals at the FC and GW. Moreover, we present a unified channel model for the sensing and communication links. We categorize the MC-based abnormality detection schemes based on the sensor mobility, cooperative detection, and cooperative sensing/activation. We also classify the localization approaches based on the sensor mobility and propulsion mechanisms and present a general framework for the externally-controllable localization systems. Finally, we present some challenges and future research directions to realize and develop MC-based systems for ADL. The important challenges in the MC-based systems lie in four main directions as implementation, system design, modeling, and methods, which need considerable attention from multidisciplinary perspectives.
Labels: cs.IT
__index_level_0__: 307,747

arXiv id: 1701.06545
Title: Exponent Function for Stationary Memoryless Channels with Input Cost at Rates above the Capacity
Abstract: We consider stationary memoryless channels with input cost. We prove that for transmission rates above the capacity the correct probability of decoding tends to zero exponentially as the block length $n$ of codes tends to infinity. In the case where both the channel input and output sets are finite, we determine the optimal exponent function on the above exponential decay of the correct probability. To derive this result we use a new technique called the recursive method, which is based on the information spectrum approach. The recursive method utilizes a certain recursive structure of the information spectrum quantities.
Labels: cs.IT
__index_level_0__: 67,155

arXiv id: 2402.11173
Title: How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization
Abstract: We provide a simple and flexible framework for designing differentially private algorithms to find approximate stationary points of non-convex loss functions. Our framework is based on using a private approximate risk minimizer to "warm start" another private algorithm for finding stationary points. We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions. First, we obtain improved rates for finding stationary points of smooth non-convex empirical loss functions. Second, we specialize to quasar-convex functions, which generalize star-convex functions and arise in learning dynamical systems and training some neural nets. We achieve the optimal rate for this class. Third, we give an optimal algorithm for finding stationary points of functions satisfying the Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural networks often satisfy this condition. Fourth, we provide new state-of-the-art rates for stationary points of non-convex population loss functions. Fifth, we obtain improved rates for non-convex generalized linear models. A modification of our algorithm achieves nearly the same rates for second-order stationary points of functions with Lipschitz Hessian, improving over the previous state-of-the-art for each of the above problems.
Labels: cs.LG, cs.CR
__index_level_0__: 430,267

arXiv id: 1009.2221
Title: Performance Bounds and Design Criteria for Estimating Finite Rate of Innovation Signals
Abstract: In this paper, we consider the problem of estimating finite rate of innovation (FRI) signals from noisy measurements, and specifically analyze the interaction between FRI techniques and the underlying sampling methods. We first obtain a fundamental limit on the estimation accuracy attainable regardless of the sampling method. Next, we provide a bound on the performance achievable using any specific sampling approach. Essential differences between the noisy and noise-free cases arise from this analysis. In particular, we identify settings in which noise-free recovery techniques deteriorate substantially under slight noise levels, thus quantifying the numerical instability inherent in such methods. This instability, which is only present in some families of FRI signals, is shown to be related to a specific type of structure, which can be characterized by viewing the signal model as a union of subspaces. Finally, we develop a methodology for choosing the optimal sampling kernels based on a generalization of the Karhunen--Lo\`eve transform. The results are illustrated for several types of time-delay estimation problems.
Labels: cs.IT
__index_level_0__: 7,531

arXiv id: 2411.02788
Title: When to Localize? A Risk-Constrained Reinforcement Learning Approach
Abstract: In a standard navigation pipeline, a robot localizes at every time step to lower navigational errors. However, in some scenarios, a robot needs to selectively localize when it is expensive to obtain observations. For example, an underwater robot surfacing to localize too often hinders it from searching for critical items underwater, such as black boxes from crashed aircraft. On the other hand, if the robot never localizes, poor state estimates cause failure to find the items due to inadvertently leaving the search area or entering hazardous, restricted areas. Motivated by these scenarios, we investigate approaches to help a robot determine "when to localize?" We formulate this as a bi-criteria optimization problem: minimize the number of localization actions while ensuring the probability of failure (due to collision or not reaching a desired goal) remains bounded. In recent work, we showed how to formulate this active localization problem as a constrained Partially Observable Markov Decision Process (POMDP), which was solved using an online POMDP solver. However, this approach is too slow and requires full knowledge of the robot transition and observation models. In this paper, we present RiskRL, a constrained Reinforcement Learning (RL) framework that overcomes these limitations. RiskRL uses particle filtering and a recurrent Soft Actor-Critic network to learn a policy that minimizes the number of localizations while ensuring the probability of failure constraint is met. Our numerical experiments show that RiskRL learns a robust policy that outperforms the baseline by at least 13% while also generalizing to unseen environments.
Labels: cs.AI, cs.RO
__index_level_0__: 505,659

1702.08101
Solutions for Practice-oriented Requirements for Optimal Path Planning for the AUV "SLOCUM Glider"
This paper presents a few important practice-oriented requirements for optimal path planning for the AUV "SLOCUM Glider" as well as solutions using fast graph-based algorithms. These algorithms build upon the TVE (time-varying environment) search algorithm. The experience with this algorithm, the requirements of real missions along the Newfoundland and Labrador Shelf, and the idea of finding the optimal departure time are the motivation to address the field of research described in this paper. The main focus of this paper is a discussion of possible methods to accelerate the path planning algorithm without deterioration of the results.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
68,920
2304.03917
MC-MLP: Multiple Coordinate Frames in all-MLP Architecture for Vision
In deep learning, Multi-Layer Perceptrons (MLPs) have once again garnered attention from researchers. This paper introduces MC-MLP, a general MLP-like backbone for computer vision that is composed of a series of fully-connected (FC) layers. In MC-MLP, we propose that the same semantic information has varying levels of difficulty in learning, depending on the coordinate frame of features. To address this, we perform an orthogonal transform on the feature information, equivalent to changing the coordinate frame of features. Through this design, MC-MLP is equipped with multi-coordinate frame receptive fields and the ability to learn information across different coordinate frames. Experiments demonstrate that MC-MLP outperforms most MLPs in image classification tasks, achieving better performance at the same parameter level. The code will be available at: https://github.com/ZZM11/MC-MLP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
356,992
2502.00461
On Multiquantum Bits, Segre Embeddings and Coxeter Chambers
This work explores the interplay between quantum information theory, algebraic geometry, and number theory, with a particular focus on multiqubit systems, their entanglement structure, and their classification via geometric embeddings. The Segre embedding, a fundamental construction in algebraic geometry, provides an algebraic framework to distinguish separable and entangled states, encoding quantum correlations in projective geometry. We develop a systematic study of qubit moduli spaces, illustrating the geometric structure of entanglement through hypercube constructions and Coxeter chamber decompositions. We establish a bijection between the Segre embeddings of tensor products of projective spaces and binary words of length $n-1$, structured as an $(n-1)$-dimensional hypercube, where adjacency corresponds to a single Segre operation. This reveals a combinatorial structure underlying the hierarchy of embeddings, with direct implications for quantum error correction schemes. The symmetry of the Segre variety under the Coxeter group of type $A$ allows us to analyze quantum states and errors through the lens of reflection groups, viewing separable states as lying in distinct Coxeter chambers on a Segre variety. The transitive action of the permutation group on these chambers provides a natural method for tracking errors in quantum states and potentially reversing them. Beyond foundational aspects, we highlight relations between Segre varieties and Dixon elliptic curves, drawing connections between entanglement and number theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
529,393
2404.17357
Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model
In clinical practice, tri-modal medical image fusion, compared to the existing dual-modal technique, can provide a more comprehensive view of the lesions, aiding physicians in evaluating the disease's shape, location, and biological activity. However, due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited, leading to sub-optimal fusion performance and affecting the depth of image analysis by the physician. Thus, there is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information. Although current image processing methods can effectively address image fusion and super-resolution individually, solving both problems synchronously remains extremely challenging. In this paper, we propose TFS-Diff, a model that simultaneously realizes tri-modal medical image fusion and super-resolution. Specifically, TFS-Diff is based on the random iterative denoising process of a diffusion model. We also develop a simple objective function and the proposed fusion super-resolution loss, which effectively evaluates the uncertainty in the fusion and ensures the stability of the optimization process. A channel attention module is proposed to effectively integrate key information from different modalities for clinical diagnosis, avoiding the information loss caused by multiple image processing steps. Extensive experiments on public Harvard datasets show that TFS-Diff significantly surpasses the existing state-of-the-art methods in both quantitative and visual evaluations. Code is available at https://github.com/XylonXu01/TFS-Diff.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
449,830
2001.04861
Fairness in Learning-Based Sequential Decision Algorithms: A Survey
Algorithmic fairness in decision-making has been studied extensively in static settings where one-shot decisions are made on tasks such as classification. However, in practice most decision-making processes are of a sequential nature, where decisions made in the past may have an impact on future data. This is particularly the case when decisions affect the individuals or users generating the data used for future decisions. In this survey, we review existing literature on the fairness of data-driven sequential decision-making. We will focus on two types of sequential decisions: (1) past decisions have no impact on the underlying user population and thus no impact on future data; (2) past decisions have an impact on the underlying user population and therefore the future data, which can then impact future decisions. In each case the impact of various fairness interventions on the underlying population is examined.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
160,381
1910.11477
Phase Retrieval of Low-Rank Matrices by Anchored Regression
We study the low-rank phase retrieval problem, where we try to recover a $d_1\times d_2$ low-rank matrix from a series of phaseless linear measurements. This is a fourth-order inverse problem, as we are trying to recover factors of matrix that have been put through a quadratic nonlinearity after being multiplied together. We propose a solution to this problem using the recently introduced technique of anchored regression. This approach uses two different types of convex relaxations: we replace the quadratic equality constraints for the phaseless measurements by a search over a polytope, and enforce the rank constraint through nuclear norm regularization. The result is a convex program that works in the space of $d_1 \times d_2$ matrices. We analyze two specific scenarios. In the first, the target matrix is rank-$1$, and the observations are structured to correspond to a phaseless blind deconvolution. In the second, the target matrix has general rank, and we observe the magnitudes of the inner products against a series of independent Gaussian random matrices. In each of these problems, we show that the anchored regression returns an accurate estimate from a near-optimal number of measurements given that we have access to an anchor matrix of sufficient quality. We also show how to create such an anchor in the phaseless blind deconvolution problem, again from an optimal number of measurements, and present a partial result in this direction for the general rank problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
150,787
2203.15323
Improving Persian Relation Extraction Models by Data Augmentation
Relation extraction, the task of predicting the semantic relation type between entities in a sentence or document, is an important task in natural language processing. Although there is much research and there are many datasets for English, Persian lacks sufficient research and comprehensive datasets. The only available Persian dataset for this task is PERLEX, which is a Persian expert-translated version of the SemEval-2010-Task-8 dataset. In this paper, we present our augmented dataset and the results and findings of our system, which participated in the Persian relation extraction shared task of the NSURL 2021 workshop. We use PERLEX as the base dataset and enhance it by applying some text preprocessing steps and by increasing its size via data augmentation techniques to improve the generalization and robustness of the applied models. We then employ two different models, including ParsBERT and multilingual BERT, for relation extraction on the augmented PERLEX dataset. Our best model obtained 64.67% Macro-F1 in the test phase of the contest and achieved 83.68% Macro-F1 on the test set of PERLEX.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
288,345
2108.11357
ProoFVer: Natural Logic Theorem Proving for Fact Verification
Fact verification systems typically rely on neural network classifiers for veracity prediction which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the evidence retrieved, each marked with a natural logic operator. Claim veracity is determined solely based on the sequence of these operators. Hence, these proofs are faithful explanations, and this makes ProoFVer faithful by construction. Currently, ProoFVer has the highest label accuracy and the second-best Score in the FEVER leaderboard. Furthermore, it improves by 13.21% points over the next best model on a dataset with counterfactual instances, demonstrating its robustness. As explanations, the proofs show better overlap with human rationales than attention-based highlights and the proofs help humans predict model decisions correctly more often than using the evidence directly.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
252,164
2312.16755
Graph Neural Networks for Antisocial Behavior Detection on Twitter
The resurgence of antisocial behavior on social media has fueled a downward spiral of stereotypical beliefs, hateful comments towards individuals and social groups, and false or distorted news. The advances in graph neural networks employed on massive quantities of graph-structured data raise high hopes for the future of mediating communication on social media platforms. An approach based on graph convolutions was employed to better capture the dependencies between the heterogeneous types of data. Utilizing past and present experience on the topic, we propose and evaluate a graph-based approach for antisocial behavior detection, with general applicability that is both language- and context-independent. In this research, we carried out an experimental validation of our graph-based approach on several PAN datasets provided as part of their shared tasks, which enables the discussion of the results obtained by the proposed solution.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
418,495
2501.04272
On weight and variance uncertainty in neural networks for regression tasks
We consider the problem of weight uncertainty proposed by [Blundell et al. (2015). Weight uncertainty in neural network. In International conference on machine learning, 1613-1622, PMLR.] in neural networks (NNs) specialized for regression tasks. We further investigate the effect of variance uncertainty in their model. We show that including the variance uncertainty can improve the prediction performance of the Bayesian NN. Variance uncertainty enhances the generalization of the model by considering the posterior distribution over the variance parameter. We examine the generalization ability of the proposed model using a function approximation example and further illustrate it with the riboflavin genetic data set. We explore fully connected dense networks and dropout NNs with Gaussian and spike-and-slab priors, respectively, for the network weights.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
523,156
2203.04738
Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences
Parallelizing Gated Recurrent Unit (GRU) networks is a challenging task, as the training procedure of GRU is inherently sequential. Prior efforts to parallelize GRU have largely focused on conventional parallelization strategies such as data-parallel and model-parallel training algorithms. However, when the given sequences are very long, existing approaches are still inevitably performance limited in terms of training time. In this paper, we present a novel parallel training scheme (called parallel-in-time) for GRU based on a multigrid reduction in time (MGRIT) solver. MGRIT partitions a sequence into multiple shorter sub-sequences and trains the sub-sequences on different processors in parallel. The key to achieving speedup is a hierarchical correction of the hidden state to accelerate end-to-end communication in both the forward and backward propagation phases of gradient descent. Experimental results on the HMDB51 dataset, where each video is an image sequence, demonstrate that the new parallel training scheme achieves up to 6.5$\times$ speedup over a serial approach. As efficiency of our new parallelization strategy is associated with the sequence length, our parallel GRU algorithm achieves significant performance improvement as the sequence length increases.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
284,580
2112.03267
Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding
Mobile devices are indispensable sources of big data. Federated learning (FL) has great potential in exploiting these private data by exchanging locally trained models instead of their raw data. However, mobile devices are often energy-limited and wirelessly connected, and FL cannot cope flexibly with their heterogeneous and time-varying energy capacity and communication throughput, limiting its adoption. Motivated by these issues, we propose a novel energy- and communication-efficient FL framework, coined SlimFL. To resolve the heterogeneous energy capacity problem, each device in SlimFL runs a width-adjustable slimmable neural network (SNN). To address the heterogeneous communication throughput problem, each full-width ($1.0$x) SNN model and its half-width ($0.5$x) model are superposition-coded before transmission, and successively decoded after reception as the $0.5$x or $1.0$x model depending on the channel quality. Simulation results show that SlimFL can simultaneously train both $0.5$x and $1.0$x models with reasonable accuracy and convergence speed, compared to its vanilla FL counterpart separately training the two models using $2$x more communication resources. Surprisingly, SlimFL achieves even higher accuracy with lower energy footprints than vanilla FL for poor channels and non-IID data distributions, under which vanilla FL converges slowly.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
270,143
1710.10386
Dual Skipping Networks
Inspired by the recent neuroscience studies on the left-right asymmetry of the human brain in processing low and high spatial frequency information, this paper introduces a dual skipping network which carries out coarse-to-fine object categorization. Such a network has two branches to simultaneously deal with both coarse and fine-grained classification tasks. Specifically, we propose a layer-skipping mechanism that learns a gating network to predict which layers to skip in the testing stage. This layer-skipping mechanism endows the network with good flexibility and capability in practice. Evaluations are conducted on several widely used coarse-to-fine object categorization benchmarks, and promising results are achieved by our proposed network model.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
83,369
1303.6711
An intelligent approach towards automatic shape modeling and object extraction from satellite images using cellular automata based algorithm
The automatic feature extraction domain has witnessed the application of many intelligent methodologies over the past decade; however, the detection accuracy of these approaches was limited, as object geometry and contextual knowledge were not given enough consideration. In this paper, we propose a framework for accurate detection of features along with automatic interpolation and interpretation, by modeling feature shape as well as contextual knowledge using advanced techniques such as SVRF, Cellular Neural Network, coreset, and MACA. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the CNN approach. CNN has been effective in modeling different complex features, and the complexity of the approach has been considerably reduced using coreset optimization. The system has dynamically used spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. The system has also proved to be effective in providing intelligent interpolation and interpretation of random features.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
23,284
2408.00217
Load Balancing in Federated Learning
Federated Learning (FL) is a decentralized machine learning framework that enables learning from data distributed across multiple remote devices, enhancing communication efficiency and data privacy. Due to limited communication resources, a scheduling policy is often applied to select a subset of devices for participation in each FL round. The scheduling process confronts significant challenges due to the need for fair workload distribution, efficient resource utilization, scalability in environments with numerous edge devices, and statistically heterogeneous data across devices. This paper proposes a load metric for scheduling policies based on the Age of Information and addresses the above challenges by minimizing the load metric variance across the clients. Furthermore, a decentralized Markov scheduling policy is presented, that ensures a balanced workload distribution while eliminating the management overhead irrespective of the network size due to independent client decision-making. We establish the optimal parameters of the Markov chain model and validate our approach through simulations. The results demonstrate that reducing the load metric variance not only promotes fairness and improves operational efficiency, but also enhances the convergence rate of the learning models.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
477,746
1603.03083
A Decentralized Mechanism for Computing Competitive Equilibria in Deregulated Electricity Markets
With the increased level of distributed generation and demand response comes the need for associated mechanisms that can perform well in the face of increasingly complex deregulated energy market structures. Using Lagrangian duality theory, we develop a decentralized market mechanism that ensures that, under the guidance of a market operator, self-interested market participants: generation companies (GenCos), distribution companies (DistCos), and transmission companies (TransCos), reach a competitive equilibrium. We show that even in the presence of informational asymmetries and nonlinearities (such as power losses and transmission constraints), the resulting competitive equilibrium is Pareto efficient.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
53,072
2501.01806
TRG-planner: Traversal Risk Graph-Based Path Planning in Unstructured Environments for Safe and Efficient Navigation
Unstructured environments such as mountains, caves, construction sites, or disaster areas are challenging for autonomous navigation because of terrain irregularities. In particular, it is crucial to plan a path to avoid risky terrain and reach the goal quickly and safely. In this paper, we propose a method for safe and distance-efficient path planning, leveraging Traversal Risk Graph (TRG), a novel graph representation that takes into account geometric traversability of the terrain. TRG nodes represent stability and reachability of the terrain, while edges represent relative traversal risk-weighted path candidates. Additionally, TRG is constructed in a wavefront propagation manner and managed hierarchically, enabling real-time planning even in large-scale environments. Lastly, we formulate a graph optimization problem on TRG that leads the robot to navigate by prioritizing both safe and short paths. Our approach demonstrated superior safety, distance efficiency, and fast processing time compared to the conventional methods. It was also validated in several real-world experiments using a quadrupedal robot. Notably, TRG-planner contributed as the global path planner of an autonomous navigation framework for the DreamSTEP team, which won the Quadruped Robot Challenge at ICRA 2023. The project page is available at https://trg-planner.github.io .
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
522,220
2104.03123
Partially-Connected Differentiable Architecture Search for Deepfake and Spoofing Detection
This paper reports the first successful application of a differentiable architecture search (DARTS) approach to the deepfake and spoofing detection problems. An example of neural architecture search, DARTS operates upon a continuous, differentiable search space which enables both the architecture and parameters to be optimised via gradient descent. Solutions based on partially-connected DARTS use random channel masking in the search space to reduce GPU time and automatically learn and optimise complex neural architectures composed of convolutional operations and residual blocks. Despite being learned quickly with little human effort, the resulting networks are competitive with the best performing systems reported in the literature. Some are also far less complex, containing 85% fewer parameters than a Res2Net competitor.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
228,981
2410.17744
Learning Versatile Skills with Curriculum Masking
Masked prediction has emerged as a promising pretraining paradigm in offline reinforcement learning (RL) due to its versatile masking schemes, enabling flexible inference across various downstream tasks with a unified model. Despite the versatility of masked prediction, it remains unclear how to balance the learning of skills at different levels of complexity. To address this, we propose CurrMask, a curriculum masking pretraining paradigm for sequential decision making. Motivated by how humans learn by organizing knowledge in a curriculum, CurrMask adjusts its masking scheme during pretraining for learning versatile skills. Through extensive experiments, we show that CurrMask exhibits superior zero-shot performance on skill prompting tasks, goal-conditioned planning tasks, and competitive finetuning performance on offline RL tasks. Additionally, our analysis of training dynamics reveals that CurrMask gradually acquires skills of varying complexity by dynamically adjusting its masking scheme.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
501,597
1803.11034
Automatic Generation of Optimal Reductions of Distributions
A reduction of a source distribution is a collection of smaller sized distributions that are collectively equivalent to the source distribution with respect to the property of decomposability. That is, an arbitrary language is decomposable with respect to the source distribution if and only if it is decomposable with respect to each smaller sized distribution (in the reduction). The notion of reduction of distributions has previously been proposed to improve the complexity of decomposability verification. In this work, we address the problem of generating (optimal) reductions of distributions automatically. A (partial) solution to this problem is provided, which consists of 1) an incremental algorithm for the production of candidate reductions and 2) a reduction validation procedure. In the incremental production stage, backtracking is applied whenever a candidate reduction that cannot be validated is produced. A strengthened substitution-based proof technique is used for reduction validation, while a fixed template of candidate counter examples is used for reduction refutation; put together, they constitute our (partial) solution to the reduction verification problem. In addition, we show that a recursive approach for the generation of (small) reductions is easily supported.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
93,808
2106.14144
Learning-based Framework for Sensor Fault-Tolerant Building HVAC Control with Model-assisted Learning
As people spend up to 87% of their time indoors, intelligent Heating, Ventilation, and Air Conditioning (HVAC) systems in buildings are essential for maintaining occupant comfort and reducing energy consumption. These HVAC systems in smart buildings rely on real-time sensor readings, which in practice often suffer from various faults and could also be vulnerable to malicious attacks. Such faulty sensor inputs may lead to the violation of indoor environment requirements (e.g., temperature, humidity, etc.) and the increase of energy consumption. While many model-based approaches have been proposed in the literature for building HVAC control, it is costly to develop accurate physical models for ensuring their performance and even more challenging to address the impact of sensor faults. In this work, we present a novel learning-based framework for sensor fault-tolerant HVAC control, which includes three deep learning based components for 1) generating temperature proposals with the consideration of possible sensor faults, 2) selecting one of the proposals based on the assessment of their accuracy, and 3) applying reinforcement learning with the selected temperature proposal. Moreover, to address the challenge of training data insufficiency in building-related tasks, we propose a model-assisted learning method leveraging an abstract model of building physical dynamics. Through extensive experiments, we demonstrate that the proposed fault-tolerant HVAC control framework can significantly reduce building temperature violations under a variety of sensor fault patterns while maintaining energy efficiency.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
243,306
2209.02986
Shifting Perspective to See Difference: A Novel Multi-View Method for Skeleton based Action Recognition
Skeleton-based human action recognition is a longstanding challenge due to its complex dynamics. Some fine-grained details of the dynamics play a vital role in classification. The existing work largely focuses on designing incremental neural networks with more complicated adjacency matrices to capture the details of joint relationships. However, they still have difficulties distinguishing actions that have broadly similar motion patterns but belong to different categories. Interestingly, we found that the subtle differences in motion patterns can be significantly amplified and become easy for an audience to distinguish through specified view directions, a property that hasn't been fully explored before. Drastically different from previous work, we boost the performance by proposing a conceptually simple yet effective multi-view strategy that recognizes actions from a collection of dynamic view features. Specifically, we design a novel Skeleton-Anchor Proposal (SAP) module which contains a multi-head structure to learn a set of views. For feature learning of different views, we introduce a novel angle representation to transform the actions under different views and feed the transformations into the baseline model. Our module can work seamlessly with existing action classification models. Incorporated with baseline models, our SAP module exhibits clear performance gains on many challenging benchmarks. Moreover, comprehensive experiments show that our model consistently beats the state-of-the-art and remains effective and robust, especially when dealing with corrupted data. Related code will be available on https://github.com/ideal-idea/SAP .
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
316,364
1509.03602
DeepSat - A Learning framework for Satellite Imagery
Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most of the current object classification approaches are not suitable for handling satellite datasets. The progress of satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. The contributions of this paper are twofold - (1) first, we present two new satellite datasets called SAT-4 and SAT-6, and (2) then, we propose a classification framework that extracts features from an input image, normalizes them and feeds the normalized feature vectors to a Deep Belief Network for classification. On the SAT-4 dataset, our best network produces a classification accuracy of 97.95% and outperforms three state-of-the-art object recognition algorithms, namely - Deep Belief Networks, Convolutional Neural Networks and Stacked Denoising Autoencoders by ~11%. On SAT-6, it produces a classification accuracy of 93.9% and outperforms the other algorithms by ~15%. Comparative studies with a Random Forest classifier show the advantage of an unsupervised learning approach over traditional supervised learning techniques. A statistical analysis based on Distribution Separability Criterion and Intrinsic Dimensionality Estimation substantiates the effectiveness of our approach in learning better representations for satellite imagery.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
46,850
2002.12142
Energy Resolved Neutron Imaging for Strain Reconstruction using the Finite Element Method
A pulsed neutron imaging technique is used to reconstruct the residual strain within a polycrystalline material from Bragg edge strain images. This technique offers the possibility of a nondestructive analysis of strain fields with a high spatial resolution. A finite element approach is used to reconstruct the strain using the least square method constrained by the conditions of equilibrium. The procedure is developed and verified by validating for a cantilevered beam problem. It is subsequently demonstrated by reconstructing the strain from experimental data for a ring-and-plug sample, measured at the spallation neutron source RADEN at J-PARC in Japan. The reconstruction is validated by comparison with conventional constant wavelength strain measurements on the KOWARI diffractometer at ANSTO in Australia. It is also shown that the addition of a simple Tikhonov regularization can improve the reconstruction.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
165,937
2412.09668
Vision-Language Models Represent Darker-Skinned Black Individuals as More Homogeneous than Lighter-Skinned Black Individuals
Vision-Language Models (VLMs) combine Large Language Model (LLM) capabilities with image processing, enabling tasks like image captioning and text-to-image generation. Yet concerns persist about their potential to amplify human-like biases, including skin tone bias. Skin tone bias, where darker-skinned individuals face more negative stereotyping than lighter-skinned individuals, is well-documented in the social sciences but remains under-explored in Artificial Intelligence (AI), particularly in VLMs. Using the GAN Face Database, we sampled computer-generated images of Black American men and women, controlling for skin tone variations while keeping other features constant. We then asked VLMs to write stories about these faces and compared the homogeneity of the generated stories. Stories generated by VLMs about darker-skinned Black individuals were more homogeneous than those about lighter-skinned individuals in three of four models, and Black women were consistently represented more homogeneously than Black men across all models. Interaction effects revealed a greater impact of skin tone on women in two VLMs, while the other two showed nonsignificant results, reflecting known stereotyping patterns. These findings underscore the propagation of biases from single-modality AI systems to multimodal models and highlight the need for further research to address intersectional biases in AI.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
516,595
2106.01144
Towards Emotional Support Dialog Systems
Emotional support is a crucial ability for many conversation scenarios, including social interactions, mental health support, and customer service chats. Following reasonable procedures and using various support skills can help to effectively provide support. However, due to the lack of a well-designed task and corpora of effective emotional support conversations, research on building emotional support into dialog systems remains untouched. In this paper, we define the Emotional Support Conversation (ESC) task and propose an ESC Framework, which is grounded on the Helping Skills Theory. We construct an Emotion Support Conversation dataset (ESConv) with rich annotation (especially support strategy) in a help-seeker and supporter mode. To ensure a corpus of high-quality conversations that provide examples of effective emotional support, we take extensive effort to design training tutorials for supporters and several mechanisms for quality control during data collection. Finally, we evaluate state-of-the-art dialog models with respect to the ability to provide emotional support. Our results show the importance of support strategies in providing effective emotional support and the utility of ESConv in training more emotional support systems.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
238,404
2501.15915
Parametric Retrieval Augmented Generation
Retrieval-augmented generation (RAG) techniques have emerged as a promising solution to enhance the reliability of large language models (LLMs) by addressing issues like hallucinations, outdated knowledge, and domain adaptation. In particular, existing RAG methods append relevant documents retrieved from external corpus or databases to the input of LLMs to guide their generation process, which we refer to as the in-context knowledge injection method. While this approach is simple and often effective, it has inherent limitations. Firstly, increasing the context length and number of relevant documents can lead to higher computational overhead and degraded performance, especially in complex reasoning tasks. More importantly, in-context knowledge injection operates primarily at the input level, but LLMs store their internal knowledge in their parameters. This gap fundamentally limits the capacity of in-context methods. To this end, we introduce Parametric retrieval-augmented generation (Parametric RAG), a new RAG paradigm that integrates external knowledge directly into the parameters of feed-forward networks (FFN) of an LLM through document parameterization. This approach not only saves online computational costs by eliminating the need to inject multiple documents into the LLMs' input context, but also deepens the integration of external knowledge into the parametric knowledge space of the LLM. Experimental results demonstrate that Parametric RAG substantially enhances both the effectiveness and efficiency of knowledge augmentation in LLMs. Also, it can be combined with in-context RAG methods to achieve even better performance. We have open-sourced all the code, data, and models in the following anonymized GitHub link: https://github.com/oneal2000/PRAG
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
527,759
2411.00039
Linear Chain Transformation: Expanding Optimization Dynamics for Fine-Tuning Large Language Models
Fine-tuning large language models (LLMs) has become essential for adapting pretrained models to specific downstream tasks. In this paper, we propose Linear Chain Transformation (LinChain), a novel approach that introduces a sequence of linear transformations during fine-tuning to enrich optimization dynamics. By incorporating multiple linear transformations into the parameter update process, LinChain expands the effective rank of updates and enhances the model's ability to learn complex task-specific representations. We demonstrate that this method significantly improves the performance of LLM fine-tuning over state-of-the-art methods by providing more flexible optimization paths during training, while maintaining the inference efficiency of the resulting model. Our experiments on various benchmark tasks show that LinChain leads to better generalization, fewer learnable parameters, and improved task adaptation, making it a compelling strategy for LLM fine-tuning.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
504,409
2311.08354
Hierarchical Experience-informed Navigation for Multi-modal Quadrupedal Rebar Grid Traversal
This study focuses on a layered, experience-based, multi-modal contact planning framework for agile quadrupedal locomotion over a constrained rebar environment. To this end, our hierarchical planner incorporates locomotion-specific modules into the high-level contact sequence planner and solves kinodynamically-aware trajectory optimization as the low-level motion planner. Through quantitative analysis of the experience accumulation process and experimental validation of the kinodynamic feasibility of the generated locomotion trajectories, we demonstrate that the experience planning heuristic offers an effective way of providing candidate footholds for a legged contact planner. Additionally, we introduce a guiding torso path heuristic at the global planning level to enhance the navigation success rate in the presence of environmental obstacles. Our results indicate that the torso-path guided experience accumulation requires significantly fewer offline trials to successfully reach the goal compared to regular experience accumulation. Finally, our planning framework is validated in both dynamics simulations and real hardware implementations on a quadrupedal robot provided by Skymul Inc.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
407,688
2007.11189
Predicting Job-Hopping Motive of Candidates Using Answers to Open-ended Interview Questions
A significant proportion of voluntary employee turnover includes people who frequently move from job to job, known as job-hopping. Our work shows that language used in responding to interview questions on past behaviour and situational judgement is predictive of job-hopping motive as measured by the Job-Hopping Motives (JHM) Scale. The study is based on responses from over 45,000 job applicants who completed an online chat interview and self-rated themselves on JHM Scale. Five different methods of text representation were evaluated, namely four open-vocabulary approaches (TF-IDF, LDA, Glove word embeddings and Doc2Vec document embeddings) and one closed-vocabulary approach (LIWC). The Glove embeddings provided the best results with a correlation of r = 0.35 between sequences of words used and the JHM Scale. Further analysis also showed a correlation of r = 0.25 between language-based job-hopping motive and the personality trait Openness to experience and a correlation of r = -0.09 with the trait Agreeableness.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
188,490
2412.09466
Distributional Reinforcement Learning based Integrated Decision Making and Control for Autonomous Surface Vehicles
With the growing demands for Autonomous Surface Vehicles (ASVs) in recent years, the number of ASVs being deployed for various maritime missions is expected to increase rapidly in the near future. However, it is still challenging for ASVs to perform sensor-based autonomous navigation in obstacle-filled and congested waterways, where perception errors, closely gathered vehicles and limited maneuvering space near buoys may cause difficulties in following the Convention on the International Regulations for Preventing Collisions at Sea (COLREGs). To address these issues, we propose a novel Distributional Reinforcement Learning based navigation system that can work with onboard LiDAR and odometry sensors to generate arbitrary thrust commands in continuous action space. Comprehensive evaluations of the proposed system in high-fidelity Gazebo simulations show its ability to decide whether to follow COLREGs or take other beneficial actions based on the scenarios encountered, offering superior performance in navigation safety and efficiency compared to systems using state-of-the-art Distributional RL, non-Distributional RL and classical methods.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
516,494
1904.08397
SACOBRA with Online Whitening for Solving Optimization Problems with High Conditioning
Real-world optimization problems often have expensive objective functions in terms of cost and time. It is desirable to find near-optimal solutions with very few function evaluations. Surrogate-assisted optimizers tend to reduce the required number of function evaluations by replacing the real function with an efficient mathematical model built on few evaluated points. Problems with a high condition number are a challenge for many surrogate-assisted optimizers including SACOBRA. To address such problems we propose a new online whitening operating in the black-box optimization paradigm. We show on a set of high-conditioning functions that online whitening tackles SACOBRA's early stagnation issue and reduces the optimization error by a factor between 10 and 1e12 as compared to the plain SACOBRA, though it imposes many extra function evaluations. Covariance matrix adaptation evolution strategy (CMA-ES) achieves even lower errors for very high numbers of function evaluations, whereas SACOBRA performs better in the expensive setting (less than 1e03 function evaluations). If we count all parallelizable function evaluations (population evaluation in CMA-ES, online whitening in our approach) as one iteration, then both algorithms have comparable strength even in the long run. This holds for problems with dimension D <= 20.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
128,046
1401.7369
Linear Codes are Optimal for Index-Coding Instances with Five or Fewer Receivers
We study zero-error unicast index-coding instances, where each receiver must perfectly decode its requested message set, and the message sets requested by any two receivers do not overlap. We show that for all these instances with up to five receivers, linear index codes are optimal. Although this class contains 9847 non-isomorphic instances, by using our recent results and by properly categorizing the instances based on their graphical representations, we need to consider only 13 non-trivial instances to solve the entire class. This work complements the result by Arbabjolfaei et al. (ISIT 2013), who derived the capacity region of all unicast index-coding problems with up to five receivers in the diminishing-error setup. They employed random-coding arguments, which require infinitely-long messages. We consider the zero-error setup; our approach uses graph theory and combinatorics, and does not require long messages.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
30,447
math/0701131
Compressed Sensing and Redundant Dictionaries
This article extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix, which is a composition of a random matrix of certain type and a deterministic dictionary, has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via Basis Pursuit from a small number of random measurements. Further, thresholding is investigated as recovery algorithm for compressed sensing and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
540,734
2403.10992
On extended perfect codes
We consider extended $1$-perfect codes in Hamming graphs $H(n,q)$. Such nontrivial codes are known only when $n=2^k$, $k\geq 1$, $q=2$, or $n=q+2$, $q=2^m$, $m\geq 1$. Recently, Bespalov proved nonexistence of extended $1$-perfect codes for $q=3$, $4$, $n>q+2$. In this work, we characterize all positive integers $n$, $r$ and primes $p$ for which there exists such a code in $H(n,p^r)$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
438,469
2405.05662
Approximate Dec-POMDP Solving Using Multi-Agent A*
We present an A*-based algorithm to compute policies for finite-horizon Dec-POMDPs. Our goal is to sacrifice optimality in favor of scalability for larger horizons. The main ingredients of our approach are (1) using clustered sliding window memory, (2) pruning the A* search tree, and (3) using novel A* heuristics. Our experiments show competitive performance to the state-of-the-art. Moreover, for multiple benchmarks, we achieve superior performance. In addition, we provide an A* algorithm that finds upper bounds for the optimum, tailored towards problems with long horizons. The main ingredient is a new heuristic that periodically reveals the state, thereby limiting the number of reachable beliefs. Our experiments demonstrate the efficacy and scalability of the approach.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
453,008
1406.2785
A New Class of Multiple-rate Codes Based on Block Markov Superposition Transmission
The Hadamard transform~(HT) over the binary field provides a natural way to implement multiple-rate codes~(referred to as {\em HT-coset codes}), where the code length $N=2^p$ is fixed but the code dimension $K$ can be varied from $1$ to $N-1$ by adjusting the set of frozen bits. The HT-coset codes, including Reed-Muller~(RM) codes and polar codes as typical examples, can share a pair of encoder and decoder with implementation complexity of order $O(N \log N)$. However, to guarantee that all codes with designated rates perform well, HT-coset coding usually requires a sufficiently large code length, which in turn causes difficulties in determining which bits are better left frozen. In this paper, we propose to transmit short HT-coset codes in the so-called block Markov superposition transmission~(BMST) manner. At the transmitter, signals are spatially coupled via superposition, resulting in long codes. At the receiver, these coupled signals are recovered by a sliding-window iterative soft successive cancellation decoding algorithm. Most importantly, the performance around or below a bit-error-rate~(BER) of $10^{-5}$ can be predicted by a simple genie-aided lower bound. Both these bounds and simulation results show that the BMST of short HT-coset codes performs well~(within one dB of the corresponding Shannon limits) over a wide range of code rates.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
33,787
2106.12142
IQ-Learn: Inverse soft-Q Learning for Imitation
In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used due to its simplicity of implementation and stable convergence but doesn't utilize any information involving the environment's dynamics. Many existing methods that exploit dynamics information are difficult to train in practice due to an adversarial optimization process over reward and policy approximators or biased, high variance gradient estimators. We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function, implicitly representing both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn) obtains state-of-the-art results in offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and scalability in high-dimensional spaces, often by more than 3x.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
242,634
2204.01198
Antenna Impedance Estimation at MIMO Receivers
This paper considers antenna impedance estimation based on training sequences at MIMO receivers. The goal is to leverage the extensive resources that most wireless systems devote to channel estimation in order to estimate antenna impedance in real-time. We assume the receiver switches its impedance in a predetermined fashion during each training sequence. Based on voltage observations across the load, a classical estimation framework is developed incorporating the Rayleigh fading assumption. We then derive in closed-form a maximum-likelihood (ML) estimator under i.i.d. fading and show this same ML estimator is a method of moments (MM) estimator in correlated channels. Numerical results suggest a fast algorithm, i.e., the MLE in i.i.d. fading and the MM estimator in correlated fading, that estimates the unknown antenna impedance in real-time for all Rayleigh fading channels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
289,530
2312.16109
fMPI: Fast Novel View Synthesis in the Wild with Layered Scene Representations
In this study, we propose two novel input processing paradigms for novel view synthesis (NVS) methods based on layered scene representations that significantly improve their runtime without compromising quality. Our approach identifies and mitigates the two most time-consuming aspects of traditional pipelines: building and processing the so-called plane sweep volume (PSV), which is a high-dimensional tensor of planar re-projections of the input camera views. In particular, we propose processing this tensor in parallel groups for improved compute efficiency as well as super-sampling adjacent input planes to generate denser, and hence more accurate scene representation. The proposed enhancements offer significant flexibility, allowing for a balance between performance and speed, thus making substantial steps toward real-time applications. Furthermore, they are very general in the sense that any PSV-based method can make use of them, including methods that employ multiplane images, multisphere images, and layered depth images. In a comprehensive set of experiments, we demonstrate that our proposed paradigms enable the design of an NVS method that achieves state-of-the-art on public benchmarks while being up to $50x$ faster than existing state-of-the-art methods. It also beats the current forerunner in terms of speed by over $3x$, while achieving significantly better rendering quality.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
418,267
2302.10631
FedST: Secure Federated Shapelet Transformation for Time Series Classification
This paper explores how to build a shapelet-based time series classification (TSC) model in the federated learning (FL) scenario, that is, using more data from multiple owners without actually sharing the data. We propose FedST, a novel federated TSC framework extended from a centralized shapelet transformation method. We recognize the federated shapelet search step as the kernel of FedST. Thus, we design a basic protocol for the FedST kernel that we prove to be secure and accurate. However, we identify that the basic protocol suffers from efficiency bottlenecks and the centralized acceleration techniques lose their efficacy due to the security issues. To speed up the federated protocol with security guarantee, we propose several optimizations tailored for the FL setting. Our theoretical analysis shows that the proposed methods are secure and more efficient. We conduct extensive experiments using both synthetic and real-world datasets. Empirical results show that our FedST solution is effective in terms of TSC accuracy, and the proposed optimizations can achieve three orders of magnitude of speedup.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
346,874
2006.06244
CLEval: Character-Level Evaluation for Text Detection and Recognition Tasks
Despite the recent success of text detection and recognition methods, existing evaluation metrics fail to provide a fair and reliable comparison among those methods. In addition, there exists no end-to-end evaluation metric that takes characteristics of OCR tasks into account. Previous end-to-end metric contains cascaded errors from the binary scoring process applied in both detection and recognition tasks. Ignoring partially correct results raises a gap between quantitative and qualitative analysis, and prevents fine-grained assessment. Based on the fact that character is a key element of text, we hereby propose a Character-Level Evaluation metric (CLEval). In CLEval, the \textit{instance matching} process handles split and merge detection cases, and the \textit{scoring process} conducts character-level evaluation. By aggregating character-level scores, the CLEval metric provides a fine-grained evaluation of end-to-end results composed of the detection and recognition as well as individual evaluations for each module from the end-performance perspective. We believe that our metrics can play a key role in developing and analyzing state-of-the-art text detection and recognition methods. The evaluation code is publicly available at https://github.com/clovaai/CLEval.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,360
2305.00278
Segment Anything Model (SAM) Meets Glass: Mirror and Transparent Objects Cannot Be Easily Detected
Meta AI Research has recently released SAM (Segment Anything Model), which is trained on a large segmentation dataset of over 1 billion masks. As a foundation model in the field of computer vision, SAM has gained attention for its impressive performance in generic object segmentation. Despite its strong capability in a wide range of zero-shot transfer tasks, it remains unknown whether SAM can detect things in challenging setups like transparent objects. In this work, we perform an empirical evaluation of two glass-related challenging scenarios: mirror and transparent objects. We found that SAM often fails to detect the glass in both scenarios, which raises concerns about deploying SAM in safety-critical situations that involve various forms of glass.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
361,278
1706.10197
Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion
It is challenging to recognize facial action unit (AU) from spontaneous facial displays, especially when they are accompanied by speech. The major reason is that the information is extracted from a single source, i.e., the visual channel, in the current practice. However, facial activity is highly correlated with voice in natural human communications. Instead of solely improving visual observations, this paper presents a novel audiovisual fusion framework, which makes the best use of visual and acoustic cues in recognizing speech-related facial AUs. In particular, a dynamic Bayesian network (DBN) is employed to explicitly model the semantic and dynamic physiological relationships between AUs and phonemes as well as measurement uncertainty. A pilot audiovisual AU-coded database has been collected to evaluate the proposed framework, which consists of a "clean" subset containing frontal faces under well controlled circumstances and a challenging subset with large head movements and occlusions. Experiments on this database have demonstrated that the proposed framework yields significant improvement in recognizing speech-related AUs compared to the state-of-the-art visual-based methods especially for those AUs whose visual observations are impaired during speech, and more importantly also outperforms feature-level fusion methods by explicitly modeling and exploiting physiological relationships between AUs and phonemes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
76,256
2204.05369
Learning Implicit Priors for Motion Optimization
In this paper, we focus on the problem of integrating Energy-based Models (EBM) as guiding priors for motion optimization. EBMs are a set of neural networks that can represent expressive probability density distributions in terms of a Gibbs distribution parameterized by a suitable energy function. Due to their implicit nature, they can easily be integrated as optimization factors or as initial sampling distributions in the motion optimization problem, making them good candidates to integrate data-driven priors in the motion optimization problem. In this work, we present a set of required modeling and algorithmic choices to adapt EBMs into motion optimization. We investigate the benefit of including additional regularizers in the learning of the EBMs to use them with gradient-based optimizers and we present a set of EBM architectures to learn generalizable distributions for manipulation tasks. We present multiple cases in which the EBM could be integrated for motion optimization and evaluate the performance of learned EBMs as guiding priors for both simulated and real robot experiments.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
290,998
2111.11289
Environment-Aware Beam Selection for IRS-Aided Communication with Channel Knowledge Map
Intelligent reflecting surface (IRS)-aided communication is a promising technology for beyond 5G (B5G) systems, to reconfigure the radio environment proactively. However, IRS-aided communication in practice requires efficient channel estimation or passive beam training, whose overhead and complexity increase drastically with the number of reflecting elements/beam directions. To tackle this challenge, we propose in this paper a novel environment-aware joint active and passive beam selection scheme for IRS-aided wireless communication, based on the new concept of channel knowledge map (CKM). Specifically, by utilizing both the location information of the user equipment (UE), which is readily available in contemporary wireless systems with ever-increasing accuracy, and the environment information offered by CKM, the proposed scheme achieves efficient beam selection with either no real-time training required (training-free beam selection) or only moderate training overhead (light-training beam selection). Numerical results based on practical channels obtained using commercial ray tracing software are presented, which demonstrate the superior performance of the proposed scheme over various benchmark schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
267,613
1606.07767
Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks
The vanishing (and exploding) gradient effect is a common problem for recurrent neural networks with nonlinear activation functions that use the backpropagation method to compute derivatives. Deep feedforward neural networks with many hidden layers also suffer from this effect. In this paper we propose a novel universal technique that keeps the norm of the gradient within a suitable range. We construct a way to estimate the contribution of each training example to the norm of the long-term components of the target function's gradient. Using this subroutine we can construct mini-batches for stochastic gradient descent (SGD) training that lead to high performance and accuracy of the trained network even for very complex tasks. We provide a straightforward mathematical estimate of a mini-batch's impact on the gradient norm and prove its correctness theoretically. To check our framework experimentally we use special synthetic benchmarks that test the ability of RNNs to capture long-term dependencies. Our network can detect links between events in a (temporal) sequence at ranges of approximately 100 steps and longer.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
57,775
1908.03830
Supervised Negative Binomial Classifier for Probabilistic Record Linkage
Motivated by the need to link records across various databases, we propose a novel graphical-model-based classifier that uses a mixture of Poisson distributions with latent variables. The idea is to derive insight into each hypothesized pair of matching records by inferring its underlying latent error rate using Bayesian modeling techniques. The novel approach of using gamma priors for learning the latent variables along with supervised labels is unique and allows for active learning. The naive assumption of field independence is made deliberately to propose a generalized theory for this class of problems, not to undermine the hierarchical dependencies that could be present in different scenarios. This classifier is able to work with sparse and streaming data. The application to record linkage meets several challenges: sparsity, data streams, and the varying nature of the datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
141,330
2004.14788
Character-Level Translation with Self-attention
We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
175,006
cs/0211004
The DLV System for Knowledge Representation and Reasoning
This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to $\Delta^P_3$-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report about thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
537,716
1911.04096
UW-MARL: Multi-Agent Reinforcement Learning for Underwater Adaptive Sampling using Autonomous Vehicles
Near-real-time monitoring of water-quality variables in uncertain environments such as rivers, lakes, and water reservoirs is critical to protect aquatic life and to prevent further propagation of potential pollution in the water. To measure the physical values in a region of interest, adaptive sampling is a helpful, energy- and time-efficient technique, since an exhaustive search of an area is not feasible with a single vehicle. We propose an adaptive sampling algorithm using multiple well-trained autonomous vehicles as agents in a Multi-Agent Reinforcement Learning (MARL) framework to make an efficient sequence of decisions during the adaptive sampling procedure. The proposed solution is evaluated using experimental data, which is fed into a simulation framework. Experiments were conducted in the Raritan River, Somerset, and in Carnegie Lake, Princeton, NJ, during July 2019.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
152,889
2210.09822
Two low differentially uniform power permutations over odd characteristic finite fields: APN and differentially $4$-uniform functions
Permutation polynomials over finite fields are fundamental objects as they are used in various theoretical and practical applications in cryptography, coding theory, combinatorial design, and related topics. This family of polynomials constitutes an active research area in which advances are being made constantly. In particular, constructing infinite classes of permutation polynomials over finite fields with good differential properties (namely, low differential uniformity) remains an exciting problem despite much research in this direction for many years. This article exhibits low differentially uniform power permutations over finite fields of odd characteristic. Specifically, its objective is twofold concerning the power functions $F(x)=x^{\frac{p^n+3}{2}}$ defined over the finite field $F_{p^n}$ of order $p^n$, where $p$ is an odd prime, and $n$ is a positive integer. The first is to complement some former results initiated by Helleseth and Sandberg in \cite{HS} by solving the open problem, left open for more than twenty years, concerning the determination of the differential spectrum of $F$ when $p^n\equiv3\pmod 4$ and $p\neq 3$. The second is to determine the exact value of its differential uniformity. Our achievements are obtained firstly by evaluating some exponential sums over $F_{p^n}$ (which amounts to evaluating the number of $F_{p^n}$-rational points on some related curves) and secondly by computing the number of solutions in $(F_{p^n})^4$ of a system of equations presented by Helleseth, Rong, and Sandberg in ["New families of almost perfect nonlinear power mappings," IEEE Trans. Inform. Theory, vol. 45, no. 2, 1999], which naturally appears while determining the differential spectrum of $F$. We show that in the considered case ($p^n\equiv3\pmod 4$ and $p\neq 3$), $F$ is an APN power permutation when $p^n=11$, and a differentially $4$-uniform power permutation otherwise.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
324,683
1504.04003
Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body makes classification difficult. "Deep learning" methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet, using 4,298 separate axial 2D key-images, to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive, using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and average area-under-the-curve (AUC) values of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
42,092
2404.00623
Variational Autoencoders for exteroceptive perception in reinforcement learning-based collision avoidance
Modern control systems are increasingly turning to machine learning algorithms to augment their performance and adaptability. Within this context, Deep Reinforcement Learning (DRL) has emerged as a promising control framework, particularly in the domain of marine transportation. Its potential for autonomous marine applications lies in its ability to seamlessly combine path-following and collision avoidance with an arbitrary number of obstacles. However, current DRL algorithms require disproportionally large computational resources to find near-optimal policies compared to the posed control problem when the searchable parameter space becomes large. To combat this, our work delves into the application of Variational AutoEncoders (VAEs) to acquire a generalized, low-dimensional latent encoding of a high-fidelity range-finding sensor, which serves as the exteroceptive input to a DRL agent. The agent's performance, encompassing path-following and collision avoidance, is systematically tested and evaluated within a stochastic simulation environment, presenting a comprehensive exploration of our proposed approach in maritime control systems.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
443,031
2209.07163
Morphology-Aware Interactive Keypoint Estimation
Diagnosis based on medical images, such as X-ray images, often involves manual annotation of anatomical keypoints. However, this process involves significant human efforts and can thus be a bottleneck in the diagnostic process. To fully automate this procedure, deep-learning-based methods have been widely proposed and have achieved high performance in detecting keypoints in medical images. However, these methods still have clinical limitations: accuracy cannot be guaranteed for all cases, and it is necessary for doctors to double-check all predictions of models. In response, we propose a novel deep neural network that, given an X-ray image, automatically detects and refines the anatomical keypoints through a user-interactive system in which doctors can fix mispredicted keypoints with fewer clicks than needed during manual revision. Using our own collected data and the publicly available AASCE dataset, we demonstrate the effectiveness of the proposed method in reducing the annotation costs via extensive quantitative and qualitative results. A demo video of our approach is available on our project webpage.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
317,650
1702.08130
Multiuser Precoding and Channel Estimation for Hybrid Millimeter Wave MIMO Systems
In this paper, we develop a low-complexity channel estimation for hybrid millimeter wave (mmWave) systems, where the number of radio frequency (RF) chains is much less than the number of antennas equipped at each transceiver. The proposed channel estimation algorithm aims to estimate the strongest angle-of-arrivals (AoAs) at both the base station (BS) and the users. Then all the users transmit orthogonal pilot symbols to the BS via these estimated strongest AoAs to facilitate the channel estimation. The algorithm does not require any explicit channel state information (CSI) feedback from the users, and the associated signalling overhead of the algorithm is only proportional to the number of users, which is significantly less compared to various existing schemes. Besides, the proposed algorithm is applicable to both non-sparse and sparse mmWave channel environments. Based on the estimated CSI, zero-forcing (ZF) precoding is adopted for multiuser downlink transmission. In addition, we derive a tight achievable rate upper bound of the system. Our analytical and simulation results show that the proposed scheme offers a considerable achievable rate gain compared to fully digital systems, where the number of RF chains equipped at each transceiver is equal to the number of antennas. Furthermore, the achievable rate performance gap between the considered hybrid mmWave systems and the fully digital system is characterized, which provides useful system design insights.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,926
2306.04098
Phoenix: A Federated Generative Diffusion Model
Generative AI has made impressive strides in enabling users to create diverse and realistic visual content such as images, videos, and audio. However, training generative models on large centralized datasets can pose challenges in terms of data privacy, security, and accessibility. Federated learning (FL) is an approach that uses decentralized techniques to collaboratively train a shared deep learning model while retaining the training data on individual edge devices to preserve data privacy. This paper proposes a novel method for training a Denoising Diffusion Probabilistic Model (DDPM) across multiple data sources using FL techniques. Diffusion models, a newly emerging generative model, show promising results in achieving superior quality images than Generative Adversarial Networks (GANs). Our proposed method Phoenix is an unconditional diffusion model that leverages strategies to improve the data diversity of generated samples even when trained on data with statistical heterogeneity or Non-IID (Non-Independent and Identically Distributed) data. We demonstrate how our approach outperforms the default diffusion model in an FL setting. These results indicate that high-quality samples can be generated by maintaining data diversity, preserving privacy, and reducing communication between data sources, offering exciting new possibilities in the field of generative AI.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
371,602
2401.08584
Nahid: AI-based Algorithm for operating fully-automatic surgery
In this paper, for the first time, a method is presented that can provide fully automated surgery based on software and computer vision techniques. Then, the advantages and challenges of computerizing medical surgery are examined. Finally, surgery for isolated ovarian endometriosis is examined, and based on the presented method a more detailed algorithm is given that is capable of automatically diagnosing and treating this disease during surgery; as a proof of concept for our proposed method, a U-Net is trained to detect endometriosis during surgery.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
true
false
false
421,947
2009.03005
Towards an Interoperable Data Protocol Aimed at Linking the Fashion Industry with AI Companies
The fashion industry is looking forward to using artificial intelligence technologies to enhance its processes, services, and applications. Although the amount of fashion data currently in use is increasing, there is a large gap in data exchange between the fashion industry and the related AI companies, not to mention the different structure used for each fashion dataset. As a result, AI companies are relying on manually annotated fashion data to build different applications. Furthermore, as of this writing, the terminology, vocabulary, and methods of data representation used to denote fashion items are still ambiguous and confusing. Hence, it is clear that the fashion industry and AI companies will benefit from a protocol that allows them to exchange and organise fashion information in a unified way. To achieve this goal we aim (1) to define a protocol called DDOIF that will allow interoperability of fashion data; (2) for DDOIF to contain diverse entities, including extensive information on clothing and accessory attributes in the form of text and various media formats; and (3) to design and implement an API that includes, among other things, functions for importing and exporting a file built according to the DDOIF protocol that stores all information about a single item of clothing. To this end, we identified over 1000 class and subclass names used to name fashion items and use them to build the DDOIF dictionary. We make DDOIF publicly available to all interested users and developers and look forward to engaging more collaborators to improve and enrich it.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
194,718
2311.07711
Histopathologic Cancer Detection
Early diagnosis of cancer cells is necessary for making an effective treatment plan and for the health and safety of a patient. Nowadays, doctors usually use a histological grade that pathologists determine by performing a semi-quantitative analysis of the histopathological and cytological features of hematoxylin-eosin (HE) stained histopathological images. This research contributes a potential classification model for cancer prognosis to efficiently utilize the valuable information underlying the HE-stained histopathological images. This work uses the PatchCamelyon benchmark datasets and trains multi-layer perceptron and convolutional models on them to observe each model's performance in terms of precision, recall, F1 score, accuracy, and AUC score. The evaluation result shows that the baseline convolutional model outperforms the baseline MLP model. Also, this paper introduces ResNet50 and InceptionNet models with data augmentation, where ResNet50 is able to beat the state-of-the-art model. Furthermore, the majority-vote and concatenation ensembles were evaluated, suggesting the future direction of using transfer learning and segmentation to understand the specific features.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
407,442
2107.07229
Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task
The recent state-of-the-art natural language understanding (NLU) systems often behave unpredictably, failing on simpler reasoning examples. Despite this, there has been limited focus on quantifying progress towards systems with more predictable behavior. We think that a reasoning-capability-wise behavioral summary is a step towards bridging this gap. We create a CheckList test-suite (184K examples) for the Natural Language Inference (NLI) task, a representative NLU task. We benchmark state-of-the-art NLI systems on this test-suite, which reveals fine-grained insights into the reasoning abilities of BERT and RoBERTa. Our analysis further reveals inconsistencies of the models on examples derived from the same template or from distinct templates but pertaining to the same reasoning capability, indicating that generalizing the models' behavior through observations made on a CheckList is non-trivial. Through a user study, we find that users were able to utilize behavioral information to generalize much better for examples predicted by RoBERTa than for those predicted by BERT.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
246,353
1205.3504
A Note on Extending Taylor's Power Law for Characterizing Human Microbial Communities: Inspiration from Comparative Studies on the Distribution Patterns of Insects and Galaxies, and as a Case Study for Medical Ecology
Many natural patterns, such as the distributions of blood particles in a blood sample, proteins on cell surfaces, biological populations in their habitat, galaxies in the universe, the sequence of human genes, and the fitness in evolutionary computing, have been found to follow power law. Taylor's power law (Taylor 1961: Nature, 189:732-) is well recognized as one of the fundamental models in population ecology. A fundamental property of biological populations, which Taylor's power law reveals, is the near universal heterogeneity of population abundance distribution in habitat. Obviously, the heterogeneity also exists at the community level, where not only the distributions of population abundances but also the proportions of the species composition in the community are often heterogeneous. Nevertheless, existing community diversity indexes such as Shannon index and Simpson index can only measure "local" or "static" diversity in the sense that they are computed for each habitat at a specific time point, and the indexes alone do not reflect the diversity changes. In this note, I propose to extend the application scope of Taylor's power law to the studies of human microbial communities, specifically, the community heterogeneity at both population and community levels. I further suggested that population dispersion models such as Taylor (1980: Nature, 286, 53-), which is known to generate population distribution patterns consistent with the power law, should also be very useful for analyzing the distribution patterns of human microbes within the human body. Overall, I hope that the approach to human microbial community with the power law offers an example that ecological theories can play an important role in the emerging medical ecology, which aims at studying the ecology of human microbiome and its implications to human diseases and health, as well as in personalized medicine.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
16,029
1810.00046
Estimation-Based Model Predictive Control for Automatic Crosswind Stabilization of Hybrid Aerial Vehicles
In this paper, we study the control design of an automatic crosswind stabilization system for a novel, buoyantly-assisted aerial transportation vehicle. This vehicle has several advantages over other aircraft including the ability to take-off and land in very short distances and without the need for roads or runways. Despite these advantages, the large surface area of the vehicle's wing makes it more susceptible to wind, which introduces undesirable roll angle motions. The role of the automatic crosswind stabilization system is to detect the roll angle deviation, and then use motors at the wingtips to counteract the wind effect. However, due to the relatively large inertia of the wing compared to small-size unmanned aerial vehicles and additional input time delays, an automatic crosswind stabilization system based on traditional control algorithms such as the proportional-integral-derivative (PID) controller results in a response time that is too slow. Another challenge is the lack of high-accuracy wind sensors that can be mounted on the vehicle's wing. Therefore, we first design a wind torque estimator that relies on inertial measurements, and then use feed-forward compensation to directly correct for the wind torque, resulting in a significantly faster response. We second combine the proposed estimator with a model predictive controller (MPC), and compare constrained MPC with unconstrained MPC for the considered application. Experimental results show that our proposed estimation-based MPC strategy reduces the response time of the system by around 80-90% compared to a standard PID controller, without the need for adding wind sensors or changing the hardware of the stabilization system.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
109,079
2205.13272
FCN-Pose: A Pruned and Quantized CNN for Robot Pose Estimation for Constrained Devices
IoT devices suffer from resource limitations, such as processor, RAM, and disc storage. These limitations become more evident when handling demanding applications, such as deep learning, well-known for their heavy computational requirements. A case in point is robot pose estimation, an application that predicts the critical points of the desired image object. One way to mitigate processing and storage problems is compressing that deep learning application. This paper proposes a new CNN for pose estimation, applying the compression techniques of pruning and quantization to reduce its demands and improve response time. While the pruning process reduces the total number of parameters required for inference, quantization decreases the floating-point precision. We run the approach on a pose estimation task for a robotic arm and compare the results on a high-end device and a constrained device. As metrics, we consider the number of floating-point operations per second (FLOPS), the total number of mathematical computations, the parameter count, the inference time, and the number of video frames processed per second. In addition, we undertake a qualitative evaluation in which we compare the output image predicted for each pruned network with the corresponding original one. We reduce the originally proposed network to a 70% pruning rate, implying an 88.86% reduction in parameters and a 94.45% reduction in FLOPS, and for disc storage we reduce the requirement by 70% while increasing error by a mere $1\%$. With regard to input image processing, this metric increases from 11.71 FPS to 41.9 FPS in the desktop case. When using the constrained device, image processing increased from 2.86 FPS to 10.04 FPS. The higher processing rate of image frames achieved by the proposed approach allows a much shorter response time.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
298,871
2409.04056
Refining Wikidata Taxonomy using Large Language Models
Due to its collaborative nature, Wikidata is known to have a complex taxonomy, with recurrent issues like the ambiguity between instances and classes, the inaccuracy of some taxonomic paths, the presence of cycles, and the high level of redundancy across classes. Manual efforts to clean up this taxonomy are time-consuming and prone to errors or subjective decisions. We present WiKC, a new version of Wikidata taxonomy cleaned automatically using a combination of Large Language Models (LLMs) and graph mining techniques. Operations on the taxonomy, such as cutting links or merging classes, are performed with the help of zero-shot prompting on an open-source LLM. The quality of the refined taxonomy is evaluated from both intrinsic and extrinsic perspectives, on a task of entity typing for the latter, showing the practical interest of WiKC.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
486,275
2111.15046
A Secure Key Sharing Algorithm Exploiting Phase Reciprocity in Wireless Channels
This article presents a secure key exchange algorithm that exploits reciprocity in wireless channels to share a secret key between two nodes $A$ and $B$. Reciprocity implies that the channel phases in the links $A\rightarrow B$ and $B\rightarrow A$ are the same. A number of such reciprocal phase values are measured at nodes $A$ and $B$, called shared phase values hereafter. Each shared phase value is used to mask points of a Phase Shift Keying (PSK) constellation. Masking is achieved by rotating each PSK constellation with a shared phase value. Rotation of constellation is equivalent to adding phases modulo-$2\pi$, and as the channel phase is uniformly distributed in $[0,2\pi)$, the result of summation conveys zero information about summands. To enlarge the key size over a static or slow fading channel, the Radio Frequency (RF) propagation path is perturbed to create several independent realizations of multi-path fading, each used to share a new phase value. To eavesdrop a phase value shared in this manner, the Eavesdropper (Eve) will always face an under-determined system of linear equations which will not reveal any useful information about its actual solution value. This property is used to establish a secure key between two legitimate users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
268,787
2312.11084
Multi-Agent Reinforcement Learning for Connected and Automated Vehicles Control: Recent Advancements and Future Prospects
Connected and automated vehicles (CAVs) are considered a potential solution for future transportation challenges, aiming to develop systems that are efficient, safe, and environmentally friendly. However, CAV control presents significant challenges due to the complexity of interconnectivity and coordination required among vehicles. Multi-agent reinforcement learning (MARL), which has shown notable advancements in addressing complex problems in autonomous driving, robotics, and human-vehicle interaction, emerges as a promising tool to enhance CAV capabilities. Despite its potential, there is a notable absence of current reviews on mainstream MARL algorithms for CAVs. To fill this gap, this paper offers a comprehensive review of MARL's application in CAV control. The paper begins with an introduction to MARL, explaining its unique advantages in handling complex and multi-agent scenarios. It then presents a detailed survey of MARL applications across various control dimensions for CAVs, including critical scenarios such as platooning control, lane-changing, and unsignalized intersections. Additionally, the paper reviews prominent simulation platforms essential for developing and testing MARL algorithms. Lastly, it examines the current challenges in deploying MARL for CAV control, including macro-micro optimization, communication, mixed traffic, and sim-to-real challenges. Potential solutions discussed include hierarchical MARL, decentralized MARL, adaptive interactions, and offline MARL.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
416,441
1703.00168
Modular Representation of Layered Neural Networks
Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
69,122
2402.08878
Distributed Secret Securing in Discrete-Event Systems
In this paper, we study a security problem of protecting secrets in distributed systems. Specifically, we employ discrete-event systems to describe the structure and behaviour of distributed systems, in which global secret information is separated into pieces and stored in local component agents. The goal is to prevent such secrets from being exposed to intruders by imposing appropriate protection measures. This problem is formulated as to ensure that at least one piece of every distributed global secret is secured by a required number of protections, while the overall cost to apply protections is minimum. We first characterize the solvability of this security problem by providing a necessary and sufficient condition, and then develop an algorithm to compute a solution based on the supervisory control theory of discrete-event systems. Finally, we illustrate the effectiveness of our solution with an example system comprising distributed databases.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
429,276
1702.03253
D4M 3.0
The D4M tool is used by hundreds of researchers to perform complex analytics on unstructured data. Over the past few years, the D4M toolbox has evolved to support connectivity with a variety of database engines, graph analytics in the Apache Accumulo database, and an implementation using the Julia programming language. In this article, we describe some of our latest additions to the D4M toolbox and our upcoming D4M 3.0 release.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
68,099
2204.02350
Information-Theoretic Policy Learning from Partial Observations with Fully Informed Decision Makers
In this work we formulate and treat an extension of the Imitation from Observations problem. Imitation from Observations is a generalisation of the well-known Imitation Learning problem where state-only demonstrations are considered. In our treatment we extend the scope of Imitation from Observations to feature-only demonstrations, which could arguably be described as partial observations. By this we mean that the full state of the decision makers is unknown and imitation must take place on the basis of a limited set of features. We seek methods that extract an executable policy directly from those features which, in the literature, would be referred to as Behavioural Cloning methods. Our treatment combines elements from probability and information theory and draws connections with entropy regularized Markov Decision Processes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
289,910
1406.3454
The Degrees-of-Freedom of Multi-way Device-to-Device Communications is Limited by 2
A 3-user device-to-device (D2D) communications scenario is studied where each user wants to send and receive a message from each other user. This scenario resembles a 3-way communication channel. The capacity of this channel is unknown in general. In this paper, a sum-capacity upper bound that characterizes the degrees-of-freedom of the channel is derived by using genie-aided arguments. It is further shown that the derived upper bound is achievable within a gap of 2 bits, thus leading to an approximate sum-capacity characterization for the 3-way channel. As a by-product, interesting analogies between multi-way communications and multi-way relay communications are drawn.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
33,848
1807.11560
Efficient Gauss-Newton-Krylov momentum conservation constrained PDE-LDDMM using the band-limited vector field parameterization
The class of non-rigid registration methods proposed in the framework of PDE-constrained Large Deformation Diffeomorphic Metric Mapping is a particularly interesting family of physically meaningful diffeomorphic registration methods. PDE-constrained LDDMM methods are formulated as constrained variational problems, where the different physical models are imposed using the associated partial differential equations as hard constraints. Inexact Newton-Krylov optimization has shown an excellent numerical accuracy and an extraordinarily fast convergence rate in this framework. However, the Galerkin representation of the non-stationary velocity fields does not provide proper geodesic paths. In a previous work, we proposed a method for PDE-constrained LDDMM parameterized in the space of initial velocity fields under the EPDiff equation. The proposed method provided geodesics in the framework of PDE-constrained LDDMM, and it showed performance competitive to benchmark PDE-constrained LDDMM and EPDiff-LDDMM methods. However, the major drawback of this method was the large memory load inherent to PDE-constrained LDDMM methods and the increased computational time with respect to the benchmark methods. In this work we optimize the computational complexity of the method using the band-limited vector field parameterization closing the loop with our previous works.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
104,195
1709.01421
Multi-label Class-imbalanced Action Recognition in Hockey Videos via 3D Convolutional Neural Networks
Automatic video analysis is one of the most complex problems in the fields of computer vision and machine learning. A significant part of this research deals with (human) activity recognition (HAR) since humans, and the activities that they perform, generate most of the video semantics. Video-based HAR has applications in various domains, but one of the most important and challenging is HAR in sports videos. Some of the major issues include high inter- and intra-class variations, large class imbalance, the presence of both group actions and single player actions, and recognizing simultaneous actions, i.e., the multi-label learning problem. Keeping in mind these challenges and the recent success of CNNs in solving various computer vision problems, in this work, we implement a 3D CNN based multi-label deep HAR system for multi-label class-imbalanced action recognition in hockey videos. We test our system for two different scenarios: an ensemble of $k$ binary networks vs. a single $k$-output network, on a publicly available dataset. We also compare our results with the system that was originally designed for the chosen dataset. Experimental results show that the proposed approach performs better than the existing solution.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
80,073
2008.03346
Generative Adversarial Network for Radar Signal Generation
A major obstacle in radar based methods for concealed object detection on humans and seamless integration into security and access control system is the difficulty in collecting high quality radar signal data. Generative adversarial networks (GAN) have shown promise in data generation application in the fields of image and audio processing. As such, this paper proposes the design of a GAN for application in radar signal generation. Data collected using the Finite-Difference Time-Domain (FDTD) method on three concealed object classes (no object, large object, and small object) were used as training data to train a GAN to generate radar signal samples for each class. The proposed GAN generated radar signal data which was indistinguishable from the training data by qualitative human observers.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
190,868
2111.00278
A Decentralized Reinforcement Learning Framework for Efficient Passage of Emergency Vehicles
Emergency vehicles (EMVs) play a critical role in a city's response to time-critical events such as medical emergencies and fire outbreaks. The existing approaches to reduce EMV travel time employ route optimization and traffic signal pre-emption without accounting for the coupling between these two subproblems. As a result, the planned route often becomes suboptimal. In addition, these approaches also do not focus on minimizing disruption to the overall traffic flow. To address these issues, we introduce EMVLight, a decentralized reinforcement learning (RL) framework for simultaneous dynamic routing and traffic signal control. EMVLight extends Dijkstra's algorithm to efficiently update the optimal route for an EMV in real-time as it travels through the traffic network. Consequently, the decentralized RL agents learn network-level cooperative traffic signal phase strategies that reduce EMV travel time and the average travel time of non-EMVs in the network. We have carried out comprehensive experiments with synthetic and real-world maps to demonstrate this benefit. Our results show that EMVLight outperforms benchmark transportation engineering techniques as well as existing RL-based traffic signal control methods.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
264,167
1807.10194
Linkage between piecewise constant Mumford-Shah model and ROF model and its virtue in image segmentation
The piecewise constant Mumford-Shah (PCMS) model and the Rudin-Osher-Fatemi (ROF) model are two important variational models in image segmentation and image restoration, respectively. In this paper, we explore a linkage between these models. We prove that for the two-phase segmentation problem a partial minimizer of the PCMS model can be obtained by thresholding the minimizer of the ROF model. A similar linkage is still valid for multiphase segmentation under specific assumptions. Thus it opens a new segmentation paradigm: image segmentation can be done via image restoration plus thresholding. This new paradigm, which circumvents the innate non-convex property of the PCMS model, therefore improves the segmentation performance in both efficiency (much faster than state-of-the-art methods based on PCMS model, particularly when the phase number is high) and effectiveness (producing segmentation results with better quality) due to the flexibility of the ROF model in tackling degraded images, such as noisy images, blurry images or images with information loss. As a by-product of the new paradigm, we derive a novel segmentation method, called thresholded-ROF (T-ROF) method, to illustrate the virtue of managing image segmentation through image restoration techniques. The convergence of the T-ROF method is proved, and elaborate experimental results and comparisons are presented.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
103,892