Dataset schema (one record per row):
- id: string, length 9–16
- title: string, length 4–278
- abstract: string, length 3–4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each), multi-label category flags
- __index_level_0__: int64, range 0–541k
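The 18 boolean columns encode a multi-label category assignment per record. A minimal sketch of decoding them into label lists, assuming the data is loaded into a pandas DataFrame (a tiny in-memory frame stands in for the real file here):

```python
import pandas as pd

# The 18 boolean label columns, in the order given by the schema above.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(row: pd.Series) -> list[str]:
    """Return the category names whose flag is True for one record."""
    return [c for c in LABEL_COLS if row[c]]

# In-memory example mirroring the first record of this excerpt.
df = pd.DataFrame([{
    "id": "2209.02617",
    "title": "Priority Based Synchronization for Faster Learning in Games",
    **{c: (c == "cs.SY") for c in LABEL_COLS},
}])
df["labels"] = df.apply(decode_labels, axis=1)
print(df.loc[0, "labels"])  # ['cs.SY']
```

The same decoding is applied to every record below to replace the raw true/false columns with a readable labels line.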
id: 2209.02617
title: Priority Based Synchronization for Faster Learning in Games
abstract: Learning in games has been widely used to solve many cooperative multi-agent problems such as coverage control, consensus, self-reconfiguration or vehicle-target assignment. One standard approach in this domain is to formulate the problem as a potential game and to use an algorithm such as log-linear learning to achieve the stochastic stability of globally optimal configurations. Standard versions of such learning algorithms are asynchronous, i.e., only one agent updates its action at each round of the learning process. To enable faster learning, we propose a synchronization strategy based on decentralized random prioritization of agents, which allows multiple agents to change their actions simultaneously when they do not affect each other's utility or feasible actions. We show that the proposed approach can be integrated into any standard asynchronous learning algorithm to improve the convergence speed while maintaining the limiting behavior (e.g., stochastically stable configurations). We support our theoretical results with simulations in a coverage control scenario.
labels: cs.SY
__index_level_0__: 316263
id: 2302.09450
title: Robust and Versatile Bipedal Jumping Control through Reinforcement Learning
abstract: This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world. We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions. To improve performance on these challenging tasks, we develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history. In order to train a versatile jumping policy, we utilize a multi-stage training scheme that includes different training stages for different objectives. After multi-stage training, the policy can be directly transferred to a real bipedal Cassie robot. Training on different tasks and exploring more diverse scenarios lead to highly robust policies that can exploit the diverse set of learned maneuvers to recover from perturbations or poor landings during real-world deployment. Such robustness in the proposed policy enables Cassie to succeed in completing a variety of challenging jump tasks in the real world, such as standing long jumps, jumping onto elevated platforms, and multi-axes jumps.
labels: cs.AI, cs.RO, cs.SY
__index_level_0__: 346430
id: 1709.03669
title: Capture Point Trajectories for Reduced Knee Bend using Step Time Optimization
abstract: Traditional force-controlled bipedal walking utilizes highly bent knees, resulting in high torques as well as inefficient and unnatural motions. Even with advanced planning of center of mass height trajectories, significant amounts of knee bend can be required due to arbitrarily chosen step timing. In this work, we present a method that examines the effects of adjusting the step timing to produce plans that only require a specified amount of knee bend to execute. We define a quadratic program that optimizes the step timings and is executed using a simple iterative feedback approach to account for higher order terms. We then illustrate the effectiveness of this algorithm by comparing the walking gait of the simulated Atlas humanoid with and without the algorithm, showing that the algorithm significantly reduces the required knee bend for execution. We aim to later use this approach to achieve natural, efficient walking motions on humanoid robot platforms.
labels: cs.RO
__index_level_0__: 80505
id: 2111.07929
title: High-Rate Convolutional Codes with CRC-Aided List Decoding for Short Blocklengths
abstract: Recently, rate-$1/\omega$ zero-terminated and tail-biting convolutional codes (ZTCCs and TBCCs) with cyclic-redundancy-check (CRC)-aided list decoding have been shown to closely approach the random-coding union (RCU) bound for short blocklengths. This paper designs CRCs for rate-$(\omega-1)/\omega$ CCs with short blocklengths, considering both the ZT and TB cases. The CRC design seeks to optimize the frame error rate (FER) performance of the code resulting from the concatenation of the CRC and the CC. Utilization of the dual trellis proposed by Yamada \emph{et al.} lowers the complexity of CRC-aided serial list Viterbi decoding (SLVD) of ZTCCs and TBCCs. CRC-aided SLVD of the TBCCs closely approaches the RCU bound at a blocklength of $128$.
labels: cs.IT
__index_level_0__: 266517
id: 2501.13959
title: Assisting Mathematical Formalization with A Learning-based Premise Retriever
abstract: Premise selection is a crucial yet challenging step in mathematical formalization, especially for users with limited experience. Due to the lack of available formalization projects, existing approaches that leverage language models often suffer from data scarcity. In this work, we introduce an innovative method for training a premise retriever to support the formalization of mathematics. Our approach employs a BERT model to embed proof states and premises into a shared latent space. The retrieval model is trained within a contrastive learning framework and incorporates a domain-specific tokenizer along with a fine-grained similarity computation method. Experimental results show that our model is highly competitive compared to existing baselines, achieving strong performance while requiring fewer computational resources. Performance is further enhanced through the integration of a re-ranking module. To streamline the formalization process, we will release a search engine that enables users to query Mathlib theorems directly using proof states, significantly improving accessibility and efficiency. Codes are available at https://github.com/ruc-ai4math/Premise-Retrieval.
labels: cs.AI, cs.IR, cs.CL
__index_level_0__: 526905
id: 2212.00855
title: Reward Function Optimization of a Deep Reinforcement Learning Collision Avoidance System
abstract: The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
labels: cs.AI, cs.RO
__index_level_0__: 334218
id: 0806.3227
title: A Non-differential Distributed Space-Time Coding for Partially-Coherent Cooperative Communication
abstract: In a distributed space-time coding scheme, based on the relay channel model, the relay nodes cooperate to linearly process the transmitted signal from the source and forward it to the destination such that the signal at the destination appears as a space-time block code. Recently, a code design criterion for achieving full diversity in a partially-coherent environment has been proposed, along with codes based on differential encoding and decoding techniques. For such a setup, in this paper, a non-differential encoding technique and a construction of distributed space-time block codes from unitary matrix groups at the source and a set of diagonal unitary matrices for the relays are proposed. It is shown that the performance of our scheme is independent of the choice of unitary matrices at the relays. When the group is cyclic, a necessary and sufficient condition on the generator of the cyclic group to achieve full diversity and to minimize the pairwise error probability is proved. Various choices of the generator of the cyclic group that reduce the ML decoding complexity at the destination are presented. It is also shown that, at the source, if non-cyclic abelian unitary matrix groups are used, then full diversity cannot be obtained. The presented scheme is also robust to failure of any subset of relay nodes.
labels: cs.IT
__index_level_0__: 1943
id: 2405.06522
title: Heterogeneous Graph Neural Networks with Loss-decrease-aware Curriculum Learning
abstract: In recent years, heterogeneous graph neural networks (HGNNs) have achieved excellent performance in handling heterogeneous information networks (HINs). Curriculum learning is a machine learning strategy where training examples are presented to a model in a structured order, starting with easy examples and gradually increasing difficulty, aiming to improve learning efficiency and generalization. To better exploit the rich information in HINs, previous methods have started to explore the use of curriculum learning strategies to train HGNNs. Specifically, these works utilize the absolute value of the loss at each training epoch to evaluate the learning difficulty of each training sample. However, the relative loss, rather than the absolute value of loss, reveals the learning difficulty. Therefore, we propose a novel loss-decrease-aware training schedule (LDTS). LDTS uses the trend of loss decrease between training epochs to better evaluate the difficulty of training samples, thereby enhancing the curriculum learning of HGNNs for downstream tasks. Additionally, we propose a sampling strategy to alleviate training imbalance issues. Our method further demonstrates the efficacy of curriculum learning in enhancing HGNNs' capabilities. We call our method Loss-decrease-aware Heterogeneous Graph Neural Networks (LDHGNN). The code is public at https://github.com/wangyili00/LDHGNN.
labels: cs.AI, cs.LG
__index_level_0__: 453318
id: 1111.1094
title: On Three Challenges of Artificial Living Systems and Embodied Evolution
abstract: Creating autonomous, self-supporting, self-replicating, sustainable systems is a great challenge. To some extent, understanding life means not only being able to create it from scratch, but also improving, supporting, saving it, or even making it more advanced. This can be thought of as a long-term goal of living technologies and embodied evolution. The current research agenda targets several short- and middle-term steps towards achieving such a vision: connection of ICT and bio-/chemo- developments, advances in "soft" and "wet" robotics, integration of material science into developmental robotics, and potentially, addressing self-replication in autonomous systems.
labels: cs.RO, Other
__index_level_0__: 12911
id: 2006.13462
title: DeepMnemonic: Password Mnemonic Generation via Deep Attentive Encoder-Decoder Model
abstract: Strong passwords are fundamental to the security of password-based user authentication systems. In recent years, much effort has been made to evaluate password strength or to generate strong passwords. Unfortunately, the usability or memorability of strong passwords has been largely neglected. In this paper, we aim to bridge the gap between strong password generation and the usability of strong passwords. We propose to automatically generate textual password mnemonics, i.e., natural language sentences, which are intended to help users better memorize passwords. We introduce \textit{DeepMnemonic}, a deep attentive encoder-decoder framework which takes a password as input and then automatically generates a mnemonic sentence for the password. We conduct extensive experiments to evaluate DeepMnemonic on real-world data sets. The experimental results demonstrate that DeepMnemonic outperforms a well-known baseline for generating semantically meaningful mnemonic sentences. Moreover, a user study further validates that the mnemonic sentences generated by DeepMnemonic are useful in helping users memorize strong passwords.
labels: cs.AI, cs.CR
__index_level_0__: 183920
id: 2207.00165
title: Secure Forward Aggregation for Vertical Federated Neural Networks
abstract: Vertical federated learning (VFL) is attracting much attention because it enables cross-silo data cooperation in a privacy-preserving manner. While most research works in VFL focus on linear and tree models, deep models (e.g., neural networks) are not well studied in VFL. In this paper, we focus on SplitNN, a well-known neural network framework in VFL, and identify a trade-off between data security and model performance in SplitNN. Briefly, SplitNN trains the model by exchanging gradients and transformed data. On the one hand, SplitNN suffers from a loss of model performance since multiple parties jointly train the model using transformed data instead of raw data, and a large amount of low-level feature information is discarded. On the other hand, a naive solution of increasing the model performance through aggregating at lower layers in SplitNN (i.e., the data is less transformed and more low-level features are preserved) makes raw data vulnerable to inference attacks. To mitigate this trade-off, we propose a new neural network protocol in VFL called Secure Forward Aggregation (SFA). It changes the way of aggregating the transformed data and adopts removable masks to protect the raw data. Experimental results show that networks with SFA achieve both data security and high model performance.
labels: cs.AI, cs.LG, cs.CR
__index_level_0__: 305662
id: 2012.04812
title: Improving Relation Extraction by Leveraging Knowledge Graph Link Prediction
abstract: Relation extraction (RE) aims to predict a relation between a subject and an object in a sentence, while knowledge graph link prediction (KGLP) aims to predict a set of objects, O, given a subject and a relation from a knowledge graph. These two problems are closely related as their respective objectives are intertwined: given a sentence containing a subject and an object o, an RE model predicts a relation that can then be used by a KGLP model, together with the subject, to predict a set of objects O. Thus, we expect object o to be in set O. In this paper, we leverage this insight by proposing a multi-task learning approach that improves the performance of RE models by jointly training on RE and KGLP tasks. We illustrate the generality of our approach by applying it to several existing RE models and empirically demonstrate how it helps them achieve consistent performance gains.
labels: cs.CL
__index_level_0__: 210576
id: 2310.19415
title: Text-to-3D with Classifier Score Distillation
abstract: Text-to-3D generation has made remarkable progress recently, particularly with methods based on Score Distillation Sampling (SDS) that leverage pre-trained 2D diffusion models. While the usage of classifier-free guidance is well acknowledged to be crucial for successful optimization, it is considered an auxiliary trick rather than the most essential component. In this paper, we re-evaluate the role of classifier-free guidance in score distillation and discover a surprising finding: the guidance alone is enough for effective text-to-3D generation tasks. We name this method Classifier Score Distillation (CSD), which can be interpreted as using an implicit classification model for generation. This new perspective reveals new insights for understanding existing techniques. We validate the effectiveness of CSD across a variety of text-to-3D tasks including shape generation, texture synthesis, and shape editing, achieving results superior to those of state-of-the-art methods. Our project page is https://xinyu-andy.github.io/Classifier-Score-Distillation
labels: cs.AI, cs.CV, Other
__index_level_0__: 403984
id: 2412.04174
title: Supertoroid fitting of objects with holes for robotic grasping and scene generation
abstract: One of the strategies to detect the pose and shape of unknown objects is their geometric modeling, consisting of fitting known geometric entities. Classical geometric modeling fits simple shapes such as spheres or cylinders, but these often don't cover the variety of shapes that can be encountered. For those situations, one solution is the use of superquadrics, which can adapt to a wider variety of shapes. One of the limitations of superquadrics is that they cannot model objects with holes, such as those with handles. This work aims to fit supersurfaces of degree four, in particular supertoroids, to objects with a single hole. Following the results of superquadrics, simple expressions for the major and minor radial distances are derived, which lead to the fitting of the intrinsic and extrinsic parameters of the supertoroid. The differential geometry of the surface is also studied as a function of these parameters. The result is a supergeometric modeling that can be used for symmetric objects with and without holes with a simple distance function for the fitting. The proposed algorithm considerably expands the range of shapes that can be targeted for geometric modeling.
labels: cs.RO
__index_level_0__: 514292
id: 2111.05426
title: DistIR: An Intermediate Representation and Simulator for Efficient Neural Network Distribution
abstract: The rapidly growing size of deep neural network (DNN) models and datasets has given rise to a variety of distribution strategies such as data, tensor-model, pipeline parallelism, and hybrid combinations thereof. Each of these strategies offers its own trade-offs and exhibits optimal performance across different models and hardware topologies. Selecting the best set of strategies for a given setup is challenging because the search space grows combinatorially, and debugging and testing on clusters is expensive. In this work we propose DistIR, an expressive intermediate representation for distributed DNN computation that is tailored for efficient analyses, such as simulation. This enables automatically identifying the top-performing strategies without having to execute on physical hardware. Unlike prior work, DistIR can naturally express many distribution strategies including pipeline parallelism with arbitrary schedules. Our evaluation on MLP training and GPT-2 inference models demonstrates how DistIR and its simulator enable fast grid searches over complex distribution spaces spanning up to 1000+ configurations, reducing optimization time by an order of magnitude for certain regimes.
labels: cs.LG, Other
__index_level_0__: 265788
id: 2409.12150
title: Decoding Style: Efficient Fine-Tuning of LLMs for Image-Guided Outfit Recommendation with Preference
abstract: Personalized outfit recommendation remains a complex challenge, demanding both fashion compatibility understanding and trend awareness. This paper presents a novel framework that harnesses the expressive power of large language models (LLMs) for this task, mitigating their "black box" and static nature through fine-tuning and direct feedback integration. We bridge the visual-textual gap in item descriptions by employing image captioning with a Multimodal Large Language Model (MLLM). This enables the LLM to extract style and color characteristics from human-curated fashion images, forming the basis for personalized recommendations. The LLM is efficiently fine-tuned on the open-source Polyvore dataset of curated fashion images, optimizing its ability to recommend stylish outfits. A direct preference mechanism using negative examples is employed to enhance the LLM's decision-making process. This creates a self-enhancing AI feedback loop that continuously refines recommendations in line with seasonal fashion trends. Our framework is evaluated on the Polyvore dataset, demonstrating its effectiveness in two key tasks: fill-in-the-blank, and complementary item retrieval. These evaluations underline the framework's ability to generate stylish, trend-aligned outfit suggestions, continuously improving through direct feedback. The evaluation results demonstrated that our proposed framework significantly outperforms the base LLM, creating more cohesive outfits. The improved performance in these tasks underscores the proposed framework's potential to enhance the shopping experience with accurate suggestions, proving its effectiveness over vanilla LLM-based outfit generation.
labels: cs.AI, cs.IR, cs.LG
__index_level_0__: 489456
id: 2403.06339
title: FOAA: Flattened Outer Arithmetic Attention For Multimodal Tumor Classification
abstract: Fusion of multimodal healthcare data holds great promise to provide a holistic view of a patient's health, taking advantage of the complementarity of different modalities while leveraging their correlation. This paper proposes a simple and effective approach, inspired by attention, to fuse discriminative features from different modalities. We propose a novel attention mechanism, called Flattened Outer Arithmetic Attention (FOAA), which relies on outer arithmetic operators (addition, subtraction, product, and division) to compute attention scores from keys, queries and values derived from flattened embeddings of each modality. We demonstrate how FOAA can be implemented for self-attention and cross-attention, providing a reusable component in neural network architectures. We evaluate FOAA on two datasets for multimodal tumor classification and achieve state-of-the-art results, and we demonstrate that features enriched by FOAA are superior to those derived from other fusion approaches. The code is publicly available at \href{https://github.com/omniaalwazzan/FOAA}{https://github.com/omniaalwazzan/FOAA}
labels: cs.CV
__index_level_0__: 436400
id: 2209.14475
title: Intrinsic Dimensionality Estimation within Tight Localities: A Theoretical and Experimental Analysis
abstract: Accurate estimation of Intrinsic Dimensionality (ID) is of crucial importance in many data mining and machine learning tasks, including dimensionality reduction, outlier detection, similarity search and subspace clustering. However, since their convergence generally requires sample sizes (that is, neighborhood sizes) on the order of hundreds of points, existing ID estimation methods may have only limited usefulness for applications in which the data consists of many natural groups of small size. In this paper, we propose a local ID estimation strategy stable even for `tight' localities consisting of as few as 20 sample points. The estimator applies MLE techniques over all available pairwise distances among the members of the sample, based on a recent extreme-value-theoretic model of intrinsic dimensionality, the Local Intrinsic Dimension (LID). Our experimental results show that our proposed estimation technique can achieve notably smaller variance, while maintaining comparable levels of bias, at much smaller sample sizes than state-of-the-art estimators.
labels: cs.LG, cs.CV, cs.NE
__index_level_0__: 320261
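The abstract above builds on the MLE form of local intrinsic dimensionality estimation. As an illustrative sketch only, the classical neighborhood-distance MLE estimator (not the paper's pairwise-distance variant) looks like this:

```python
import math

def mle_lid(dists):
    """Classical MLE estimate of local intrinsic dimensionality from the
    distances of a query point to its k nearest neighbors:
        ID_hat = -( (1/k) * sum_i log(r_i / r_max) )^(-1)
    This is the standard extreme-value-theoretic LID estimator that the
    paper above refines for tight localities; it is shown here only as a
    reference point, not as the paper's method."""
    r_max = max(dists)
    # The r_i == r_max term contributes log(1) = 0, so skip it safely.
    log_ratios = [math.log(r / r_max) for r in dists if r < r_max]
    return -len(dists) / sum(log_ratios)

# Neighbors evenly spaced along a 1-D manifold: the estimate should be near 1.
dists = [i / 100 for i in range(1, 101)]
print(mle_lid(dists))
```

On this synthetic 1-D example the estimate lands close to the true dimension of 1, which is the sanity check usually applied to such estimators.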
id: 2405.20715
title: Transforming Japan Real Estate
abstract: The Japanese real estate market, valued at over 35 trillion USD, offers significant investment opportunities. Accurate rent and price forecasting could provide a substantial competitive edge. This paper explores using alternative data variables to predict real estate performance in 1100 Japanese municipalities. A comprehensive house price index was created, covering all municipalities from 2005 to the present, using a dataset of over 5 million transactions. This core dataset was enriched with economic factors spanning decades, allowing for price trajectory predictions. The findings show that alternative data variables can indeed forecast real estate performance effectively. Investment signals based on these variables yielded notable returns with low volatility. For example, the net migration ratio delivered an annualized return of 4.6% with a Sharpe ratio of 1.5. Taxable income growth and new dwellings ratio also performed well, with annualized returns of 4.1% (Sharpe ratio of 1.3) and 3.3% (Sharpe ratio of 0.9), respectively. When combined with transformer models to predict risk-adjusted returns 4 years in advance, the model achieved an R-squared score of 0.28, explaining nearly 30% of the variation in future municipality prices. These results highlight the potential of alternative data variables in real estate investment. They underscore the need for further research to identify more predictive factors. Nonetheless, the evidence suggests that such data can provide valuable insights into real estate price drivers, enabling more informed investment decisions in the Japanese market.
labels: cs.CE
__index_level_0__: 459486
id: 2306.04249
title: DEMIST: A deep-learning-based task-specific denoising approach for myocardial perfusion SPECT
abstract: There is an important need for methods to process myocardial perfusion imaging (MPI) SPECT images acquired at lower radiation dose and/or acquisition time such that the processed images improve observer performance on the clinical task of detecting perfusion defects. To address this need, we build upon concepts from model-observer theory and our understanding of the human visual system to propose a Detection task-specific deep-learning-based approach for denoising MPI SPECT images (DEMIST). The approach, while performing denoising, is designed to preserve features that influence observer performance on detection tasks. We objectively evaluated DEMIST on the task of detecting perfusion defects using a retrospective study with anonymized clinical data in patients who underwent MPI studies across two scanners (N = 338). The evaluation was performed at low-dose levels of 6.25%, 12.5% and 25% and using an anthropomorphic channelized Hotelling observer. Performance was quantified using area under the receiver operating characteristics curve (AUC). Images denoised with DEMIST yielded significantly higher AUC compared to corresponding low-dose images and images denoised with a commonly used task-agnostic DL-based denoising method. Similar results were observed with stratified analysis based on patient sex and defect type. Additionally, DEMIST improved visual fidelity of the low-dose images as quantified using root mean squared error and structural similarity index metric. A mathematical analysis revealed that DEMIST preserved features that assist in detection tasks while improving the noise properties, resulting in improved observer performance. The results provide strong evidence for further clinical evaluation of DEMIST to denoise low-count images in MPI SPECT.
labels: cs.CV
__index_level_0__: 371668
id: 2301.11542
title: Feasibility and Transferability of Transfer Learning: A Mathematical Framework
abstract: Transfer learning is an emerging and popular paradigm for utilizing existing knowledge from previous learning tasks to improve the performance of new ones. Despite its numerous empirical successes, theoretical analysis for transfer learning is limited. In this paper we build, for the first time to the best of our knowledge, a mathematical framework for the general procedure of transfer learning. Our unique reformulation of transfer learning as an optimization problem allows, for the first time, an analysis of its feasibility. Additionally, we propose a novel concept of transfer risk to evaluate the transferability of transfer learning. Our numerical studies using the Office-31 dataset demonstrate the potential and benefits of incorporating transfer risk in the evaluation of transfer learning performance.
labels: cs.LG
__index_level_0__: 342187
id: 2405.15313
title: Enhancing Text-to-Image Editing via Hybrid Mask-Informed Fusion
abstract: Recently, text-to-image (T2I) editing has been greatly pushed forward by applying diffusion models. Despite the visual promise of the generated images, inconsistencies with the expected textual prompt remain prevalent. This paper aims to systematically improve the text-guided image editing techniques based on diffusion models, by addressing their limitations. Notably, the common idea in diffusion-based editing is to first reconstruct the source image via inversion techniques, e.g., DDIM Inversion, and then follow a fusion process that carefully integrates the source intermediate (hidden) states (obtained by inversion) with those of the target image. Unfortunately, such a standard pipeline fails in many cases due to the interference between texture retention and new character creation in some regions. To mitigate this, we incorporate human annotation as external knowledge to confine editing within a ``Mask-informed'' region. Then we carefully Fuse the edited image with the source image and a constructed intermediate image within the model's Self-Attention module. Extensive empirical results demonstrate that the proposed ``MaSaFusion'' significantly improves the existing T2I editing techniques.
labels: cs.CV
__index_level_0__: 456867
id: 2305.00191
title: Optimization of AoII and QAoII in Multi-User Links
abstract: We consider a network with multiple sources and a base station that send time-sensitive information to remote clients. The Age of Incorrect Information (AoII) captures the freshness of the informative pieces of status update packets at the destinations. We derive the closed-form Whittle Index formulation for a push-based multi-user network over unreliable channels with AoII-dependent cost functions. We also propose a new semantic performance metric for pull-based systems, named the Age of Incorrect Information at Query (QAoII), that quantifies AoII at particular instants when clients generate queries. Simulation results demonstrate that the proposed Whittle Index-based scheduling policies for both AoII and QAoII-dependent cost functions are superior to benchmark policies, and adopting query-aware scheduling can significantly improve the timeliness for scenarios where a single user or multiple users are scheduled at a time.
labels: cs.IT
__index_level_0__: 361245
id: 2406.12600
title: Generalization bounds for mixing processes via delayed online-to-PAC conversions
abstract: We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method even if the data sequence is sampled from a mixing time series. The rates demonstrate a trade-off between the amount of delay in the online learning game and the degree of dependence between consecutive data points, with near-optimal rates recovered in a number of well-studied settings when the delay is tuned appropriately as a function of the mixing time of the process.
labels: cs.LG
__index_level_0__: 465480
1911.06930
Inverse Reinforcement Learning with Missing Data
We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories. This issue of missing trajectory data or information occurs in many situations, e.g., GPS signals from vehicles moving on a road network are intermittent. In this paper, we propose a tractable approach to directly compute the log-likelihood of demonstrated trajectories with incomplete/missing data. Our algorithm is efficient in handling a large number of missing segments in the demonstrated trajectories, as it performs the training with incomplete data by solving a sequence of systems of linear equations, and the number of such systems to be solved does not depend on the number of missing segments. Empirical evaluation on a real-world dataset shows that our training algorithm outperforms other conventional techniques.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
153,659
2502.14180
On the logical skills of large language models: evaluations using arbitrarily complex first-order logic problems
We present a method of generating first-order logic statements whose complexity can be controlled along multiple dimensions. We use this method to automatically create several datasets consisting of questions asking for the truth or falsity of first-order logic statements in Zermelo-Fraenkel set theory. While the resolution of these questions does not require any knowledge beyond basic notation of first-order logic and set theory, it does require a degree of planning and logical reasoning, which can be controlled up to arbitrarily high difficulty by the complexity of the generated statements. Furthermore, we do extensive evaluations of the performance of various large language models, including recent models such as DeepSeek-R1 and OpenAI's o3-mini, on these datasets. All of the datasets, along with the code used for generating them, as well as all data from the evaluations, are publicly available at https://github.com/bkuckuck/logical-skills-of-llms.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
535,701
2412.12185
Graph Similarity Computation via Interpretable Neural Node Alignment
Graph similarity computation is an essential task in many real-world graph-related applications such as retrieving similar drugs given a query chemical compound or finding a user's potential friends from a social network database. Graph Edit Distance (GED) and Maximum Common Subgraphs (MCS) are the two commonly used domain-agnostic metrics to evaluate graph similarity in practice. Unfortunately, computing the exact GED is known to be an NP-hard problem. To address this limitation, neural network based models have been proposed to approximate the calculations of GED/MCS. However, deep learning models are well-known ``black boxes'', thus the characteristic one-to-one node/subgraph alignment process in the classical computations of GED and MCS cannot be seen. Existing methods have paid attention to approximating the node/subgraph alignment (soft alignment), but the one-to-one node alignment (hard alignment) has not yet been solved. To fill this gap, in this paper we propose a novel interpretable neural node alignment model without relying on node alignment ground truth information. Firstly, the quadratic assignment problem in classical GED computation is relaxed to a linear alignment via embedding the features in the node embedding space. Secondly, a differentiable Gumbel-Sinkhorn module is proposed to generate the optimal one-to-one node alignment matrix in an unsupervised manner. Experimental results on real-world graph datasets demonstrate that our method outperforms the state-of-the-art methods in graph similarity computation and graph retrieval tasks, achieving up to 16\% reduction in the Mean Squared Error and up to 12\% improvement in the retrieval evaluation metrics, respectively.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
517,776
1709.07857
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
81,352
2308.12666
Geodesic Mode Connectivity
Mode connectivity is a phenomenon where trained models are connected by a path of low loss. We reframe this in the context of Information Geometry, where neural networks are studied as spaces of parameterized distributions with curved geometry. We hypothesize that shortest paths in these spaces, known as geodesics, correspond to mode-connecting paths in the loss landscape. We propose an algorithm to approximate geodesics and demonstrate that they achieve mode connectivity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
387,631
1909.09858
Versatile Compressive mmWave Hybrid Beamformer Codebook Design Framework
Hybrid beamforming (HB) architectures are attractive for wireless communication systems with large antenna arrays because the analog beamforming stage can significantly reduce the number of RF transceivers and hence power consumption. In HB systems, channel estimation (CE) becomes challenging due to indirect access by the baseband processing to the communication channels and due to low SNR before beam alignment. Compressed sensing (CS) based algorithms have been adopted to address these challenges by leveraging the sparse nature of millimeter wave multi-input multi-output (mmWave MIMO) channels. In many CS algorithms for narrowband CE, the hybrid beamformers are randomly configured which does not always yield the low-coherence sensing matrices desirable for those CS algorithms whose recovery guarantees rely on coherence. In this paper, we propose a versatile deterministic HB codebook design framework for CS algorithms with coherence-based recovery guarantees to enhance CE accuracy. Simulation results show that the proposed design can obtain lower channel estimation error and higher spectral efficiency compared with random codebook for phase-shifter-, switch-, and lens-based HB architectures.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
146,376
1905.08412
Position Paper: From Multi-Agent Pathfinding to Pipe Routing
The 2D Multi-Agent Path Finding (MAPF) problem aims at finding collision-free paths for a number of agents, from a set of start locations to a set of goal positions in a known 2D environment. MAPF has been studied in theoretical computer science, robotics, and artificial intelligence over several decades, due to its importance for robot navigation. It is currently experiencing significant scientific progress due to its relevance in automated warehousing (such as those operated by Amazon) and in other contemporary application areas. In this paper, we demonstrate that many recently developed MAPF algorithms apply more broadly than currently believed in the MAPF research community. In particular, we describe the 3D Pipe Routing (PR) problem, which aims at placing collision-free pipes from given start locations to given goal locations in a known 3D environment. The MAPF and PR problems are similar: a solution to a MAPF instance is a set of blocked cells in x-y-t space, while a solution to the corresponding PR instance is a set of blocked cells in x-y-z space. We show how to use this similarity to apply several recently developed MAPF algorithms to the PR problem, and discuss their performance on abstract PR instances. We also discuss further research necessary to tackle real-world pipe-routing instances of interest to industry today. This opens up a new direction of industrial relevance for the MAPF research community.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
131,463
2302.00119
Machine Translation Impact in E-commerce Multilingual Search
Previous work suggests that performance of cross-lingual information retrieval correlates highly with the quality of Machine Translation. However, there may be a threshold beyond which improving query translation quality yields little or no benefit to further improve the retrieval performance. This threshold may depend upon multiple factors including the source and target languages, the existing MT system quality and the search pipeline. In order to identify the benefit of improving an MT system for a given search pipeline, we investigate the sensitivity of retrieval quality to the presence of different levels of MT quality using experimental datasets collected from actual traffic. We systematically improve the quality of our MT systems on language pairs as measured by MT evaluation metrics including Bleu and Chrf to determine their impact on search precision metrics and extract signals that help to guide the improvement strategies. Using this information we develop techniques to compare query translations for multiple language pairs and identify the most promising language pairs to invest in and improve.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
343,105
2404.18628
Self-Avatar Animation in Virtual Reality: Impact of Motion Signals Artifacts on the Full-Body Pose Reconstruction
Virtual Reality (VR) applications have revolutionized user experiences by immersing individuals in interactive 3D environments. These environments find applications in numerous fields, including healthcare, education, or architecture. A significant aspect of VR is the inclusion of self-avatars, representing users within the virtual world, which enhances interaction and embodiment. However, generating lifelike full-body self-avatar animations remains challenging, particularly in consumer-grade VR systems, where lower-body tracking is often absent. One method to tackle this problem is by providing an external source of motion information that includes lower body information such as full Cartesian positions estimated from RGB(D) cameras. Nevertheless, the limitations of these systems are multiple: the desynchronization between the two motion sources and occlusions are examples of significant issues that hinder the implementations of such systems. In this paper, we aim to measure the impact on the reconstruction of the articulated self-avatar's full-body pose of (1) the latency between the VR motion features and estimated positions, (2) the data acquisition rate, (3) occlusions, and (4) the inaccuracy of the position estimation algorithm. In addition, we analyze the motion reconstruction errors using ground truth and 3D Cartesian coordinates estimated from \textit{YOLOv8} pose estimation. These analyses show that the studied methods are significantly sensitive to any degradation tested, especially regarding the velocity reconstruction error.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
450,335
1003.1399
Automatic derivation of domain terms and concept location based on the analysis of the identifiers
Developers express the meaning of the domain ideas in specifically selected identifiers and comments that form the target implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to automatically create a domain vocabulary. Knowledge of domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and web frameworks. Knowledge of domain terms enables easy localization of chunks of code that belong to a certain term. We consider these chunks of code as "concepts" and their placement in the code as "concept location". Application developers may also benefit from the obtained domain terms. These terms are parts of speech that characterize a certain concept. Concepts are encoded in "classes" (OO paradigm) and the obtained vocabulary of terms supports the selection and the comprehension of the class' appropriate identifiers. We measured the following software products with our tool: JBoss, JOnAS, GlassFish, Tapestry, Google Web Toolkit and Echo2.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
5,858
2210.01235
CaiRL: A High-Performance Reinforcement Learning Environment Toolkit
This paper addresses the dire need for a platform that efficiently provides a framework for running reinforcement learning (RL) experiments. We propose the CaiRL Environment Toolkit as an efficient, compatible, and more sustainable alternative for training learning agents and propose methods to develop more efficient environment simulations. There is an increasing focus on developing sustainable artificial intelligence. However, little effort has been made to improve the efficiency of running environment simulations. The most popular development toolkit for reinforcement learning, OpenAI Gym, is built using Python, a powerful but slow programming language. We propose a toolkit written in C++ with the same flexibility level that works orders of magnitude faster to make up for Python's inefficiency. This would drastically cut climate emissions. CaiRL also presents the first reinforcement learning toolkit with a built-in JVM and Flash support for running legacy flash games for reinforcement learning research. We demonstrate the effectiveness of CaiRL in the classic control benchmark, comparing the execution speed to OpenAI Gym. Furthermore, we illustrate that CaiRL can act as a drop-in replacement for OpenAI Gym to leverage significantly faster training speeds because of the reduced environment computation time.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
321,172
1901.08513
Finite-Time Stability of Switched and Hybrid Systems with Unstable Modes
In this work, we study finite-time stability of switched and hybrid systems in the presence of unstable modes. We present sufficient conditions in terms of multiple Lyapunov functions for the origin of the system to be finite time stable. More specifically, we show that even if the value of the Lyapunov function increases in between two switches, i.e., if there are unstable modes in the system, finite-time stability can still be guaranteed if the finite time convergent mode is active long enough. In contrast to earlier work where the Lyapunov functions are required to be decreasing during the continuous flows and non-increasing at the discrete jumps, we allow the Lyapunov functions to increase \emph{both} during the continuous flows and the discrete jumps. As thus, the derived stability results are less conservative compared to the earlier results in the related literature, and in effect allow the hybrid system to have unstable modes. Then, we illustrate how the proposed finite-time stability conditions specialize for a class of switched systems, and present a method on the synthesis of a finite-time stabilizing switching signal for switched linear systems. As a case study, we design a finite-time stable output feedback controller for a linear switched system, in which only one of the modes is controllable and observable. Numerical example demonstrates the efficacy of the proposed methods.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
119,494
2112.05825
Revisiting Consistency Regularization for Semi-Supervised Learning
Consistency regularization is one of the most widely-used techniques for semi-supervised learning (SSL). Generally, the aim is to train a model that is invariant to various data augmentations. In this paper, we revisit this idea and find that enforcing invariance by decreasing distances between features from differently augmented images leads to improved performance. However, encouraging equivariance instead, by increasing the feature distance, further improves performance. To this end, we propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss, that imposes consistency and equivariance on the classifier and the feature level, respectively. Experimental results show that our model defines a new state of the art for various datasets and settings and outperforms previous work by a significant margin, particularly in low data regimes. Extensive experiments are conducted to analyze the method, and the code will be published.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,965
1506.00893
SkILL - a Stochastic Inductive Logic Learner
Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored area of Statistical Relational Learning which extends classic Inductive Logic Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic Learner, which takes probabilistic annotated data and produces First Order Logic theories. Data in several domains such as medicine and bioinformatics have an inherent degree of uncertainty, that can be used to produce models closer to reality. SkILL can not only use this type of probabilistic data to extract non-trivial knowledge from databases, but it also addresses efficiency issues by introducing a novel, efficient and effective search strategy to guide the search in PILP environments. The capabilities of SkILL are demonstrated in three different datasets: (i) a synthetic toy example used to validate the system, (ii) a probabilistic adaptation of a well-known biological metabolism application, and (iii) a real world medical dataset in the breast cancer domain. Results show that SkILL can perform as well as a deterministic ILP learner, while also being able to incorporate probabilistic knowledge that would otherwise not be considered.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
43,727
2106.02105
A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples -- optimized to be classified as a chosen target class -- tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust" -- that is, robust to small-magnitude adversarial examples -- substantially improves the transferability of class-targeted and representation-targeted adversarial attacks, even between architectures as different as convolutional neural networks and transformers. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
238,724
1703.10580
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is our new differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
70,937
2407.10995
LionGuard: Building a Contextualized Moderation Classifier to Tackle Localized Unsafe Content
As large language models (LLMs) become increasingly prevalent in a wide variety of applications, concerns about the safety of their outputs have become more significant. Most efforts at safety-tuning or moderation today take on a predominantly Western-centric view of safety, especially for toxic, hateful, or violent speech. In this paper, we describe LionGuard, a Singapore-contextualized moderation classifier that can serve as guardrails against unsafe LLM outputs. When assessed on Singlish data, LionGuard outperforms existing widely-used moderation APIs, which are not finetuned for the Singapore context, by 14% (binary) and up to 51% (multi-label). Our work highlights the benefits of localization for moderation classifiers and presents a practical and scalable approach for low-resource languages.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
473,222
1901.01375
Analysis of a Two-Layer Neural Network via Displacement Convexity
Fitting a function by using linear combinations of a large number $N$ of `simple' components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is non-convex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here we consider the problem of learning a concave function $f$ on a compact convex domain $\Omega\subseteq {\mathbb R}^d$, using linear combinations of `bump-like' components (neurons). The parameters to be fitted are the centers of $N$ bumps, and the resulting empirical risk minimization problem is highly non-convex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega$. Further, when the bump width $\delta$ tends to $0$, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for $N\to\infty$, $\delta\to 0$. Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of $\delta, N$. Explaining this phenomenon, and understanding the dependence on $\delta,N$ in a quantitative manner remains an outstanding challenge.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
117,961
2407.15196
Channel Shaping Using Beyond Diagonal Reconfigurable Intelligent Surface: Analysis, Optimization, and Enhanced Flexibility
This paper investigates the capability of a passive Reconfigurable Intelligent Surface (RIS) to redistribute the singular values of point-to-point Multiple-Input Multiple-Output (MIMO) channels for achieving power and rate gains. We depart from the conventional Diagonal (D)-RIS with diagonal phase shift matrix and adopt a Beyond Diagonal (BD) architecture that offers greater wave manipulation flexibility through element-wise connections. Specifically, we first provide shaping insights by characterizing the channel singular value regions attainable by D-RIS and BD-RIS via a novel geodesic optimization. Analytical singular value bounds are then derived to explore their shaping limits in typical deployment scenarios. As a side product, we tackle BD-RIS-aided MIMO rate maximization problem by a local-optimal Alternating Optimization (AO) and a shaping-inspired low-complexity approach. Results show that compared to D-RIS, BD-RIS significantly improves the dynamic range of all channel singular values, the trade-off in manipulating them, and thus the channel power and achievable rate. Those observations become more pronounced when the number of RIS elements and MIMO dimensions increase. Of particular interest, BD-RIS is shown to activate multi-stream transmission at lower transmit power than D-RIS, hence achieving the asymptotic Degrees of Freedom (DoF) at low Signal-to-Noise Ratio (SNR) thanks to its higher flexibility of shaping the distribution of channel singular values.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
475,071
2409.12005
Representing Positional Information in Generative World Models for Object Manipulation
Object manipulation capabilities are essential skills that set apart embodied agents engaging with the world, especially in the realm of robotics. The ability to predict outcomes of interactions with objects is paramount in this setting. While model-based control methods have started to be employed for tackling manipulation tasks, they have faced challenges in accurately manipulating objects. As we analyze the causes of this limitation, we identify the cause of underperformance in the way current world models represent crucial positional information, especially about the target's goal specification for object positioning tasks. We introduce a general approach that empowers world model-based agents to effectively solve object-positioning tasks. We propose two declinations of this approach for generative world models: position-conditioned (PCP) and latent-conditioned (LCP) policy learning. In particular, LCP employs object-centric latent representations that explicitly capture object positional information for goal specification. This naturally leads to the emergence of multimodal capabilities, enabling the specification of goals through spatial coordinates or a visual goal. Our methods are rigorously evaluated across several manipulation environments, showing favorable performance compared to current model-based control approaches.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
489,397
2007.10830
CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE
In this paper, we describe our system for Task 4 of SemEval 2020, which involves differentiating between natural language statements that conform to common sense and those that do not. The organizers propose three subtasks - first, selecting between two sentences, the one which is against common sense. Second, identifying the most crucial reason why a statement does not make sense. Third, generating novel reasons for explaining the against common sense statement. Out of the three subtasks, this paper reports the system description of subtask A and subtask B. This paper proposes a model based on transformer neural network architecture for addressing the subtasks. The novelty of this work lies in the architecture design, which handles the logical implication of contradicting statements and simultaneous information extraction from both sentences. We use a parallel instance of transformers, which is responsible for a boost in the performance. We achieved an accuracy of 94.8% in subtask A and 89% in subtask B on the test set.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
188,391
1511.00615
Optimizing the Deployment of Electric Vehicle Charging Stations Using Pervasive Mobility Data
With recent advances in battery technology and the resulting decrease in the charging times, public charging stations are becoming a viable option for Electric Vehicle (EV) drivers. Concurrently, wide-spread use of location-tracking devices in mobile phones and wearable devices makes it possible to track individual-level human movements to an unprecedented spatial and temporal grain. Motivated by these developments, we propose a novel methodology to perform data-driven optimization of EV charging stations location. We formulate the problem as a discrete optimization problem on a geographical grid, with the objective of covering the entire demand region while minimizing a measure of drivers' discomfort. Since optimally solving the problem is computationally infeasible, we present computationally efficient, near-optimal solutions based on greedy and genetic algorithms. We then apply the proposed methodology to optimize EV charging stations location in the city of Boston, starting from a massive cellular phone data set covering 1 million users over 4 months. Results show that genetic algorithm based optimization provides the best solutions in terms of drivers' discomfort and the number of charging stations required, which are both reduced about 10 percent as compared to a randomized solution. We further investigate robustness of the proposed data-driven methodology, showing that, building upon well-known regularity of aggregate human mobility patterns, the near-optimal solution computed using single day movements preserves its properties also in later months. When collectively considered, the results presented in this paper clearly indicate the potential of data-driven approaches for optimally locating public charging facilities at the urban scale.
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
48,427
2208.02559
Equivalence between Time Series Predictability and Bayes Error Rate
Predictability is an emerging metric that quantifies the highest possible prediction accuracy for a given time series, being widely utilized in assessing known prediction algorithms and characterizing intrinsic regularities in human behaviors. Lately, increasing criticisms aim at the inaccuracy of the estimated predictability, caused by the original entropy-based method. In this brief report, we strictly prove that the time series predictability is equivalent to a seemingly unrelated metric called Bayes error rate that explores the lowest error rate unavoidable in classification. This proof bridges two independently developed fields, and thus each can immediately benefit from the other. For example, based on three theoretical models with known and controllable upper bounds of prediction accuracy, we show that the estimation based on Bayes error rate can largely solve the inaccuracy problem of predictability.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
311,505
2202.01905
Modified ResNet Model for MSI and MSS Classification of Gastrointestinal Cancer
In this work, a modified ResNet model is proposed for the classification of Microsatellite Instability (MSI) and Microsatellite Stability (MSS) in gastrointestinal cancer. The performance of this model is analyzed and compared with that of existing models. The proposed model surpassed the existing models with an accuracy of 0.8981 and an F1 score of 0.9178.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
278,630
1804.10160
Two-Stream Binocular Network: Accurate Near Field Finger Detection Based On Binocular Images
Fingertip detection plays an important role in human computer interaction. Previous works transform binocular images into depth images. Then depth-based hand pose estimation methods are used to predict 3D positions of fingertips. Different from previous works, we propose a new framework, named Two-Stream Binocular Network (TSBnet), to detect fingertips from binocular images directly. TSBnet first shares convolutional layers for low-level features of right and left images. Then it extracts high-level features in two-stream convolutional networks separately. Further, we add a new layer, a binocular distance measurement layer, to improve the performance of our model. To verify our scheme, we build a binocular hand image dataset, containing about 117k pairs of images in the training set and 10k pairs of images in the test set. Our method achieves an average error of 10.9 mm on our test set, outperforming previous work by 5.9 mm (a relative improvement of 35.1%).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
96,102
1803.07484
Collective Schedules: Scheduling Meets Computational Social Choice
When scheduling public works or events in a shared facility one needs to accommodate preferences of a population. We formalize this problem by introducing the notion of a collective schedule. We show how to extend fundamental tools from social choice theory---positional scoring rules, the Kemeny rule and the Condorcet principle---to collective scheduling. We study the computational complexity of finding collective schedules. We also experimentally demonstrate that optimal collective schedules can be found for instances with realistic sizes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
93,063
2312.16211
An Explainable AI Approach to Large Language Model Assisted Causal Model Auditing and Development
Causal networks are widely used in many fields, including epidemiology, social science, medicine, and engineering, to model the complex relationships between variables. While it can be convenient to algorithmically infer these models directly from observational data, the resulting networks are often plagued with erroneous edges. Auditing and correcting these networks may require domain expertise frequently unavailable to the analyst. We propose the use of large language models such as ChatGPT as an auditor for causal networks. Our method presents ChatGPT with a causal network, one edge at a time, to produce insights about edge directionality, possible confounders, and mediating variables. We ask ChatGPT to reflect on various aspects of each causal link and we then produce visualizations that summarize these viewpoints for the human analyst to direct the edge, gather more data, or test further hypotheses. We envision a system where large language models, automated causal inference, and the human analyst and domain expert work hand in hand as a team to derive holistic and comprehensive causal models for any given case scenario. This paper presents first results obtained with an emerging prototype.
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
418,308
1612.07495
Noise Mitigation for Neural Entity Typing and Relation Extraction
In this paper, we address two different types of noise in information extraction models: noise from distant supervision and noise from pipeline input features. Our target tasks are entity typing and relation extraction. For the first noise type, we introduce multi-instance multi-label learning algorithms using neural network models, and apply them to fine-grained entity typing for the first time. This gives our models comparable performance with the state-of-the-art supervised approach which uses global embeddings of entities. For the second noise type, we propose ways to improve the integration of noisy entity type predictions into relation extraction. Our experiments show that probabilistic predictions are more robust than discrete predictions and that joint training of the two tasks performs best.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
65,949
1901.05809
On Pliable Index Coding
A new variant of the index coding problem, termed the Pliable Index Coding Problem (PICOD), is formulated in [S. Brahma, C. Fragouli, "Pliable index coding", IEEE Transactions on Information Theory, vol. 61, no. 11, pp. 6192-6203, 2015]. In PICOD, we consider a server holding a set of messages and a set of clients, each having a subset of the messages. Each client is satisfied if it receives any message that it does not already have. We discuss a class of PICOD where the side information is consecutive. We provide index codes for two extreme cases: for the class where each client gets exactly one desired message, and for a class where the total number of messages decoded by the effective clients is maximized. Another variant of the index coding problem is the c-Constrained Pliable Index Coding Problem [Linqi Song, Christina Fragouli and Tianchu Zhao, "A Pliable Index Coding Approach to Data Shuffling," arXiv:1701.05540v3 [cs.IT] 3 May 2018]. It is PICOD with a c-constraint, i.e., each message is decoded by at most c clients demanding that message. We provide index codes for some classes of this variant with consecutive side information.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
118,861
2306.03980
Counterfactual Explanations and Predictive Models to Enhance Clinical Decision-Making in Schizophrenia using Digital Phenotyping
Clinical practice in psychiatry is burdened with the increased demand for healthcare services and the scarce resources available. New paradigms of health data powered with machine learning techniques could open the possibility to improve clinical workflow in critical stages of clinical assessment and treatment in psychiatry. In this work, we propose a machine learning system capable of predicting, detecting, and explaining individual changes in symptoms of patients with Schizophrenia by using behavioral digital phenotyping data. We forecast symptoms of patients with an error rate below 10%. The system detects decreases in symptoms using changepoint algorithms and uses counterfactual explanations as a recourse in a simulated continuous monitoring scenario in healthcare. Overall, this study offers valuable insights into the performance and potential of counterfactual explanations, predictive models, and change-point detection within a simulated clinical workflow. These findings lay the foundation for further research to explore additional facets of the workflow, aiming to enhance its effectiveness and applicability in real-world healthcare settings. By leveraging these components, the goal is to develop an actionable, interpretable, and trustworthy integrative decision support system that combines real-time clinical assessments with sensor-based inputs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
371,552
2205.12406
Multi-Head Online Learning for Delayed Feedback Modeling
In online advertising, it is highly important to predict the probability and the value of a conversion (e.g., a purchase). It not only impacts user experience by showing relevant ads, but also affects the ROI of advertisers and the revenue of marketplaces. Unlike clicks, which often occur within minutes after impressions, conversions are expected to happen over a long period of time (e.g., 30 days for online shopping). This creates a challenge, as the true labels are only available after long delays. Either inaccurate labels (partial conversions) are used, or models are trained on stale data (e.g., from 30 days ago). The problem is more prominent in online learning, which focuses on live performance on the latest data. In this paper, a novel solution is presented to address this challenge using multi-head modeling. Unlike traditional methods, it directly quantizes conversions into multiple windows, such as day 1, day 2, day 3-7, and day 8-30. A sub-model is trained specifically on conversions within each window. Label freshness is maximally preserved in early models (e.g., day 1 and day 2), while late conversions are accurately utilized in models with longer delays (e.g., day 8-30). It is shown to greatly exceed the performance of known methods in online learning experiments for both conversion rate (CVR) and value per click (VPC) predictions. Lastly, as a general method for delayed feedback modeling, it can be combined with any advanced ML techniques to further improve performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
298,517
2407.17085
OVR: A Dataset for Open Vocabulary Temporal Repetition Counting in Videos
We introduce a dataset of annotations of temporal repetitions in videos. The dataset, OVR (pronounced as over), contains annotations for over 72K videos, with each annotation specifying the number of repetitions, the start and end time of the repetitions, and also a free-form description of what is repeating. The annotations are provided for videos sourced from Kinetics and Ego4D, and consequently cover both Exo and Ego viewing conditions, with a huge variety of actions and activities. Moreover, OVR is almost an order of magnitude larger than previous datasets for video repetition. We also propose a baseline transformer-based counting model, OVRCounter, that can localise and count repetitions in videos that are up to 320 frames long. The model is trained and evaluated on the OVR dataset, and its performance assessed with and without using text to specify the target class to count. The performance is also compared to a prior repetition counting model. The dataset is available for download at: https://sites.google.com/view/openvocabreps/
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
475,845
1910.13728
Structure of Deep Neural Networks with a Priori Information in Wireless Tasks
Deep neural networks (DNNs) have been employed for designing wireless networks in many aspects, such as transceiver optimization, resource allocation, and information prediction. Existing works either use fully-connected DNNs or DNNs with specific structures that are designed in other domains. In this paper, we show that a priori information widely existing in wireless tasks is permutation invariance. For these tasks, we propose a DNN with a special structure, where the weight matrices between layers of the DNN consist of only two smaller sub-matrices. With this form of parameter sharing, the number of model parameters is reduced, giving rise to low sample and computational complexity for training a DNN. We take predictive resource allocation as an example to show how the designed DNN can be applied for learning the optimal policy with unsupervised learning. Simulation results validate our analysis and show the dramatic gain of the proposed structure in terms of reducing training complexity.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
151,464
1607.03070
Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity
Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between pre-synaptic and post-synaptic spike events. For realizing such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation. A simplified implementation in FPGA, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates for neuron refractory periods greater than 10 ms, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to conventional crossbar implementation, the forward table-based implementation leads to substantial memory savings for sparsely connected networks supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
58,454
2209.06367
A Review and Roadmap of Deep Learning Causal Discovery in Different Variable Paradigms
Understanding causality helps to structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery tasks have transitioned from using traditional methods to infer potential causal structures from observational data to the field of pattern recognition involved in deep learning. The rapid accumulation of massive data promotes the emergence of causal search methods with excellent scalability. Existing surveys of causal discovery methods mainly focus on traditional methods based on constraints, scores, and FCMs; there is a lack of systematic categorization and elaboration of deep learning-based methods, as well as of consideration and exploration of causal discovery methods from the perspective of variable paradigms. Therefore, we divide the possible causal discovery tasks into three types according to the variable paradigm and give the definitions of the three tasks, define and instantiate the relevant datasets for each task along with the final causal model to be constructed, and then review the main existing causal discovery methods for each task. Finally, we propose roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
317,370
1111.6640
A Markov Random Field Topic Space Model for Document Retrieval
This paper proposes a novel statistical approach to intelligent document retrieval. It seeks to offer a more structured and extensible mathematical approach to the term generalization done in the popular Latent Semantic Analysis (LSA) approach to document indexing. A Markov Random Field (MRF) is presented that captures relationships between terms and documents as probabilistic dependence assumptions between random variables. From there, it uses the MRF-Gibbs equivalence to derive joint probabilities as well as local probabilities for document variables. A parameter learning method is proposed that utilizes rank reduction with singular value decomposition in a manner similar to LSA to reduce the dimensionality of document-term relationships to that of a latent topic space. Experimental results confirm the ability of this approach to effectively and efficiently retrieve documents from substantial data sets.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
13,211
1908.04149
Enabling Commercial Autonomous Space Robotic Explorers
In contrast to manned missions, the application of autonomous robots for space exploration missions decreases the safety concerns of the exploration missions while extending the exploration distance, since return transportation is not necessary for robotic missions. In addition, the employment of robots in these missions also decreases mission complexities and costs because there is no need for onboard life support systems: robots can withstand and operate in harsh conditions, for instance, extreme temperature, pressure, and radiation, where humans cannot survive. In this article, we introduce environments on Mars, review the existing autonomous driving techniques deployed on Earth, and explore technologies required to enable future commercial autonomous space robotic explorers. Last but not least, we also present one of the urgent technical challenges for autonomous space explorers, namely, onboard computing power.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
141,414
1903.06727
On Sample Complexity of Projection-Free Primal-Dual Methods for Learning Mixture Policies in Markov Decision Processes
We study the problem of learning a policy of an infinite-horizon, discounted cost, Markov decision process (MDP) with a large number of states. We compute the actions of a policy that is nearly as good as a policy chosen by a suitable oracle from a given mixture policy class characterized by the convex hull of a set of known base policies. To learn the coefficients of the mixture model, we recast the problem as an approximate linear programming (ALP) formulation for MDPs, where the feature vectors correspond to the occupation measures of the base policies defined on the state-action space. We then propose a projection-free stochastic primal-dual method with the Bregman divergence to solve the characterized ALP. Furthermore, we analyze the probably approximately correct (PAC) sample complexity of the proposed stochastic algorithm, namely the number of queries required to achieve a near optimal objective value. We also propose a modification of our proposed algorithm with polytope constraint sampling for the smoothed ALP, where the restriction to lower bounding approximations is relaxed. In addition, we apply the proposed algorithms to a queuing problem, and compare their performance with a penalty function algorithm. The numerical results illustrate that the primal-dual method achieves better efficiency and lower variance across different trials compared to the penalty function method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
124,448
1601.05928
Role of Large Scale Channel Information on Predictive Resource Allocation
When the future achievable rate is perfectly known, predictive resource allocation can provide high performance gain over traditional resource allocation for the traffic without stringent delay requirement. However, future channel information is hard to obtain in wireless channels, especially the small-scale fading gains. In this paper, we analytically demonstrate that the future large-scale channel information can capture almost all the performance gain from knowing the future channel by taking an energy-saving resource allocation as an example. This result is important for practical systems, since large-scale channel gains can be easily estimated from the predicted trajectory of mobile users and radio map. Simulation results validate our analysis and illustrate the impact of the estimation errors of large-scale channel gains on energy saving.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,184
1905.04445
Explaining intuitive difficulty judgments by modeling physical effort and risk
The ability to estimate task difficulty is critical for many real-world decisions such as setting appropriate goals for ourselves or appreciating others' accomplishments. Here we give a computational account of how humans judge the difficulty of a range of physical construction tasks (e.g., moving 10 loose blocks from their initial configuration to their target configuration, such as a vertical tower) by quantifying two key factors that influence construction difficulty: physical effort and physical risk. Physical effort captures the minimal work needed to transport all objects to their final positions, and is computed using a hybrid task-and-motion planner. Physical risk corresponds to stability of the structure, and is computed using noisy physics simulations to capture the costs for precision (e.g., attention, coordination, fine motor movements) required for success. We show that the full effort-risk model captures human estimates of difficulty and construction time better than either component alone.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
130,469
2212.13767
Learning to Detect Noisy Labels Using Model-Based Features
Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation. Many existing approaches are based on heuristics such as sample losses, which might not be flexible enough to achieve optimal solutions. Meta learning based methods address this issue by learning a data selection function, but can be hard to optimize. In light of these pros and cons, we propose Selection-Enhanced Noisy label Training (SENT) that does not rely on meta learning while having the flexibility of being data-driven. SENT transfers the noise distribution to a clean set and trains a model to distinguish noisy labels from clean ones using model-based features. Empirically, on a wide range of tasks including text classification and speech recognition, SENT improves performance over strong baselines under the settings of self-training and label corruption.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,394
1501.05396
Deep Multimodal Learning for Audio-Visual Speech Recognition
In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of $41\%$ under clean condition on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of $35.83\%$ demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of $34.03\%$.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
39,475
2205.00191
Exploring Gender-Expansive Categorization Options for Robots
Gender is increasingly being explored as a social characteristic ascribed to robots by people. Yet, research involving social robots that may be gendered tends not to address gender perceptions, such as through pilot studies or manipulation checks. Moreover, research that does address gender perceptions has been limited by a reliance on the human gender binary model of feminine and masculine, prescriptive response options, and/or researcher assumptions and/or ascriptions of participant gendering. In response, we conducted an online pilot categorization study (n=55) wherein we provided gender-expansive response options for rating four robots ranging across four levels of anthropomorphism. Findings indicate that people gender robots in diverse ways, and not necessarily in relation to the gender binary. Additionally, less anthropomorphic robots and the childlike humanoid robot were deemed masculine, while the iconic robot was deemed gender neutral, fluid, and/or ambiguous. We discuss implications for future work on all humanoid robots.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
294,163
2006.07796
Structure by Architecture: Structured Representations without Regularization
We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods which rely on matching an arbitrary, relatively unstructured, prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance typically observed in VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, thereby ordering the information without any additional regularization or supervision. We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation using several challenging and natural image datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
181,944
2208.09366
Approximate Dynamic Programming for Platoon Coordination under Hours-of-Service Regulations
Truck drivers are required to stop and rest with a certain regularity according to the driving and rest time regulations, also called Hours-of-Service (HoS) regulations. This paper studies the problem of optimally forming platoons when considering realistic HoS regulations. In our problem, trucks have fixed routes in a transportation network and can wait at hubs along their routes to form platoons with others while fulfilling the driving and rest time constraints. We propose a distributed decision-making scheme where each truck controls its waiting times at hubs based on the predicted schedules of others. The decoupling of trucks' decision-makings contributes to an approximate dynamic programming approach for platoon coordination under HoS regulations. Finally, we perform a simulation over the Swedish road network with one thousand trucks to evaluate the achieved platooning benefits under the HoS regulations in the European Union (EU). The simulation results show that, on average, trucks drive in platoons for 37% of their routes if each truck is allowed to be delayed for 5% of its total travel time. If trucks are not allowed to be delayed, they drive in platoons for 12% of their routes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
313,677
2106.02126
A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
One of the key drivers of complexity in the classical (stochastic) multi-armed bandit (MAB) problem is the difference between mean rewards in the top two arms, also known as the instance gap. The celebrated Upper Confidence Bound (UCB) policy is among the simplest optimism-based MAB algorithms that naturally adapts to this gap: for a horizon of play n, it achieves optimal O(log n) regret in instances with "large" gaps, and a near-optimal O(\sqrt{n log n}) minimax regret when the gap can be arbitrarily "small." This paper provides new results on the arm-sampling behavior of UCB, leading to several important insights. Among these, it is shown that arm-sampling rates under UCB are asymptotically deterministic, regardless of the problem complexity. This discovery facilitates new sharp asymptotics and a novel alternative proof for the O(\sqrt{n log n}) minimax regret of UCB. Furthermore, the paper also provides the first complete process-level characterization of the MAB problem under UCB in the conventional diffusion scaling. Among other things, the "small" gap worst-case lens adopted in this paper also reveals profound distinctions between the behavior of UCB and Thompson Sampling, such as an "incomplete learning" phenomenon characteristic of the latter.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
238,736
2203.05028
Dynamic Instance Domain Adaptation
Most existing studies on unsupervised domain adaptation (UDA) assume that each domain's training samples come with domain labels (e.g., painting, photo). Samples from each domain are assumed to follow the same distribution and the domain labels are exploited to learn domain-invariant features via feature alignment. However, such an assumption often does not hold true -- there often exist numerous finer-grained domains (e.g., dozens of modern painting styles have been developed, each differing dramatically from those of the classic styles). Therefore, forcing feature distribution alignment across each artificially-defined and coarse-grained domain can be ineffective. In this paper, we address both single-source and multi-source UDA from a completely different perspective, which is to view each instance as a fine domain. Feature alignment across domains is thus redundant. Instead, we propose to perform dynamic instance domain adaptation (DIDA). Concretely, a dynamic neural network with adaptive convolutional kernels is developed to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance. This enables a shared classifier to be applied to both source and target domain data without relying on any domain annotation. Further, instead of imposing intricate feature alignment losses, we adopt a simple semi-supervised learning paradigm using only a cross-entropy loss for both labeled source and pseudo labeled target data. Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets including Digits, Office-Home, DomainNet, Digit-Five, and PACS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,679
2309.08888
GCL: Gradient-Guided Contrastive Learning for Medical Image Segmentation with Multi-Perspective Meta Labels
Since annotating medical images for segmentation tasks commonly incurs expensive costs, it is highly desirable to design an annotation-efficient method to alleviate the annotation burden. Recently, contrastive learning has exhibited a great potential in learning robust representations to boost downstream tasks with limited labels. In medical imaging scenarios, ready-made meta labels (i.e., specific attribute information of medical images) inherently reveal semantic relationships among images, which have been used to define positive pairs in previous work. However, the multi-perspective semantics revealed by various meta labels are usually incompatible and can incur intractable "semantic contradiction" when combining different meta labels. In this paper, we tackle the issue of "semantic contradiction" in a gradient-guided manner using our proposed Gradient Mitigator method, which systematically unifies multi-perspective meta labels to enable a pre-trained model to attain a better high-level semantic recognition ability. Moreover, we emphasize that the fine-grained discrimination ability is vital for segmentation-oriented pre-training, and develop a novel method called Gradient Filter to dynamically screen pixel pairs with the most discriminating power based on the magnitude of gradients. Comprehensive experiments on four medical image segmentation datasets verify that our new method GCL: (1) learns informative image representations and considerably boosts segmentation performance with limited labels, and (2) shows promising generalizability on out-of-distribution datasets.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
392,367
2409.06299
Enhancing Long Video Understanding via Hierarchical Event-Based Memory
Recently, integrating visual foundation models into large language models (LLMs) to form video understanding systems has attracted widespread attention. Most of the existing models compress diverse semantic information within the whole video and feed it into LLMs for content comprehension. While this method excels in short video understanding, it may result in a blend of multiple event information in long videos due to coarse compression, which causes information redundancy. Consequently, the semantics of key events might be obscured within the vast information that hinders the model's understanding capabilities. To address this issue, we propose a Hierarchical Event-based Memory-enhanced LLM (HEM-LLM) for better understanding of long videos. Firstly, we design a novel adaptive sequence segmentation scheme to divide multiple events within long videos. In this way, we can perform individual memory modeling for each event to establish intra-event contextual connections, thereby reducing information redundancy. Secondly, while modeling the current event, we compress and inject the information of the previous event to enhance the long-term inter-event dependencies in videos. Finally, we perform extensive experiments on various video understanding tasks and the results show that our model achieves state-of-the-art performance.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
487,074
2110.05376
Evaluating User Perception of Speech Recognition System Quality with Semantic Distance Metric
Measuring automatic speech recognition (ASR) system quality is critical for creating user-satisfying voice-driven applications. Word Error Rate (WER) has been traditionally used to evaluate ASR system quality; however, it sometimes correlates poorly with user perception/judgement of transcription quality. This is because WER weighs every word equally and does not consider semantic correctness, which has a higher impact on user perception. In this work, we propose evaluating the quality of ASR output hypotheses with SemDist, which can measure semantic correctness by using the distance between the semantic vectors of the reference and hypothesis extracted from a pre-trained language model. Our experimental results on 71K and 36K user-annotated ASR outputs show that SemDist achieves higher correlation with user perception than WER. We also show that SemDist has higher correlation with downstream Natural Language Understanding (NLU) tasks than WER.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
260,260
1409.4826
Efficient Uncertainty Quantification for the Periodic Steady State of Forced and Autonomous Circuits
This brief paper proposes an uncertainty quantification method for the periodic steady-state (PSS) analysis with both Gaussian and non-Gaussian variations. Our stochastic testing formulation for the PSS problem provides superior efficiency over both Monte Carlo methods and existing spectral methods. The numerical implementation of a stochastic shooting Newton solver is presented for both forced and autonomous circuits. Simulation results on some analog/RF circuits are reported to show the effectiveness of our proposed algorithms.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
36,114
2304.12654
CoDi: Co-evolving Contrastive Diffusion Models for Mixed-type Tabular Synthesis
With growing attention to tabular data, attempts to apply synthetic tables have expanded toward various tasks and scenarios. Owing to the recent advances in generative modeling, fake data generated by tabular data synthesis models have become sophisticated and realistic. However, there still exists a difficulty in modeling discrete variables (columns) of tabular data. In this work, we propose to process continuous and discrete variables separately (but conditioned on each other) by two diffusion models. The two diffusion models are co-evolved during training by reading conditions from each other. In order to further bind the diffusion models, moreover, we introduce a contrastive learning method with a negative sampling method. In our experiments with 11 real-world tabular datasets and 8 baseline methods, we prove the efficacy of the proposed method, called CoDi.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
360,308
2311.12714
Decrypting Nonlinearity: Koopman Interpretation and Analysis of Cryptosystems
Public-key cryptosystems rely on computationally difficult problems for security, traditionally analyzed using number theory methods. In this paper, we introduce a novel perspective on cryptosystems by viewing the Diffie-Hellman key exchange and the Rivest-Shamir-Adleman cryptosystem as nonlinear dynamical systems. By applying Koopman theory, we transform these dynamical systems into higher-dimensional spaces and analytically derive equivalent purely linear systems. This formulation allows us to reconstruct the secret integers of the cryptosystems through straightforward manipulations, leveraging the tools available for linear systems analysis. Additionally, we establish an upper bound on the minimum lifting dimension required to achieve perfect accuracy. Our results on the required lifting dimension are in line with the intractability of brute-force attacks. To showcase the potential of our approach, we establish connections between our findings and existing results on algorithmic complexity. Furthermore, we extend this methodology to a data-driven context, where the Koopman representation is learned from data samples of the cryptosystems.
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
409,447
1003.2005
Control of Complex Maneuvers for a Quadrotor UAV using Geometric Methods on SE(3)
This paper provides new results for control of complex flight maneuvers for a quadrotor unmanned aerial vehicle (UAV). The flight maneuvers are defined by a concatenation of flight modes or primitives, each of which is achieved by a nonlinear controller that solves an output tracking problem. A mathematical model of the quadrotor UAV rigid body dynamics, defined on the configuration space $\mathrm{SE}(3)$, is introduced as a basis for the analysis. The quadrotor UAV has four input degrees of freedom, namely the magnitudes of the four rotor thrusts; each flight mode is defined by solving an asymptotic optimal tracking problem. Although many flight modes can be studied, we focus on three output tracking problems, namely (1) outputs given by the vehicle attitude, (2) outputs given by the three position variables for the vehicle center of mass, and (3) outputs given by the three velocity variables for the vehicle center of mass. A nonlinear tracking controller is developed on the special Euclidean group $\mathrm{SE}(3)$ for each flight mode, and the closed loop is shown to have desirable closed loop properties that are almost global in each case. Several numerical examples, including one example in which the quadrotor recovers from being initially upside down and another example that includes switching and transitions between different flight modes, illustrate the versatility and generality of the proposed approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
5,893
2409.10918
FSL-HDnn: A 5.7 TOPS/W End-to-end Few-shot Learning Classifier Accelerator with Feature Extraction and Hyperdimensional Computing
This paper introduces FSL-HDnn, an energy-efficient accelerator that implements the end-to-end pipeline of feature extraction, classification, and on-chip few-shot learning (FSL) through gradient-free learning techniques in a 40 nm CMOS process. At its core, FSL-HDnn integrates two low-power modules: a weight-clustering feature extractor and Hyperdimensional Computing (HDC). The feature extractor utilizes advanced weight clustering and pattern reuse strategies for optimized CNN-based feature extraction. Meanwhile, HDC emerges as a novel approach for a lightweight FSL classifier, employing hyperdimensional vectors to improve training accuracy significantly compared to traditional distance-based approaches. This dual-module synergy not only simplifies the learning process by eliminating the need for complex gradients but also dramatically enhances energy efficiency and performance. Specifically, FSL-HDnn achieves an unprecedented energy efficiency of 5.7 TOPS/W for feature extraction and 0.78 TOPS/W for the classification and learning phases, achieving improvements of 2.6X and 6.6X, respectively, over current state-of-the-art CNN and FSL processors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
488,930
2102.06555
Online Graph Dictionary Learning
Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is not amenable in the context of graph learning, as graphs usually belong to different metric spaces. We fill this gap by proposing a new online Graph Dictionary Learning approach, which uses the Gromov-Wasserstein divergence for the data fitting term. In our work, graphs are encoded through their nodes' pairwise relations and modeled as convex combinations of graph atoms, i.e. dictionary elements, estimated thanks to an online stochastic algorithm, which operates on a dataset of unregistered graphs with potentially different numbers of nodes. Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov-Wasserstein in the embedding space. We provide numerical evidence showing the interest of our approach for unsupervised embedding of graph datasets and for online graph subspace estimation and tracking.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,792
2312.14280
Fine-grained Forecasting Models Via Gaussian Process Blurring Effect
Time series forecasting is a challenging task due to the existence of complex and dynamic temporal dependencies. This can lead to incorrect predictions by even the best forecasting models. Using more training data is one way to improve the accuracy, but this source is often limited. In contrast, we build on successful denoising approaches for image generation by advocating for an end-to-end forecasting and denoising paradigm. We propose an end-to-end forecast-blur-denoise forecasting framework by encouraging a division of labor between the forecasting and the denoising models. The initial forecasting model is directed to focus on accurately predicting the coarse-grained behavior, while the denoiser model focuses on capturing the fine-grained behavior that is locally blurred by integrating a Gaussian Process model. All three parts interact for the best end-to-end performance. Our extensive experiments demonstrate that our proposed approach is able to improve the forecasting accuracy of several state-of-the-art forecasting models as well as several other denoising approaches.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
417,574
2203.14581
S2-Net: Self-supervision Guided Feature Representation Learning for Cross-Modality Images
Combining the respective advantages of cross-modality images can compensate for the lack of information in a single modality, which has attracted increasing attention from researchers to multi-modal image matching tasks. Meanwhile, due to the great appearance differences between cross-modality image pairs, it is often difficult to make the feature representations of correspondences as close as possible. In this letter, we design a cross-modality feature representation learning network, S2-Net, which is based on the recently successful detect-and-describe pipeline, originally proposed for visible images but adapted to work with cross-modality image pairs. To solve the consequent problem of optimization difficulties, we introduce self-supervised learning with a well-designed loss function to guide the training without discarding the original advantages. This novel strategy simulates image pairs in the same modality, which also serves as a useful guide for the training of cross-modality images. Notably, it does not require additional data but significantly improves the performance and is even workable for all methods of the detect-and-describe pipeline. Extensive experiments are conducted to evaluate the performance of the strategy we proposed, compared to both handcrafted and deep learning-based methods. Results show that our elegant formulation of combined optimization of supervised and self-supervised learning outperforms the state of the art on the RoadScene and RGB-NIR datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,066
2205.00905
FastGCL: Fast Self-Supervised Learning on Graphs via Contrastive Neighborhood Aggregation
Graph contrastive learning (GCL), as a popular approach to graph self-supervised learning, has recently achieved a non-negligible effect. To achieve superior performance, the majority of existing GCL methods elaborate on graph data augmentation to construct appropriate contrastive pairs. However, existing methods place more emphasis on the complex graph data augmentation which requires extra time overhead, and pay less attention to developing contrastive schemes specific to encoder characteristics. We argue that a better contrastive scheme should be tailored to the characteristics of graph neural networks (e.g., neighborhood aggregation) and propose a simple yet effective method named FastGCL. Specifically, by constructing weighted-aggregated and non-aggregated neighborhood information as positive and negative samples respectively, FastGCL identifies the potential semantic information of data without disturbing the graph topology and node attributes, resulting in faster training and convergence speeds. Extensive experiments have been conducted on node classification and graph classification tasks, showing that FastGCL has competitive classification performance and significant training speedup compared to existing state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
294,419
2008.00745
Community membership consistency applied to corporate board interlock networks
Community detection is a well established method for studying the meso scale structure of social networks. Applying a community detection algorithm results in a division of a network into communities that is often used to inspect and reason about community membership of specific nodes. This micro level interpretation step of community structure is a crucial step in typical social science research. However, the methodological caveat in this step is that virtually all modern community detection methods are non-deterministic and based on randomization and approximated results. This needs to be explicitly taken into consideration when reasoning about community membership of individual nodes. To do so, we propose a metric of community membership consistency, that provides node-level insights in how reliable the placement of that node into a community really is. In addition, it enables us to distinguish the community core members of a community. The usefulness of the proposed metrics is demonstrated on corporate board interlock networks, in which weighted links represent shared senior level directors between firms. Results suggest that the community structure of global business groups is centered around persistent communities consisting of core countries tied by geographical and cultural proximity. In addition, we identify fringe countries that appear to associate with a number of different global business communities.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
190,098
1310.1419
On Association Cells in Random Heterogeneous Networks
Characterizing user to access point (AP) association strategies in heterogeneous cellular networks (HetNets) is critical for their performance analysis, as it directly influences the load across the network. In this letter, we introduce and analyze a class of association strategies, which we term stationary association, and the resulting association cells. For random HetNets, where APs are distributed according to a stationary point process, the area of the resulting association cells are shown to be the marks of the corresponding point process. Addressing the need of quantifying the load experienced by a typical user, a "Feller-paradox" like relationship is established between the area of the association cell containing origin and that of a typical association cell. For the specific case of Poisson point process and max power/SINR association, the mean association area of each tier is derived and shown to increase with channel gain variance and decrease in the path loss exponents of the corresponding tier.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
27,567
2412.13860
Domain-adaptative Continual Learning for Low-resource Tasks: Evaluation on Nepali
Continual learning has emerged as an important research direction due to the infeasibility of retraining large language models (LLMs) from scratch in the event of new data availability. Of great interest is the domain-adaptive pre-training (DAPT) paradigm, which focuses on continually training a pre-trained language model to adapt it to a domain it was not originally trained on. In this work, we evaluate the feasibility of DAPT in a low-resource setting, namely the Nepali language. We use synthetic data to continue training Llama 3 8B to adapt it to the Nepali language in a 4-bit QLoRA setting. We evaluate the adapted model on its performance, forgetting, and knowledge acquisition. We compare the base model and the final model on their Nepali generation abilities, their performance on popular benchmarks, and run case-studies to probe their linguistic knowledge in Nepali. We see some unsurprising forgetting in the final model, but also surprisingly find that increasing the number of shots during evaluation yields better percent increases in the final model (as high as 19.29% increase) compared to the base model (4.98%), suggesting latent retention. We also explore layer-head self-attention heatmaps to establish dependency resolution abilities of the final model in Nepali.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
518,483
2412.03517
NVComposer: Boosting Generative Novel View Synthesis with Multiple Sparse and Unposed Images
Recent advancements in generative models have significantly improved novel view synthesis (NVS) from multi-view data. However, existing methods depend on external multi-view alignment processes, such as explicit pose estimation or pre-reconstruction, which limits their flexibility and accessibility, especially when alignment is unstable due to insufficient overlap or occlusions between views. In this paper, we propose NVComposer, a novel approach that eliminates the need for explicit external alignment. NVComposer enables the generative model to implicitly infer spatial and geometric relationships between multiple conditional views by introducing two key components: 1) an image-pose dual-stream diffusion model that simultaneously generates target novel views and condition camera poses, and 2) a geometry-aware feature alignment module that distills geometric priors from dense stereo models during training. Extensive experiments demonstrate that NVComposer achieves state-of-the-art performance in generative multi-view NVS tasks, removing the reliance on external alignment and thus improving model accessibility. Our approach shows substantial improvements in synthesis quality as the number of unposed input views increases, highlighting its potential for more flexible and accessible generative NVS systems. Our project page is available at https://lg-li.github.io/project/nvcomposer
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,989
2210.01162
Learning Minimally-Violating Continuous Control for Infeasible Linear Temporal Logic Specifications
This paper explores continuous-time control synthesis for target-driven navigation to satisfy complex high-level tasks expressed as linear temporal logic (LTL). We propose a model-free framework using deep reinforcement learning (DRL) where the underlying dynamic system is unknown (an opaque box). Unlike prior work, this paper considers scenarios where the given LTL specification might be infeasible and therefore cannot be accomplished globally. Instead of modifying the given LTL formula, we provide a general DRL-based approach to satisfy it with minimal violation. To do this, we transform a previously multi-objective DRL problem, which requires simultaneous automata satisfaction and minimum violation cost, into a single objective. By guiding the DRL agent with a sampling-based path planning algorithm for the potentially infeasible LTL task, the proposed approach mitigates the myopic tendencies of DRL, which are often an issue when learning general LTL tasks that can have long or infinite horizons. This is achieved by decomposing an infeasible LTL formula into several reach-avoid sub-tasks with shorter horizons, which can be trained in a modular DRL architecture. Furthermore, we overcome the challenge of the exploration process for DRL in complex and cluttered environments by using path planners to design rewards that are dense in the configuration space. The benefits of the presented approach are demonstrated through testing on various complex nonlinear systems and compared with state-of-the-art baselines. The Video demonstration can be found here:https://youtu.be/jBhx6Nv224E.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
true
321,146
2108.12173
CoCo DistillNet: a Cross-layer Correlation Distillation Network for Pathological Gastric Cancer Segmentation
In recent years, deep convolutional neural networks have made significant advances in pathology image segmentation. However, pathology image segmentation encounters a dilemma in which higher-performance networks generally require more computational resources and storage. This phenomenon limits the deployment of high-accuracy networks in real scenes due to the inherently high resolution of pathological images. To tackle this problem, we propose CoCo DistillNet, a novel Cross-layer Correlation (CoCo) knowledge distillation network for pathological gastric cancer segmentation. Knowledge distillation is a general technique that aims to improve the performance of a compact network through knowledge transfer from a cumbersome network. Concretely, our CoCo DistillNet models the correlations of channel-mixed spatial similarity between different layers and then transfers this knowledge from a pre-trained cumbersome teacher network to a non-trained compact student network. In addition, we also utilize an adversarial learning strategy to further promote the distillation procedure, which is called Adversarial Distillation (AD). Furthermore, to stabilize our training procedure, we make use of the unsupervised Paraphraser Module (PM) to boost the knowledge paraphrase in the teacher network. As a result, extensive experiments conducted on the Gastric Cancer Segmentation Dataset demonstrate the prominent ability of CoCo DistillNet, which achieves state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
252,417
2501.13795
Training-Free Zero-Shot Temporal Action Detection with Vision-Language Models
Existing zero-shot temporal action detection (ZSTAD) methods predominantly use fully supervised or unsupervised strategies to recognize unseen activities. However, these training-based methods are prone to domain shifts and require high computational costs, which hinder their practical applicability in real-world scenarios. In this paper, unlike previous works, we propose a training-Free Zero-shot temporal Action Detection (FreeZAD) method, leveraging existing vision-language (ViL) models to directly classify and localize unseen activities within untrimmed videos without any additional fine-tuning or adaptation. We mitigate the need for explicit temporal modeling and reliance on pseudo-label quality by designing the LOGarithmic decay weighted Outer-Inner-Contrastive Score (LogOIC) and frequency-based Actionness Calibration. Furthermore, we introduce a test-time adaptation (TTA) strategy using Prototype-Centric Sampling (PCS) to expand FreeZAD, enabling ViL models to adapt more effectively for ZSTAD. Extensive experiments on the THUMOS14 and ActivityNet-1.3 datasets demonstrate that our training-free method outperforms state-of-the-art unsupervised methods while requiring only 1/13 of the runtime. When equipped with TTA, the enhanced method further narrows the gap with fully supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
526,826
1811.03700
A Comparison of Lattice-free Discriminative Training Criteria for Purely Sequence-Trained Neural Network Acoustic Models
In this work, three lattice-free (LF) discriminative training criteria for purely sequence-trained neural network acoustic models are compared on LVCSR tasks, namely maximum mutual information (MMI), boosted maximum mutual information (bMMI) and state-level minimum Bayes risk (sMBR). We demonstrate that, analogous to LF-MMI, a neural network acoustic model can also be trained from scratch using LF-bMMI or LF-sMBR criteria respectively without the need of cross-entropy pre-training. Furthermore, experimental results on Switchboard-300hrs and Switchboard+Fisher-2100hrs datasets show that models trained with LF-bMMI consistently outperform those trained with plain LF-MMI and achieve a relative word error rate (WER) reduction of 5% over competitive temporal convolution projected LSTM (TDNN-LSTMP) LF-MMI baselines.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
112,902
2201.04426
The Geometry of Navigation Problems
While many works exploiting an existing Lie group structure have been proposed for state estimation, in particular the Invariant Extended Kalman Filter (IEKF), few papers address the construction of a group structure that allows casting a given system into the framework of invariant filtering. In this paper we introduce a large class of systems encompassing most problems involving a navigating vehicle encountered in practice. For those systems we introduce a novel methodology that systematically provides a group structure for the state space, including vectors of the body frame such as biases. We use it to derive observers having properties akin to those of linear observers or filters. The proposed unifying and versatile framework encompasses all systems where IEKF has proved successful, improves state-of-the art "imperfect" IEKF for inertial navigation with sensor biases, and allows addressing novel examples, like GNSS antenna lever arm estimation.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
275,098
2008.01809
Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting Topical Components (TCs) representing evidence from a source text using the intermediate output of attention layers. We evaluate performance using a feature-based AES requiring TCs. Results show that performance is comparable whether using automatically or manually constructed TCs for 1) representing essays as rubric-based features, and 2) grading essays.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
190,438
1404.3638
Approximate MMSE Estimator for Linear Dynamic Systems with Gaussian Mixture Noise
In this work we propose an approximate Minimum Mean-Square Error (MMSE) filter for linear dynamic systems with Gaussian Mixture noise. The proposed estimator tracks each component of the Gaussian Mixture (GM) posterior with an individual filter and minimizes the trace of the covariance matrix of the bank of filters, as opposed to minimizing the MSE of individual filters in the commonly used Gaussian sum filter (GSF). Hence, the spread of means in the proposed method is smaller than that of GSF which makes it more robust to removing components. Consequently, lower complexity reduction schemes can be used with the proposed filter without losing estimation accuracy and precision. This is supported through simulations on synthetic data as well as experimental data related to an indoor localization system. Additionally, we show that in two limit cases the state estimation provided by our proposed method converges to that of GSF, and we provide simulation results supporting this in other cases.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
32,329
1906.03944
Solving Electrical Impedance Tomography with Deep Learning
This paper introduces a new approach for solving electrical impedance tomography (EIT) problems using deep neural networks. The mathematical problem of EIT is to invert the electrical conductivity from the Dirichlet-to-Neumann (DtN) map. Both the forward map from the electrical conductivity to the DtN map and the inverse map are high-dimensional and nonlinear. Motivated by the linear perturbative analysis of the forward map and based on a numerically low-rank property, we propose compact neural network architectures for the forward and inverse maps for both 2D and 3D problems. Numerical results demonstrate the efficiency of the proposed neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
134,544
1907.02189
On the Convergence of FedAvg on Non-IID Data
Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate $\eta$ must decay, even if full-gradient is used; otherwise, the solution will be $\Omega (\eta)$ away from the optimal.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,548
2102.09427
Deep Learning for Suicide and Depression Identification with Unsupervised Label Correction
Early detection of suicidal ideation in depressed individuals can allow for adequate medical attention and support, which in many cases is life-saving. Recent NLP research focuses on classifying, from a given piece of text, if an individual is suicidal or clinically healthy. However, there have been no major attempts to differentiate between depression and suicidal ideation, which is an important clinical challenge. Due to the scarce availability of EHR data, suicide notes, or other similar verified sources, web query data has emerged as a promising alternative. Online sources, such as Reddit, allow for anonymity that prompts honest disclosure of symptoms, making it a plausible source even in a clinical setting. However, these online datasets also result in lower performance, which can be attributed to the inherent noise in web-scraped labels, which necessitates a noise-removal process. Thus, we propose SDCNL, a suicide versus depression classification method through a deep learning approach. We utilize online content from Reddit to train our algorithm, and to verify and correct noisy labels, we propose a novel unsupervised label correction method which, unlike previous work, does not require prior noise distribution information. Our extensive experimentation with multiple deep word embedding models and classifiers displays the strong performance of the method in a new, challenging classification application. We make our code and dataset available at https://github.com/ayaanzhaque/SDCNL
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
220,776
2212.12844
Weakly-Supervised Deep Learning Model for Prostate Cancer Diagnosis and Gleason Grading of Histopathology Images
Prostate cancer is the most common cancer in men worldwide and the second leading cause of cancer death in the United States. One of the prognostic features in prostate cancer is the Gleason grading of histopathology images. The Gleason grade is assigned based on tumor architecture on Hematoxylin and Eosin (H&E) stained whole slide images (WSI) by the pathologists. This process is time-consuming and has known interobserver variability. In the past few years, deep learning algorithms have been used to analyze histopathology images, delivering promising results for grading prostate cancer. However, most of the algorithms rely on fully annotated datasets, which are expensive to generate. In this work, we propose a novel weakly-supervised algorithm to classify prostate cancer grades. The proposed algorithm consists of three steps: (1) extracting discriminative areas in a histopathology image by employing the Multiple Instance Learning (MIL) algorithm based on Transformers, (2) representing the image by constructing a graph using the discriminative patches, and (3) classifying the image into its Gleason grades by developing a Graph Convolutional Neural Network (GCN) based on the gated attention mechanism. We evaluated our algorithm using publicly available datasets, including TCGA-PRAD, PANDA, and Gleason 2019 challenge datasets. We also cross-validated the algorithm on an independent dataset. Results show that the proposed model achieved state-of-the-art performance in the Gleason grading task in terms of accuracy, F1 score, and Cohen's kappa. The code is available at https://github.com/NabaviLab/Prostate-Cancer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
338,152
2011.06844
Cross-Domain Learning for Classifying Propaganda in Online Contents
As news and social media exhibit an increasing amount of manipulative polarized content, detecting such propaganda has received attention as a new task for content analysis. Prior work has focused on supervised learning with training data from the same domain. However, as propaganda can be subtle and keeps evolving, manual identification and proper labeling are very demanding. As a consequence, training data is a major bottleneck. In this paper, we tackle this bottleneck and present an approach to leverage cross-domain learning, based on labeled documents and sentences from news and tweets, as well as political speeches with a clear difference in their degrees of being propagandistic. We devise informative features and build various classifiers for propaganda labeling, using cross-domain learning. Our experiments demonstrate the usefulness of this approach, and identify difficulties and limitations in various configurations of sources and targets for the transfer step. We further analyze the influence of various features, and characterize salient indicators of propaganda.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
206,363
2309.08040
Gradient based Grasp Pose Optimization on a NeRF that Approximates Grasp Success
Current robotic grasping methods often rely on estimating the pose of the target object, explicitly predicting grasp poses, or implicitly estimating grasp success probabilities. In this work, we propose a novel approach that directly maps gripper poses to their corresponding grasp success values, without considering objectness. Specifically, we leverage a Neural Radiance Field (NeRF) architecture to learn a scene representation and use it to train a grasp success estimator that maps each pose in the robot's task space to a grasp success value. We employ this learned estimator to tune its inputs, i.e., grasp poses, by gradient-based optimization to obtain successful grasp poses. Contrary to other NeRF-based methods, which enhance existing grasp pose estimation approaches by relying on NeRF's rendering capabilities or directly estimate grasp poses in a discretized space using NeRF's scene representation capabilities, our approach uniquely sidesteps both the need for rendering and the limitation of discretization. We demonstrate the effectiveness of our approach on four simulated 3-DoF (Degree of Freedom) robotic grasping tasks and show that it can generalize to novel objects. Our best model achieves an average translation error of 3 mm from valid grasp poses. This work opens the door for future research to apply our approach to higher-DoF grasps and real-world scenarios.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
392,010