Dataset columns:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
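Each record below carries one boolean flag per category column, in the schema's column order. As a minimal sketch of collapsing those one-hot flags back into a category list (plain Python; the helper name is illustrative, and the two sample records are abbreviated from the rows below, showing only their True flags):

```python
# Boolean label columns, in the order given by the schema above
LABELS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
          "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
          "cs.MA", "cs.NE", "cs.DB", "Other"]

def active_categories(record):
    """Collapse the one-hot boolean columns into a list of arXiv categories."""
    return [label for label in LABELS if record.get(label)]

# Sample records abbreviated from the dataset rows below
record_a = {"id": "1307.7401", "cs.IT": True}                 # one active label
record_b = {"id": "1711.09221", "cs.SY": True, "Other": True}  # multi-label row

print(active_categories(record_a))   # ['cs.IT']
print(active_categories(record_b))   # ['cs.SY', 'Other']
```

Because the flags are stored positionally, preserving the schema's column order in `LABELS` is what keeps the recovered category lists consistent across rows.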
1307.7401
Properties of nonlinear noise in long, dispersion-uncompensated fiber links
We study the properties of nonlinear interference noise (NLIN) in fiber-optic communications systems with large accumulated dispersion. Our focus is on settling the discrepancy between the results of the Gaussian noise (GN) model (according to which NLIN is additive Gaussian) and a recently published time-domain analysis, which attributes drastically different properties to the NLIN. Upon reviewing the two approaches we identify several unjustified assumptions that are key in the derivation of the GN model, and that are responsible for the discrepancy. We derive the true NLIN power and verify that the NLIN is not additive Gaussian, but rather it depends strongly on the data transmitted in the channel of interest. In addition we validate the time-domain model numerically and demonstrate the strong dependence of the NLIN on the interfering channels' modulation format.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
26,102
1610.00663
Privacy-guaranteed Two-Agent Interactions Using Information-Theoretic Mechanisms
This paper introduces a multi-round interaction problem with privacy constraints between two agents that observe correlated data. The agents alternately share data with one another for a total of K rounds such that each agent initiates sharing over K/2 rounds. The interactions are modeled as a collection of K random mechanisms (mappings), one for each round. The goal is to jointly design the K private mechanisms to determine the set of all achievable distortion-leakage pairs at each agent. Arguing that a mutual information-based leakage metric can be appropriate for streaming data settings, this paper: (i) determines the set of all achievable distortion-leakage tuples; (ii) shows that the K mechanisms allow for precisely composing the total privacy budget over K rounds without loss; and (iii) develops conditions under which interaction reduces the net leakage at both agents and illustrates them for a specific class of sources. The paper then focuses on log-loss distortion to better understand the effect on leakage of using a utility metric commonly used in learning theory. The resulting interaction problem leads to a non-convex sum-leakage-distortion optimization problem that can be viewed as an interactive version of the information bottleneck problem. A new merge-and-search algorithm that extends the classical agglomerative information bottleneck algorithm to the interactive setting is introduced to determine a provably locally optimal solution. Finally, the benefit of interaction under log-loss is illustrated for specific source classes, and the optimality of one-shot mechanisms is proved for Gaussian sources under both mean-square and log-loss distortion constraints.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
61,862
1711.09221
Constraint Coupled Distributed Optimization: a Relaxation and Duality Approach
In this paper we consider a general, challenging distributed optimization set-up arising in several important network control applications. Agents of a network want to minimize the sum of local cost functions, each one depending on a local variable, subject to local and coupling constraints, with the latter involving all the decision variables. We propose a novel fully distributed algorithm based on a relaxation of the primal problem and an elegant exploration of duality theory. Despite its complex derivation, based on several duality steps, the distributed algorithm has a very simple and intuitive structure. That is, each node finds a primal-dual optimal solution pair of a local, relaxed version of the original problem, and then updates suitable auxiliary local variables. We prove that agents asymptotically compute their portion of an optimal (feasible) solution of the original problem. This primal recovery property is obtained without any averaging mechanism typically used in dual decomposition methods. To corroborate the theoretical results, we show how the methodology applies to an instance of a Distributed Model Predictive Control scheme in a microgrid control scenario.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
85,356
1908.03736
Fallback Strategies in Operation Control of Microgrids with Communication Failures
This paper proposes a model predictive control (MPC)-based energy management system to address communication failures in an islanded microgrid (MG). The energy management system relies on a communication network to monitor and control power generation in the units. A communication failure to a unit inhibits the ability of the MPC to change the power of that unit. However, the unit remains electrically connected to the network, and ignoring this aspect could have adverse effects on the operation of the microgrid. Therefore, this paper considers an MPC design that includes the electrical connectivity of the unit during communication failures. This paper also highlights the benefit of including the lower-layer control behaviour of the MG to withstand communication failures. The proposed approaches are validated with a case study.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
141,307
2401.13700
Towards Automated Readable Proofs of Ruler and Compass Constructions
Although there are several systems that successfully generate construction steps for ruler and compass construction problems, none of them provides readable synthetic correctness proofs for the generated constructions. In the present work, we demonstrate how our triangle construction solver ArgoTriCS can cooperate with automated theorem provers for first-order logic and coherent logic so that it generates construction correctness proofs that are both human-readable and formal (they can be checked by interactive theorem provers such as Coq or Isabelle/HOL). These proofs currently rely on many high-level lemmas, and our goal is to have them all formally derived from the basic axioms of geometry.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
423,821
2312.03889
A Masked Pruning Approach for Dimensionality Reduction in Communication-Efficient Federated Learning Systems
Federated Learning (FL) represents a growing machine learning (ML) paradigm designed for training models across numerous nodes that retain local datasets, all without directly exchanging the underlying private data with the parameter server (PS). Its increasing popularity is attributed to notable advantages in training deep neural network (DNN) models while preserving privacy and in efficiently utilizing communication resources. Unfortunately, DNNs suffer from high computational and communication costs, as well as memory consumption in intricate tasks. These factors restrict the applicability of FL algorithms in communication-constrained systems with limited hardware resources. In this paper, we develop a novel algorithm that overcomes these limitations by synergistically combining a pruning-based method with the FL process, resulting in low-dimensional representations of the model with minimal communication cost, dubbed Masked Pruning over FL (MPFL). The algorithm operates by initially distributing weights to the nodes through the PS. Subsequently, each node locally trains its model and computes pruning masks. These low-dimensional masks are then transmitted back to the PS, which generates a consensus pruning mask, broadcast back to the nodes. This iterative process enhances the robustness and stability of the masked pruning model. The generated mask is used to train the FL model, achieving significant bandwidth savings. We present an extensive experimental study demonstrating the superior performance of MPFL compared to existing methods. Additionally, we have developed an open-source software package for the benefit of researchers and developers in related fields.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
413,476
2412.10103
AMuSeD: An Attentive Deep Neural Network for Multimodal Sarcasm Detection Incorporating Bi-modal Data Augmentation
Detecting sarcasm effectively requires a nuanced understanding of context, including vocal tones and facial expressions. The progression towards multimodal computational methods in sarcasm detection, however, faces challenges due to the scarcity of data. To address this, we present AMuSeD (Attentive deep neural network for MUltimodal Sarcasm dEtection incorporating bi-modal Data augmentation). This approach utilizes the Multimodal Sarcasm Detection Dataset (MUStARD) and introduces a two-phase bimodal data augmentation strategy. The first phase involves generating varied text samples through Back Translation from several secondary languages. The second phase involves the refinement of a FastSpeech 2-based speech synthesis system, tailored specifically for sarcasm to retain sarcastic intonations. Alongside a cloud-based Text-to-Speech (TTS) service, this Fine-tuned FastSpeech 2 system produces corresponding audio for the text augmentations. We also investigate various attention mechanisms for effectively merging text and audio data, finding self-attention to be the most efficient for bimodal integration. Our experiments reveal that this combined augmentation and attention approach achieves a significant F1-score of 81.0% in text-audio modalities, surpassing even models that use three modalities from the MUStARD dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,785
2312.13701
Infinite families of $2$-designs from binary projective three-weight codes
Combinatorial designs are closely related to linear codes. In recent years, many $t$-designs have been constructed from certain linear codes. In this paper, we aim to construct $2$-designs from binary three-weight codes. For any binary three-weight code $\mathcal{C}$ of length $n$, let $A_{n}(\mathcal{C})$ be the number of codewords in $\mathcal{C}$ with Hamming weight $n$; we then show that $\mathcal{C}$ holds $2$-designs when $\mathcal{C}$ is projective and $A_{n}(\mathcal{C})=1$. Furthermore, by extending certain binary projective two-weight codes and using the defining set method, we construct two classes of binary projective three-weight codes that are suitable for holding $2$-designs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
417,393
2411.12383
Automatic staff reconstruction within the SIMSSA project
The automatic analysis of scores has been a research topic of interest for the last few decades, and remains so as music databases that include musical scores, among them scores of ancient music, are being created to make musical content available to the public. For the correct analysis of music elements and their interpretation, the identification of staff lines is of key importance. In this paper, a scheme to post-process the output of a previous musical object identification system is described. This system allows the reconstruction, by means of detection, tracking and interpolation, of the staff lines of ancient scores from the digital Salzinnes Database. The scheme developed shows a remarkable performance on the specific task it was created for.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
509,401
2305.18507
Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models
Large language models (LLMs) have scaled up to unlock a wide range of complex reasoning tasks with the aid of various prompting methods. However, current prompting methods generate natural language intermediate steps to help reasoning, which can cause imperfect task reduction and confusion. To mitigate such limitations, we explore code prompting, a neural symbolic prompting method with both zero-shot and few-shot versions, which triggers code as intermediate steps. We conduct experiments on 7 widely used benchmarks involving symbolic reasoning and arithmetic reasoning. Code prompting generally outperforms chain-of-thought (CoT) prompting. To further understand the performance and limitations of code prompting, we perform extensive ablation studies and error analyses, and identify several exclusive advantages of using symbolic prompting compared to natural language. We also consider the ensemble of code prompting and CoT prompting to combine the strengths of both. Finally, we show through experiments how code annotations and their locations affect code prompting.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
369,121
2105.04257
Rational points of lattice ideals on a toric variety and toric codes
We show that the number of rational points of a subgroup inside a toric variety over a finite field defined by a homogeneous lattice ideal can be computed via the Smith normal form of the matrix whose columns constitute a basis of the lattice. This generalizes and yields a concise toric geometric proof of the same fact proven purely algebraically by Lopez and Villarreal for the case of a projective space and a standard homogeneous lattice ideal of dimension one. We also prove a Nullstellensatz-type theorem over a finite field establishing a one-to-one correspondence between subgroups of the dense split torus and certain homogeneous lattice ideals. As an application, we compute the main parameters of generalized toric codes on subgroups of the torus of Hirzebruch surfaces, generalizing the existing literature.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
234,448
1406.2023
Rational Closure in SHIQ
We define a notion of rational closure for the logic SHIQ, which does not enjoy the finite model property, building on the notion of rational closure introduced by Lehmann and Magidor in [23]. We provide a semantic characterization of rational closure in SHIQ in terms of a preferential semantics, based on a finite-rank characterization of minimal models. We show that the rational closure of a TBox can be computed in EXPTIME using entailment in SHIQ.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
33,703
1211.6512
Using Friends as Sensors to Detect Global-Scale Contagious Outbreaks
Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then we randomly choose a "friend" of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and it helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method actually works better than expected due to network structure alone because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
19,982
2404.18058
Joint Reference Frame Synthesis and Post Filter Enhancement for Versatile Video Coding
This paper presents the joint reference frame synthesis (RFS) and post-processing filter enhancement (PFE) for Versatile Video Coding (VVC), aiming to explore the combination of different neural network-based video coding (NNVC) tools to better utilize the hierarchical bi-directional coding structure of VVC. Both RFS and PFE utilize the Space-Time Enhancement Network (STENet), which receives two input frames with artifacts and produces two enhanced frames with suppressed artifacts, along with an intermediate synthesized frame. STENet comprises two pipelines, the synthesis pipeline and the enhancement pipeline, tailored for different purposes. During RFS, two reconstructed frames are sent into STENet's synthesis pipeline to synthesize a virtual reference frame, similar to the current to-be-coded frame. The synthesized frame serves as an additional reference frame inserted into the reference picture list (RPL). During PFE, two reconstructed frames are fed into STENet's enhancement pipeline to alleviate their artifacts and distortions, resulting in enhanced frames with reduced artifacts and distortions. To reduce inference complexity, we propose joint inference of RFS and PFE (JISE), achieved through a single execution of STENet. Integrated into the VVC reference software VTM-15.0, RFS, PFE, and JISE are coordinated within a novel Space-Time Enhancement Window (STEW) under Random Access (RA) configuration. The proposed method could achieve -7.34%/-17.21%/-16.65% PSNR-based BD-rate on average for three components under RA configuration.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
450,111
2205.05052
On learning agent-based models from data
Agent-Based Models (ABMs) are used in several fields to study the evolution of complex systems from micro-level assumptions. However, ABMs typically cannot estimate agent-specific (or "micro") variables: this is a major limitation which prevents ABMs from harnessing micro-level data availability and which greatly limits their predictive power. In this paper, we propose a protocol to learn the latent micro-variables of an ABM from data. The first step of our protocol is to reduce an ABM to a probabilistic model, characterized by a computationally tractable likelihood. This reduction follows two general design principles: balance of stochasticity and data availability, and replacement of unobservable discrete choices with differentiable approximations. Then, our protocol proceeds by maximizing the likelihood of the latent variables via a gradient-based expectation maximization algorithm. We demonstrate our protocol by applying it to an ABM of the housing market, in which agents with different incomes bid higher prices to live in high-income neighborhoods. We demonstrate that the obtained model allows accurate estimates of the latent variables, while preserving the general behavior of the ABM. We also show that our estimates can be used for out-of-sample forecasting. Our protocol can be seen as an alternative to black-box data assimilation methods, one that forces the modeler to lay bare the assumptions of the model, to think about the inferential process, and to spot potential identification problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
295,823
1911.08443
Time-varying constrained proximal type dynamics in multi-agent network games
In this paper, we study multi-agent network games subject to affine time-varying coupling constraints and a time-varying communication network. We focus on the class of games adopting proximal dynamics and study their convergence to a persistent equilibrium. The assumptions considered to solve the problem are discussed and motivated. We develop an iterative equilibrium seeking algorithm, using only local information, that converges to a special class of game equilibria. Its derivation is motivated by several examples, showing that the original game dynamics fail to converge. Finally, we apply the designed algorithm to solve a constrained consensus problem, validating the theoretical results.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
154,193
0911.2974
A Dynamic Near-Optimal Algorithm for Online Linear Programming
A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP) in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs and the goal is to maximize the overall objective function. In this paper, we provide a near-optimal algorithm for this general class of online problems under the assumption of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Due to the feature of dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst-case example showing that the performance of our algorithm is near-optimal.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
4,949
2405.02846
Responsible AI: Portraits with Intelligent Bibliometrics
Shifting the focus from principles to practical implementation, responsible artificial intelligence (AI) has garnered considerable attention across academia, industry, and society at large. Despite being in its nascent stages, this emerging field grapples with nebulous concepts and intricate knowledge frameworks. By analyzing three prevailing concepts - explainable AI, trustworthy AI, and ethical AI, this study defined responsible AI and identified its core principles. Methodologically, this study successfully demonstrated the implementation of leveraging AI's capabilities into bibliometrics for enhanced knowledge discovery and the cross-validation of experimentally examined models with domain insights. Empirically, this study investigated 17,799 research articles contributed by the AI community since 2015. This involves recognizing key technological players and their relationships, unveiling the topical landscape and hierarchy of responsible AI, charting its evolution, and elucidating the interplay between the responsibility principles and primary AI techniques. An analysis of a core cohort comprising 380 articles from multiple disciplines captures the most recent advancements in responsible AI. As one of the pioneering bibliometric studies dedicated to exploring responsible AI, this study will provide comprehensive macro-level insights, enhancing the understanding of responsible AI while furnishing valuable knowledge support for AI regulation and governance initiatives.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
451,952
1505.02250
Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence
We propose a randomized second-order method for optimization known as the Newton Sketch: it is based on performing an approximate Newton step using a randomly projected or sub-sampled Hessian. For self-concordant functions, we prove that the algorithm has super-linear convergence with exponentially high probability, with convergence and complexity guarantees that are independent of condition numbers and related problem-dependent quantities. Given a suitable initialization, similar guarantees also hold for strongly convex and smooth objectives without self-concordance. When implemented using randomized projections based on a sub-sampled Hadamard basis, the algorithm typically has substantially lower complexity than Newton's method. We also describe extensions of our methods to programs involving convex constraints that are equipped with self-concordant barriers. We discuss and illustrate applications to linear programs, quadratic programs with convex constraints, logistic regression and other generalized linear models, as well as semidefinite programs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
42,939
1912.04070
Synthetic Humans for Action Recognition from Unseen Viewpoints
Although synthetic training data has been shown to be beneficial for tasks such as human pose estimation, its use for RGB human action recognition is relatively unexplored. Our goal in this work is to answer the question whether synthetic humans can improve the performance of human action recognition, with a particular focus on generalization to unseen viewpoints. We make use of the recent advances in monocular 3D human body reconstruction from real action sequences to automatically render synthetic training videos for the action labels. We make the following contributions: (i) we investigate the extent of variations and augmentations that are beneficial to improving performance at new viewpoints. We consider changes in body shape and clothing for individuals, as well as more action-relevant augmentations such as non-uniform frame sampling, and interpolating between the motion of individuals performing the same action; (ii) we introduce a new data generation methodology, SURREACT, that allows training of spatio-temporal CNNs for action classification; (iii) we substantially improve the state-of-the-art action recognition performance on the NTU RGB+D and UESTC standard human action multi-view benchmarks; finally, (iv) we extend the augmentation approach to in-the-wild videos from a subset of the Kinetics dataset to investigate the case when only one-shot training data is available, and demonstrate improvements in this case as well.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
156,755
2206.09557
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
Recent advances in self-supervised learning and the Transformer architecture have significantly improved natural language processing (NLP), achieving remarkably low perplexity. However, the growing size of NLP models introduces a memory wall problem during the generation phase. To mitigate this issue, recent efforts have focused on quantizing model weights to sub-4-bit precision while preserving full precision for activations, resulting in practical speed-ups during inference on a single GPU. However, these improvements primarily stem from reduced memory movement, which necessitates a resource-intensive dequantization process rather than actual computational reduction. In this paper, we introduce LUT-GEMM, an efficient kernel for quantized matrix multiplication, which not only eliminates the resource-intensive dequantization process but also reduces computational costs compared to previous kernels for weight-only quantization. Furthermore, we propose group-wise quantization to offer a flexible trade-off between compression ratio and accuracy. The impact of LUT-GEMM comes from implementing high compression ratios through low-bit quantization and efficient LUT-based operations. We show experimentally that when applied to the OPT-175B model with 3-bit quantization, LUT-GEMM substantially accelerates token generation latency, achieving a remarkable 2.1$\times$ improvement on a single GPU when compared to OPTQ, which relies on the costly dequantization process.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
303,615
2011.07537
Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking
Deep reinforcement learning has been one of the fastest growing fields of machine learning over the past years and numerous libraries have been open sourced to support research. However, most codebases have a steep learning curve or limited flexibility that do not satisfy a need for fast prototyping in fundamental research. This paper introduces Tonic, a Python library allowing researchers to quickly implement new ideas and measure their importance by providing: 1) general-purpose configurable modules 2) several baseline agents: A2C, TRPO, PPO, MPO, DDPG, D4PG, TD3 and SAC built with these modules 3) support for TensorFlow 2 and PyTorch 4) support for continuous-control environments from OpenAI Gym, DeepMind Control Suite and PyBullet 5) scripts to experiment in a reproducible way, plot results, and play with trained agents 6) a benchmark of the provided agents on 70 continuous-control tasks. Evaluation is performed in fair conditions with identical seeds, training and testing loops, while sharing general improvements such as non-terminal timeouts and observation normalization. Finally, to demonstrate how Tonic simplifies experimentation, a novel agent called TD4 is implemented and evaluated.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
206,594
cs/0603003
Analyse non standard du bruit
Thanks to the nonstandard formalization of fast oscillating functions, due to P. Cartier and Y. Perrin, an appropriate mathematical framework is derived for new non-asymptotic estimation techniques, which do not necessitate any statistical analysis of the noises corrupting any sensor. Various applications are deduced for multiplicative noises, for the length of the parametric estimation windows, and for burst errors.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
539,301
2412.09341
Training LayoutLM from Scratch for Efficient Named-Entity Recognition in the Insurance Domain
Generic pre-trained neural networks may struggle to produce good results in specialized domains like finance and insurance. This is due to a domain mismatch between training data and downstream tasks, as in-domain data are often scarce due to privacy constraints. In this work, we compare different pre-training strategies for LayoutLM. We show that using domain-relevant documents improves results on a named-entity recognition (NER) problem using a novel dataset of anonymized insurance-related financial documents called Payslips. Moreover, we show that we can achieve competitive results using a smaller and faster model.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,446
2407.06852
TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer's Disease Progression Analysis
Alzheimer's Dementia (AD) represents one of the most pressing challenges in the field of neurodegenerative disorders, with its progression analysis being crucial for understanding disease dynamics and developing targeted interventions. Recent advancements in deep learning and various representation learning strategies, including self-supervised learning (SSL), have shown significant promise in enhancing medical image analysis, providing innovative ways to extract meaningful patterns from complex data. Notably, the computer vision literature has demonstrated that incorporating supervisory signals into SSL can further augment model performance by guiding the learning process with additional relevant information. However, the application of such supervisory signals in the context of disease progression analysis remains largely unexplored. This gap is particularly pronounced given the inherent challenges of incorporating both event and time-to-event information into the learning paradigm. Addressing this, we propose a novel framework, Time and Event-aware SSL (TE-SSL), which integrates time-to-event and event data as supervisory signals to refine the learning process. Our comparative analysis with existing SSL-based methods in the downstream task of survival analysis shows superior performance across standard metrics.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
471,557
2501.10137
Visual Exploration of Stopword Probabilities in Topic Models
Stopword removal is a critical stage in many Machine Learning methods but often receives little consideration; poorly handled stopwords interfere with model visualizations and disrupt user confidence. Inappropriately chosen or hastily omitted stopwords not only lead to suboptimal performance but also significantly affect the quality of models, thus reducing the willingness of practitioners and stakeholders to rely on the output visualizations. This paper proposes a novel extraction method that provides a corpus-specific probabilistic estimation of stopword likelihood and an interactive visualization system to support their analysis. We evaluated our approach and interface using real-world data, a commonly used Machine Learning method (Topic Modelling), and a comprehensive qualitative experiment probing user confidence. The results of our work show that our system increases user confidence in the credibility of topic models by (1) returning reasonable probabilities, (2) generating an appropriate and representative extension of common stopword lists, and (3) providing an adjustable threshold for estimating and analyzing stopwords visually. Finally, we discuss insights, recommendations, and best practices to support practitioners while improving the output of Machine Learning methods and topic model visualizations with robust stopword analysis and removal.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,407
2207.07284
Parameterization of Cross-Token Relations with Relative Positional Encoding for Vision MLP
Vision multi-layer perceptrons (MLPs) have shown promising performance in computer vision tasks, and become the main competitor of CNNs and vision Transformers. They use token-mixing layers to capture cross-token interactions, as opposed to the multi-head self-attention mechanism used by Transformers. However, the heavily parameterized token-mixing layers naturally lack mechanisms to capture local information and multi-granular non-local relations, thus their discriminative power is restrained. To tackle this issue, we propose a new positional spatial gating unit (PoSGU). It exploits the attention formulations used in the classical relative positional encoding (RPE), to efficiently encode the cross-token relations for token mixing. It can successfully reduce the current quadratic parameter complexity $O(N^2)$ of vision MLPs to $O(N)$ and $O(1)$. We experiment with two RPE mechanisms, and further propose a group-wise extension to improve their expressive power with the accomplishment of multi-granular contexts. These then serve as the key building blocks of a new type of vision MLP, referred to as PosMLP. We evaluate the effectiveness of the proposed approach by conducting thorough experiments, demonstrating an improved or comparable performance with reduced parameter complexity. For instance, for a model trained on ImageNet1K, we achieve a performance improvement from 72.14\% to 74.02\% and a learnable parameter reduction from $19.4M$ to $18.2M$. Code can be found at https://github.com/Zhicaiwww/PosMLP.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
308,158
1910.04915
Flexible Multi-task Networks by Learning Parameter Allocation
This paper proposes a novel learning method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, we propose a framework to learn fine-grained patterns of parameter sharing. Assuming that the network is composed of several components across layers, our framework uses learned binary variables to allocate components to tasks in order to encourage more parameter sharing between related tasks, and discourage parameter sharing otherwise. The binary allocation variables are learned jointly with the model parameters by standard back-propagation thanks to the Gumbel-Softmax reparametrization method. When applied to the Omniglot benchmark, the proposed method achieves a 17% relative reduction of the error rate compared to state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,902
2312.04161
Modeling and Numerical Analysis of Kangaroo Lower Body based on Constrained Dynamics of Hybrid Serial-Parallel Floating-Base Systems
This paper presents the modeling and numerical analysis of the Kangaroo lower body prototype, a novel bipedal humanoid robot developed and manufactured by PAL Robotics. Kangaroo features high-power linear electric actuators combined with unique serial-parallel hybrid chains, which allow for the positioning of all the leg actuators near the base of the robot in order to improve the overall mass distribution. To model and analyze such complex nonlinear mechanisms, we employ a constrained formulation that is extended to account for floating-base systems in contact with the environment. A comparison is made to demonstrate the significant improvements achieved with TALOS, another humanoid bipedal robot designed by PAL Robotics, in terms of equivalent Cartesian inertia at the feet and centroidal angular momentum. Finally, the paper includes numerical experiments conducted through simulation and preliminary tests performed on the actual Kangaroo platform.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
413,577
2408.07753
How to Solve Contextual Goal-Oriented Problems with Offline Datasets?
We present a novel method, Contextual goal-Oriented Data Augmentation (CODA), which uses commonly available unlabeled trajectories and context-goal pairs to solve Contextual Goal-Oriented (CGO) problems. By carefully constructing an action-augmented MDP that is equivalent to the original MDP, CODA creates a fully labeled transition dataset under training contexts without additional approximation error. We conduct a novel theoretical analysis to demonstrate CODA's capability to solve CGO problems in the offline data setup. Empirical results also showcase the effectiveness of CODA, which outperforms other baseline methods across various context-goal relationships of the CGO problem. This approach offers a promising direction to solving CGO problems using offline datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
480,708
2007.05470
Predicting Illegal Fishing on the Patagonia Shelf from Oceanographic Seascapes
Many of the world's most important fisheries are experiencing increases in illegal fishing, undermining efforts to sustainably conserve and manage fish stocks. A major challenge to ending illegal, unreported, and unregulated (IUU) fishing is improving our ability to identify whether a vessel is fishing illegally and where illegal fishing is likely to occur in the ocean. However, monitoring the oceans is costly, time-consuming, and logistically challenging for maritime authorities to patrol. To address this problem, we use vessel tracking data and machine learning to predict illegal fishing on the Patagonian Shelf, one of the world's most productive regions for fisheries. Specifically, we focus on Chinese fishing vessels, which have consistently fished illegally in this region. We combine vessel location data with oceanographic seascapes -- classes of oceanic areas based on oceanographic variables -- as well as other remotely sensed oceanographic variables to train a series of machine learning models of varying levels of complexity. These models are able to predict whether a Chinese vessel is operating illegally with 69-96% confidence, depending on the year and predictor variables used. These results offer a promising step towards preempting illegal activities, rather than reacting to them forensically.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,689
2203.04118
An Efficient Polyp Segmentation Network
Cancer is a disease that occurs as a result of the uncontrolled division and proliferation of cells. Colon cancer is one of the most common types of cancer in the world. Polyps that can be seen in the large intestine can cause cancer if not removed with early intervention. Deep learning and image segmentation techniques are used to minimize the number of polyps that go unnoticed by the experts during these interventions. Although these techniques perform well in terms of accuracy, they require too many parameters. We propose a new model to address this problem. Our proposed model requires fewer parameters as well as outperforms the state-of-the-art models. We use EfficientNetB0 for the encoder part, as it performs well in various tasks while requiring fewer parameters. We use a partial decoder, which is used to reduce the number of parameters while achieving high accuracy in segmentation. Since polyps have variable appearances and sizes, we use an asymmetric convolution block instead of a classic convolution block. Then, we weight each feature map using a squeeze and excitation block to improve our segmentation results. We used different splits of the Kvasir and CVC-ClinicDB datasets for training, validation, and testing, while we use the CVC-ColonDB, ETIS, and Endoscene datasets for testing. Our model outperforms state-of-the-art models with a Dice metric of 71.8% on the ColonDB test dataset, 89.3% on the EndoScene test dataset, and 74.8% on the ETIS test dataset while requiring fewer parameters. Our model requires 2,626,337 parameters in total, while the closest model in the state-of-the-art is U-Net++ with 9,042,177 parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,356
1908.09507
Partially-supervised Mention Detection
Learning to detect entity mentions without using syntactic information can be useful for integration and joint optimization with other tasks. However, it is common to have partially annotated data for this problem. Here, we investigate two approaches to deal with partial annotation of mentions: weighted loss and soft-target classification. We also propose two neural mention detection approaches: a sequence tagging, and an exhaustive search. We evaluate our methods with coreference resolution as a downstream task, using multitask learning. The results show that the recall and F1 score improve for all methods.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
142,869
2303.04027
BIRD-PCC: Bi-directional Range Image-based Deep LiDAR Point Cloud Compression
The large amount of data collected by LiDAR sensors brings the issue of LiDAR point cloud compression (PCC). Previous works on LiDAR PCC have used range image representations and followed the predictive coding paradigm to create a basic prototype of a coding framework. However, their prediction methods give inaccurate results because they neglect invalid pixels in range images and omit future frames in the time step. Moreover, their handcrafted design of residual coding methods could not fully exploit spatial redundancy. To remedy this, we propose a coding framework, BIRD-PCC. Our prediction module is aware of the coordinates of invalid pixels in range images and takes a bidirectional scheme. Also, we introduce a deep-learned residual coding module that can further exploit spatial redundancy within a residual frame. Experiments conducted on the SemanticKITTI and KITTI-360 datasets show that BIRD-PCC outperforms other methods in most bitrate conditions and generalizes well to unseen environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
349,938
1906.00131
Decision-Making in Reinforcement Learning
In this research work, probabilistic decision-making approaches, e.g. Bayesian and Boltzmann strategies, are studied along with various deterministic exploration strategies, e.g. greedy, epsilon-greedy and random approaches. A comparative study is carried out between probabilistic and deterministic decision-making approaches; the experiments are performed in the OpenAI Gym environment, solving the Cart Pole problem. This work discusses the Bayesian approach to decision-making in deep reinforcement learning, and how dropout can reduce the computational cost. All the exploration approaches are compared. It also discusses the importance of exploration in deep reinforcement learning, and how improving exploration strategies may help in science and technology. This work shows how probabilistic decision-making approaches are better in the long run as compared to the deterministic approaches. When there is uncertainty, the Bayesian dropout approach proved to be better than all other approaches studied in this work.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
133,275
1806.03576
Instance Search via Instance Level Segmentation and Feature Representation
Instance search is an interesting task as well as a challenging issue due to the lack of effective feature representation. In this paper, an instance-level feature representation built upon fully convolutional instance-aware segmentation is proposed. The feature is ROI-pooled from the segmented instance region, so that instances of various sizes and layouts are represented by deep features of uniform length. This representation is further enhanced by the use of deformable ResNeXt blocks. Superior performance is observed in terms of its distinctiveness and scalability on a challenging evaluation dataset built by ourselves. In addition, the proposed enhancement on the network structure also shows superior performance on the instance segmentation task.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
100,032
2410.20232
SAFE setup for generative molecular design
SMILES-based molecular generative models have been pivotal in drug design but face challenges in fragment-constrained tasks. To address this, the Sequential Attachment-based Fragment Embedding (SAFE) representation was recently introduced as an alternative that streamlines those tasks. In this study, we investigate the optimal setups for training SAFE generative models, focusing on dataset size, data augmentation through randomization, model architecture, and bond disconnection algorithms. We found that larger, more diverse datasets improve performance, with the LLaMA architecture using Rotary Positional Embedding proving most robust. SAFE-based models also consistently outperform SMILES-based approaches in scaffold decoration and linker design, particularly with BRICS decomposition yielding the best results. These insights highlight key factors that significantly impact the efficacy of SAFE-based generative models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
502,712
1710.07365
An extension of the Moran process using type-specific connection graphs
The Moran process, as studied by [Lieberman, E., Hauert, C. and Nowak, M. Evolutionary dynamics on graphs. Nature 433, pp. 312-316 (2005)], is a stochastic process modeling the spread of genetic mutations in populations. In this process, agents of a two-type population (i.e. mutants and residents) are associated with the vertices of a graph. Initially, only one vertex chosen uniformly at random is a mutant, with fitness $r > 0$, while all other individuals are residents, with fitness $1$. In every step, an individual is chosen with probability proportional to its fitness, and its state (mutant or resident) is passed on to a neighbor which is chosen uniformly at random. In this paper, we introduce and study a generalization of the model of Lieberman et al. by assuming that different types of individuals perceive the population through different graphs, namely $G_R(V,E_R)$ for residents and $G_M(V,E_M)$ for mutants. In this model, we study the fixation probability, i.e. the probability that eventually only mutants remain in the population, for various pairs of graphs. First, we transfer known results from the original single-graph model of Lieberman et al. to our 2-graph model. Among them, we provide a generalization of the Isothermal Theorem of Lieberman et al., that gives sufficient conditions for a pair of graphs to have the same fixation probability as a pair of cliques. Next, we give a 2-player strategic game view of the process where player payoffs correspond to fixation and/or extinction probabilities. In this setting, we attempt to identify best responses for each player and give evidence that the clique is the most beneficial graph for both players. Finally, we examine the possibility of efficient approximation of the fixation probability and provide a FPRAS for the special case where the mutant graph is complete.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
82,921
2210.11087
Weighted Maximum Likelihood for Controller Tuning
Recently, Model Predictive Contouring Control (MPCC) has arisen as the state-of-the-art approach for model-based agile flight. MPCC benefits from great flexibility in trading-off between progress maximization and path following at runtime without relying on globally optimized trajectories. However, finding the optimal set of tuning parameters for MPCC is challenging because (i) the full quadrotor dynamics are non-linear, (ii) the cost function is highly non-convex, and (iii) of the high dimensionality of the hyperparameter space. This paper leverages a probabilistic Policy Search method -- Weighted Maximum Likelihood (WML) -- to automatically learn the optimal objective for MPCC. WML is sample-efficient due to its closed-form solution for updating the learning parameters. Additionally, the data efficiency provided by the use of a model-based approach allows us to directly train in a high-fidelity simulator, which in turn makes our approach able to transfer zero-shot to the real world. We validate our approach in the real world, where we show that our method outperforms both the previous manually tuned controller and the state-of-the-art auto-tuning baseline, reaching speeds of 75 km/h.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
325,178
2307.02744
Active Learning with Contrastive Pre-training for Facial Expression Recognition
Deep learning has played a significant role in the success of facial expression recognition (FER), thanks to large models and vast amounts of labelled data. However, obtaining labelled data requires a tremendous amount of human effort, time, and financial resources. Even though some prior works have focused on reducing the need for large amounts of labelled data using different unsupervised methods, another promising approach called active learning is barely explored in the context of FER. This approach involves selecting and labelling the most representative samples from an unlabelled set to make the best use of a limited 'labelling budget'. In this paper, we implement and study 8 recent active learning methods on three public FER datasets, FER13, RAF-DB, and KDEF. Our findings show that existing active learning methods do not perform well in the context of FER, likely suffering from a phenomenon called 'Cold Start', which occurs when the initial set of labelled samples is not well representative of the entire dataset. To address this issue, we propose contrastive self-supervised pre-training, which first learns the underlying representations based on the entire unlabelled dataset. We then follow this with the active learning methods and observe that our 2-step approach shows up to 9.2% improvement over random sampling and up to 6.7% improvement over the best existing active learning baseline without the pre-training. We will make the code for this study public upon publication at: github.com/ShuvenduRoy/ActiveFER.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
377,796
1603.00652
Physics-Based Damage-Aware Manipulation Strategy Planning Using Scene Dynamics Anticipation
We present a damage-aware planning approach which determines the best sequence to manipulate a number of objects in a scene. This works on task-planning level, abstracts from motion planning and anticipates the dynamics of the scene using a physics simulation. Instead of avoiding interaction with the environment, we take unintended motion of other objects into account and plan manipulation sequences which minimize the potential damage. Our method can also be used as a validation measure to judge planned motions for their feasibility in terms of damage avoidance. We evaluate our approach on one industrial scenario (autonomous container unloading) and one retail scenario (shelf replenishment).
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
52,802
1803.09535
Connectionist Recommendation in the Wild: On the utility and scrutability of neural networks for personalized course guidance
The aggregate behaviors of users can collectively encode deep semantic information about the objects with which they interact. In this paper, we demonstrate novel ways in which the synthesis of these data can illuminate the terrain of users' environment and support them in their decision making and wayfinding. A novel application of Recurrent Neural Networks and skip-gram models, approaches popularized by their application to modeling language, are brought to bear on student university enrollment sequences to create vector representations of courses and map out traversals across them. We present demonstrations of how scrutability from these neural networks can be gained and how the combination of these techniques can be seen as an evolution of content tagging and a means for a recommender to balance user preferences inferred from data with those explicitly specified. From validation of the models to the development of a UI, we discuss additional requisite functionality informed by the results of a usability study leading to the ultimate deployment of the system at a university.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
93,517
2502.13373
Fighter Jet Navigation and Combat using Deep Reinforcement Learning with Explainable AI
This paper presents the development of an Artificial Intelligence (AI) based fighter jet agent within a customized Pygame simulation environment, designed to solve multi-objective tasks via deep reinforcement learning (DRL). The jet's primary objectives include efficiently navigating the environment, reaching a target, and selectively engaging or evading an enemy. A reward function balances these goals while optimized hyperparameters enhance learning efficiency. Results show more than 80\% task completion rate, demonstrating effective decision-making. To enhance transparency, the jet's action choices are analyzed by comparing the rewards of the actual chosen action (factual action) with those of alternate actions (counterfactual actions), providing insights into the decision-making rationale. This study illustrates DRL's potential for multi-objective problem-solving with explainable AI. Project page is available at: \href{https://github.com/swatikar95/Autonomous-Fighter-Jet-Navigation-and-Combat}{Project GitHub Link}.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
535,336
2410.17268
SPikE-SSM: A Sparse, Precise, and Efficient Spiking State Space Model for Long Sequences Learning
Spiking neural networks (SNNs) provide an energy-efficient solution by utilizing the spike-based and sparse nature of biological systems. Since the advent of Transformers, SNNs have struggled to compete with artificial networks on long sequential tasks, until the recent emergence of state space models (SSMs), which offer superior computational efficiency and modeling capability. However, applying the highly capable SSMs to SNNs for long sequences learning poses three major challenges: (1) The membrane potential is determined by the past spiking history of the neuron, leading to reduced efficiency for sequence modeling in parallel computing scenarios. (2) Complex dynamics of biological spiking neurons are crucial for functionality but challenging to simulate and exploit effectively in large networks. (3) It is arduous to maintain high sparsity while achieving high accuracy for spiking neurons without resorting to dense computing, as utilized in artificial neuron-based SSMs. To address them, we propose a sparse, precise and efficient spiking SSM framework, termed SPikE-SSM. For (1), we propose a boundary compression strategy (PMBC) to accelerate the inference of the spiking neuron model, enabling parallel processing for long sequence learning. For (2), we propose a novel and concise neuron model incorporating reset-refractory mechanism to leverage the inherent temporal dimension for dynamic computing with biological interpretability. For (3), we hierarchically integrate the proposed neuron model to the original SSM block, and enhance the dynamics of SPikE-SSM by incorporating trainable thresholds and refractory magnitudes to balance accuracy and sparsity. Extensive experiments verify the effectiveness and robustness of SPikE-SSM on the long range arena benchmarks and large language dataset WikiText-103, showing the potential of dynamic spiking neurons in efficient long sequence learning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
501,395
2401.08921
Electromagnetic Information Theory: Fundamentals and Applications for 6G Wireless Communication Systems
In wireless communications, electromagnetic theory and information theory constitute a pair of fundamental theories, bridged by antenna theory and wireless propagation channel modeling theory. Up to the fifth generation (5G) wireless communication networks, these four theories have been developing relatively independently. However, in sixth generation (6G) space-air-ground-sea wireless communication networks, seamless coverage is expected in the three-dimensional (3D) space, potentially necessitating the acquisition of channel state information (CSI) and channel capacity calculation anywhere and at any time. Additionally, key 6G technologies such as ultra-massive multiple-input multiple-output (MIMO) and holographic MIMO achieve intricate interaction between the antennas and wireless propagation environments, which necessitates the joint modeling of antennas and wireless propagation channels. To address the challenges in 6G, the integration of the above four theories becomes inevitable, leading to the concept of the so-called electromagnetic information theory (EIT). In this article, a suite of 6G key technologies is highlighted. Then, the concepts and relationships of the four theories are unveiled. Finally, the necessity and benefits of integrating them into the EIT are revealed.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
422,076
2301.02843
On vectorial functions with maximal number of bent components
We study vectorial functions with maximal number of bent components in this paper. We first study the Walsh transform and nonlinearity of $F(x)=x^{2^e}h(\Tr_{2^{2m}/2^m}(x))$, where $e\geq0$ and $h(x)$ is a permutation over $\F_{2^m}$. If $h(x)$ is monomial, the nonlinearity of $F(x)$ is shown to be at most $ 2^{2m-1}-2^{\lfloor\frac{3m}{2}\rfloor}$ and some non-plateaued and plateaued functions attaining the upper bound are found. This gives a partial answer to the open problems proposed by Pott et al. and Anbar et al. If $h(x)$ is linear, the exact nonlinearity of $F(x)$ is determined. Secondly, we give a construction of vectorial functions with maximal number of bent components from known ones, thus obtain two new classes from the Niho class and the Maiorana-McFarland class. Our construction gives a partial answer to an open problem proposed by Pott et al., and also contains vectorial functions outside the complete Maiorana-McFarland class. Finally, we show that the vectorial function $F: \F_{2^{2m}}\rightarrow \F_{2^{2m}}$, $x\mapsto x^{2^m+1}+x^{2^i+1}$ has maximal number of bent components if and only if $i=0$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
339,614
2411.07118
ConvMixFormer- A Resource-efficient Convolution Mixer for Transformer-based Dynamic Hand Gesture Recognition
Transformer models have demonstrated remarkable success in many domains such as natural language processing (NLP) and computer vision. With the growing interest in transformer-based architectures, they are now utilized for gesture recognition. Accordingly, we explore and devise a novel ConvMixFormer architecture for dynamic hand gestures. Transformers scale quadratically in their attention computation with sequence length, which makes these models computationally complex and heavy. We have considered this drawback of the transformer and designed a resource-efficient model that replaces the self-attention in the transformer with a simple convolutional layer-based token mixer. The computational cost and the parameters used for the convolution-based mixer are considerably lower than those of quadratic self-attention. The convolution mixer helps the model capture the local spatial features that self-attention struggles to capture due to its sequential processing nature. Further, an efficient gate mechanism is employed instead of a conventional feed-forward network in the transformer to help the model control the flow of features within different stages of the proposed model. This design uses fewer learnable parameters, nearly half those of the vanilla transformer, which helps in fast and efficient training. The proposed method is evaluated on the NVidia Dynamic Hand Gesture and Briareo datasets, and our model has achieved state-of-the-art results on single and multimodal inputs. We have also shown the parameter efficiency of the proposed ConvMixFormer model compared to other methods. The source code is available at https://github.com/mallikagarg/ConvMixFormer.
true
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
507,398
2012.13853
ANL: Anti-Noise Learning for Cross-Domain Person Re-Identification
Due to the lack of labels and the domain diversities, it is a challenge to study person re-identification in the cross-domain setting. An admirable method is to optimize the target model by assigning pseudo-labels for unlabeled samples through clustering. Usually, owing to the domain gaps, the pre-trained source domain model cannot extract appropriate target domain features, which dramatically affects the clustering performance and the accuracy of pseudo-labels. Extensive label noise will doubtless lead to sub-optimal solutions. To solve these problems, we propose an Anti-Noise Learning (ANL) approach, which contains two modules. The Feature Distribution Alignment (FDA) module is designed to gather the id-related samples and disperse id-unrelated samples, through camera-wise contrastive learning and adversarial adaptation, creating a friendly feature foundation for clustering and thus reducing clustering noise. Besides, the Reliable Sample Selection (RSS) module utilizes an Auxiliary Model to correct noisy labels and select reliable samples for the Main Model. In order to effectively utilize the outlier information generated by the clustering algorithm and the RSS module, we train these samples at the instance level. The experiments demonstrate that our proposed ANL framework can effectively reduce the domain conflicts and alleviate the influence of noisy samples, as well as achieve superior performance compared with the state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
213,342
1807.08074
ScoutBot: A Dialogue System for Collaborative Navigation
ScoutBot is a dialogue interface to physical and simulated robots that supports collaborative exploration of environments. The demonstration will allow users to issue unconstrained spoken language commands to ScoutBot. ScoutBot will prompt for clarification if the user's instruction needs additional input. It is trained on human-robot dialogue collected from Wizard-of-Oz experiments, where robot responses were initiated by a human wizard in previous interactions. The demonstration will show a simulated ground robot (Clearpath Jackal) in a simulated environment supported by ROS (Robot Operating System).
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
103,447
cs/0508029
Selfish vs. Unselfish Optimization of Network Creation
We investigate several variants of a network creation model: a group of agents builds up a network between them while trying to keep the costs of this network small. The cost function consists of two addends, namely (i) a constant amount for each edge an agent buys and (ii) the minimum number of hops it takes sending messages to other agents. Despite the simplicity of this model, various complex network structures emerge depending on the weight between the two addends of the cost function and on the selfish or unselfish behaviour of the agents.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
538,865
2203.02130
Mapping evolving population geography in China
China's demographic changes have important global economic and geopolitical implications. Yet, our understanding of such transitions at the micro-spatial scale remains limited due to spatial inconsistency of the census data caused by administrative boundary adjustments. To fill this gap, we manually collected and built a population census panel from 2010 to 2020 at both the county and prefectural-city levels. We show that the massive internal migration drives China's increasing population concentration and regional disparity, resulting in severe population aging in shrinking cities and increasing gender imbalance in growing cities.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
283,651
2305.16536
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
Contrastive learning (CL) has emerged as a powerful technique for representation learning, with or without label supervision. However, supervised CL is prone to collapsing representations of subclasses within a class by not capturing all their features, and unsupervised CL may suppress harder class-relevant features by focusing on learning easy class-irrelevant features; both significantly compromise representation quality. Yet, there is no theoretical understanding of \textit{class collapse} or \textit{feature suppression} at \textit{test} time. We provide the first unified theoretically rigorous framework to determine \textit{which} features are learnt by CL. Our analysis indicates that, perhaps surprisingly, the bias of (stochastic) gradient descent towards finding simpler solutions is a key factor in collapsing subclass representations and suppressing harder class-relevant features. Moreover, we present increasing embedding dimensionality and improving the quality of data augmentations as two theoretically motivated solutions to {feature suppression}. We also provide the first theoretical explanation for why employing supervised and unsupervised CL together yields higher-quality representations, even when using commonly-used stochastic gradient methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
368,133
1605.08074
Temporal Clustering in Dynamic Networks with Tensor Decomposition
Dynamic networks are increasingly being used to model real-world datasets. A challenging task in their analysis is to detect and characterize clusters, which is useful for analyzing real-world data, for example by detecting evolving communities in networks. We propose a temporal clustering framework based on a set of network generative models to address this problem. We use PARAFAC decomposition to learn network models from datasets. We then use $K$-means for clustering, the Silhouette criterion to determine the number of clusters, and a similarity score to order the clusters and retain the significant ones. In order to address the time-dependent aspect of these clusters, we propose a segmentation algorithm to detect their formations, dissolutions, and lifetimes. Synthetic networks with ground truth and real-world datasets are used to test our method against the state of the art, and the results show that our method performs better in clustering and lifetime detection than previous methods.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
56,381
1902.02067
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
This paper demonstrates that Non-Maximum Suppression (NMS), which is commonly used in Object Detection (OD) tasks to filter redundant detection results, is no longer secure. Considering that NMS has been an integral part of OD systems, thwarting the functionality of NMS can result in unexpected or even lethal consequences for such systems. In this paper, an adversarial example attack which triggers malfunctioning of NMS in end-to-end OD models is proposed. The attack, namely \texttt{Daedalus}, compresses the dimensions of detection boxes to evade NMS. As a result, the final detection output contains extremely dense false positives. This can be fatal for many OD applications such as autonomous vehicles and surveillance systems. The attack can be generalised to different end-to-end OD models, such that the attack cripples various OD applications. Furthermore, a way to craft robust adversarial examples is developed by using an ensemble of popular detection models as the substitutes. Considering the pervasive nature of model reusing in real-world OD scenarios, Daedalus examples crafted based on an \textit{ensemble of substitutes} can launch attacks without knowing the parameters of the victim models. Experimental results demonstrate that the attack effectively stops NMS from filtering redundant bounding boxes. As the evaluation results suggest, Daedalus increases the false positive rate in detection results to $99.9\%$ and reduces the mean average precision scores to $0$, while maintaining a low cost of distortion on the original inputs. It is also demonstrated that the attack can be practically launched against real-world OD systems via printed posters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
120,798
2312.16465
Multi-Contact Whole-Body Force Control for Position-Controlled Robots
Many humanoid and multi-legged robots are controlled in positions rather than in torques, which prevents direct control of contact forces, and hampers their ability to create multiple contacts to enhance their balance, such as placing a hand on a wall or a handrail. This letter introduces the SEIKO (Sequential Equilibrium Inverse Kinematic Optimization) pipeline, and proposes a unified formulation that exploits an explicit model of flexibility to indirectly control contact forces on traditional position-controlled robots. SEIKO formulates whole-body retargeting from Cartesian commands and admittance control using two quadratic programs solved in real-time. Our pipeline is validated with experiments on the real, full-scale humanoid robot Talos in various multi-contact scenarios, including pushing tasks, far-reaching tasks, stair climbing, and stepping on sloped surfaces. Code and videos are available at: https://hucebot.github.io/seiko_controller_website/
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
418,391
1906.06792
Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction
We propose 4 insights that help to significantly improve the performance of deep learning models that predict surface normals and semantic labels from a single RGB image. These insights are: (1) denoise the "ground truth" surface normals in the training set to ensure consistency with the semantic labels; (2) concurrently train on a mix of real and synthetic data, instead of pretraining on synthetic and finetuning on real; (3) jointly predict normals and semantics using a shared model, but only backpropagate errors on pixels that have valid training labels; (4) slim down the model and use grayscale instead of color inputs. Despite the simplicity of these steps, we demonstrate consistently improved results on several datasets, using a model that runs at 12 fps on a standard mobile phone.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
135,415
1511.05266
Semi-supervised Collaborative Ranking with Push at Top
Existing collaborative ranking based recommender systems tend to perform best when there are enough observed ratings for each user and the observations are made completely at random. Under this setting recommender systems can properly suggest a list of recommendations according to the user interests. However, when the observed ratings are extremely sparse (e.g. in the case of cold-start users where no rating data is available), and are not sampled uniformly at random, existing ranking methods fail to effectively leverage side information to transfer the knowledge from existing ratings to unobserved ones. We propose a semi-supervised collaborative ranking model, dubbed \texttt{S$^2$COR}, to improve the quality of cold-start recommendation. \texttt{S$^2$COR} mitigates the sparsity issue by leveraging side information about both observed and missing ratings while collaboratively learning the ranking model. This enables it to deal with the case of data missing not at random and to effectively incorporate the available side information in transduction. We experimentally evaluated our proposed algorithm on a number of challenging real-world datasets and compared it against state-of-the-art models for cold-start recommendation. We report significantly higher quality recommendations with our algorithm compared to the state of the art.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
49,024
2107.09843
TumorCP: A Simple but Effective Object-Level Data Augmentation for Tumor Segmentation
Deep learning models are notoriously data-hungry. Thus, there is an urgent need for data-efficient techniques in medical image analysis, where well-annotated data are costly and time-consuming to collect. Motivated by the recently revived "Copy-Paste" augmentation, we propose TumorCP, a simple but effective object-level data augmentation method tailored for tumor segmentation. TumorCP is online and stochastic, providing unlimited augmentation possibilities for tumors' subjects, locations, appearances, as well as morphologies. Experiments on the kidney tumor segmentation task demonstrate that TumorCP surpasses the strong baseline by a remarkable margin of 7.12% on tumor Dice. Moreover, together with image-level data augmentation, it beats the current state-of-the-art by 2.32% on tumor Dice. Comprehensive ablation studies are performed to validate the effectiveness of TumorCP. Meanwhile, we show that TumorCP can lead to striking improvements in extremely low-data regimes. Evaluated with only 10% labeled data, TumorCP significantly boosts tumor Dice by 21.87%. To the best of our knowledge, this is the very first work exploring and extending the "Copy-Paste" design in the medical imaging domain. Code is available at: https://github.com/YaoZhang93/TumorCP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
247,141
1904.02422
Resource Efficient 3D Convolutional Neural Networks
Recently, convolutional neural networks with 3D kernels (3D CNNs) have become very popular in the computer vision community as a result of their superior ability to extract spatio-temporal features within video frames compared to 2D CNNs. Although there have been great advances recently in building resource-efficient 2D CNN architectures considering memory and power budget, there are hardly any similar resource-efficient architectures for 3D CNNs. In this paper, we have converted various well-known resource-efficient 2D CNNs to 3D CNNs and evaluated their performance on three major benchmarks in terms of classification accuracy for different complexity levels. We have experimented on (1) the Kinetics-600 dataset to inspect their capacity to learn, (2) the Jester dataset to inspect their ability to capture motion patterns, and (3) UCF-101 to inspect the applicability of transfer learning. We have evaluated the run-time performance of each model on a single Titan XP GPU and a Jetson TX2 embedded system. The results of this study show that these models can be utilized for different types of real-world applications since they provide real-time performance with considerable accuracy and reasonable memory usage. Our analysis of different complexity levels shows that resource-efficient 3D CNNs should not be designed too shallow or narrow in order to save complexity. The codes and pretrained models used in this work are publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,428
2212.05958
Design, Application and Evaluation of a Multi Agent System in the Logistics Domain
The increasing demand for flexibility of automated production systems also affects the automated material flow systems (aMFS) they contain and demands reconfigurable systems. However, the centralized control concept usually applied in aMFS hinders an easy adaptation, as the entire control software has to be re-tested, when manually changing sub-parts of the control. As adaption and subsequent testing are a time-consuming task, concepts for splitting the control from one centralized to multiple, decentralized control nodes are required. Therefore, this paper presents a holistic agent-based control concept for aMFS, whereby the system is divided into so-called automated material flow modules (aMFM), each being controlled by a dedicated module agent. The concept allows the reconfiguration of aMFS, consisting of heterogeneous, stationary aMFM, during runtime. Furthermore, it includes aspects such as uniform agent knowledge bases through metamodel-based development, a communication ontology considering different information types and properties, strategic route optimization in decentralized control architecture and a visualization concept to make decisions of the module agents comprehensible to operators and maintenance staff. The evaluation of the concept is performed by means of material flow simulations as well as a prototypical implementation on a lab-sized demonstrator.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
335,952
2111.03651
The Curious Layperson: Fine-Grained Image Recognition without Expert Labels
Most of us are not experts in specific fields, such as ornithology. Nonetheless, we do have general image and language understanding capabilities that we use to match what we see to expert resources. This allows us to expand our knowledge and perform novel tasks without ad-hoc external supervision. On the contrary, machines have a much harder time consulting expert-curated knowledge bases unless trained specifically with that knowledge in mind. Thus, in this paper we consider a new problem: fine-grained image recognition without expert annotations, which we address by leveraging the vast knowledge available in web encyclopedias. First, we learn a model to describe the visual appearance of objects using non-expert image descriptions. We then train a fine-grained textual similarity model that matches image descriptions with documents on a sentence-level basis. We evaluate the method on two datasets and compare with several strong baselines and the state of the art in cross-modal retrieval. Code is available at: https://github.com/subhc/clever
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
265,236
2412.08258
Large Language Models for Scholarly Ontology Generation: An Extensive Analysis in the Engineering Field
Ontologies of research topics are crucial for structuring scientific knowledge, enabling scientists to navigate vast amounts of research, and forming the backbone of intelligent systems such as search engines and recommendation systems. However, manual creation of these ontologies is expensive, slow, and often results in outdated and overly general representations. As a solution, researchers have been investigating ways to automate or semi-automate the process of generating these ontologies. This paper offers a comprehensive analysis of the ability of large language models (LLMs) to identify semantic relationships between different research topics, which is a critical step in the development of such ontologies. To this end, we developed a gold standard based on the IEEE Thesaurus to evaluate the task of identifying four types of relationships between pairs of topics: broader, narrower, same-as, and other. Our study evaluates the performance of seventeen LLMs, which differ in scale, accessibility (open vs. proprietary), and model type (full vs. quantised), while also assessing four zero-shot reasoning strategies. Several models have achieved outstanding results, including Mixtral-8x7B, Dolphin-Mistral-7B, and Claude 3 Sonnet, with F1-scores of 0.847, 0.920, and 0.967, respectively. Furthermore, our findings demonstrate that smaller, quantised models, when optimised through prompt engineering, can deliver performance comparable to much larger proprietary models, while requiring significantly fewer computational resources.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
true
516,011
2311.08000
LiPar: A Lightweight Parallel Learning Model for Practical In-Vehicle Network Intrusion Detection
With the development of intelligent transportation systems, vehicles are exposed to a complex network environment. As the main network of in-vehicle networks, the controller area network (CAN) has many potential security hazards, demanding intrusion detection systems with higher generalization capability and lighter footprints to ensure safety. Among intrusion detection technologies, methods based on deep learning work best without prior expert knowledge. However, they all have a large model size and usually rely on large computing power such as cloud computing, and are therefore not suitable to be installed on the in-vehicle network. Therefore, we explore computational resource allocation schemes in the in-vehicle network and propose a lightweight parallel neural network structure, LiPar, which achieves enhanced generalization capability for identifying normal and abnormal patterns of in-vehicle communication flows, enabling effective intrusion detection while improving the utilization of limited computing resources. In particular, LiPar adaptively allocates task loads to in-vehicle computing devices, such as electronic control units, domain controllers, and computing gateways, by evaluating whether a computing device is suitable to undertake the branch computing tasks according to its real-time resource occupancy. Through experiments, we prove that LiPar has great detection performance, running efficiency, and a lightweight model size, which can be well adapted to the in-vehicle environment in practice and protect the in-vehicle CAN bus security. Furthermore, with only the common multi-dimensional branch convolution networks for detection, LiPar has high potential for generalization in spatial and temporal feature fusion learning.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
407,554
1608.00550
Theory of the GMM Kernel
We develop some theoretical results for a robust similarity measure named "generalized min-max" (GMM). This similarity has direct applications in machine learning as a positive definite kernel and can be efficiently computed via probabilistic hashing. Owing to the discrete nature, the hashed values can also be used for efficient near neighbor search. We prove the theoretical limit of GMM and the consistency result, assuming that the data follow an elliptical distribution, which is a very general family of distributions and includes the multivariate $t$-distribution as a special case. The consistency result holds as long as the data have bounded first moment (an assumption which essentially holds for datasets commonly encountered in practice). Furthermore, we establish the asymptotic normality of GMM. Compared to the "cosine" similarity which is routinely adopted in current practice in statistics and machine learning, the consistency of GMM requires much weaker conditions. Interestingly, when the data follow the $t$-distribution with $\nu$ degrees of freedom, GMM typically provides a better measure of similarity than "cosine" roughly when $\nu<8$ (which is already very close to normal). These theoretical results will help explain the recent success of GMM in learning tasks.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
59,295
1811.07296
GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint
We know SGAN may have a risk of gradient vanishing. A significant improvement is WGAN, which uses a 1-Lipschitz constraint on the discriminator to prevent gradient vanishing. Is there any GAN with no gradient vanishing and no 1-Lipschitz constraint on the discriminator? We do find one, called GAN-QP. Constructing a new framework of Generative Adversarial Network (GAN) usually includes three steps: 1. choose a probability divergence; 2. convert it into a dual form; 3. play a min-max game. In this article, we demonstrate that the first step is not necessary. We can analyse the properties of a divergence and even construct new divergences in dual space directly. As a reward, we obtain a simpler alternative to WGAN: GAN-QP. We demonstrate that GAN-QP has better performance than WGAN in theory and in practice.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
113,732
2411.17132
Improving Resistance to Noisy Label Fitting by Reweighting Gradient in SAM
Noisy labels pose a substantial challenge in machine learning, often resulting in overfitting and poor generalization. Sharpness-Aware Minimization (SAM), as demonstrated in Foret et al. (2021), improves generalization over traditional Stochastic Gradient Descent (SGD) in classification tasks with noisy labels by implicitly slowing noisy learning. While SAM's ability to generalize in noisy environments has been studied in several simplified settings, its full potential in more realistic training settings remains underexplored. In this work, we analyze SAM's behavior at each iteration, identifying specific components of the gradient vector that contribute significantly to its robustness against noisy labels. Based on these insights, we propose SANER (Sharpness-Aware Noise-Explicit Reweighting), an effective variant that enhances SAM's ability to manage noisy fitting rate. Our experiments on CIFAR-10, CIFAR-100, and Mini-WebVision demonstrate that SANER consistently outperforms SAM, achieving up to an 8% increase on CIFAR-100 with 50% label noise.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
511,318
2103.15502
Remote Sensing Image Translation via Style-Based Recalibration Module and Improved Style Discriminator
Existing remote sensing change detection methods are heavily affected by seasonal variation. Since vegetation colors are different between winter and summer, such variations are inclined to be falsely detected as changes. In this letter, we proposed an image translation method to solve the problem. A style-based recalibration module is introduced to capture seasonal features effectively. Then, a new style discriminator is designed to improve the translation performance. The discriminator can not only produce a decision for the fake or real sample, but also return a style vector according to the channel-wise correlations. Extensive experiments are conducted on season-varying dataset. The experimental results show that the proposed method can effectively perform image translation, thereby consistently improving the season-varying image change detection performance. Our codes and data are available at https://github.com/summitgao/RSIT_SRM_ISD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
227,233
2204.07703
TeleGraph: A Benchmark Dataset for Hierarchical Link Prediction
Link prediction is a key problem for network-structured data, attracting considerable research efforts owing to its diverse applications. The current link prediction methods focus on general networks and are overly dependent on either the closed triangular structure of networks or node attributes. Their performance on sparse or highly hierarchical networks has not been well studied. On the other hand, the available tree-like benchmark datasets are either simulated, with limited node information, or small in scale. To bridge this gap, we present a new benchmark dataset TeleGraph, a highly sparse and hierarchical telecommunication network associated with rich node attributes, for assessing and fostering the link inference techniques. Our empirical results suggest that most of the algorithms fail to produce a satisfactory performance on a nearly tree-like dataset, which calls for special attention when designing or deploying the link prediction algorithm in practice.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
291,807
1707.05254
Explainable Entity-based Recommendations with Knowledge Graphs
Explainable recommendation is an important task. Many methods have been proposed which generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations is still a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method jointly ranks items and knowledge graph entities using a Personalized PageRank procedure to produce recommendations together with their explanations.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
77,191
1803.01877
On Algebraic Proofs of Stability for Homogeneous Vector Fields
We prove that if a homogeneous, continuously differentiable vector field is asymptotically stable, then it admits a Lyapunov function which is the ratio of two polynomials (i.e., a rational function). We further show that when the vector field is polynomial, the Lyapunov inequalities on both the rational function and its derivative have sum of squares certificates and hence such a Lyapunov function can always be found by semidefinite programming. This generalizes the classical fact that an asymptotically stable linear system admits a quadratic Lyapunov function which satisfies a certain linear matrix inequality. In addition to homogeneous vector fields, the result can be useful for showing local asymptotic stability of non-homogeneous systems by proving asymptotic stability of their lowest order homogeneous component. This paper also includes some negative results: We show that (i) in absence of homogeneity, globally asymptotically stable polynomial vector fields may fail to admit a global rational Lyapunov function, and (ii) in presence of homogeneity, the degree of the numerator of a rational Lyapunov function may need to be arbitrarily high (even for vector fields of fixed degree and dimension). On the other hand, we also give a family of homogeneous polynomial vector fields that admit a low-degree rational Lyapunov function but necessitate polynomial Lyapunov functions of arbitrarily high degree. This shows the potential benefits of working with rational Lyapunov functions, particularly as the ones whose existence we guarantee have structured denominators and are not more expensive to search for than polynomial ones.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
91,953
1604.06634
Forman-Ricci flow for change detection in large dynamic data sets
We present a viable solution to the challenging question of change detection in complex networks inferred from large dynamic data sets. Building on Forman's discretization of the classical notion of Ricci curvature, we introduce a novel geometric method to characterize different types of real-world networks with an emphasis on peer-to-peer networks. Furthermore we adapt the classical Ricci flow that already proved to be a powerful tool in image processing and graphics, to the case of undirected and weighted networks. The application of the proposed method on peer-to-peer networks yields insights into topological properties and the structure of their underlying data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
54,969
1709.05295
And That's A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue
We investigate the characteristics of factual and emotional argumentation styles observed in online debates. Using an annotated set of "factual" and "feeling" debate forum posts, we extract patterns that are highly correlated with factual and emotional arguments, and then apply a bootstrapping methodology to find new patterns in a larger pool of unannotated forum posts. This process automatically produces a large set of patterns representing linguistic expressions that are highly correlated with factual and emotional language. Finally, we analyze the most discriminating patterns to better understand the defining characteristics of factual and emotional arguments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
80,828
2208.00063
Topology-Driven Generative Completion of Lacunae in Molecular Data
We introduce an approach to the targeted completion of lacunae in molecular data sets which is driven by topological data analysis, such as Mapper algorithm. Lacunae are filled in using scaffold-constrained generative models trained with different scoring functions. The approach enables addition of links and vertices to the skeletonized representations of the data, such as Mapper graph, and falls in the broad category of network completion methods. We illustrate application of the topology-driven data completion strategy by creating a lacuna in the data set of onium cations extracted from USPTO patents, and repairing it.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
310,727
2402.13058
Random Graph Set and Evidence Pattern Reasoning Model
Evidence theory is widely used in decision-making and reasoning systems. In previous research, the Transferable Belief Model (TBM) is a commonly used evidential decision making model, but TBM is a non-preference model. In order to better fit decision making goals, the Evidence Pattern Reasoning Model (EPRM) is proposed. By defining pattern operators and decision making operators, corresponding preferences can be set for different tasks. Random Permutation Set (RPS) expands order information for evidence theory. However, it is hard for RPS to characterize complex relationships between samples, such as cyclic and parallel relationships. Therefore, Random Graph Set (RGS) is proposed to model complex relationships and represent more event types. In order to illustrate the significance of RGS and EPRM, an experiment on aircraft velocity ranking was designed and 10,000 cases were simulated. The implementation of EPRM, called Conflict Resolution Decision, optimized 18.17\% of the cases compared to Mean Velocity Decision, effectively improving the aircraft velocity ranking. EPRM provides a unified solution for evidence-based decision making.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
431,098
2410.01755
Integrating Protein Sequence and Expression Level to Analysis Molecular Characterization of Breast Cancer Subtypes
Breast cancer's complexity and variability pose significant challenges in understanding its progression and guiding effective treatment. This study aims to integrate protein sequence data with expression levels to improve the molecular characterization of breast cancer subtypes and predict clinical outcomes. Using ProtGPT2, a language model designed for protein sequences, we generated embeddings that capture the functional and structural properties of protein sequences. These embeddings were integrated with protein expression levels to form enriched biological representations, which were analyzed using machine learning methods like ensemble K-means for clustering and XGBoost for classification. Our approach enabled successful clustering of patients into biologically distinct groups and accurately predicted clinical outcomes such as survival and biomarker status, achieving high performance metrics, notably an F1 score of 0.88 for survival and 0.87 for biomarker status prediction. Analysis of feature importance highlighted key proteins like KMT2C, GCN1, and CLASP2, linked to hormone receptor and Human Epidermal Growth Factor Receptor 2 (HER2) expression, which play a role in tumor progression and patient outcomes, respectively. Furthermore, protein-protein interaction networks and correlation analyses revealed the interdependence of proteins that may influence breast cancer subtype behaviors. These findings suggest that integrating protein sequence and expression data provides valuable insights into tumor biology and has significant potential to enhance personalized treatment strategies in breast cancer care.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
493,927
2501.10523
Multiclass Queue Scheduling Under Slowdown: An Approximate Dynamic Programming Approach
In many service systems, especially those in healthcare, customer waiting times can result in increased service requirements. Such service slowdowns can significantly impact system performance. Therefore, it is important to properly account for their impact when designing scheduling policies. Scheduling under wait-dependent service times is challenging, especially when multiple customer classes are heterogeneously affected by waiting. In this work, we study scheduling policies in multiclass, multiserver queues with wait-dependent service slowdowns. We propose a simulation-based Approximate Dynamic Programming (ADP) algorithm to find close-to-optimal scheduling policies. The ADP algorithm (i) represents the policy using classifiers based on the index policy structure, (ii) leverages a coupling method to estimate the differences of the relative value functions directly, and (iii) uses adaptive sampling for efficient state-space exploration. Through extensive numerical experiments, we illustrate that the ADP algorithm generates close-to-optimal policies that outperform well-known benchmarks. We also provide insights into the structure of the optimal policy, which reveals an important trade-off between instantaneous cost reduction and preventing the system from reaching high-cost equilibria. Lastly, we conduct a case study on scheduling admissions into rehabilitation care to illustrate the effectiveness of the ADP algorithm in practice.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
525,555
1611.02126
Dependence and Relevance: A probabilistic view
We examine three probabilistic concepts related to the sentence "two variables have no bearing on each other". We explore the relationships between these three concepts and establish their relevance to the process of constructing similarity networks---a tool for acquiring probabilistic knowledge from human experts. We also establish a precise relationship between connectedness in Bayesian networks and relevance in probability.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
63,514
2501.02458
Neural Reflectance Fields for Radio-Frequency Ray Tracing
Ray tracing is widely employed to model the propagation of radio-frequency (RF) signals in complex environments. The modelling performance greatly depends on how accurately the target scene can be depicted, including the scene geometry and surface material properties. Advances in computer vision and LiDAR make scene geometry estimation increasingly accurate, but scalable and efficient approaches for estimating material reflectivity in real-world environments are still lacking. In this work, we tackle this problem by learning the material reflectivity efficiently from the path loss of the RF signal from the transmitters to receivers. Specifically, we want the learned material reflection coefficients to minimize the gap between the predicted and measured powers of the receivers. We achieve this by translating the neural reflectance field from optics to the RF domain, modelling both the amplitude and phase of RF signals to account for multipath effects. We further propose a differentiable RF ray tracing framework that optimizes the neural reflectance field to match the signal strength measurements. We simulate a complex real-world environment for experiments, and our simulation results show that the neural reflectance field can successfully learn the reflection coefficients for all incident angles. As a result, our approach achieves better accuracy in predicting the powers of receivers with significantly less training data compared to existing approaches.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
522,494
1908.07934
Spatio-Temporal Representation with Deep Neural Recurrent Network in MIMO CSI Feedback
In multiple-input multiple-output (MIMO) systems, it is crucial to utilize the available channel state information (CSI) at the transmitter for precoding to improve the performance of frequency division duplex (FDD) networks. One of the main challenges is to compress a large amount of CSI in CSI feedback transmission in massive MIMO systems. In this paper, we propose a deep learning (DL)-based approach that uses a deep recurrent neural network (RNN) to learn temporal correlation and adopts depthwise separable convolution to shrink the model. The feature extraction module is also elaborately devised by studying decoupled spatio-temporal feature representations in different structures. Experimental results demonstrate that the proposed approach outperforms existing DL-based methods in terms of recovery quality and accuracy, and can also achieve remarkable robustness at low compression ratios (CR).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
142,422
2307.13571
PT$\mathrm{L}^{p}$: Partial Transport $\mathrm{L}^{p}$ Distances
Optimal transport and its related problems, including optimal partial transport, have proven to be valuable tools in machine learning for computing meaningful distances between probability or positive measures. This success has led to a growing interest in defining transport-based distances that allow for comparing signed measures and, more generally, multi-channeled signals. Transport $\mathrm{L}^{p}$ distances are notable extensions of the optimal transport framework to signed and possibly multi-channeled signals. In this paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new family of metrics for comparing generic signals, benefiting from the robustness of partial transport distances. We provide theoretical background such as the existence of optimal plans and the behavior of the distance in various limits. Furthermore, we introduce the sliced variation of these distances, which allows for rapid comparison of generic signals. Finally, we demonstrate the application of the proposed distances in signal class separability and nearest neighbor classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
381,633
1703.09911
Learning with Privileged Information for Multi-Label Classification
In this paper, we propose a novel approach for learning multi-label classifiers with the help of privileged information. Specifically, we use similarity constraints to capture the relationship between available information and privileged information, and use ranking constraints to capture the dependencies among multiple labels. By integrating similarity constraints and ranking constraints into the learning process of classifiers, the privileged information and the dependencies among multiple labels are exploited to construct better classifiers during training. A maximum margin classifier is adopted, and an efficient learning algorithm of the proposed method is also developed. We evaluate the proposed method on two applications: multiple object recognition from images with the help of implicit information about object importance conveyed by the list of manually annotated image tags; and multiple facial action unit detection from low-resolution images augmented by high-resolution images. Experimental results demonstrate that the proposed method can effectively take full advantage of privileged information and dependencies among multiple labels for better object recognition and better facial action unit detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
70,826
2310.01459
NarrativePlay: Interactive Narrative Understanding
In this paper, we introduce NarrativePlay, a novel system that allows users to role-play a fictional character and interact with other characters in narratives such as novels in an immersive environment. We leverage Large Language Models (LLMs) to generate human-like responses, guided by personality traits extracted from narratives. The system incorporates auto-generated visual display of narrative settings, character portraits, and character speech, greatly enhancing user experience. Our approach eschews predefined sandboxes, focusing instead on main storyline events extracted from narratives from the perspective of a user-selected character. NarrativePlay has been evaluated on two types of narratives, detective and adventure stories, where users can either explore the world or improve their favorability with the narrative characters through conversations.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
396,442
1506.00598
Energy Efficiency and Sum Rate Tradeoffs for Massive MIMO Systems with Underlaid Device-to-Device Communications
In this paper, we investigate the coexistence of two technologies that have been put forward for the fifth generation (5G) of cellular networks, namely, network-assisted device-to-device (D2D) communications and massive MIMO (multiple-input multiple-output). Potential benefits of both technologies are known individually, but the tradeoffs resulting from their coexistence have not been adequately addressed. To this end, we assume that D2D users reuse the downlink resources of cellular networks in an underlay fashion. In addition, multiple antennas at the BS are used in order to obtain precoding gains and simultaneously support multiple cellular users using multiuser or massive MIMO technique. Two metrics are considered, namely the average sum rate (ASR) and energy efficiency (EE). We derive tractable and directly computable expressions and study the tradeoffs between the ASR and EE as functions of the number of BS antennas, the number of cellular users and the density of D2D users within a given coverage area. Our results show that both the ASR and EE behave differently in scenarios with low and high density of D2D users, and that coexistence of underlay D2D communications and massive MIMO is mainly beneficial in low densities of D2D users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
43,691
2103.10245
Building Safer Autonomous Agents by Leveraging Risky Driving Behavior Knowledge
Simulation environments are good for learning different driving tasks like lane changing, parking or handling intersections etc. in an abstract manner. However, these simulation environments often restrict themselves to operate under conservative interaction behavior amongst different vehicles. But, as we know, real driving tasks often involve very high risk scenarios where other drivers often don't behave in the expected sense. There can be many reasons for this behavior like being tired or inexperienced. The simulation environment doesn't take this information into account while training the navigation agent. Therefore, in this study we especially focus on systematically creating these risk prone scenarios with heavy traffic and unexpected random behavior for creating better model-free learning agents. We generate multiple autonomous driving scenarios by creating new custom Markov Decision Process (MDP) environment iterations in the highway-env simulation package. The behavior policy is learnt by agents trained with the help from deep reinforcement learning models. Our behavior policy is deliberated to handle collisions and risky randomized driver behavior. We train model free learning agents with supplement information of risk prone driving scenarios and compare their performance with baseline agents. Finally, we casually measure the impact of adding these perturbations in the training process to precisely account for the performance improvement obtained from utilizing the learnings from these scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
225,395
2402.17003
Monitoring Fidelity of Online Reinforcement Learning Algorithms in Clinical Trials
Online reinforcement learning (RL) algorithms offer great potential for personalizing treatment for participants in clinical trials. However, deploying an online, autonomous algorithm in the high-stakes healthcare setting makes quality control and data quality especially difficult to achieve. This paper proposes algorithm fidelity as a critical requirement for deploying online RL algorithms in clinical trials. It emphasizes the responsibility of the algorithm to (1) safeguard participants and (2) preserve the scientific utility of the data for post-trial analyses. We also present a framework for pre-deployment planning and real-time monitoring to help algorithm developers and clinical researchers ensure algorithm fidelity. To illustrate our framework's practical application, we present real-world examples from the Oralytics clinical trial. Since Spring 2023, this trial successfully deployed an autonomous, online RL algorithm to personalize behavioral interventions for participants at risk for dental disease.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
432,800
2310.18459
VFAS-Grasp: Closed Loop Grasping with Visual Feedback and Adaptive Sampling
We consider the problem of closed-loop robotic grasping and present a novel planner which uses Visual Feedback and an uncertainty-aware Adaptive Sampling strategy (VFAS) to close the loop. At each iteration, our method VFAS-Grasp builds a set of candidate grasps by generating random perturbations of a seed grasp. The candidates are then scored using a novel metric which combines a learned grasp-quality estimator, the uncertainty in the estimate and the distance from the seed proposal to promote temporal consistency. Additionally, we present two mechanisms to improve the efficiency of our sampling strategy: We dynamically scale the sampling region size and number of samples in it based on past grasp scores. We also leverage a motion vector field estimator to shift the center of our sampling region. We demonstrate that our algorithm can run in real time (20 Hz) and is capable of improving grasp performance for static scenes by refining the initial grasp proposal. We also show that it can enable grasping of slow moving objects, such as those encountered during human to robot handover.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
403,553
1512.00242
Towards Dropout Training for Convolutional Neural Networks
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
49,694
2405.01714
Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps
Sepsis is a leading cause of mortality in intensive care units (ICUs), representing a substantial medical challenge. The complexity of analyzing diverse vital signs to predict sepsis further aggravates this issue. While deep learning techniques have been advanced for early sepsis prediction, their 'black-box' nature obscures the internal logic, impairing interpretability in critical settings like ICUs. This paper introduces a framework that combines a deep learning model with an attention mechanism that highlights the critical time steps in the forecasting process, thus improving model interpretability and supporting clinical decision-making. We show that the attention mechanism could be adapted to various black box time series forecasting models such as N-HiTS and N-BEATS. Our method preserves the accuracy of conventional deep learning models while enhancing interpretability through attention-weight-generated heatmaps. We evaluated our model on the eICU-CRD dataset, focusing on forecasting vital signs for sepsis patients. We assessed its performance using mean squared error (MSE) and dynamic time warping (DTW) metrics. We explored the attention maps of N-HiTS and N-BEATS, examining the differences in their performance and identifying crucial factors influencing vital sign forecasting.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
451,472
1903.08810
Link prediction with hyperbolic geometry
Link prediction is a paradigmatic problem in network science with a variety of applications. In latent space network models this problem boils down to ranking pairs of nodes in the order of increasing latent distances between them. The network model with hyperbolic latent spaces has a number of attractive properties suggesting it must be a powerful tool to predict links, but past work in this direction reported mixed results. Here we perform a systematic investigation of the utility of latent hyperbolic geometry for link prediction in networks. We first show that some measures of link prediction accuracy are extremely sensitive to inaccuracies in the inference of latent hyperbolic coordinates of nodes, so we develop a new coordinate inference method that maximizes the accuracy of such inference. Applying this method to synthetic and real networks, we then find that while there exists a multitude of competitive methods to predict obvious easy-to-predict links, among which hyperbolic link prediction is rarely the best but often competitive, it is the best, often by far, when the task is to predict less obvious missing links that are really hard to predict. These links include missing links in incomplete networks with large fractions of missing links, missing links between nodes that do not have any common neighbors, and missing links between dissimilar nodes at large latent distances. Overall these results suggest that the harder a specific link prediction task is, the more seriously one should consider using hyperbolic geometry.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
124,910
2412.06139
Bounded Exploration with World Model Uncertainty in Soft Actor-Critic Reinforcement Learning Algorithm
One of the bottlenecks preventing Deep Reinforcement Learning algorithms (DRL) from real-world applications is how to explore the environment and collect informative transitions efficiently. The present paper describes bounded exploration, a novel exploration method that integrates both 'soft' and intrinsic motivation exploration. Bounded exploration notably improved the Soft Actor-Critic algorithm's performance and its model-based extension's converging speed. It achieved the highest score in 6 out of 8 experiments. Bounded exploration presents an alternative method to introduce intrinsic motivations to exploration when the original reward function has strict meanings.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
515,106
1908.08380
Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time Series Forecasting
Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry. As interest in reservoir computing has grown, networks have become deeper and more intricate. While these networks are increasingly applied to nontrivial forecasting tasks, there is a need for comprehensive performance analysis of deep reservoirs. In this work, we study the influence of partitioning neurons given a budget and the effect of parallel reservoir pathways across different datasets exhibiting multi-scale and nonlinear dynamics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
142,535
1903.06015
Computing the Scope of Applicability for Acquired Task Knowledge in Experience-Based Planning Domains
Experience-based planning domains have been proposed to improve problem solving by learning from experience. They rely on acquiring and using task knowledge, i.e., activity schemata, for generating solutions to problem instances in a class of tasks. Using Three-Valued Logic Analysis (TVLA), we extend previous work to generate a set of conditions that determine the scope of applicability of an activity schema. The inferred scope is a bounded representation of a set of problems of potentially unbounded size, in the form of a 3-valued logical structure, which is used to automatically find an applicable activity schema for solving task problems. We validate this work in two classical planning domains.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
124,290
2307.05780
Automated Artifact Detection in Ultra-widefield Fundus Photography of Patients with Sickle Cell Disease
Importance: Ultra-widefield fundus photography (UWF-FP) has shown utility in sickle cell retinopathy screening; however, image artifact may diminish quality and gradeability of images. Objective: To create an automated algorithm for UWF-FP artifact classification. Design: A neural network based automated artifact detection algorithm was designed to identify commonly encountered UWF-FP artifacts in a cross section of patient UWF-FP. A pre-trained ResNet-50 neural network was trained on a subset of the images and the classification accuracy, sensitivity, and specificity were quantified on the hold out test set. Setting: The study is based on patients from a tertiary care hospital site. Participants: There were 243 UWF-FP acquired from patients with sickle cell disease (SCD), and artifact labelling in the following categories was performed: Eyelash Present, Lower Eyelid Obstructing, Upper Eyelid Obstructing, Image Too Dark, Dark Artifact, and Image Not Centered. Results: Overall, the accuracy for each class was Eyelash Present at 83.7%, Lower Eyelid Obstructing at 83.7%, Upper Eyelid Obstructing at 98.0%, Image Too Dark at 77.6%, Dark Artifact at 93.9%, and Image Not Centered at 91.8%. Conclusions and Relevance: This automated algorithm shows promise in identifying common imaging artifacts on a subset of Optos UWF-FP in SCD patients. Further refinement is ongoing with the goal of improving efficiency of tele-retinal screening in sickle cell retinopathy (SCR) by providing a photographer real-time feedback as to the types of artifacts present, and the need for image re-acquisition. This algorithm also may have potential future applicability in other retinal diseases by improving quality and efficiency of image acquisition of UWF-FP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
378,850
2305.18307
Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to communicate to members of the public that an AI has been audited and considered trustworthy remains an open challenge. This study empirically investigated certification labels as a promising solution. Through interviews (N = 12) and a census-representative survey (N = 302), we investigated end-users' attitudes toward certification labels and their effectiveness in communicating trustworthiness in low- and high-stakes AI scenarios. Based on the survey results, we demonstrate that labels can significantly increase end-users' trust and willingness to use AI in both low- and high-stakes scenarios. However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stakes scenarios. Qualitative content analysis of the interviews revealed opportunities and limitations of certification labels, as well as facilitators and inhibitors for the effective use of labels in the context of AI. For example, while certification labels can mitigate data-related concerns expressed by end-users (e.g., privacy and data protection), other concerns (e.g., model performance) are more challenging to address. Our study provides valuable insights and recommendations for designing and implementing certification labels as a promising constituent within the trustworthy AI ecosystem.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
368,949
1605.06770
Automatic Construction of Discourse Corpora for Dialogue Translation
In this paper, a novel approach is proposed to automatically construct a parallel discourse corpus for dialogue machine translation. Firstly, the parallel subtitle data and its corresponding monolingual movie script data are crawled and collected from the Internet. Then tags such as speaker and discourse boundary from the script data are projected to its subtitle data via an information retrieval approach in order to map monolingual discourse to bilingual texts. We not only evaluate the mapping results, but also integrate speaker information into the translation. Experiments show our proposed method can achieve 81.79% and 98.64% accuracy on speaker and dialogue boundary annotation, and speaker-based language model adaptation can obtain around 0.5 BLEU points improvement in translation quality. Finally, we publicly release around 100K parallel discourse data with manual speaker and dialogue boundary annotation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
56,187
1901.05811
No reference image quality assessment metric based on regional mutual information among images
With the inclusion of cameras in daily life, an automatic no-reference image quality evaluation index is required for automatic classification of images. The present manuscript proposes a new No Reference Regional Mutual Information based technique for evaluating the quality of an image. We use regional mutual information on subsets of the complete image. The proposed technique is tested on four benchmark natural image databases, and one benchmark synthetic database. A comparative analysis with classical and state-of-the-art methods indicates superiority of the present technique for high quality images and comparable performance for other images of the respective databases.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
118,862
1303.4175
Stable Nonlinear Identification From Noisy Repeated Experiments via Convex Optimization
This paper introduces new techniques for using convex optimization to fit input-output data to a class of stable nonlinear dynamical models. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
22,988
2306.05432
Towards End-to-end Speech-to-text Summarization
Speech-to-text (S2T) summarization is a time-saving technique for filtering and keeping up with the broadcast news uploaded online on a daily basis. The rise of large language models from deep learning with impressive text generation capabilities has placed the research focus on summarization systems that produce paraphrased compact versions of the document content, also known as abstractive summaries. End-to-end (E2E) modelling of S2T abstractive summarization is a promising approach that offers the possibility of generating rich latent representations that leverage non-verbal and acoustic information, as opposed to the use of only linguistic information from automatically generated transcripts in cascade systems. However, the scarce literature on E2E modelling of this task fails to explore different domains, namely broadcast news, which is a challenging domain where large and diversified volumes of data are presented to the user every day. We model S2T summarization both with a cascade and an E2E system for a corpus of broadcast news in French. Our novel E2E model leverages external data by resorting to transfer learning from a pre-trained T2T summarizer. Experiments show that both our cascade and E2E abstractive summarizers are stronger than an extractive baseline. However, the performance of the E2E model still lies behind the cascade one, which is the object of an extensive analysis that includes future directions to close that gap.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
372,199
2407.04801
Revisiting Structured Sentiment Analysis as Latent Dependency Graph Parsing
Structured Sentiment Analysis (SSA) was cast as a problem of bi-lexical dependency graph parsing by prior studies. Multiple formulations have been proposed to construct the graph, which share several intrinsic drawbacks: (1) The internal structures of spans are neglected, so only the boundary tokens of spans are used for relation prediction and span recognition, hindering the model's expressiveness; (2) Long spans occupy a significant proportion of the SSA datasets, which further exacerbates the problem of internal structure neglect. In this paper, we treat the SSA task as a dependency parsing task on partially-observed dependency trees, regarding flat spans without determined tree annotations as latent subtrees to consider internal structures of spans. We propose a two-stage parsing method and leverage TreeCRFs with a novel constrained inside algorithm to model latent structures explicitly, which also takes advantage of jointly scoring graph arcs and headed spans for global optimization and inference. Results of extensive experiments on five benchmark datasets reveal that our method performs significantly better than all previous bi-lexical methods, achieving a new state of the art.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
470,696
2311.17607
Topology-preserving Adversarial Training for Alleviating Natural Accuracy Degradation
Despite its effectiveness in improving the robustness of neural networks, adversarial training has suffered from the natural accuracy degradation problem, i.e., accuracy on natural samples is significantly reduced. In this study, we reveal through quantitative and qualitative experiments that natural accuracy degradation is highly related to the disruption of the natural sample topology in the representation space. Based on this observation, we propose Topology-pReserving Adversarial traINing (TRAIN) to alleviate the problem by preserving, during adversarial training, the topological structure of natural samples from a standard model trained only on natural samples. As an additional regularization, our method can be combined with various popular adversarial training algorithms, taking advantage of both sides. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet show that our proposed method achieves consistent and significant improvements over various strong baselines in most cases. Specifically, without additional data, TRAIN achieves up to 8.86% improvement in natural accuracy and 6.33% improvement in robust accuracy.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
411,358