Dataset schema (one record per paper):
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (range 0 to 541k)
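As a minimal sketch of how to work with these records (assuming each one is loaded as a Python dict keyed by the column names above; the `active_labels` helper is hypothetical, not part of the dataset itself), the 18 boolean indicator columns can be decoded into a list of active arXiv categories:

```python
# Column order as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list[str]:
    """Return the categories whose indicator column is True for this row."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Example row shaped like the first record below (False columns omitted):
row = {"id": "2309.05429", "cs.AI": True, "cs.CL": True}
print(active_labels(row))  # → ['cs.AI', 'cs.CL']
```

Because `LABEL_COLUMNS` preserves the schema order, the returned list is deterministic, which keeps downstream multi-label encodings stable.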
2309.05429
Improving Information Extraction on Business Documents with Specific Pre-Training Tasks
Transformer-based Language Models are widely used in Natural Language Processing related tasks. Thanks to their pre-training, they have been successfully adapted to Information Extraction in business documents. However, most pre-training tasks proposed in the literature for business documents are too generic and not sufficient to learn more complex structures. In this paper, we use LayoutLM, a language model pre-trained on a collection of business documents, and introduce two new pre-training tasks that further improve its capacity to extract relevant information. The first is aimed at better understanding the complex layout of documents, and the second focuses on numeric values and their order of magnitude. These tasks force the model to learn better-contextualized representations of the scanned documents. We further introduce a new post-processing algorithm to decode BIESO tags in Information Extraction that performs better with complex entities. Our method significantly improves extraction performance on both public (from 93.88 to 95.50 F1 score) and private (from 84.35 to 84.84 F1 score) datasets composed of expense receipts, invoices, and purchase orders.
labels: cs.AI, cs.CL
__index_level_0__: 391,078
1809.07665
Dynamic Power Control for Packets with Deadlines
Wireless devices need to adapt their transmission power according to the fluctuating wireless channel in order to meet constraints of delay sensitive applications. In this paper, we consider delay sensitivity in the form of strict packet deadlines arriving in a transmission queue. Packets missing the deadline while in the queue are dropped from the system. We aim at minimizing the packet drop rate under average power constraints. We utilize tools from Lyapunov optimization to find an approximate solution by selecting power allocation. We evaluate the performance of the proposed algorithm and show that it achieves the same performance in terms of packet drop rate with that of the Earliest Deadline First (EDF) when the available power is sufficient. However, our algorithm outperforms EDF regarding the trade-off between packet drop rate and average power consumption.
labels: cs.IT, Other
__index_level_0__: 108,324
2007.04391
A Critical Evaluation of Open-World Machine Learning
Open-world machine learning (ML) combines closed-world models trained on in-distribution data with out-of-distribution (OOD) detectors, which aim to detect and reject OOD inputs. Previous works on open-world ML systems usually fail to test their reliability under diverse and possibly adversarial conditions. Therefore, in this paper, we ask: how resilient are state-of-the-art open-world ML systems to changes in system components? With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture and OOD data have a strong impact on OOD detection performance, inducing false positive rates in excess of $70\%$. We further show that OOD inputs with 22 unintentional corruptions or adversarial perturbations render open-world ML systems unusable with false positive rates of up to $100\%$. To increase the resilience of open-world ML, we combine robust classifiers with OOD detection techniques and uncover a new trade-off between OOD detection and robustness.
labels: cs.LG, cs.CR
__index_level_0__: 186,333
2005.06588
PERLEX: A Bilingual Persian-English Gold Dataset for Relation Extraction
Relation extraction is the task of extracting semantic relations between entities in a sentence. It is an essential part of some natural language processing tasks such as information extraction, knowledge extraction, and knowledge base population. The main motivations of this research stem from the lack of a dataset for relation extraction in the Persian language as well as the necessity of extracting knowledge from the growing big data in the Persian language for different applications. In this paper, we present "PERLEX" as the first Persian dataset for relation extraction, which is an expert-translated version of the "Semeval-2010-Task-8" dataset. Moreover, this paper addresses Persian relation extraction utilizing state-of-the-art language-agnostic algorithms. We employ six different models for relation extraction on the proposed bilingual dataset, including a non-neural model (as the baseline), three neural models, and two deep learning models fed by multilingual-BERT contextual word representations. The experiments result in a maximum F-score of 77.66% (obtained by the BERTEM-MTB method), which is the state of the art for relation extraction in the Persian language.
labels: cs.CL
__index_level_0__: 177,037
2309.04030
Brief technical note on linearizing recurrent neural networks (RNNs) before vs after the pointwise nonlinearity
Linearization of the dynamics of recurrent neural networks (RNNs) is often used to study their properties. The same RNN dynamics can be written in terms of the "activations" (the net inputs to each unit, before its pointwise nonlinearity) or in terms of the "activities" (the output of each unit, after its pointwise nonlinearity); the two corresponding linearizations are different from each other. This brief and informal technical note describes the relationship between the two linearizations and between the left and right eigenvectors of their dynamics matrices, and shows that some context-dependent effects are readily apparent under linearization of activity dynamics but not linearization of activation dynamics.
labels: cs.LG
__index_level_0__: 390,589
1607.00567
Rademacher Complexity Bounds for a Penalized Multiclass Semi-Supervised Algorithm
We propose Rademacher complexity bounds for multiclass classifiers trained with a two-step semi-supervised model. In the first step, the algorithm partitions the partially labeled data and then identifies dense clusters containing $\kappa$ predominant classes using the labeled training examples such that the proportion of their non-predominant classes is below a fixed threshold. In the second step, a classifier is trained by minimizing a margin empirical loss over the labeled training set and a penalization term measuring the disability of the learner to predict the $\kappa$ predominant classes of the identified clusters. The resulting data-dependent generalization error bound involves the margin distribution of the classifier, the stability of the clustering technique used in the first step and Rademacher complexity terms corresponding to partially labeled training data. Our theoretical results exhibit convergence rates extending those proposed in the literature for the binary case, and experimental results on different multiclass classification problems show empirical evidence that supports the theory.
labels: cs.LG
__index_level_0__: 58,101
1301.0596
From Qualitative to Quantitative Probabilistic Networks
Quantification is well known to be a major obstacle in the construction of a probabilistic network, especially when relying on human experts for this purpose. The construction of a qualitative probabilistic network has been proposed as an initial step in a network's quantification, since the qualitative network can be used to gain preliminary insight into the projected network's reasoning behaviour. We extend this idea and present a new type of network in which both signs and numbers are specified; we further present an associated algorithm for probabilistic inference. Building upon these semi-qualitative networks, a probabilistic network can be quantified and studied in a stepwise manner. As a result, modelling inadequacies can be detected and amended at an early stage in the quantification process.
labels: cs.AI
__index_level_0__: 20,776
2110.07531
Deep learning models for predicting RNA degradation via dual crowdsourcing
Messenger RNA-based medicines hold immense potential, as evidenced by their rapid deployment as COVID-19 vaccines. However, worldwide distribution of mRNA molecules has been limited by their thermostability, which is fundamentally limited by the intrinsic instability of RNA molecules to a chemical degradation reaction called in-line hydrolysis. Predicting the degradation of an RNA molecule is a key task in designing more stable RNA-based therapeutics. Here, we describe a crowdsourced machine learning competition ("Stanford OpenVaccine") on Kaggle, involving single-nucleotide resolution measurements on 6043 102-130-nucleotide diverse RNA constructs that were themselves solicited through crowdsourcing on the RNA design platform Eterna. The entire experiment was completed in less than 6 months, and 41% of nucleotide-level predictions from the winning model were within experimental error of the ground truth measurement. Furthermore, these models generalized to blindly predicting orthogonal degradation data on much longer mRNA molecules (504-1588 nucleotides) with improved accuracy compared to previously published models. Top teams integrated natural language processing architectures and data augmentation techniques with predictions from previous dynamic programming models for RNA secondary structure. These results indicate that such models are capable of representing in-line hydrolysis with excellent accuracy, supporting their use for designing stabilized messenger RNAs. The integration of two crowdsourcing platforms, one for data set creation and another for machine learning, may be fruitful for other urgent problems that demand scientific discovery on rapid timescales.
labels: cs.LG
__index_level_0__: 261,025
2501.06108
Inferring High-Order Couplings with Neural Networks
Maximum entropy methods, based on the inverse Ising/Potts problem from statistical mechanics, are essential for modeling interactions between pairs of variables in data-driven problems across disciplines such as bioinformatics, ecology, and neuroscience. Despite their considerable success, these methods typically fail to capture higher-order interactions that are often essential for understanding complex systems. Conversely, modern machine learning methods capture these complex interactions, but the computational cost of interpretable frameworks makes them impractical for real-world applications. Restricted Boltzmann Machines (RBMs) provide a computationally efficient way to capture statistical correlations using hidden nodes in a bipartite neural network. In this study, we introduce a new method that maps RBMs to generalized Potts models, allowing for the extraction of interactions up to any specified order. This method utilizes large-$N$ approximations, enabled by the RBM's simple structure, to extract effective many-body couplings with minimal computational effort. Furthermore, we propose a robust framework for extracting higher-order interactions in more complex probabilistic models and a simple gauge-fixing method within the effective many-body Potts model. Our validation on synthetic datasets confirms the method's ability to recover two- and three-body interactions accurately. When applied to protein sequence data, the framework competently reconstructs protein contact maps and provides performance comparable to the best inverse Potts models. These findings confirm that RBMs are an effective and streamlined tool for exploring higher-order interactions within complex systems.
labels: cs.LG
__index_level_0__: 523,839
1002.2044
On the Stability of Empirical Risk Minimization in the Presence of Multiple Risk Minimizers
Recently Kutin and Niyogi investigated several notions of algorithmic stability--a property of a learning map conceptually similar to continuity--showing that training-stability is sufficient for consistency of Empirical Risk Minimization while distribution-free CV-stability is necessary and sufficient for having finite VC-dimension. This paper concerns a phase transition in the training stability of ERM, conjectured by the same authors. Kutin and Niyogi proved that ERM on finite hypothesis spaces containing a unique risk minimizer has training stability that scales exponentially with sample size, and conjectured that the existence of multiple risk minimizers prevents even super-quadratic convergence. We prove this result for the strictly weaker notion of CV-stability, positively resolving the conjecture.
labels: cs.LG
__index_level_0__: 5,667
2307.07696
Coupling Large Language Models with Logic Programming for Robust and General Reasoning from Text
While large language models (LLMs), such as GPT-3, appear to be robust and general, their reasoning ability is not at a level to compete with the best models trained for specific natural language reasoning problems. In this study, we observe that a large language model can serve as a highly effective few-shot semantic parser. It can convert natural language sentences into a logical form that serves as input for answer set programs, a logic-based declarative knowledge representation formalism. The combination results in a robust and general system that can handle multiple question-answering tasks without requiring retraining for each new task. It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks. We demonstrate that this method achieves state-of-the-art performance on several NLP benchmarks, including bAbI, StepGame, CLUTRR, and gSCAN. Additionally, it successfully tackles robot planning tasks that an LLM alone fails to solve.
labels: cs.AI, cs.CL, Other
__index_level_0__: 379,508
2309.08611
Maneuver Decision-Making Through Proximal Policy Optimization And Monte Carlo Tree Search
Maneuver decision-making can be regarded as a Markov decision process and can be addressed by reinforcement learning. However, original reinforcement learning algorithms can hardly solve the maneuvering decision-making problem. One reason is that agents use random actions in the early stages of training, which makes it difficult to get rewards and learn how to make effective decisions. To address this issue, a method based on proximal policy optimization and Monte Carlo tree search is proposed. The method uses proximal policy optimization to train the agent, and regards the results of air combat as targets to train the value network. Then, based on the value network and the visit count of each node, Monte Carlo tree search is used to find the actions with higher expected returns than random actions, which can improve the training performance. The ablation studies and simulation experiments indicate that agents trained by the proposed method can make different decisions according to different states, which demonstrates that the method can solve the maneuvering decision problem that the original reinforcement learning algorithm cannot solve.
labels: cs.AI
__index_level_0__: 392,246
2103.11093
Exploring The Effect of High-frequency Components in GANs Training
Generative Adversarial Networks (GANs) have the ability to generate images that are visually indistinguishable from real images. However, recent studies have revealed that generated and real images share significant differences in the frequency domain. In this paper, we explore the effect of high-frequency components in GANs training. According to our observation, during the training of most GANs, severe high-frequency differences make the discriminator focus on high-frequency components excessively, which hinders the generator from fitting the low-frequency components that are important for learning images' content. Then, we propose two simple yet effective frequency operations for eliminating the side effects caused by high-frequency differences in GANs training: High-Frequency Confusion (HFC) and High-Frequency Filter (HFF). The proposed operations are general and can be applied to most existing GANs with a fraction of the cost. The advanced performance of the proposed operations is verified on multiple loss functions, network architectures, and datasets. Specifically, the proposed HFF achieves significant improvements of $42.5\%$ FID on CelebA (128*128) unconditional generation based on SNGAN, $30.2\%$ FID on CelebA unconditional generation based on SSGAN, and $69.3\%$ FID on CelebA unconditional generation based on InfoMAXGAN.
labels: cs.CV
__index_level_0__: 225,664
1812.10524
Exploring the Challenges towards Lifelong Fact Learning
So far life-long learning (LLL) has been studied in relatively small-scale and relatively artificial setups. Here, we introduce a new large-scale alternative. What makes the proposed setup more natural and closer to human-like visual systems is threefold: First, we focus on concepts (or facts, as we call them) of varying complexity, ranging from single objects to more complex structures such as objects performing actions, and objects interacting with other objects. Second, as in real-world settings, our setup has a long-tail distribution, an aspect which has mostly been ignored in the LLL context. Third, facts across tasks may share structure (e.g., <person, riding, wave> and <dog, riding, wave>). Facts can also be semantically related (e.g., "liger" relates to seen categories like "tiger" and "lion"). Given the large number of possible facts, a LLL setup seems a natural choice. To avoid model size growing over time and to optimally exploit the semantic relations and structure, we combine it with a visual semantic embedding instead of discrete class labels. We adapt existing datasets with the properties mentioned above into new benchmarks, by dividing them semantically or randomly into disjoint tasks. This leads to two large-scale benchmarks with 906,232 images and 165,150 unique facts, on which we evaluate and analyze state-of-the-art LLL methods.
labels: cs.CV
__index_level_0__: 117,382
2206.08903
Colonoscopy 3D Video Dataset with Paired Depth from 2D-3D Registration
Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single frame registration. 22 short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at durr.jhu.edu/C3VD.
labels: cs.CV
__index_level_0__: 303,348
2311.11009
Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition
Multimodal emotion recognition aims to recognize emotions for each utterance of multiple modalities, which has received increasing attention for its application in human-machine interaction. Current graph-based methods fail to simultaneously depict global contextual features and local diverse uni-modal features in a dialogue. Furthermore, with the number of graph layers increasing, they easily fall into over-smoothing. In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodality fusion, contrastive learning, and emotion recognition are jointly optimized. Specifically, we first design a new multimodal fusion mechanism that can provide deep interaction and fusion between the global contextual and uni-modal specific features. Then, we introduce a graph contrastive learning framework with inter-view and intra-view contrastive losses to learn more distinguishable representations for samples with different sentiments. Extensive experiments on three benchmark datasets indicate that Joyful achieved state-of-the-art (SOTA) performance compared to all baselines.
labels: cs.CL
__index_level_0__: 408,755
2212.09553
Mu$^{2}$SLAM: Multitask, Multilingual Speech and Language Models
We present Mu$^{2}$SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages. By leveraging a quantized representation of speech as a target, Mu$^{2}$SLAM trains the speech-text models with a sequence-to-sequence masked denoising objective similar to T5 on the decoder and a masked language modeling (MLM) objective on the encoder, for both unlabeled speech and text, while utilizing the supervised tasks to improve cross-lingual and cross-modal representation alignment within the model. On CoVoST AST, Mu$^{2}$SLAM establishes a new state-of-the-art for models trained on public datasets, improving on xx-en translation over the previous best by 1.9 BLEU points and on en-xx translation by 1.1 BLEU points. On Voxpopuli ASR, our model matches the performance of an mSLAM model fine-tuned with an RNN-T decoder, despite using a relatively weaker sequence-to-sequence architecture. On text understanding tasks, our model improves by more than 6\% over mSLAM on XNLI, getting closer to the performance of mT5 models of comparable capacity on XNLI and TydiQA, paving the way towards a single model for all speech and text understanding tasks.
labels: cs.SD, cs.CL
__index_level_0__: 337,147
1010.0422
Convolutional Matching Pursuit and Dictionary Training
Matching pursuit and K-SVD are demonstrated in the translation-invariant setting.
labels: cs.CV
__index_level_0__: 7,765
1208.3774
Graphical Query Builder in Opportunistic Sensor Networks to discover Sensor Information
Many sensor network applications are data-driven. We believe that querying is the preferred way to discover sensor services. Normally, users are unaware of the available sensors, so they need to pose different types of queries over the sensor network to get the desired information. Users may even need to input more complicated queries with higher levels of aggregation, requiring more complex interactions with the system. Since users have no prior knowledge of the sensor data or services, our aim is to develop a visual query interface where users can express queries in a user-friendly way that the machine can understand. In this paper, we have developed an interactive visual query interface. To accomplish this, we have considered several use cases and derived graphical representations of queries from their text-based format for those use-case scenarios. We assist the user by extracting classes, subclasses, and properties from an ontology: the OWL file is parsed in the user interface, and users build visual queries based on the parsed information. Finally, we translate the visual query language into a SPARQL query, a machine-understandable format that lets the interface communicate with the underlying technology.
labels: cs.IR
__index_level_0__: 18,134
1408.2288
Genetic Programming for Smart Phone Personalisation
Personalisation in smart phones requires adaptability to dynamic context based on user mobility, application usage and sensor inputs. Current personalisation approaches, which rely on static logic that is developed a priori, do not provide sufficient adaptability to dynamic and unexpected context. This paper proposes genetic programming (GP), which can evolve program logic in realtime, as an online learning method to deal with the highly dynamic context in smart phone personalisation. We introduce the concept of collaborative smart phone personalisation through the GP Island Model, in order to exploit shared context among co-located phone users and reduce convergence time. We implement these concepts on real smartphones to demonstrate the capability of personalisation through GP and to explore the benefits of the Island Model. Our empirical evaluations on two example applications confirm that the Island Model can reduce convergence time by up to two-thirds over standalone GP personalisation.
labels: cs.CY, cs.NE
__index_level_0__: 35,281
2206.01730
On the complexity of nonsmooth automatic differentiation
Using the notion of conservative gradient, we provide a simple model to estimate the computational costs of the backward and forward modes of algorithmic differentiation for a wide class of nonsmooth programs. The overhead complexity of the backward mode turns out to be independent of the dimension when using programs with locally Lipschitz semi-algebraic or definable elementary functions. This considerably extends Baur-Strassen's smooth cheap gradient principle. We illustrate our results by establishing fast backpropagation results of conservative gradients through feedforward neural networks with standard activation and loss functions. Nonsmooth backpropagation's cheapness contrasts with concurrent forward approaches, which have, to this day, dimensional-dependent worst-case overhead estimates. We provide further results suggesting the superiority of backward propagation of conservative gradients. Indeed, we relate the complexity of computing a large number of directional derivatives to that of matrix multiplication, and we show that finding two subgradients in the Clarke subdifferential of a function is an NP-hard problem.
labels: cs.AI, cs.LG, Other
__index_level_0__: 300,581
1701.07955
Statistical Analysis on Bangla Newspaper Data to Extract Trending Topic and Visualize Its Change Over Time
The trending topics of newspapers are an indicator of the situation of a country and also a way to evaluate a particular newspaper. This paper presents a model describing several techniques to select trending topics from Bangla newspapers. Topics that are discussed more frequently than others will be marked, and we demonstrate how a very popular topic loses its importance over time while another topic takes its place. Data from two popular Bangla newspapers, with date and time, were collected. Statistical analysis was performed on these data after preprocessing, and popular, frequently used keywords were extracted from the stream of Bangla keywords. With enough data, the model can also cluster news trends by category, or list news trends on a daily or weekly basis, and patterns can be found in the trends themselves. Comparison among past news trends of Bangla newspapers gives a visualization of the situation in Bangladesh, which is helpful for predicting future trending topics of Bangla newspapers.
labels: cs.IR, cs.CL
__index_level_0__: 67,376
2208.07097
Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task
The adoption of pre-trained language models in task-oriented dialogue systems has resulted in significant enhancements of their text generation abilities. However, these architectures are slow to use because of the large number of trainable parameters and can sometimes fail to generate diverse responses. To address these limitations, we propose two models with auxiliary tasks for response selection - (1) distinguishing distractors from ground truth responses and (2) distinguishing synthetic responses from ground truth labels. They achieve state-of-the-art results on the MultiWOZ 2.1 dataset with combined scores of 107.5 and 108.3 and outperform a baseline with three times more parameters. We publish reproducible code and checkpoints and discuss the effects of applying auxiliary tasks to T5-based architectures.
labels: cs.AI, cs.CL
__index_level_0__: 312,936
2409.07808
FedHide: Federated Learning by Hiding in the Neighbors
We propose a prototype-based federated learning method designed for embedding networks in classification or verification tasks. Our focus is on scenarios where each client has data from a single class. The main challenge is to develop an embedding network that can distinguish between different classes while adhering to privacy constraints. Sharing true class prototypes with the server or other clients could potentially compromise sensitive information. To tackle this issue, we propose a proxy class prototype that will be shared among clients instead of the true class prototype. Our approach generates proxy class prototypes by linearly combining them with their nearest neighbors. This technique conceals the true class prototype while enabling clients to learn discriminative embedding networks. We compare our method to alternative techniques, such as adding random Gaussian noise and using random selection with cosine similarity constraints. Furthermore, we evaluate the robustness of our approach against gradient inversion attacks and introduce a measure for prototype leakage. This measure quantifies the extent of private information revealed when sharing the proposed proxy class prototype. Moreover, we provide a theoretical analysis of the convergence properties of our approach. Our proposed method for federated learning from scratch demonstrates its effectiveness through empirical results on three benchmark datasets: CIFAR-100, VoxCeleb1, and VGGFace2.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
487,672
2212.05506
FastClass: A Time-Efficient Approach to Weakly-Supervised Text Classification
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data. Recent research shows that keyword-driven methods can achieve state-of-the-art performance on various tasks. However, these methods not only rely on carefully-crafted class descriptions to obtain class-specific keywords but also require a substantial amount of unlabeled data and take a long time to train. This paper proposes FastClass, an efficient weakly-supervised classification approach. It uses dense text representation to retrieve class-relevant documents from an external unlabeled corpus and selects an optimal subset to train a classifier. Compared to keyword-driven methods, our approach is less reliant on initial class descriptions as it no longer needs to expand each class description into a set of class-specific keywords. Experiments on a wide range of classification tasks show that the proposed approach frequently outperforms keyword-driven models in terms of classification accuracy and often enjoys orders-of-magnitude faster training speed.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
335,808
1912.00835
Low Rank Factorization for Compact Multi-Head Self-Attention
Effective representation learning from text has been an active area of research in the fields of NLP and text mining. Attention mechanisms have been at the forefront in order to learn contextual sentence representations. Current state-of-the-art approaches for many NLP tasks use large pre-trained language models such as BERT, XLNet and so on for learning representations. These models are based on the Transformer architecture that involves recurrent blocks of computation consisting of multi-head self-attention and feedforward networks. One of the major bottlenecks largely contributing to the computational complexity of the Transformer models is the self-attention layer, which is both computationally expensive and parameter intensive. In this work, we introduce a novel multi-head self-attention mechanism operating on GRUs that is shown to be computationally cheaper and more parameter efficient than the self-attention mechanism proposed in Transformers for text classification tasks. The efficiency of our approach mainly stems from two optimizations: 1) we use low-rank matrix factorization of the affinity matrix to efficiently obtain multiple attention distributions instead of having separate parameters for each head; 2) attention scores are obtained by querying a global context vector instead of densely querying all the words in the sentence. We evaluate the performance of the proposed model on tasks such as sentiment analysis from movie reviews, predicting business ratings from reviews and classifying news articles into topics. We find that the proposed approach matches or outperforms a series of strong baselines and is more parameter efficient than comparable multi-head approaches. We also perform qualitative analyses to verify that the proposed approach is interpretable and captures context-dependent word importance.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
155,907
1405.2597
Decoding and Computing Algorithms for Linear Superposition LDPC Coded Systems
This paper is concerned with linear superposition systems in which all components of the superimposed signal are coded with an identical binary low-density parity-check (LDPC) code.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
32,999
2212.02992
Sparse Message Passing Network with Feature Integration for Online Multiple Object Tracking
Existing Multiple Object Tracking (MOT) methods design complex architectures for better tracking performance. However, without a proper organization of input information, they still fail to perform tracking robustly and suffer from frequent identity switches. In this paper, we propose two novel methods together with a simple online Message Passing Network (MPN) to address these limitations. First, we explore different integration methods for the graph node and edge embeddings and put forward a new IoU (Intersection over Union) guided function, which improves long-term tracking and handles identity switches. Second, we introduce a hierarchical sampling strategy to construct sparser graphs, which allows the training to focus on more difficult samples. Experimental results demonstrate that a simple online MPN with these two contributions can perform better than many state-of-the-art methods. In addition, our association method generalizes well and can also improve the results of private detection based methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
334,959
1608.07685
KSR: A Semantic Representation of Knowledge Graph within a Novel Unsupervised Paradigm
Knowledge representation is a long-standing and important topic in AI. A variety of models have been proposed for knowledge graph embedding, which projects symbolic entities and relations into continuous vector space. However, most related methods merely focus on fitting the data of the knowledge graph and ignore interpretable semantic expression. Thus, traditional embedding methods are not well suited to applications that require semantic analysis, such as question answering and entity retrieval. To this end, this paper proposes a semantic representation method for knowledge graphs \textbf{(KSR)}, which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of this triple. Extensive experiments show that our model outperforms other state-of-the-art baselines substantially.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
60,253
2010.13938
Neural Unsigned Distance Fields for Implicit Function Learning
In this work we target a learnable output representation that allows continuous, high resolution outputs of arbitrary shape. Recent works represent 3D surfaces implicitly with a Neural Network, thereby breaking previous barriers in resolution, and ability to represent diverse topologies. However, neural implicit representations are limited to closed surfaces, which divide the space into inside and outside. Many real-world objects such as walls of a scene scanned by a sensor, clothing, or a car with inner structures are not closed. This constitutes a significant barrier, in terms of data pre-processing (objects need to be artificially closed creating artifacts), and the ability to output open surfaces. In this work, we propose Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes given sparse point clouds. NDF represent surfaces at high resolutions as prior implicit models, but do not require closed surface data, and significantly broaden the class of representable shapes in the output. NDF allow extraction of the surface as very dense point clouds and as meshes. We also show that NDF allow for surface normal calculation and can be rendered using a slight modification of sphere tracing. We find NDF can be used for multi-target regression (multiple outputs for one input) with techniques that have been exclusively used for rendering in graphics. Experiments on ShapeNet show that NDF, while simple, is the state of the art, and allows to reconstruct shapes with inner structures, such as the chairs inside a bus. Notably, we show that NDF are not restricted to 3D shapes, and can approximate more general open surfaces such as curves, manifolds, and functions. Code is available for research at https://virtualhumans.mpi-inf.mpg.de/ndf/.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
203,291
2205.06267
Topologically-Aware Deformation Fields for Single-View 3D Reconstruction
We present a framework for learning 3D object shapes and dense cross-object 3D correspondences from just an unaligned category-specific image collection. The 3D shapes are generated implicitly as deformations to a category-specific signed distance field and are learned in an unsupervised manner solely from unaligned image collections and their poses without any 3D supervision. Generally, image collections on the internet contain several intra-category geometric and topological variations, for example, different chairs can have different topologies, which makes the task of joint shape and correspondence estimation much more challenging. Because of this, prior works either focus on learning each 3D object shape individually without modeling cross-instance correspondences or perform joint shape and correspondence estimation on categories with minimal intra-category topological variations. We overcome these restrictions by learning a topologically-aware implicit deformation field that maps a 3D point in the object space to a higher dimensional point in the category-specific canonical space. At inference time, given a single image, we reconstruct the underlying 3D shape by first implicitly deforming each 3D point in the object space to the learned category-specific canonical space using the topologically-aware deformation field and then reconstructing the 3D shape as a canonical signed distance field. Both canonical shape and deformation field are learned end-to-end in an inverse-graphics fashion using a learned recurrent ray marcher (SRN) as a differentiable rendering module. Our approach, dubbed TARS, achieves state-of-the-art reconstruction fidelity on several datasets: ShapeNet, Pascal3D+, CUB, and Pix3D chairs. Result videos and code at https://shivamduggal4.github.io/tars-3D/
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
296,193
1811.08888
Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
We study the problem of training deep neural networks with Rectified Linear Unit (ReLU) activation function using gradient descent and stochastic gradient descent. In particular, we study the binary classification problem and show that for a broad family of loss functions, with proper random weight initialization, both gradient descent and stochastic gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under mild assumption on the training data. The key idea of our proof is that Gaussian random initialization followed by (stochastic) gradient descent produces a sequence of iterates that stay inside a small perturbation region centering around the initial weights, in which the empirical loss function of deep ReLU networks enjoys nice local curvature properties that ensure the global convergence of (stochastic) gradient descent. Our theoretical results shed light on understanding the optimization for deep learning, and pave the way for studying the optimization dynamics of training modern deep neural networks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
114,142
1712.01048
Adaptive Quantization for Deep Neural Network
In recent years Deep Neural Networks (DNNs) have been rapidly developed in various applications, together with increasingly complex architectures. The performance gain of these DNNs generally comes with high computational costs and large memory consumption, which may not be affordable for mobile platforms. Deep model quantization can be used for reducing the computation and memory costs of DNNs, and deploying complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work that theoretically analyses the relationship between parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, and achieves a 20-40% higher compression rate compared to equal bit-width quantization at the same model prediction accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
86,034
2301.09474
DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
Real-world data generation often involves complex inter-dependencies among instances, violating the IID-data hypothesis of standard learning paradigms and posing a challenge for uncovering the geometric structures for learning desired instance representations. To this end, we introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states that progressively incorporate other instances' information by their interactions. The diffusion process is constrained by descent criteria w.r.t.~a principled energy function that characterizes the global consistency of instance representations over latent structures. We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs, which gives rise to a new class of neural encoders, dubbed as DIFFormer (diffusion-based Transformers), with two instantiations: a simple version with linear complexity for prohibitive instance numbers, and an advanced version for learning complex structures. Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks, such as node classification on large graphs, semi-supervised image/text classification, and spatial-temporal dynamics prediction.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
341,506
1510.02189
Sparse approximation based on a random overcomplete basis
We discuss a strategy of sparse approximation that is based on the use of an overcomplete basis, and evaluate its performance when a random matrix is used as this basis. A small combination of basis vectors is chosen from a given overcomplete basis, according to a given compression rate, such that they compactly represent the target data with as small a distortion as possible. As a selection method, we study the $\ell_0$- and $\ell_1$-based methods, which employ the exhaustive search and $\ell_1$-norm regularization techniques, respectively. The performance is assessed in terms of the trade-off relation between the representation distortion and the compression rate. First, we evaluate the performance analytically in the case that the methods are carried out ideally, using methods of statistical mechanics. Our result clarifies the fact that the $\ell_0$-based method greatly outperforms the $\ell_1$-based one. Second, we examine the practical performances of two well-known algorithms, orthogonal matching pursuit and approximate message passing, when they are used to execute the $\ell_0$- and $\ell_1$-based methods, respectively. Our examination shows that orthogonal matching pursuit achieves a much better performance than the exact execution of the $\ell_1$-based method, as well as approximate message passing. However, regarding the $\ell_0$-based method, there is still room to design more effective greedy algorithms than orthogonal matching pursuit. Finally, we evaluate the performances of the algorithms when they are applied to image data compression.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,696
2006.11653
Towards Understanding Label Smoothing
Label smoothing regularization (LSR) has achieved great success in training deep neural networks by stochastic algorithms such as stochastic gradient descent and its variants. However, the theoretical understanding of its power from the view of optimization is still rare. This study opens the door to a deep understanding of LSR by initiating the analysis. In this paper, we analyze the convergence behaviors of stochastic gradient descent with label smoothing regularization for solving non-convex problems and show that an appropriate LSR can help to speed up the convergence by reducing the variance. More interestingly, we propose a simple yet effective strategy, namely the Two-Stage LAbel smoothing algorithm (TSLA), that uses LSR in the early training epochs and drops it off in the later training epochs. We observe from the improved convergence result of TSLA that it benefits from LSR in the first stage and essentially converges faster in the second stage. To the best of our knowledge, this is the first work for understanding the power of LSR via establishing the convergence complexity of stochastic methods with LSR in non-convex optimization. We empirically demonstrate the effectiveness of the proposed method in comparison with baselines on training ResNet models over benchmark data sets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
183,319
2412.04880
MozzaVID: Mozzarella Volumetric Image Dataset
Influenced by the complexity of volumetric imaging, there is a shortage of established datasets useful for benchmarking volumetric deep-learning models. As a consequence, new and existing models are not easily comparable, limiting the development of architectures optimized specifically for volumetric data. To counteract this trend, we introduce MozzaVID - a large, clean, and versatile volumetric classification dataset. Our dataset contains X-ray computed tomography (CT) images of mozzarella microstructure and enables the classification of 25 cheese types and 149 cheese samples. We provide data in three different resolutions, resulting in three dataset instances containing from 591 to 37,824 images. While being general-purpose, the dataset also facilitates investigating mozzarella structure properties. The structure of food directly affects its functional properties and thus its consumption experience. Understanding food structure helps tune the production and mimicking it enables sustainable alternatives to animal-derived food products. The complex and disordered nature of food structures brings a unique challenge, where a choice of appropriate imaging method, scale, and sample size is not trivial. With this dataset we aim to address these complexities, contributing to more robust structural analysis models. The dataset can be downloaded from: https://archive.compute.dtu.dk/files/public/projects/MozzaVID/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
514,612
2107.03969
Study of Block Diagonalization Precoding and Power Allocation for Multiple-Antenna Systems with Coarsely Quantized Signals
In this work, we present block diagonalization and power allocation algorithms for large-scale multiple-antenna systems with coarsely quantized signals. In particular, we develop Coarse Quantization-Aware Block Diagonalization ${\scriptstyle\mathrm{\left(CQA-BD\right)}}$ and Coarse Quantization-Aware Regularized Block Diagonalization ${\scriptstyle\mathrm{\left(CQA-RBD\right)}}$ precoding algorithms that employ the Bussgang decomposition and can mitigate the effects of low-resolution signals and interference. Moreover, we also devise the Coarse Quantization-Aware Most Advantageous Allocation Strategy ${\scriptstyle\mathrm{\left(CQA-MAAS\right)}}$ power allocation algorithm to improve the sum rate of precoders that operate with low-resolution signals. An analysis of the sum-rate performance is carried out along with computational complexity and power consumption studies of the proposed and existing techniques. Simulation results illustrate the performance of the proposed ${\scriptstyle\mathrm{CQA-BD}}$ and ${\scriptstyle\mathrm{CQA-RBD}}$ precoding algorithms, and the proposed ${\scriptstyle\mathrm{CQA-MAAS}}$ power allocation strategy against existing approaches.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
245,307
2010.05360
A range characterization of the single-quadrant ADRT
This work characterizes the range of the single-quadrant approximate discrete Radon transform (ADRT) of square images. The characterization follows from a set of linear constraints on the codomain. We show that for data satisfying these constraints, the exact and fast inversion formula [Rim, Appl. Math. Lett. 102 106159, 2020] yields a square image in a stable manner. The range characterization is obtained by first showing that the ADRT is a bijection between images supported on infinite half-strips, then identifying the linear subspaces that stay finitely supported under the inversion formula.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
200,097
1905.07385
Representation Learning on Visual-Symbolic Graphs for Video Understanding
Events in natural videos typically arise from spatio-temporal interactions between actors and objects and involve multiple co-occurring activities and object classes. To capture this rich visual and semantic context, we propose using two graphs: (1) an attributed spatio-temporal visual graph whose nodes correspond to actors and objects and whose edges encode different types of interactions, and (2) a symbolic graph that models semantic relationships. We further propose a graph neural network for refining the representations of actors, objects and their interactions on the resulting hybrid graph. Our model goes beyond current approaches that assume nodes and edges are of the same type, operate on graphs with fixed edge weights and do not use a symbolic graph. In particular, our framework: a) has specialized attention-based message functions for different node and edge types; b) uses visual edge features; c) integrates visual evidence with label relationships; and d) performs global reasoning in the semantic space. Experiments on challenging video understanding tasks, such as temporal action localization on the Charades dataset, show that the proposed method leads to state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,218
1802.03557
Reachable Set Estimation and Verification for Neural Network Models of Nonlinear Dynamic Systems
Neural networks have been widely used to solve complex real-world problems. Due to the complicated, nonlinear, non-convex nature of neural networks, formal safety guarantees for the behaviors of neural network systems will be crucial for their applications in safety-critical systems. In this paper, the reachable set estimation and verification problems for Nonlinear Autoregressive-Moving Average (NARMA) models in the forms of neural networks are addressed. The neural network involved in the model is a class of feed-forward neural networks called Multi-Layer Perceptron (MLP). By partitioning the input set of an MLP into a finite number of cells, a layer-by-layer computation algorithm is developed for reachable set estimation for each individual cell. The union of estimated reachable sets of all cells forms an over-approximation of the reachable set of the MLP. Furthermore, an iterative reachable set estimation algorithm based on reachable set estimation for MLPs is developed for NARMA models. The safety verification can be performed by checking for the existence of intersections of unsafe regions and the estimated reachable set. Several numerical examples are provided to illustrate our approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
90,003
2502.12529
Alternating Regret for Online Convex Optimization
Motivated by alternating learning dynamics in two-player games, a recent work by Cevher et al.(2024) shows that $o(\sqrt{T})$ alternating regret is possible for any $T$-round adversarial Online Linear Optimization (OLO) problem, and left as an open question whether the same is true for general Online Convex Optimization (OCO). We answer this question in the affirmative by showing that the continuous Hedge algorithm achieves $\tilde{\mathcal{O}}(d^{\frac{2}{3}}T^{\frac{1}{3}})$ alternating regret for any adversarial $d$-dimensional OCO problems. We show that this implies an alternating learning dynamic that finds a Nash equilibrium for any convex-concave zero-sum games or a coarse correlated equilibrium for any convex two-player general-sum games at a rate of $\tilde{\mathcal{O}}(d^{\frac{2}{3}}/T^{\frac{2}{3}})$. To further improve the time complexity and/or the dimension dependence, we propose another simple algorithm, Follow-the-Regularized-Leader with a regularizer whose convex conjugate is 3rd-order smooth, for OCO with smooth and self-concordant loss functions (such as linear or quadratic losses). We instantiate our algorithm with different regularizers and show that, for example, when the decision set is the $\ell_2$ ball, our algorithm achieves $\tilde{\mathcal{O}}(T^{\frac{2}{5}})$ alternating regret with no dimension dependence (and a better $\tilde{\mathcal{O}}(T^{\frac{1}{3}})$ bound for quadratic losses). We complement our results by showing some algorithm-specific alternating regret lower bounds, including a somewhat surprising $\Omega(\sqrt{T})$ lower bound for a Regret Matching variant that is widely used in alternating learning dynamics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
534,919
1907.04484
Co-training for Policy Learning
We study the problem of learning sequential decision-making policies in settings with multiple state-action representations. Such settings naturally arise in many domains, such as planning (e.g., multiple integer programming formulations) and various combinatorial optimization problems (e.g., those with both integer programming and graph-based formulations). Inspired by the classical co-training framework for classification, we study the problem of co-training for policy learning. We present sufficient conditions under which learning from two views can improve upon learning from a single view alone. Motivated by these theoretical insights, we present a meta-algorithm for co-training for sequential decision making. Our framework is compatible with both reinforcement learning and imitation learning. We validate the effectiveness of our approach across a wide range of tasks, including discrete/continuous control and combinatorial optimization.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
138,120
2307.08106
Polarization Multi-Image Synthesis with Birefringent Metasurfaces
Optical metasurfaces composed of precisely engineered nanostructures have gained significant attention for their ability to manipulate light and implement distinct functionalities based on the properties of the incident field. Computational imaging systems have started harnessing this capability to produce sets of coded measurements that benefit certain tasks when paired with digital post-processing. Inspired by these works, we introduce a new system that uses a birefringent metasurface with a polarizer-mosaicked photosensor to capture four optically-coded measurements in a single exposure. We apply this system to the task of incoherent opto-electronic filtering, where digital spatial-filtering operations are replaced by simpler, per-pixel sums across the four polarization channels, independent of the spatial filter size. In contrast to previous work on incoherent opto-electronic filtering that can realize only one spatial filter, our approach can realize a continuous family of filters from a single capture, with filters being selected from the family by adjusting the post-capture digital summation weights. To find a metasurface that can realize a set of user-specified spatial filters, we introduce a form of gradient descent with a novel regularizer that encourages light efficiency and a high signal-to-noise ratio. We demonstrate several examples in simulation and with fabricated prototypes, including some with spatial filters that have prescribed variations with respect to depth and wavelength. Visit the Project Page at https://deanhazineh.github.io/publications/Multi_Image_Synthesis/MIS_Home.html
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
379,674
2112.08605
Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection
Domain adaptive object detection (DAOD) aims to improve the generalization ability of detectors when the training and test data are from different domains. Considering the significant domain gap, some typical methods, e.g., CycleGAN-based methods, adopt the intermediate domain to bridge the source and target domains progressively. However, the CycleGAN-based intermediate domain lacks pixel- or instance-level supervision for object detection, which leads to semantic differences. To address this problem, in this paper, we introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations. In this way, we can obtain a series of augmented data as the intermediate domain. Concretely, we propose a two-stage optimization framework. In the first stage, we utilize all the original and augmented source data to train an object detector. In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency. And a teacher model optimized using Mean Teacher is used to further revise the pseudo labels. In the experiment, we evaluate our method on the single- and compound-target DAOD separately, which demonstrates the effectiveness of our method.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
271,856
2403.16861
DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts
The DISL dataset features a collection of $514,506$ unique Solidity files that have been deployed to Ethereum mainnet. It caters to the need for a large and diverse dataset of real-world smart contracts. DISL serves as a resource for developing machine learning systems and for benchmarking software engineering tools designed for smart contracts. By aggregating every verified smart contract from Etherscan up to January 15, 2024, DISL surpasses existing datasets in size and recency.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
441,221
2308.00231
Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks
The modern pervasiveness of large-scale deep neural networks (NNs) is driven by their extraordinary performance on complex problems but is also plagued by their sudden, unexpected, and often catastrophic failures, particularly on challenging scenarios. Existing algorithms that provide risk-awareness to NNs are complex and ad-hoc. Specifically, these methods require significant engineering changes, are often developed only for particular settings, and are not easily composable. Here we present capsa, a framework for extending models with risk-awareness. Capsa provides a methodology for quantifying multiple forms of risk and composing different algorithms together to quantify different risk metrics in parallel. We validate capsa by implementing state-of-the-art uncertainty estimation algorithms within the capsa framework and benchmarking them on complex perception datasets. We demonstrate capsa's ability to easily compose aleatoric uncertainty, epistemic uncertainty, and bias estimation together in a single procedure, and show how this approach provides a comprehensive awareness of NN risk.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
382,863
2004.05530
Exact Volume of Zonotopes Generated by a Matrix Pair
In this article, we define a class of special zonotopes generated by a matrix pair with finite-interval parameters. We discuss the relationship between the volume of these zonotopes and the controllability of one aspect (the volume of the controllable region) of the dynamic systems. We present a corollary and develop an effective recursive method to compute the volume of the special zonotopes. Furthermore, we develop two recursive and analytical volume-computation methods for the finite- and infinite-time controllable regions with real eigenvalues. We conduct numerical experiments to demonstrate the effectiveness of these new volume-computation methods for zonotopes and regions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
172,228
2003.11291
A Unified Object Motion and Affinity Model for Online Multi-Object Tracking
Current popular online multi-object tracking (MOT) solutions apply single object trackers (SOTs) to capture object motions, while often requiring an extra affinity network to associate objects, especially for the occluded ones. This brings extra computational overhead due to repetitive feature extraction for SOT and affinity computation. Meanwhile, the model size of the sophisticated affinity network is usually non-trivial. In this paper, we propose a novel MOT framework that unifies object motion and affinity model into a single network, named UMA, in order to learn a compact feature that is discriminative for both object motion and affinity measure. In particular, UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning. Such design brings advantages of improved computation efficiency, low memory requirement and simplified training procedure. In addition, we equip our model with a task-specific attention module, which is used to boost task-aware feature learning. The proposed UMA can be easily trained end-to-end, and is elegant - requiring only one training stage. Experimental results show that it achieves promising performance on several MOT Challenge benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
169,571
1806.02418
Finding Convincing Arguments Using Scalable Bayesian Preference Learning
We introduce a scalable Bayesian preference learning method for identifying convincing arguments in the absence of gold-standard ratings or rankings. In contrast to previous work, we avoid the need for separate methods to perform quality control on training data, predict rankings and perform pairwise classification. Bayesian approaches are an effective solution when faced with sparse or noisy training data, but have not previously been used to identify convincing arguments. One issue is scalability, which we address by developing a stochastic variational inference method for Gaussian process (GP) preference learning. We show how our method can be applied to predict argument convincingness from crowdsourced data, outperforming the previous state-of-the-art, particularly when trained with small amounts of unreliable data. We demonstrate how the Bayesian approach enables more effective active learning, thereby reducing the amount of data required to identify convincing arguments for new users and domains. While word embeddings are principally used with neural networks, our results show that word embeddings in combination with linguistic features also benefit GPs when predicting argument convincingness.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
99,775
2306.09859
MixedTeacher: Knowledge Distillation for fast inference textural anomaly detection
For a very long time, unsupervised learning for anomaly detection has been at the heart of image processing research and a stepping stone for high performance industrial automation process. With the emergence of CNN, several methods have been proposed such as Autoencoders, GAN, deep feature extraction, etc. In this paper, we propose a new method based on the promising concept of knowledge distillation which consists of training a network (the student) on normal samples while considering the output of a larger pretrained network (the teacher). The main contributions of this paper are twofold: First, a reduced student architecture with optimal layer selection is proposed, then a new Student-Teacher architecture with network bias reduction combining two teachers is proposed in order to jointly enhance the performance of anomaly detection and its localization accuracy. The proposed texture anomaly detector has an outstanding capability to detect defects in any texture and a fast inference time compared to the SOTA methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
373,996
2112.07785
Variable Selection and Regularization via Arbitrary Rectangle-range Generalized Elastic Net
We introduce the arbitrary rectangle-range generalized elastic net penalty method, abbreviated to ARGEN, for performing constrained variable selection and regularization in high-dimensional sparse linear models. As a natural extension of the nonnegative elastic net penalty method, ARGEN is proved to have variable selection consistency and estimation consistency under some conditions. The asymptotic behavior in distribution of the ARGEN estimators has been studied. We also propose an algorithm called MU-QP-RR-W-$l_1$ to efficiently solve ARGEN. By conducting a simulation study we show that ARGEN outperforms the elastic net in a number of settings. Finally, an application of S&P 500 index tracking with constraints on the stock allocations is performed to provide general guidance for adapting ARGEN to solve real-world problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
271,582
2205.13216
Encoded Gradients Aggregation against Gradient Leakage in Federated Learning
Federated learning enables isolated clients to train a shared model collaboratively by aggregating the locally-computed gradient updates. However, privacy information could be leaked from uploaded gradients and be exposed to malicious attackers or an honest-but-curious server. Although the additive homomorphic encryption technique guarantees the security of this process, it brings unacceptable computation and communication burdens to FL participants. To mitigate this cost of secure aggregation and maintain the learning performance, we propose a new framework called Encoded Gradient Aggregation (\emph{EGA}). In detail, EGA first encodes local gradient updates into an encoded domain with injected noises in each client before the aggregation in the server. Then, the encoded gradients aggregation results can be recovered for the global model update via a decoding function. This scheme could prevent the raw gradients of a single client from being exposed on the internet and keep them unknown to the server. EGA could provide optimization and communication benefits under different noise levels and defend against gradient leakage. We further provide a theoretical analysis of the approximation error and its impacts on federated optimization. Moreover, EGA is compatible with most federated optimization algorithms. We conduct intensive experiments to evaluate EGA in real-world federated settings, and the results have demonstrated its efficacy.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
298,851
1804.07155
Instance Selection Improves Geometric Mean Accuracy: A Study on Imbalanced Data Classification
A natural way of handling imbalanced data is to attempt to equalise the class frequencies and train the classifier of choice on balanced data. For two-class imbalanced problems, the classification success is typically measured by the geometric mean (GM) of the true positive and true negative rates. Here we prove that GM can be improved upon by instance selection, and give the theoretical conditions for such an improvement. We demonstrate that GM is non-monotonic with respect to the number of retained instances, which discourages systematic instance selection. We also show that balancing the distribution frequencies is inferior to a direct maximisation of GM. To verify our theoretical findings, we carried out an experimental study of 12 instance selection methods for imbalanced data, using 66 standard benchmark data sets. The results reveal possible room for new instance selection methods for imbalanced data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
95,466
2301.10503
In Which Graph Structures Can We Efficiently Find Temporally Disjoint Paths and Walks?
A temporal graph has an edge set that may change over discrete time steps, and a temporal path (or walk) must traverse edges that appear at increasing time steps. Accordingly, two temporal paths (or walks) are temporally disjoint if they do not visit any vertex at the same time. The study of the computational complexity of finding temporally disjoint paths or walks in temporal graphs has recently been initiated by Klobas et al. [IJCAI '21]. This problem is motivated by applications in multi-agent path finding (MAPF), which include robotics, warehouse management, aircraft management, and traffic routing. We extend Klobas et al.'s research by providing parameterized hardness results for very restricted cases, with a focus on structural parameters of the so-called underlying graph. On the positive side, we identify sufficiently simple cases where we can solve the problem efficiently. Our results reveal some surprising differences between the "path version" and the "walk version" (where vertices may be visited multiple times) of the problem, and answer several open questions posed by Klobas et al.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
341,826
2405.14750
Extreme Solar Flare Prediction Using Residual Networks with HMI Magnetograms and Intensitygrams
Solar flares, especially C, M, and X class, pose significant risks to satellite operations, communication systems, and power grids. We present a novel approach for predicting extreme solar flares using HMI intensitygrams and magnetograms. By detecting sunspots from intensitygrams and extracting magnetic field patches from magnetograms, we train a Residual Network (ResNet) to classify extreme class flares. Our model demonstrates high accuracy, offering a robust tool for predicting extreme solar flares and improving space weather forecasting. Additionally, we show that HMI magnetograms provide more useful data for deep learning compared to other SDO AIA images by better capturing features critical for predicting flare magnitudes. This study underscores the importance of identifying magnetic fields in solar flare prediction, marking a significant advancement in solar activity prediction with practical implications for mitigating space weather impacts.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
456,580
1807.09458
Channel Dependent Mutual Information in Index Modulations
Mutual Information is the metric used to perform link adaptation, which makes it possible to achieve rates near capacity. The computation of adaptive transmission modes is achieved by employing the mapping between the Signal to Noise Ratio and the Mutual Information. Due to the high complexity of the computation of the Mutual Information, this process is performed off-line via Monte Carlo simulations, whose results are stored in look-up tables. However, in Index Modulations, such as Spatial Modulation or Polarized Modulation, this is not feasible since the constellation and the Mutual Information are channel dependent, and this metric would have to be computed at each time instant if the channel is time varying. In this paper, we propose different approximations in order to obtain a simple closed-form expression that allows the Mutual Information to be computed at each time instant, thus making link adaptation feasible.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
103,727
2311.11762
MUVO: A Multimodal World Model with Spatial Representations for Autonomous Driving
Learning unsupervised world models for autonomous driving has the potential to improve the reasoning capabilities of today's systems dramatically. However, most work neglects the physical attributes of the world and focuses on sensor data alone. We propose MUVO, a MUltimodal World Model with spatial VOxel representations, to address this challenge. We utilize raw camera and lidar data to learn a sensor-agnostic geometric representation of the world. We demonstrate multimodal future predictions and show that our spatial representation improves the prediction quality of both camera images and lidar point clouds.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
409,065
2004.03096
Is Graph Structure Necessary for Multi-hop Question Answering?
Recently, attempting to model texts as graph structure and introducing graph neural networks to deal with it has become a trend in many NLP research areas. In this paper, we investigate whether the graph structure is necessary for multi-hop question answering. Our analysis is centered on HotpotQA. We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering. We point out that both graph structure and adjacency matrix are task-related prior knowledge, and graph-attention can be considered as a special case of self-attention. Experiments and visualized analysis demonstrate that graph-attention or the entire graph structure can be replaced by self-attention or Transformers.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
171,446
2107.07827
A Theoretical Analysis of Granulometry-based Roughness Measures on Cartosat DEMs
The study of water bodies such as rivers is an important problem in the remote sensing community. A meaningful set of quantitative features reflecting the geophysical properties help us better understand the formation and evolution of rivers. Typically, river sub-basins are analysed using Cartosat Digital Elevation Models (DEMs), obtained at regular time epochs. One of the useful geophysical features of a river sub-basin is that of a roughness measure on DEMs. However, to the best of our knowledge, there is not much literature available on theoretical analysis of roughness measures. In this article, we revisit the roughness measure on DEM data adapted from multiscale granulometries in mathematical morphology, namely multiscale directional granulometric index (MDGI). This measure was classically used to obtain shape-size analysis in greyscale images. In earlier works, MDGIs were introduced to capture the characteristic surficial roughness of a river sub-basin along specific directions. Also, MDGIs can be efficiently computed and are known to be useful features for classification of river sub-basins. In this article, we provide a theoretical analysis of a MDGI. In particular, we characterize non-trivial sufficient conditions on the structure of DEMs under which MDGIs are invariant. These properties are illustrated with some fictitious DEMs. We also provide connections to a discrete derivative of volume of a DEM. Based on these connections, we provide intuition as to why a MDGI is considered a roughness measure. Further, we experimentally illustrate on Lower-Indus, Wardha, and Barmer river sub-basins that the proposed features capture the characteristics of the river sub-basin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
246,541
2102.09298
GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks
Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low resource edge devices. Training QNNs using different levels of precision throughout the network (dynamic quantization) typically achieves superior trade-offs between performance and computational load. However, optimizing the different precision levels of QNNs can be complicated, as the values of the bit allocations are discrete and difficult to differentiate for. Also, adequately accounting for the dependencies between the bit allocation of different layers is not straightforward. To meet these challenges, in this work we propose GradFreeBits: a novel joint optimization scheme for training dynamic QNNs, which alternates between gradient-based optimization for the weights, and gradient-free optimization for the bit allocation. Our method achieves better or on par performance with current state of the art low precision neural networks on CIFAR10/100 and ImageNet classification. Furthermore, our approach can be extended to a variety of other applications involving neural networks used in conjunction with parameters which are difficult to optimize for.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
220,731
1704.02161
ReLayNet: Retinal Layer and Fluid Segmentation of Macular Optical Coherence Tomography using Fully Convolutional Network
Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,393
1802.03800
Drug response prediction by ensemble learning and drug-induced gene expression signatures
Chemotherapeutic response of cancer cells to a given compound is one of the most fundamental pieces of information required to design anti-cancer drugs. Recent advances in producing large drug screens against cancer cell lines provided an opportunity to apply machine learning methods for this purpose. In addition to cytotoxicity databases, a considerable amount of drug-induced gene expression data has also become publicly available. Following this, several methods that exploit omics data were proposed to predict drug activity on cancer cells. However, due to the complexity of cancer drug mechanisms, none of the existing methods are perfect. One possible direction, therefore, is to combine the strengths of both the methods and the databases for improved performance. We demonstrate that integrating a large number of predictions by the proposed method improves the performance for this task. The predictors in the ensemble differ in several aspects such as the method itself, the number of tasks the method considers (multi-task vs. single-task) and the subset of data considered (sub-sampling). We show that all these different aspects contribute to the success of the final ensemble. In addition, we attempt to use the drug screen data together with two novel signatures produced from the drug-induced gene expression profiles of cancer cell lines. Finally, we evaluate the method predictions by in vitro experiments in addition to the tests on data sets. The predictions of the methods, the signatures and the software are available from \url{http://mtan.etu.edu.tr/drug-response-prediction/}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
90,068
2101.05891
A Deep Learning Based Ternary Task Classification System Using Gramian Angular Summation Field in fNIRS Neuroimaging Data
Functional near-infrared spectroscopy (fNIRS) is a non-invasive, economical method used to study cerebral blood flow patterns. These patterns can be used to classify tasks a subject is performing. Currently, most of the classification systems use simple machine learning solutions for the classification of tasks. These conventional machine learning methods, which are easier to implement and interpret, usually suffer from low accuracy and undergo a complex preprocessing phase before network training. The proposed method converts the raw fNIRS time series data into an image using Gramian Angular Summation Field. A Deep Convolutional Neural Network (CNN) based architecture is then used for task classification, including mental arithmetic, motor imagery, and idle state. Further, this method can eliminate the feature selection stage, which affects the traditional classifiers' performance. This system obtained an average classification accuracy of 87.14%, higher than any other method for the dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
215,543
1304.2799
Nested Aggregates in Answer Sets: An Application to a Priori Optimization
We allow representing and reasoning in the presence of nested multiple aggregates over multiple variables and nested multiple aggregates over functions involving multiple variables in answer sets, precisely, in answer set optimization programming and in answer set programming. We show the applicability of the answer set optimization programming with nested multiple aggregates and the answer set programming with nested multiple aggregates to the Probabilistic Traveling Salesman Problem, a fundamental a priori optimization problem in Operations Research.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,772
2404.03962
RaSim: A Range-aware High-fidelity RGB-D Data Simulation Pipeline for Real-world Applications
In robotic vision, a de-facto paradigm is to learn in simulated environments and then transfer to real-world applications, which poses an essential challenge in bridging the sim-to-real domain gap. While mainstream works tackle this problem in the RGB domain, we focus on depth data synthesis and develop a range-aware RGB-D data simulation pipeline (RaSim). In particular, high-fidelity depth data is generated by imitating the imaging principle of real-world sensors. A range-aware rendering strategy is further introduced to enrich data diversity. Extensive experiments show that models trained with RaSim can be directly applied to real-world scenarios without any finetuning and excel at downstream RGB-D perception tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
444,463
2109.13090
Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing
Tremendous progress has been made in sequential processing with the recent advances in recurrent neural networks. However, recurrent architectures face the challenge of exploding/vanishing gradients during training, and require significant computational resources to execute back-propagation through time. Moreover, large models are typically needed for executing complex sequential tasks. To address these challenges, we propose a novel neuron model that has cosine activation with a time varying component for sequential processing. The proposed neuron provides an efficient building block for projecting sequential inputs into spectral domain, which helps to retain long-term dependencies with minimal extra model parameters and computation. A new type of recurrent network architecture, named Oscillatory Fourier Neural Network, based on the proposed neuron is presented and applied to various types of sequential tasks. We demonstrate that recurrent neural network with the proposed neuron model is mathematically equivalent to a simplified form of discrete Fourier transform applied onto periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving the state of the art inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35x reduction in the model size compared to LSTM. The proposed novel RNN architecture is well poised for intelligent sequential processing in resource constrained hardware.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
257,523
2302.09976
Discouraging posterior collapse in hierarchical Variational Autoencoders using context
Hierarchical Variational Autoencoders (VAEs) are among the most popular likelihood-based generative models. There is a consensus that the top-down hierarchical VAEs allow effective learning of deep latent structures and avoid problems like posterior collapse. Here, we show that this is not necessarily the case, and the problem of collapsing posteriors remains. To discourage this issue, we propose a deep hierarchical VAE with a context on top. Specifically, we use a Discrete Cosine Transform to obtain the last latent variable. In a series of experiments, we observe that the proposed modification allows us to achieve better utilization of the latent space and does not harm the model's generative abilities.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
346,642
2303.11888
Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving
In recent times, there has been a growing focus on end-to-end autonomous driving technologies. This technology involves the replacement of the entire driving pipeline with a single neural network, which has a simpler structure and faster inference time. However, while this approach reduces the number of components in the driving pipeline, it also presents challenges related to interpretability and safety. For instance, the trained policy may not always comply with traffic rules, and it is difficult to determine the reason for such misbehavior due to the lack of intermediate outputs. Additionally, the successful implementation of autonomous driving technology heavily depends on the reliable and expedient processing of sensory data to accurately perceive the surrounding environment. In this paper, we provide a penalty-based imitation learning approach combined with cross semantics generation sensor fusion technologies (P-CSG) to efficiently integrate multiple modalities of information and enable the autonomous agent to effectively adhere to traffic regulations. Our model undergoes evaluation within the Town 05 Long benchmark, where we observe a remarkable increase in the driving score by more than 12% when compared to the state-of-the-art (SOTA) model, InterFuser. Notably, our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%. More detailed information, including code-based resources, can be found at https://hk-zh.github.io/p-csg/
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
353,050
2407.11052
Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation
Unsupervised Graph Domain Adaptation (UGDA) involves the transfer of knowledge from a label-rich source graph to an unlabeled target graph under domain discrepancies. Despite the proliferation of methods designed for this emerging task, the lack of standard experimental settings and fair performance comparisons makes it challenging to understand which and when models perform well across different scenarios. To fill this gap, we present the first comprehensive benchmark for unsupervised graph domain adaptation named GDABench, which encompasses 16 algorithms across 5 datasets with 74 adaptation tasks. Through extensive experiments, we observe that the performance of current UGDA models varies significantly across different datasets and adaptation scenarios. Specifically, we recognize that when the source and target graphs face significant distribution shifts, it is imperative to formulate strategies to effectively address and mitigate graph structural shifts. We also find that with appropriate neighbourhood aggregation mechanisms, simple GNN variants can even surpass state-of-the-art UGDA baselines. To facilitate reproducibility, we have developed an easy-to-use library PyGDA for training and evaluating existing UGDA methods, providing a standardized platform in this community. Our source codes and datasets can be found at: https://github.com/pygda-team/pygda.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
473,272
2104.14806
GODIVA: Generating Open-DomaIn Videos from nAtural Descriptions
Generating videos from text is a challenging task due to its high computational requirements for training and infinite possible answers for evaluation. Existing works typically experiment on simple or small datasets, where the generalization ability is quite limited. In this work, we propose GODIVA, an open-domain text-to-video pretrained model that can generate videos from text in an auto-regressive manner using a three-dimensional sparse attention mechanism. We pretrain our model on Howto100M, a large-scale text-video dataset that contains more than 136 million text-video pairs. Experiments show that GODIVA not only can be fine-tuned on downstream video generation tasks, but also has a good zero-shot capability on unseen texts. We also propose a new metric called Relative Matching (RM) to automatically evaluate the video generation quality. Several challenges are listed and discussed as future work.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
232,953
1601.05889
A Review on Recent Active Vibration Control Techniques
Active vibration control has been introduced and used as one of the effective approaches to suppress unwanted vibrations in different systems. Effective performance of each vibration control method is contingent on accurate design and proper dynamics selection of the control unit. These methods have been studied extensively in recent years. Each of these new methods is designed with specific dynamics for a specific system. In this paper, we aim to introduce some of these recent approaches in a brief discussion, and familiarize the readers with these techniques. Engineers who wish to design proper vibration controllers at different scales, from micro- to macro applications, will certainly design a more successful vibration controller if they know more about similar techniques, and they can implement the novelties that other scholars have utilized.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
51,176
1210.6510
A measure of similarity between scientific journals and of diversity of a list of publications
The aim of this note is to propose a definition of scientific diversity and, as a corollary, a measure of the "interdisciplinarity" of collaborations. With respect to previous studies, the proposed approach consists of two steps: first, the definition of similarity between journals; second, these similarities are used to characterize the homogeneity (or, on the contrary, the diversity) of a publication list (for one individual or a team).
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
19,373
2310.12995
Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models
This paper introduces a comprehensive approach for segmenting regions of interest (ROI) in diverse medical imaging datasets, encompassing ultrasound, CT scans, and X-ray images. The proposed method harnesses the capabilities of the YOLOv8 model for approximate boundary box detection across modalities, alongside the Segment Anything Model (SAM) and High Quality (HQ) SAM for fully automatic and precise segmentation. To generate boundary boxes, the YOLOv8 model was trained using a limited set of 100 images and masks from each modality. The results obtained from our approach are extensively computed and analyzed, demonstrating its effectiveness and potential in medical image analysis. Various evaluation metrics, including precision, recall, F1 score, and Dice Score, were employed to quantify the accuracy of the segmentation results. A comparative analysis was conducted to assess the individual and combined performance of the YOLOv8, YOLOv8+SAM, and YOLOv8+HQ-SAM models. The results indicate that the SAM model performs better than the other two models, exhibiting higher segmentation accuracy and overall performance. While HQ-SAM offers potential advantages, its incremental gains over the standard SAM model may not justify the additional computational cost. The YOLOv8+SAM model shows promise for enhancing medical image segmentation and its clinical implications.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
401,239
2404.17835
VANER: Leveraging Large Language Model for Versatile and Adaptive Biomedical Named Entity Recognition
The prevalent solution for BioNER involves using representation learning techniques coupled with sequence labeling. However, such methods are inherently task-specific, demonstrate poor generalizability, and often require a dedicated model for each dataset. To leverage the versatile capabilities of recently remarkable large language models (LLMs), several endeavors have explored generative approaches to entity extraction. Yet, these approaches often fall short of the effectiveness of previous sequence labeling approaches. In this paper, we utilize the open-sourced LLM LLaMA2 as the backbone model and design specific instructions to distinguish between different types of entities and datasets. By combining the LLM's understanding of instructions with sequence labeling techniques, we use a mix of datasets to train a model capable of extracting various types of entities. Given that the backbone LLM lacks specialized medical knowledge, we also integrate external entity knowledge bases and employ instruction tuning to compel the model to densely recognize carefully curated entities. Our model, VANER, trained with a small partition of parameters, significantly outperforms previous LLM-based models and, for the first time as a model based on an LLM, surpasses the majority of conventional state-of-the-art BioNER systems, achieving the highest F1 scores across three datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
450,020
2311.06748
How do Minimum-Norm Shallow Denoisers Look in Function Space?
Neural network (NN) denoisers are an essential building block in many common tasks, ranging from image reconstruction to image generation. However, the success of these models is not well understood from a theoretical perspective. In this paper, we aim to characterize the functions realized by shallow ReLU NN denoisers -- in the common theoretical setting of interpolation (i.e., zero training loss) with a minimal representation cost (i.e., minimal $\ell^2$ norm weights). First, for univariate data, we derive a closed form for the NN denoiser function, find it is contractive toward the clean data points, and prove it generalizes better than the empirical MMSE estimator at a low noise level. Next, for multivariate data, we find the NN denoiser functions in a closed form under various geometric assumptions on the training data: data contained in a low-dimensional subspace, data contained in a union of one-sided rays, or several types of simplexes. These functions decompose into a sum of simple rank-one piecewise linear interpolations aligned with edges and/or faces connecting training samples. We empirically verify this alignment phenomenon on synthetic data and real images.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
407,058
1905.10236
A Research and Strategy of Remote Sensing Image Denoising Algorithms
Most raw data downloaded from satellites is useless, resulting in transmission waste; one solution is to process data directly on satellites and transmit only the processed results to the ground. Image processing is the main data processing on satellites; in this paper, we focus on image denoising, a basic image processing task. There are many high-performance denoising approaches at present; however, most of them rely on advanced computing resources or rich image collections on the ground. Considering the limited computing resources of satellites and the characteristics of remote sensing images, we study these high-performance ground image denoising approaches and compare them in simulation experiments to analyze whether they are suitable for satellites. Based on the analysis results, we propose two feasible image denoising strategies for satellites, built on the TianZhi-1 satellite.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,997
2406.18414
BiTrack: Bidirectional Offline 3D Multi-Object Tracking Using Camera-LiDAR Data
Compared with real-time multi-object tracking (MOT), offline multi-object tracking (OMOT) has the advantage of being able to perform 2D-3D detection fusion, erroneous link correction, and full track optimization, but it must deal with challenges from bounding box misalignment and track evaluation, editing, and refinement. This paper proposes "BiTrack", a 3D OMOT framework that includes modules for 2D-3D detection fusion, initial trajectory generation, and bidirectional trajectory re-optimization to achieve optimal tracking results from camera-LiDAR data. The novelty of this paper is threefold: (1) development of a point-level object registration technique that employs a density-based similarity metric to achieve accurate fusion of 2D-3D detection results; (2) development of a set of data association and track management skills that utilize a vertex-based similarity metric as well as false alarm rejection and track recovery mechanisms to generate reliable bidirectional object trajectories; (3) development of a trajectory re-optimization scheme that re-organizes track fragments of different fidelities in a greedy fashion and refines each trajectory with completion and smoothing techniques. The experimental results on the KITTI dataset demonstrate that BiTrack achieves state-of-the-art performance for 3D OMOT tasks in terms of accuracy and efficiency.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
468,002
2309.06814
Comparative Analysis of Contextual Relation Extraction based on Deep Learning Models
Contextual Relation Extraction (CRE) is mainly used for constructing a knowledge graph with the help of an ontology. It supports various tasks such as semantic search, query answering, and textual entailment. Relation extraction identifies the entities in raw texts and the relations among them. An efficient and accurate CRE system is essential for creating domain knowledge in the biomedical industry. Existing Machine Learning and Natural Language Processing (NLP) techniques are not well suited to efficiently predict complex relations from sentences that contain more than two relations or unspecified entities. In this work, deep learning techniques have been used to identify the appropriate semantic relation based on the context of multiple sentences. Even though various machine learning models have been used for relation extraction, they provide better results only for binary relations, i.e., relations occurring between exactly two entities in a sentence. Machine learning models are also not suited for complex sentences containing words with multiple meanings. To address these issues, hybrid deep learning models have been used to extract relations from complex sentences effectively. This paper presents an analysis of various deep learning models used for relation extraction.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
391,558
2206.13965
Analysis of Individual Conversational Volatility in Tandem Telecollaboration for Second Language Learning
Second language learning can be enabled by tandem collaboration, where students are grouped into video conference calls while learning the native language of other student(s) on the calls. This places students in an online environment where the more outgoing can actively contribute and engage in dialogue, while those more shy and unsure of their second language skills can sit back and coast through the calls. We have built and deployed the L2L system, which records the timings of conversational utterances from all participants in a call. We generate visualisations, including participation rates and timelines for each student in each call, and present these on a dashboard. We have recently developed a measure called personal conversational volatility, capturing how dynamic each student's contribution to the dialogue in each call has been. We present an analysis of conversational volatility measures for a sample of 19 individual English-speaking students from our University who are learning French, across 86 tandem telecollaboration calls over one teaching semester. Our analysis shows there is a need to examine the nature of the interactions and see whether the discussion topics assigned were too difficult for some students, which may have influenced their engagement in some way.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
305,136
2302.12057
ProsAudit, a prosodic benchmark for self-supervised speech models
We present ProsAudit, a benchmark in English to assess structural prosodic knowledge in self-supervised learning (SSL) speech models. It consists of two subtasks, their corresponding metrics, and an evaluation dataset. In the protosyntax task, the model must correctly identify strong versus weak prosodic boundaries. In the lexical task, the model needs to correctly distinguish between pauses inserted between words and within words. We also provide human evaluation scores on this benchmark. We evaluated a series of SSL models and found that they were all able to perform above chance on both tasks, even when evaluated on an unseen language. However, non-native models performed significantly worse than native ones on the lexical task, highlighting the importance of lexical knowledge in this task. We also found a clear effect of size with models trained on more data performing better in the two subtasks.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
347,425
1901.10681
End-to-End Learned Early Classification of Time Series for In-Season Crop Type Mapping
Remote sensing satellites capture the cyclic dynamics of our Planet in regular time intervals recorded in satellite time series data. End-to-end trained deep learning models use this time series data to make predictions at a large scale, for instance, to produce up-to-date crop cover maps. Most time series classification approaches focus on the accuracy of predictions. However, the earliness of the prediction is also of great importance since coming to an early decision can make a crucial difference in time-sensitive applications. In this work, we present an End-to-End Learned Early Classification of Time Series (ELECTS) model that estimates a classification score and a probability of whether sufficient data has been observed to come to an early and still accurate decision. ELECTS is modular: any deep time series classification model can adopt the ELECTS conceptual idea by adding a second prediction head that outputs a probability of stopping the classification. The ELECTS loss function then optimizes the overall model on a balanced objective of earliness and accuracy. Our experiments on four crop classification datasets from Europe and Africa show that ELECTS allows reaching state-of-the-art accuracy while massively reducing the quantity of data to be downloaded, stored, and processed. The source code is available at https://github.com/marccoru/elects.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,083
cs/0406056
P=NP
We claim to resolve the P=?NP problem via a formal argument for P=NP.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
538,249
2402.02005
Topology-Informed Graph Transformer
Transformers have revolutionized performance in Natural Language Processing and Vision, paving the way for their integration with Graph Neural Networks (GNNs). One key challenge in enhancing graph transformers is strengthening the discriminative power of distinguishing isomorphisms of graphs, which plays a crucial role in boosting their predictive performances. To address this challenge, we introduce 'Topology-Informed Graph Transformer (TIGT)', a novel transformer enhancing both discriminative power in detecting graph isomorphisms and the overall performance of Graph Transformers. TIGT consists of four components: a topological positional embedding layer using non-isomorphic universal covers based on cyclic subgraphs of graphs to ensure unique graph representation; a dual-path message-passing layer to explicitly encode topological characteristics throughout the encoder layers; a global attention mechanism; and a graph information layer to recalibrate channel-wise graph features for better feature representation. TIGT outperforms previous Graph Transformers in classifying a synthetic dataset aimed at distinguishing isomorphism classes of graphs. Additionally, mathematical analysis and empirical evaluations highlight our model's competitive edge over state-of-the-art Graph Transformers across various benchmark datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
426,326
2404.04819
Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer
Human-object contact serves as a strong cue to understand how humans physically interact with objects. Nevertheless, it is not widely explored to utilize human-object contact information for the joint reconstruction of 3D human and object from a single image. In this work, we present a novel joint 3D human-object reconstruction method (CONTHO) that effectively exploits contact information between humans and objects. There are two core designs in our system: 1) 3D-guided contact estimation and 2) contact-based 3D human and object refinement. First, for accurate human-object contact estimation, CONTHO initially reconstructs 3D humans and objects and utilizes them as explicit 3D guidance for contact estimation. Second, to refine the initial reconstructions of 3D human and object, we propose a novel contact-based refinement Transformer that effectively aggregates human features and object features based on the estimated human-object contact. The proposed contact-based refinement prevents the learning of erroneous correlation between human and object, which enables accurate 3D reconstruction. As a result, our CONTHO achieves state-of-the-art performance in both human-object contact estimation and joint reconstruction of 3D human and object. The code is publicly available at https://github.com/dqj5182/CONTHO_RELEASE.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
444,803
1301.0148
Markov Chain Order estimation with Conditional Mutual Information
We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of $K$ symbols, we define CMI of order $m$, $I_c(m)$, as the mutual information of two variables in the chain being $m$ time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of $I_c(m)$, where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing $m$ and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We present the appropriateness of CMI-testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator) and a likelihood ratio test for increasing orders using $\phi$-divergence. The order criterion of CMI-testing turns out to be superior for orders larger than one, but its effectiveness for large orders depends on data availability. In view of the results from the simulations, we interpret the estimated orders by the CMI-testing and the other criteria on genes and intergenic regions of DNA chains.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
20,703
2202.00401
Learning to Speak on Behalf of a Group: Medium Access Control for Sending a Shared Message
The rapid development of Industrial Internet of Things (IIoT) technologies has not only enabled new applications, but also presented new challenges for reliable communication with limited resources. In this work, we define a deceptively simple novel problem that can arise in these scenarios, in which a set of sensors need to communicate a joint observation. This observation is shared by a random subset of the nodes, which need to propagate it to the rest of the network, but coordination is complex: as signaling constraints require the use of random access schemes over shared channels, each sensor needs to implicitly coordinate with others with the same observation, so that at least one of the transmissions gets through without collisions. Unlike existing medium access control schemes, the goal here is not to maximize total goodput, but rather to make sure that the shared message gets through, regardless of the sender. The lack of any signaling, aside from an acknowledgment or lack thereof from the rest of the network, makes determining the optimal collective transmission strategy a significant challenge. We analyze this coordination problem theoretically, prove its hardness, and provide low-complexity solutions. While a low-complexity clustering-based approach is shown to provide near-optimal performance in certain special cases, for the general scenarios, we model each sensor as a multi-armed bandit (MAB), and provide a learning-based solution. Numerical results show the effectiveness of this approach in a variety of cases.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
278,127
2407.08626
RoboMorph: Evolving Robot Morphology using Large Language Models
We introduce RoboMorph, an automated approach for generating and optimizing modular robot designs using large language models (LLMs) and evolutionary algorithms. In this framework, we represent each robot design as a grammar and leverage the capabilities of LLMs to navigate the extensive robot design space, which is traditionally time-consuming and computationally demanding. By integrating automatic prompt design and a reinforcement learning based control algorithm, RoboMorph iteratively improves robot designs through feedback loops. Our experimental results demonstrate that RoboMorph can successfully generate nontrivial robots that are optimized for a single terrain while showcasing improvements in morphology over successive evolutions. Our approach demonstrates the potential of using LLMs for data-driven and modular robot design, providing a promising methodology that can be extended to other domains with similar design frameworks.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
472,237
1602.03854
Estimating the unconfined compressive strength of carbonate rocks using gene expression programming
Conventionally, many researchers have used both regression and black box techniques to estimate the unconfined compressive strength (UCS) of different rocks. The advantage of the regression approach is that it can be used to render a functional relationship between the predictive rock indices and UCS. The advantage of black box techniques is in rendering more accurate predictions. Gene expression programming (GEP) is proposed, in this study, as a robust mathematical alternative for predicting the UCS of carbonate rocks. The two parameters of total porosity and P-wave velocity were selected as predictive indices. The proposed GEP model had the advantage of both traditionally used approaches, proposing a mathematical model, similar to a regression, while keeping the prediction errors as low as the black box methods. The GEP outperformed both artificial neural networks and support vector machines in terms of yielding more accurate estimates of UCS. Both the porosity and the P-wave velocity were sufficient predictive indices for estimating the UCS of the carbonate rocks in this study. Nearly 95% of the observed variation in the UCS values was explained by these two parameters (i.e., R2 = 95%).
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
52,059
2106.04886
Fully differentiable model discovery
Model discovery aims at autonomously discovering differential equations underlying a dataset. Approaches based on Physics Informed Neural Networks (PINNs) have shown great promise, but a fully-differentiable model which explicitly learns the equation has remained elusive. In this paper we propose such an approach by integrating neural network-based surrogates with Sparse Bayesian Learning (SBL). This combination yields a robust model discovery algorithm, which we showcase on various datasets. We then identify a connection with multitask learning, and build on it to construct a Physics Informed Normalizing Flow (PINF). We present a proof-of-concept using a PINF to directly learn a density model from single particle data. Our work expands PINNs to various types of neural network architectures, and connects neural network-based surrogates to the rich field of Bayesian parameter inference.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
239,889
1905.08834
Codebooks from generalized bent $\mathbb{Z}_4$-valued quadratic forms
Codebooks with small inner-product correlation have applications in unitary space-time modulations, multiple description coding over erasure channels, direct spread code division multiple access communications, compressed sensing, and coding theory. It is interesting to construct codebooks (asymptotically) achieving the Welch bound or the Levenshtein bound. This paper presents a class of generalized bent $\mathbb{Z}_4$-valued quadratic forms, which contain the functions of Heng and Yue (Optimal codebooks achieving the Levenshtein bound from generalized bent functions over $\mathbb{Z}_4$. Cryptogr. Commun. 9(1), 41-53, 2017). By using these generalized bent $\mathbb{Z}_4$-valued quadratic forms, we construct optimal codebooks achieving the Levenshtein bound. These codebooks have parameters $(2^{2m}+2^m,2^m)$ and alphabet size $6$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
131,575
2410.14728
Security Threats in Agentic AI System
This research paper explores the privacy and security threats posed to an Agentic AI system with direct access to database systems. Such access introduces significant risks, including unauthorized retrieval of sensitive information, potential exploitation of system vulnerabilities, and misuse of personal or confidential data. The complexity of AI systems combined with their ability to process and analyze large volumes of data increases the chances of data leaks or breaches, which could occur unintentionally or through adversarial manipulation. Furthermore, as AI agents evolve with greater autonomy, their capacity to bypass or exploit security measures becomes a growing concern, heightening the need to address these critical vulnerabilities in agentic systems.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
true
false
false
false
500,174
2009.13154
Balancing thermal comfort datasets: We GAN, but should we?
Thermal comfort assessment for the built environment has become more available to analysts and researchers due to the proliferation of sensors and subjective feedback methods. These data can be used for modeling comfort behavior to support design and operations towards energy efficiency and well-being. By nature, occupant subjective feedback is imbalanced, as indoor conditions are designed for comfort and responses indicating otherwise are less common. This situation creates a scenario for the machine learning workflow where class balancing as a pre-processing step might be valuable for developing high-performance predictive thermal comfort classification models. This paper investigates the various thermal comfort dataset class balancing techniques from the literature and proposes a modified conditional Generative Adversarial Network (GAN), $\texttt{comfortGAN}$, to address this imbalance scenario. These approaches are applied to three publicly available datasets, ranging from 30 and 67 participants to a global collection of thermal comfort datasets, with 1,474; 2,067; and 66,397 data points, respectively. This work finds that a classification model trained on a balanced dataset, comprised of real and generated samples from $\texttt{comfortGAN}$, has higher performance (increase between 4% and 17% in classification accuracy) than other augmentation methods tested. However, when the classes representing discomfort are merged and reduced to three, better imbalanced performance is expected, and the additional increase in performance by $\texttt{comfortGAN}$ shrinks to 1-2%. These results illustrate that class balancing for thermal comfort modeling is beneficial using advanced techniques such as GANs, but its value is diminished in certain scenarios. A discussion is provided to assist potential users in determining in which scenarios this process is useful and which method works best.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
197,655
2009.08283
Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution. The resulting streams of events are of high value by themselves, especially for high speed motion estimation. However, a growing body of work has also focused on the reconstruction of intensity frames from the events, as this allows bridging the gap with the existing literature on appearance- and frame-based computer vision. Recent work has mostly approached this problem using neural networks trained with synthetic, ground-truth data. In this work we approach, for the first time, the intensity reconstruction problem from a self-supervised learning perspective. Our method, which leverages the knowledge of the inner workings of event cameras, combines estimated optical flow and the event-based photometric constancy to train neural networks without the need for any ground-truth or synthetic data. Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art. Additionally, we propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,192
2011.07370
Locomotion and Control of a Friction-Driven Tripedal Robot
This letter considers control of a radially symmetric tripedal friction-driven robot. The robot features 3 servo motors mounted on a 3-D printed chassis 7 cm from the center of mass and separated 120 degrees. These motors drive limbs, which impart frictional reactive forces on the body. Experimental observations performed on a uniform friction surface validated a mathematical model for robot motion. This model was used to create a gait map, which features instantaneous omni-directional control. We demonstrated line following using live feedback from an overhead tracking camera. Proportional-Integral error compensation performance was compared to a basic position update procedure on a rectangular course. The controller reduced path error by approximately $46\%$. The error compensator is also able to correct for aerodynamic disturbances generated by a high-volume industrial fan with a mean flow speed of $5.5ms^{-1}$, reducing path error by $65\%$ relative to the basic position update procedure.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
206,527
2205.14405
Strengthening Skeletal Action Recognizers via Leveraging Temporal Patterns
Skeleton sequences are compact and lightweight. Numerous skeleton-based action recognizers have been proposed to classify human behaviors. In this work, we aim to incorporate components that are compatible with existing models and further improve their accuracy. To this end, we design two temporal accessories: discrete cosine encoding (DCE) and chronological loss (CRL). DCE facilitates models to analyze motion patterns from the frequency domain and meanwhile alleviates the influence of signal noise. CRL guides networks to explicitly capture the sequence's chronological order. These two components consistently endow many recently-proposed action recognizers with accuracy boosts, achieving new state-of-the-art (SOTA) accuracy on two large datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
299,353
2203.09413
Stability and Risk Bounds of Iterative Hard Thresholding
In this paper, we analyze the generalization performance of the Iterative Hard Thresholding (IHT) algorithm widely used for sparse recovery problems. The parameter estimation and sparsity recovery consistency of IHT has long been known in compressed sensing. From the perspective of statistical learning, another fundamental question is how well the IHT estimation would predict on unseen data. This paper makes progress towards answering this open question by introducing a novel sparse generalization theory for IHT under the notion of algorithmic stability. Our theory reveals that: 1) under natural conditions on the empirical risk function over $n$ samples of dimension $p$, IHT with sparsity level $k$ enjoys an $\mathcal{\tilde O}(n^{-1/2}\sqrt{k\log(n)\log(p)})$ rate of convergence in sparse excess risk; 2) a tighter $\mathcal{\tilde O}(n^{-1/2}\sqrt{\log(n)})$ bound can be established by imposing an additional iteration stability condition on a hypothetical IHT procedure invoked to the population risk; and 3) a fast rate of order $\mathcal{\tilde O}\left(n^{-1}k(\log^3(n)+\log(p))\right)$ can be derived for strongly convex risk function under proper strong-signal conditions. The results have been substantialized to sparse linear regression and sparse logistic regression models to demonstrate the applicability of our theory. Preliminary numerical evidence is provided to confirm our theoretical predictions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
286,145
2501.06485
A Diffusive Data Augmentation Framework for Reconstruction of Complex Network Evolutionary History
The evolutionary processes of complex systems contain critical information regarding their functional characteristics. The generation time of edges provides insights into the historical evolution of various networked complex systems, such as protein-protein interaction networks, ecosystems, and social networks. Recovering these evolutionary processes holds significant scientific value, including aiding in the interpretation of the evolution of protein-protein interaction networks. However, existing methods are capable of predicting the generation times of remaining edges given a partial temporal network but often perform poorly in cross-network prediction tasks. These methods frequently fail in edge generation time recovery tasks for static networks that lack timestamps. In this work, we adopt a comparative paradigm-based framework that fuses multiple networks for training, enabling cross-network learning of the relationship between network structure and edge generation times. Compared to separate training, this approach yields an average accuracy improvement of 16.98%. Furthermore, given the difficulty in collecting temporal networks, we propose a novel diffusion-model-based generation method to produce a large number of temporal networks. By combining real temporal networks with generated ones for training, we achieve an additional average accuracy improvement of 5.46% through joint training.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
524,003
2402.18664
Online disinformation in the 2020 U.S. Election: swing vs. safe states
For U.S. presidential elections, most states use the so-called winner-take-all system, in which the state's presidential electors are awarded to the winning political party in the state after a popular vote phase, regardless of the actual margin of victory. Therefore, election campaigns are especially intense in states where there is no clear indication of which party will win. These states are often referred to as swing states. To measure the impact of such an election law on the campaigns, we analyze the Twitter activity surrounding the 2020 U.S. pre-election debate, with a particular focus on the spread of disinformation. We find that about 88% of the online traffic was associated with swing states. In addition, the sharing of links to unreliable news sources is significantly more prevalent in tweets associated with swing states: in this case, untrustworthy tweets are predominantly generated by automated accounts. Furthermore, we observe that the debate is mostly led by two main communities, one with a predominantly Republican affiliation and the other with accounts of different political orientations. Most of the disinformation comes from the former.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
433,508
2107.04520
Online Adaptation to Label Distribution Shift
Machine learning models often encounter distribution shifts when deployed in the real world. In this paper, we focus on adaptation to label distribution shift in the online setting, where the test-time label distribution is continually changing and the model must dynamically adapt to it without observing the true labels. Leveraging a novel analysis, we show that the lack of true labels does not hinder estimation of the expected test loss, which enables the reduction of online label shift adaptation to conventional online learning. Informed by this observation, we propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD) and derive their regret bounds. We empirically verify our findings under both simulated and real-world label distribution shifts and show that OGD is particularly effective and robust to a variety of challenging label shift scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
245,485