Dataset schema (one record per entry below):
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- 18 boolean label columns (2 classes each), in order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)
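The 18 boolean columns encode a multi-label category assignment per paper. A minimal sketch for decoding one record's flags into label names (the column order is taken from the schema above; the helper name `decode_labels` is hypothetical):

```python
# Order of the 18 boolean label columns, as given in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(flags):
    """Map a record's 18 booleans (in schema order) to the active label names."""
    if len(flags) != len(LABEL_COLUMNS):
        raise ValueError(f"expected {len(LABEL_COLUMNS)} flags, got {len(flags)}")
    return [name for name, on in zip(LABEL_COLUMNS, flags) if on]

# Flags of record 1009.0921 from the listing: only cs.IT and Other are true.
flags = [False] * 18
flags[LABEL_COLUMNS.index("cs.IT")] = True
flags[LABEL_COLUMNS.index("Other")] = True
print(decode_labels(flags))  # → ['cs.IT', 'Other']
```

Because the flags are positional, keeping the column order identical to the schema is the only invariant the decoder relies on.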
1009.0921
An Efficient Retransmission Based on Network Coding with Unicast Flows
Recently, network coding has emerged as a promising technique for reliable transmission over lossy wireless channels. Existing protocols ignore the encoded packets that users already hold during coding and decoding operations, a rule that is expensive and inefficient. This paper studies the impact of encoded packets on reliable unicast network coding through theoretical analysis. In our approach, receivers not only store the encoded packets they overhear but also report this information to their neighbors, so that users can take encoded packets into account in both their coding decisions and decoding operations. Moreover, we propose a redistribution algorithm that maximizes coding opportunities and thus achieves better retransmission efficiency. Finally, theoretical analysis and simulation results for a wheel network illustrate the improvement in retransmission efficiency due to the encoded packets.
Labels: cs.IT, Other
__index_level_0__: 7,485
2306.08370
Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation
Deep learning-based hyperspectral image (HSI) classification and object detection techniques have gained significant attention due to their vital role in image content analysis, interpretation, and wider HSI applications. However, current hyperspectral object detection approaches predominantly emphasize either spectral or spatial information, overlooking the valuable complementary relationship between these two aspects. In this study, we present a novel \textbf{S}pectral-\textbf{S}patial \textbf{A}ggregation (S2ADet) object detector that effectively harnesses the rich spectral and spatial complementary information inherent in hyperspectral images. S2ADet comprises a hyperspectral information decoupling (HID) module, a two-stream feature extraction network, and a one-stage detection head. The HID module processes hyperspectral images by aggregating spectral and spatial information via band selection and principal components analysis, consequently reducing redundancy. Based on the acquired spatial and spectral aggregation information, we propose a feature aggregation two-stream network for interacting spectral-spatial features. Furthermore, to address the limitations of existing databases, we annotate an extensive dataset, designated as HOD3K, containing 3,242 hyperspectral images captured across diverse real-world scenes and encompassing three object classes. These images possess a resolution of 512x256 pixels and cover 16 bands ranging from 470 nm to 620 nm. Comprehensive experiments on two datasets demonstrate that S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results. The demo code and dataset of this work are publicly available at \url{https://github.com/hexiao-cs/S2ADet}.
Labels: cs.CV
__index_level_0__: 373,393
2310.05341
From Question to Exploration: Test-Time Adaptation in Semantic Segmentation?
Test-time adaptation (TTA) aims to adapt a model, initially trained on training data, to test data with potential distribution shifts. Most existing TTA methods focus on classification problems. The pronounced success of classification might lead numerous newcomers and engineers to assume that classic TTA techniques can be directly applied to the more challenging task of semantic segmentation. However, whether this belief holds remains an open question. In this paper, we investigate the applicability of existing classic TTA strategies in semantic segmentation. Our comprehensive results have led to three key observations. First, the classic normalization updating strategy only brings slight performance improvement, and in some cases, it might even adversely affect the results. Even with the application of advanced distribution estimation techniques like batch renormalization, the problem remains unresolved. Second, although the teacher-student scheme does enhance the training stability for segmentation TTA in the presence of noisy pseudo-labels and temporal correlation, it cannot directly result in performance improvement compared to the original model without TTA under complex data distribution. Third, segmentation TTA suffers a severe long-tailed class-imbalance problem, which is substantially more complex than that in TTA for classification. This long-tailed challenge negatively affects segmentation TTA performance, even when the accuracy of pseudo-labels is high. Besides those observations, we find that visual prompt tuning (VisPT) is promising in segmentation TTA and propose a novel method named TTAP. The outstanding performance of TTAP has also been verified. We hope the community can give more attention to this challenging, yet important, segmentation TTA task in the future. The source code is available at: \textit{https://github.com/ycarobot/TTAP}.
Labels: cs.AI, cs.CV
__index_level_0__: 398,102
2104.06924
Evaluation of Unsupervised Entity and Event Salience Estimation
Salience Estimation aims to predict term importance in documents. Due to few existing human-annotated datasets and the subjective notion of salience, previous studies typically generate pseudo-ground truth for evaluation. However, our investigation reveals that the evaluation protocol proposed by prior work is difficult to replicate, leaving few follow-up studies. Moreover, the evaluation process is problematic: the entity linking tool used for entity matching is very noisy, while ignoring event arguments in event evaluation leads to inflated performance. In this work, we propose a light yet practical entity and event salience estimation evaluation protocol, which incorporates the more reliable syntactic dependency parser. Furthermore, we conduct a comprehensive analysis among popular entity and event definition standards, and present our own definition for the Salience Estimation task to reduce noise during the pseudo-ground truth generation process. Additionally, we construct dependency-based heterogeneous graphs to capture the interactions of entities and events. The empirical results show that both baseline methods and the novel GNN method utilizing the heterogeneous graph consistently outperform the previous SOTA model in all proposed metrics.
Labels: cs.CL
__index_level_0__: 230,233
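Because a single paper can carry several flags at once (record 1009.0921 above is both cs.IT and Other), the listing is a multi-label dataset. A minimal sketch for tallying label frequencies across records (the label sets are transcribed from the boolean flags of the first four records above; the variable names are hypothetical):

```python
from collections import Counter

# Label sets decoded from the first four records in the listing.
records = {
    "1009.0921": ["cs.IT", "Other"],
    "2306.08370": ["cs.CV"],
    "2310.05341": ["cs.AI", "cs.CV"],
    "2104.06924": ["cs.CL"],
}

# Count how often each label appears across records; a record with k labels
# contributes to k counters, so totals exceed the number of records.
freq = Counter(label for labels in records.values() for label in labels)
print(freq.most_common())  # cs.CV appears twice in this sample
```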
1511.08310
Sic Transit Gloria Manuscriptum: Two Views of the Aggregate Fate of Ancient Papers
When PageRank began to be used for ranking in Web search, a concern soon arose that older pages have an inherent --- and potentially unfair --- advantage over emerging pages of high quality, because they have had more time to acquire hyperlink citations. Algorithms were then proposed to compensate for this effect. Curiously, in bibliometry, the opposite concern has often been raised: that a growing body of recent papers crowds out older papers, resulting in a collective amnesia in research communities, which potentially leads to reinventions, redundancies, and missed opportunities to connect ideas. A recent paper by Verstak et al. reported experiments on Google Scholar data, which seemed to refute the amnesia, or aging, hypothesis. They claimed that more recently written papers have a larger fraction of outbound citations targeting papers that are older by a fixed number of years, indicating that ancient papers are alive and well-loved and increasingly easily found, thanks in part to Google Scholar. In this paper we show that the full picture is considerably more nuanced. Specifically, the fate of a fixed sample of papers, as they age, is rather different from what Verstak et al.'s study suggests: there is clear and steady abandonment in favor of citations to newer papers. The two apparently contradictory views are reconciled by the realization that, as time passes, the number of papers older than a fixed number of years grows rapidly.
Labels: cs.SI, Other
__index_level_0__: 49,522
2405.07666
New Solutions to Delsarte's Dual Linear Programs
Understanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is by using Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program such that its maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and goes way beyond binary codes. In this work, we provide universal bounds in the framework of association schemes that generalize the Elias-Bassalygo bound, which can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results and we recover known bounds for $q$-ary codes and for constant-weight binary codes. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program which are inspired by the Laplacian approach of Friedman and Tillich instead of using the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.
Labels: cs.IT, Other
__index_level_0__: 453,809
2408.00082
TASI Lectures on Physics for Machine Learning
These notes are based on lectures I gave at TASI 2024 on Physics for Machine Learning. The focus is on neural network theory, organized according to network expressivity, statistics, and dynamics. I present classic results such as the universal approximation theorem and neural network / Gaussian process correspondence, and also more recent results such as the neural tangent kernel, feature learning with the maximal update parameterization, and Kolmogorov-Arnold networks. The exposition on neural network theory emphasizes a field theoretic perspective familiar to theoretical physicists. I elaborate on connections between the two, including a neural network approach to field theory.
Labels: cs.LG
__index_level_0__: 477,697
2204.00720
Shared User Interfaces of Physiological Data: Systematic Review of Social Biofeedback Systems and Contexts in HCI
As an emerging interaction paradigm, physiological computing is increasingly being used to both measure and feed back information about our internal psychophysiological states. While most applications of physiological computing are designed for individual use, recent research has explored how biofeedback can be socially shared between multiple users to augment human-human communication. Reflecting on the empirical progress in this area of study, this paper presents a systematic review of 64 studies to characterize the interaction contexts and effects of social biofeedback systems. Our findings highlight the importance of physio-temporal and social contextual factors surrounding physiological data sharing as well as how it can promote social-emotional competences on three different levels: intrapersonal, interpersonal, and task-focused. We also present the Social Biofeedback Interactions framework to articulate the current physiological-social interaction space. We use this to frame our discussion of the implications and ethical considerations for future research and design of social biofeedback interfaces.
Labels: cs.HC, cs.SI
__index_level_0__: 289,360
2203.03131
Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Recently the prompt-tuning paradigm has attracted significant attention. By only tuning continuous prompts with a frozen pre-trained language model (PLM), prompt-tuning takes a step towards deploying a shared frozen PLM to serve numerous downstream tasks. Although prompt-tuning shows good performance on certain natural language understanding (NLU) tasks, its effectiveness on natural language generation (NLG) tasks is still under-explored. In this paper, we argue that one of the factors hindering the development of prompt-tuning on NLG tasks is the unfamiliar inputs (i.e., inputs are linguistically different from the pretraining corpus). For example, our preliminary exploration reveals a large performance gap between prompt-tuning and fine-tuning when unfamiliar inputs occur frequently in NLG tasks. This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs. Our proposed input-tuning is conceptually simple and empirically powerful. Experimental results on seven NLG tasks demonstrate that input-tuning is significantly and consistently better than prompt-tuning. Furthermore, on three of these tasks, input-tuning can achieve a comparable or even better performance than fine-tuning.
Labels: cs.AI, cs.CL
__index_level_0__: 283,983
2502.04409
Learning low-dimensional representations of ensemble forecast fields using autoencoder-based methods
Large-scale numerical simulations often produce high-dimensional gridded data that is challenging to process for downstream applications. A prime example is numerical weather prediction, where atmospheric processes are modeled using discrete gridded representations of the physical variables and dynamics. Uncertainties are assessed by running the simulations multiple times, yielding ensembles of simulated fields as a high-dimensional stochastic representation of the forecast distribution. The high-dimensionality and large volume of ensemble datasets pose major computing challenges for subsequent forecasting stages. Data-driven dimensionality reduction techniques could help to reduce the data volume before further processing by learning meaningful and compact representations. However, existing dimensionality reduction methods are typically designed for deterministic and single-valued inputs, and thus cannot handle ensemble data from multiple randomized simulations. In this study, we propose novel dimensionality reduction approaches specifically tailored to the format of ensemble forecast fields. We present two alternative frameworks, which yield low-dimensional representations of ensemble forecasts while respecting their probabilistic character. The first approach derives a distribution-based representation of an input ensemble by applying standard dimensionality reduction techniques in a member-by-member fashion and merging the member representations into a joint parametric distribution model. The second approach achieves a similar representation by encoding all members jointly using a tailored variational autoencoder. We evaluate and compare both approaches in a case study using 10 years of temperature and wind speed forecasts over Europe. The approaches preserve key spatial and statistical characteristics of the ensemble and enable probabilistic reconstructions of the forecast fields.
Labels: cs.LG
__index_level_0__: 531,147
2010.00284
Bayesian Policy Search for Stochastic Domains
AI planning can be cast as inference in probabilistic models, and probabilistic programming was shown to be capable of policy search in partially observable domains. Prior work introduces policy search through Markov chain Monte Carlo in deterministic domains, and adapts black-box variational inference to stochastic domains, though not in a strictly Bayesian sense. In this work, we cast policy search in stochastic domains as a Bayesian inference problem and provide a scheme for encoding such problems as nested probabilistic programs. We argue that probabilistic programs for policy search in stochastic domains should involve nested conditioning, and provide an adaptation of Lightweight Metropolis-Hastings (LMH) for robust inference in such programs. We apply the proposed scheme to stochastic domains and show that policies of similar quality are learned, despite a simpler and more general inference algorithm. We believe that the proposed variant of LMH is novel and applicable to a wider class of probabilistic programs with nested conditioning.
Labels: cs.LG
__index_level_0__: 198,237
2205.05198
Reducing Activation Recomputation in Large Transformer Models
Training large transformer models is one of the most important computational challenges of modern AI. In this paper, we show how to significantly accelerate training of large transformer models by reducing activation recomputation. Activation recomputation is commonly used to work around memory capacity constraints. Rather than storing activations for backpropagation, they are traditionally recomputed, which saves memory but adds redundant compute. In this work, we show most of this redundant compute is unnecessary because we can reduce memory consumption sufficiently without it. We present two novel yet very simple techniques: sequence parallelism and selective activation recomputation. In conjunction with tensor parallelism, these techniques almost eliminate the need to recompute activations. We evaluate our approach on language models up to one trillion parameters in scale and show that our method reduces activation memory by 5x, while reducing execution time overhead from activation recomputation by over 90%. For example, when training a 530B parameter GPT-3 style model on 2240 NVIDIA A100 GPUs, we achieve a Model Flops Utilization of 54.2%, which is 29% faster than the 42.1% we achieve using recomputation. Our implementation will be available in both Megatron-LM and NeMo-Megatron.
Labels: cs.LG, cs.CL
__index_level_0__: 295,866
2009.06386
Moment-based Spectrum Sensing Under Generalized Noise Channels
A new spectrum sensing detector is proposed and analytically studied, when it operates under generalized noise channels. Particularly, the McLeish distribution is used to model the underlying noise, which is suitable for both non-Gaussian (impulsive) as well as classical Gaussian noise modeling. The introduced detector adopts a moment-based approach, while it does not require knowledge of the transmit signal or channel fading statistics (i.e., blind detection). Important performance metrics are presented in closed form, such as the false-alarm probability, detection probability and decision threshold. Analytical and simulation results are cross-compared, validating the accuracy of the proposed approach. Finally, it is demonstrated that the proposed approach outperforms the conventional energy detector in the practical case of noise uncertainty, while introducing comparable computational complexity.
Labels: cs.IT
__index_level_0__: 195,623
2206.02345
Anomaly Detection with Test Time Augmentation and Consistency Evaluation
Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-distribution samples. Recent works try to solve the problem using representation learning methods and specific metrics. In this paper, we propose a simple, yet effective post-hoc anomaly detection algorithm named Test Time Augmentation Anomaly Detection (TTA-AD), inspired by a novel observation. Specifically, we observe that in-distribution data enjoy more consistent predictions for their original and augmented versions on a trained network than out-distribution data, which separates in-distribution and out-distribution samples. Experiments on various high-resolution image benchmark datasets demonstrate that TTA-AD achieves comparable or better detection performance under dataset-vs-dataset anomaly detection settings with a 60%-90% running time reduction over existing classifier-based algorithms. We provide empirical verification that the key to TTA-AD lies in the remaining classes between augmented features, which has long been partially ignored by previous works. Additionally, we use RUNS as a surrogate to analyze our algorithm theoretically.
Labels: cs.AI, cs.CV
__index_level_0__: 300,856
2405.13005
Understanding Sarcoidosis Using Large Language Models and Social Media Data
Sarcoidosis is a rare inflammatory disease characterized by the formation of granulomas in various organs. The disease presents diagnostic and treatment challenges due to its diverse manifestations and unpredictable nature. In this study, we employed a Large Language Model (LLM) to analyze sarcoidosis-related discussions on the social media platform Reddit. Our findings underscore the efficacy of LLMs in accurately identifying sarcoidosis-related content. We discovered a wide array of symptoms reported by patients, with fatigue, swollen lymph nodes, and shortness of breath as the most prevalent. Prednisone was the most prescribed medication, while infliximab showed the highest effectiveness in improving prognoses. Notably, our analysis revealed disparities in prognosis based on age and gender, with women and younger patients experiencing good and polarized outcomes, respectively. Furthermore, unsupervised clustering identified three distinct patient subgroups (phenotypes) with unique symptom profiles, prognostic outcomes, and demographic distributions. Finally, sentiment analysis revealed a moderate negative impact on patients' mental health post-diagnosis, particularly among women and younger individuals. Our study represents the first application of LLMs to understand sarcoidosis through social media data. It contributes to understanding the disease by providing data-driven insights into its manifestations, treatments, prognoses, and impact on patients' lives. Our findings have direct implications for improving personalized treatment strategies and enhancing the quality of care for individuals living with sarcoidosis.
Labels: cs.SI, cs.AI, cs.CL
__index_level_0__: 455,738
2405.08597
Risks and Opportunities of Open-Source Generative AI
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. This regulation is likely to put at risk the budding field of open-source generative AI. Using a three-stage framework for Gen AI development (near, mid and long-term), we analyze the risks and opportunities of open-source generative AI models with similar capabilities to the ones currently available (near to mid-term) and with greater capabilities (long-term). We argue that, overall, the benefits of open-source Gen AI outweigh its risks. As such, we encourage the open sourcing of models, training and evaluation data, and provide a set of recommendations and best practices for managing risks associated with open-source generative AI.
Labels: cs.LG
__index_level_0__: 454,151
2307.09437
Grounded Object Centric Learning
The extraction of modular object-centric representations for downstream tasks is an emerging area of research. Learning grounded representations of objects that are guaranteed to be stable and invariant promises robust performance across different tasks and environments. Slot Attention (SA) learns object-centric representations by assigning objects to \textit{slots}, but presupposes a \textit{single} distribution from which all slots are randomly initialised. This results in an inability to learn \textit{specialized} slots which bind to specific object types and remain invariant to identity-preserving changes in object appearance. To address this, we present \emph{\textsc{Co}nditional \textsc{S}lot \textsc{A}ttention} (\textsc{CoSA}) using a novel concept of \emph{Grounded Slot Dictionary} (GSD) inspired by vector quantization. Our proposed GSD comprises (i) canonical object-level property vectors and (ii) parametric Gaussian distributions, which define a prior over the slots. We demonstrate the benefits of our method in multiple downstream tasks such as scene generation, composition, and task adaptation, whilst remaining competitive with SA in popular object discovery benchmarks.
Labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 380,172
2008.00816
Evolving Multi-Resolution Pooling CNN for Monaural Singing Voice Separation
Monaural Singing Voice Separation (MSVS) is a challenging task and has been studied for decades. Deep neural networks (DNNs) are the current state-of-the-art methods for MSVS. However, the existing DNNs are often designed manually, which is time-consuming and error-prone. In addition, the network architectures are usually pre-defined, and not adapted to the training data. To address these issues, we introduce a Neural Architecture Search (NAS) method to the structure design of DNNs for MSVS. Specifically, we propose a new multi-resolution Convolutional Neural Network (CNN) framework for MSVS, namely Multi-Resolution Pooling CNN (MRP-CNN), which uses various-size pooling operators to extract multi-resolution features. Based on the NAS, we then develop an evolving framework, namely Evolving MRP-CNN (E-MRP-CNN), by automatically searching for effective MRP-CNN structures using genetic algorithms, optimized either for a single objective (separation performance only) or for multiple objectives (both separation performance and model complexity). The multi-objective E-MRP-CNN gives a set of Pareto-optimal solutions, each providing a trade-off between separation performance and model complexity. Quantitative and qualitative evaluations on the MIR-1K and DSD100 datasets are used to demonstrate the advantages of the proposed framework over several recent baselines.
Labels: cs.SD, cs.LG
__index_level_0__: 190,121
1711.06606
Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training
To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire because labeled medical images are not usually available due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions and poor standardization. Lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose an alternative framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and hypothesize that clinically-relevant features can be preserved via self-regularization. These domain-adapted images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We test this approach for the notoriously difficult task of depth-estimation from endoscopy. We train a depth estimator on a large dataset of synthetic images generated using an accurate forward model of an endoscope and an anatomically-realistic colon. This network predicts significantly better depths when using synthetic-like domain-adapted images compared to the real images, confirming that the clinically-relevant features of depth are preserved.
Labels: cs.CV
__index_level_0__: 84,811
2203.15052
Learning Minimum-Time Flight in Cluttered Environments
We tackle the problem of minimum-time flight for a quadrotor through a sequence of waypoints in the presence of obstacles while exploiting the full quadrotor dynamics. Early works relied on simplified dynamics or polynomial trajectory representations that did not exploit the full actuator potential of the quadrotor, and, thus, resulted in suboptimal solutions. Recent works can plan minimum-time trajectories; yet, the trajectories are executed with control methods that do not account for obstacles. Thus, a successful execution of such trajectories is prone to errors due to model mismatch and in-flight disturbances. To this end, we leverage deep reinforcement learning and classical topological path planning to train robust neural-network controllers for minimum-time quadrotor flight in cluttered environments. The resulting neural network controller demonstrates substantially better performance of up to 19% over state-of-the-art methods. More importantly, the learned policy solves the planning and control problem simultaneously online to account for disturbances, thus achieving much higher robustness. As such, the presented method achieves a 100% success rate of flying minimum-time policies without collision, while traditional planning and control approaches achieve only 40%. The proposed method is validated in both simulation and the real world, with quadrotor speeds of up to 42 km/h and accelerations of 3.6g.
Labels: cs.RO
__index_level_0__: 288,213
2009.11180
AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law, and extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation including Artificial Intelligence (AI). AI advances in Natural Language Processing (NLP) and Machine Learning (ML) have especially furthered the capabilities of leveraging AI for aiding legal professionals, doing so in ways that are modeled here as CARE, namely Crafting, Assessing, Refining, and Engaging in legal argumentation. In addition to AI-enabled legal argumentation serving to augment human-based lawyering, an aspirational goal of this multi-disciplinary field consists of ultimately achieving autonomously effected human-equivalent legal argumentation. As such, an innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning (AILR) to the maturation of AI and Legal Argumentation (AILA), proffering a new means of gauging progress in this ever-evolving and rigorously sought domain.
Labels: cs.AI, cs.CY
__index_level_0__: 197,097
2302.01486
Xtal2DoS: Attention-based Crystal to Sequence Learning for Density of States Prediction
Modern machine learning techniques have been extensively applied to materials science, especially for property prediction tasks. A majority of these methods address scalar property predictions, while more challenging spectral properties remain less emphasized. We formulate a crystal-to-sequence learning task and propose a novel attention-based learning method, Xtal2DoS, which decodes the sequential representation of the material density of states (DoS) properties by incorporating the learned atomic embeddings through attention networks. Experiments show Xtal2DoS is faster than the existing models, and consistently outperforms other state-of-the-art methods on four metrics for two fundamental spectral properties, phonon and electronic DoS.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
343,615
2107.11359
Rethinking Hard-Parameter Sharing in Multi-Domain Learning
Hard parameter sharing in multi-domain learning (MDL) allows domains to share some of the model parameters to reduce storage cost while improving prediction accuracy. One common sharing practice is to share the bottom layers of a deep neural network among domains while using separate top layers for each domain. In this work, we revisit this common practice via an empirical study on image classification tasks from a diverse set of visual domains and make two surprising observations. (1) Using separate bottom-layer parameters could achieve significantly better performance than the common practice and this phenomenon holds with different experimental settings. (2) A multi-domain model with a small proportion of domain-specific parameters from bottom layers can achieve competitive performance with independent models trained on each domain separately. Our observations suggest adopting the new strategy of using separate bottom-layer parameters as a stronger baseline for model design in MDL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,572
1809.10610
Counterfactual Fairness in Text Classification through Robustness
In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that "Some people are gay" is toxic while "Some people are straight" is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
108,944
1903.00780
Fairness in Recommendation Ranking through Pairwise Comparisons
Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer to encourage improving this metric during model training and thus improve fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
false
false
false
false
true
true
true
false
false
false
false
false
false
true
false
false
false
false
123,099
2003.01871
Semantic sensor fusion: from camera to sparse lidar information
To navigate through urban roads, an automated vehicle must be able to perceive and recognize objects in a three-dimensional environment. A high-level contextual understanding of the surroundings is necessary to plan and execute accurate driving maneuvers. This paper presents an approach to fuse different sensory information, Light Detection and Ranging (lidar) scans and camera images. The output of a convolutional neural network (CNN) is used as a classifier to obtain the labels of the environment. The transference of semantic information between the labelled image and the lidar point cloud is performed in four steps: initially, we use heuristic methods to associate probabilities to all the semantic classes contained in the labelled images. Then, the lidar points are corrected to compensate for the vehicle's motion given the difference between the timestamps of each lidar scan and camera image. In a third step, we calculate the pixel coordinate for the corresponding camera image. In the last step we perform the transfer of semantic information from the heuristic probability images to the lidar frame, while removing the lidar information that is not visible to the camera. We tested our approach in the Usyd Dataset \cite{usyd_dataset}, obtaining qualitative and quantitative results that demonstrate the validity of our probabilistic sensory fusion approach.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
166,782
2202.12183
Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence
NDCG, namely Normalized Discounted Cumulative Gain, is a widely used ranking metric in information retrieval and machine learning. However, efficient and provable stochastic methods for maximizing NDCG are still lacking, especially for deep models. In this paper, we propose a principled approach to optimize NDCG and its top-$K$ variant. First, we formulate a novel compositional optimization problem for optimizing the NDCG surrogate, and a novel bilevel compositional optimization problem for optimizing the top-$K$ NDCG surrogate. Then, we develop efficient stochastic algorithms with provable convergence guarantees for the non-convex objectives. Different from existing NDCG optimization methods, the per-iteration complexity of our algorithms scales with the mini-batch size instead of the number of total items. To improve the effectiveness for deep learning, we further propose practical strategies by using initial warm-up and stop gradient operator. Experimental results on multiple datasets demonstrate that our methods outperform prior ranking approaches in terms of NDCG. To the best of our knowledge, this is the first time that stochastic algorithms are proposed to optimize NDCG with a provable convergence guarantee. Our proposed methods are implemented in the LibAUC library at https://libauc.org/.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
282,140
2403.00994
Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language
We introduce a multi-step reasoning framework using prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes. Grounded in fuzzy-trace theory, which emphasizes the importance of gists of causal coherence in effective health communication, we introduce Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework, to identify gists at-scale. Using RBIC, we systematically extract gists from subreddit discussions opposing COVID-19 health measures (Study 1). We then track how these gists evolve across key events (Study 2) and assess their influence on online engagement (Study 3). Finally, we investigate how the volume of gists is associated with national health trends like vaccine uptake and hospitalizations (Study 4). Our work is the first to empirically link social media linguistic patterns to real-world public health trends, highlighting the potential of prompt-based LLMs in identifying critical online discussion patterns that can form the basis of public health communication strategies.
true
false
false
true
true
false
false
false
true
false
false
false
false
false
false
false
false
false
434,219
2408.13100
Complete Autonomous Robotic Nasopharyngeal Swab System with Evaluation on a Stochastically Moving Phantom Head
The application of autonomous robotics to close-contact healthcare tasks has a clear role for the future due to its potential to reduce infection risks to staff and improve clinical efficiency. Nasopharyngeal (NP) swab sample collection for diagnosing upper-respiratory illnesses is one type of close contact task that is interesting for robotics due to the dexterity requirements and the unobservability of the nasal cavity. We propose a control system that performs the test using a collaborative manipulator arm with an instrumented end-effector to take visual and force measurements, under the scenario that the patient is unrestrained and the tools are general enough to be applied to other close contact tasks. The system employs a visual servo controller to align the swab with the nostrils. A compliant joint velocity controller inserts the swab along a trajectory optimized through a simulation environment, that also reacts to measured forces applied to the swab. Additional subsystems include a fuzzy logic system for detecting when the swab reaches the nasopharynx and a method for detaching the swab and aborting the procedure if safety criteria are violated. The system is evaluated using a second robotic arm that holds a nasal cavity phantom and simulates the natural head motions that could occur during the procedure. Through extensive experiments, we identify controller configurations capable of effectively performing the NP swab test even with significant head motion, which demonstrates the safety and reliability of the system.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
483,007
1611.05190
Driving CDCL Search
The CDCL algorithm is the leading solution adopted by state-of-the-art solvers for SAT, SMT, ASP, and others. Experiments show that the performance of CDCL solvers can be significantly boosted by embedding domain-specific heuristics, especially on large real-world problems. However, a proper integration of such criteria in off-the-shelf CDCL implementations is not obvious. In this paper, we distill the key ingredients that drive the search of CDCL solvers, and propose a general framework for designing and implementing new heuristics. We implemented our strategy in an ASP solver, and we experimented on two industrial domains. On hard problem instances, state-of-the-art implementations fail to find any solution in acceptable time, whereas our implementation is very successful and finds all solutions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
63,972
1806.08468
Personalized Thread Recommendation for MOOC Discussion Forums
Social learning, i.e., students learning from each other through social interactions, has the potential to significantly scale up instruction in online education. In many cases, such as in massive open online courses (MOOCs), social learning is facilitated through discussion forums hosted by course providers. In this paper, we propose a probabilistic model for the process of learners posting on such forums, using point processes. Different from existing works, our method integrates topic modeling of the post text, timescale modeling of the decay in post activity over time, and learner topic interest modeling into a single model, and infers this information from user data. Our method also varies the excitation levels induced by posts according to the thread structure, to reflect typical notification settings in discussion forums. We experimentally validate the proposed model on three real-world MOOC datasets, with the largest one containing up to 6,000 learners making 40,000 posts in 5,000 threads. Results show that our model excels at thread recommendation, achieving significant improvement over a number of baselines, thus showing promise of being able to direct learners to threads that they are interested in more efficiently. Moreover, we demonstrate analytics that our model parameters can provide, such as the timescales of different topic categories in a course.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
101,158
2405.04028
Masked Graph Transformer for Large-Scale Recommendation
Graph Transformers have garnered significant attention for learning graph-structured data, thanks to their superb ability to capture long-range dependencies among nodes. However, the quadratic space and time complexity hinders the scalability of Graph Transformers, particularly for large-scale recommendation. Here we propose an efficient Masked Graph Transformer, named MGFormer, capable of capturing all-pair interactions among nodes with a linear complexity. To achieve this, we treat all user/item nodes as independent tokens, enhance them with positional embeddings, and feed them into a kernelized attention module. Additionally, we incorporate learnable relative degree information to appropriately reweigh the attentions. Experimental results show the superior performance of our MGFormer, even with a single attention layer.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
452,413
2002.03374
Communication Efficient Secret Sharing in the Presence of Malicious Adversary
Consider the communication efficient secret sharing problem. A dealer wants to share a secret with $n$ parties such that any $k\leq n$ parties can reconstruct the secret and any $z<k$ parties eavesdropping on their shares obtain no information about the secret. In addition, a legitimate user contacting any $d$, $k\leq d \leq n$, parties to decode the secret can do so by reading and downloading the minimum amount of information needed. We are interested in communication efficient secret sharing schemes that tolerate the presence of malicious parties actively corrupting their shares and the data delivered to the users. The knowledge of the malicious parties about the secret is restricted to the shares they obtain. We characterize the capacity, i.e. maximum size of the secret that can be shared. We derive the minimum amount of information needed to be read and communicated to a legitimate user to decode the secret from $d$ parties, $k\leq d \leq n$. Error-correcting codes do not achieve capacity in this setting. We construct codes that achieve capacity and achieve minimum read and communication costs for all possible values of $d$. Our codes are based on Staircase codes, previously introduced for communication efficient secret sharing, and on the use of a pairwise hashing scheme used in distributed data storage and network coding settings to detect errors inserted by a limited knowledge adversary.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
163,244
2211.09847
CoLI-Machine Learning Approaches for Code-mixed Language Identification at the Word Level in Kannada-English Texts
The task of automatically identifying a language used in a given text is called Language Identification (LI). India is a multilingual country and many Indians especially youths are comfortable with Hindi and English, in addition to their local languages. Hence, they often use more than one language to post their comments on social media. Texts containing more than one language are called "code-mixed texts" and are a good source of input for LI. Languages in these texts may be mixed at sentence level, word level or even at sub-word level. LI at word level is a sequence labeling problem where each and every word in a sentence is tagged with one of the languages in the predefined set of languages. In order to address word level LI in code-mixed Kannada-English (Kn-En) texts, this work presents i) the construction of code-mixed Kn-En dataset called CoLI-Kenglish dataset, ii) code-mixed Kn-En embedding and iii) learning models using Machine Learning (ML), Deep Learning (DL) and Transfer Learning (TL) approaches. Code-mixed Kn-En texts are extracted from Kannada YouTube video comments to construct CoLI-Kenglish dataset and code-mixed Kn-En embedding. The words in CoLI-Kenglish dataset are grouped into six major categories, namely, "Kannada", "English", "Mixed-language", "Name", "Location" and "Other". The learning models, namely, CoLI-vectors and CoLI-ngrams based on ML, CoLI-BiLSTM based on DL and CoLI-ULMFiT based on TL approaches are built and evaluated using CoLI-Kenglish dataset. The performances of the learning models illustrated the superiority of the CoLI-ngrams model compared to other models, with a macro average F1-score of 0.64. However, the results of all the learning models were quite competitive with each other.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
331,119
2009.06899
Co-evolution of Functional Brain Network at Multiple Scales during Early Infancy
The human brain is organized into hierarchically modular networks facilitating efficient and stable information processing and supporting diverse cognitive processes during the course of development. While the remarkable reconfiguration of functional brain network has been firmly established in early life, all these studies investigated the network development from a "single-scale" perspective, which ignores the richness engendered by its hierarchical nature. To fill this gap, this paper leveraged a longitudinal infant resting-state functional magnetic resonance imaging dataset from birth to 2 years of age, and proposed an advanced methodological framework to delineate the multi-scale reconfiguration of functional brain network during early development. Our proposed framework consists of two parts. The first part developed a novel two-step multi-scale module detection method that could uncover efficient and consistent modular structure for longitudinal dataset from multiple scales in a completely data-driven manner. The second part designed a systematic approach that applied the linear mixed-effect model to four global and nodal module-related metrics to delineate scale-specific age-related changes of network organization. By applying our proposed methodological framework on the collected longitudinal infant dataset, we provided the first evidence that, in the first 2 years of life, the brain functional network co-evolves at different scales, where each scale displays a unique reconfiguration pattern in terms of modular organization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
195,784
2107.03380
RRL: Resnet as representation for Reinforcement Learning
The ability to autonomously learn behaviors via direct interactions in uninstrumented environments can lead to generalist robots capable of enhancing productivity or providing care in unstructured settings like homes. Such uninstrumented settings warrant operations only using the robot's proprioceptive sensors, such as onboard cameras and joint encoders, which can be challenging for policy learning owing to the high dimensionality and partial observability issues. We propose RRL: Resnet as representation for Reinforcement Learning -- a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. In a simulated dexterous manipulation benchmark, where the state-of-the-art methods fail to make significant progress, RRL delivers contact rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs with performance and sample efficiency matching learning directly from the state, even in complex high dimensional domains, is far from obvious.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
245,144
2209.03300
Spach Transformer: Spatial and Channel-wise Transformer Based on Local and Global Self-attentions for PET Image Denoising
Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from low signal-to-noise ratio (SNR). Recently convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capture long-range information. However, the calculation of global MSA for 3D images has high computational costs. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., $^{18}$F-FDG, $^{18}$F-ACBC, $^{18}$F-DCFPyL, and $^{68}$Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures. Our codes are available at https://github.com/sijang/SpachTransformer
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
316,463
2301.06620
Does Spending More Always Ensure Higher Cooperation? An Analysis of Institutional Incentives on Heterogeneous Networks
Humans have developed considerable machinery used at scale to create policies and to distribute incentives, yet we are forever seeking ways in which to improve upon these, our institutions. Especially when funding is limited, it is imperative to optimise spending without sacrificing positive outcomes, a challenge which has often been approached within several areas of social, life and engineering sciences. These studies often neglect the availability of information, cost restraints, or the underlying complex network structures, which define real-world populations. Here, we have extended these models, including the aforementioned concerns, but also tested the robustness of their findings to stochastic social learning paradigms. Akin to real-world decisions on how best to distribute endowments, we study several incentive schemes, which consider information about the overall population, local neighbourhoods, or the level of influence which a cooperative node has in the network, selectively rewarding cooperative behaviour if certain criteria are met. Following a transition towards a more realistic network setting and stochastic behavioural update rule, we found that carelessly promoting cooperators can often lead to their downfall in socially diverse settings. These emergent cyclic patterns not only damage cooperation, but also decimate the budgets of external investors. Our findings highlight the complexity of designing effective and cogent investment policies in socially diverse populations.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
340,686
2404.14811
FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions of participating devices. This paper presents a new Federated Learning with Adjusted leaRning ratE (FLARE) framework to mitigate the impact of the heterogeneity. The key idea is to allow the participating devices to adjust their individual learning rates and local training iterations, adapting to their instantaneous computing powers. The convergence upper bound of FLARE is established rigorously under a general setting with non-convex models in the presence of non-i.i.d. datasets and imbalanced computing powers. By minimizing the upper bound, we further optimize the scheduling of FLARE to exploit the channel heterogeneity. A nested problem structure is revealed to facilitate iteratively allocating the bandwidth with binary search and selecting devices with a new greedy method. A linear problem structure is also identified and a low-complexity linear programming scheduling policy is designed when training models have large Lipschitz constants. Experiments demonstrate that FLARE consistently outperforms the baselines in test accuracy, and converges much faster with the proposed scheduling policy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
448,828
2102.07945
Local Hyper-Flow Diffusion
Recently, hypergraphs have attracted a lot of attention due to their ability to capture complex relations among entities. The insurgence of hypergraphs has resulted in data of increasing size and complexity that exhibit interesting small-scale and local structure, e.g., small-scale communities and localized node-ranking around a given set of seed nodes. Popular and principled ways to capture the local structure are the local hypergraph clustering problem and related seed set expansion problem. In this work, we propose the first local diffusion method that achieves edge-size-independent Cheeger-type guarantee for the problem of local hypergraph clustering while applying to a rich class of higher-order relations that covers many previously studied special cases. Our method is based on a primal-dual optimization formulation where the primal problem has a natural network flow interpretation, and the dual problem has a cut-based interpretation using the $\ell_2$-norm penalty on associated cut-costs. We demonstrate the new technique is significantly better than state-of-the-art methods on both synthetic and real-world data.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
220,283
2403.16124
Enhancing Visual Continual Learning with Language-Guided Supervision
Continual learning (CL) aims to empower models to learn new tasks without forgetting previously acquired knowledge. Most prior works concentrate on the techniques of architectures, replay data, regularization, etc. However, the category name of each class is largely neglected. Existing methods commonly utilize the one-hot labels and randomly initialize the classifier head. We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks. In this paper, we revisit the role of the classifier head within the CL paradigm and replace the classifier with semantic knowledge from pretrained language models (PLMs). Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals during training. Such targets fully consider the semantic correlation between all classes across tasks. Empirical studies show that our approach mitigates forgetting by alleviating representation drifting and facilitating knowledge transfer across tasks. The proposed method is simple to implement and can seamlessly be plugged into existing methods with negligible adjustments. Extensive experiments based on eleven mainstream baselines demonstrate the effectiveness and generalizability of our approach to various protocols. For example, under the class-incremental learning setting on ImageNet-100, our method significantly improves the Top-1 accuracy by 3.2\% to 6.1\% while reducing the forgetting rate by 2.6\% to 13.1\%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
440,880
2004.09316
Degree-targeted cascades in modular, degree-heterogeneous networks
The dynamics of cascading activation, such as rapid changes in public opinion and the outbreak of disease epidemics, have a crucial dependence on the connectivity patterns among the agents. We study cascading dynamics in modular, degree-heterogeneous networks, and consider the impact of intra-module seeding strategy on inter-module spread. Specifically, we establish that although activating the highest-degree nodes is more effective than random selection at growing a cascade locally, there is a critical level of inter-module connectivity required for a cascade to cross from one module to another, irrespective of the seeding strategy. We present an analytical proof of this statement for the case that each module has the same degree distribution and all module pairs have the same inter-module connectivity, while our simulation results suggest its validity for more general situations, including a ring of modules. Interestingly, we find that on a network comprised of two modules, this critical level is primarily determined by the degree distribution of the \emph{alter} module, as opposed to the seed module. Our analytical approach extends a method developed by Gleeson, but is able to capture different seeding strategies using only one dynamical variable per module, namely the conditional exposure probability. Our work shows that the possibility of a global cascade depends sensitively on inter-module connectivity, and less on the intra-module seeding strategy. This suggests, for example, that slight changes to inter-module connectivity can be a feasible intervention strategy to promote or inhibit global cascades.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
173,311
2112.07239
Compensating trajectory bias for unsupervised patient stratification using adversarial recurrent neural networks
Electronic healthcare records are an important source of information which can be used in patient stratification to discover novel disease phenotypes. However, they can be challenging to work with as data is often sparse and irregularly sampled. One approach to solve these limitations is learning dense embeddings that represent individual patient trajectories using a recurrent neural network autoencoder (RNN-AE). This process can be susceptible to unwanted data biases. We show that patient embeddings and clusters using previously proposed RNN-AE models might be impacted by a trajectory bias, meaning that results are dominated by the amount of data contained in each patient's trajectory, instead of clinically relevant details. We investigate this bias on 2 datasets (from different hospitals) and 2 disease areas as well as using different parts of the patient trajectory. Our results using 2 previously published baseline methods indicate a particularly strong bias in case of an event-to-end trajectory. We present a method that can overcome this issue using an adversarial training scheme on top of an RNN-AE. Our results show that our approach can reduce the trajectory bias in all cases.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
271,421
2205.04992
KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
Image-based volumetric humans using pixel-aligned features promise generalization to unseen poses and identities. Prior work leverages global spatial encodings and multi-view geometric consistency to reduce spatial ambiguity. However, global encodings often suffer from overfitting to the distribution of the training data, and it is difficult to learn multi-view consistent reconstruction from sparse views. In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views. One of the key ideas is to encode relative spatial 3D information via sparse 3D keypoints. This approach is robust to the sparsity of viewpoints and cross-dataset domain gap. Our approach outperforms state-of-the-art methods for head reconstruction. On human body reconstruction for unseen subjects, we also achieve performance comparable to prior work that uses a parametric human body model and temporal feature aggregation. Our experiments show that a majority of errors in prior work stem from an inappropriate choice of spatial encoding and thus we suggest a new direction for high-fidelity image-based human modeling. https://markomih.github.io/KeypointNeRF
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
295,809
quant-ph/0610200
Quantum List Decoding of Classical Block Codes of Polynomially Small Rate from Quantumly Corrupted Codewords
Given a classical error-correcting block code, the task of quantum list decoding is to produce from any quantumly corrupted codeword a short list containing all messages whose codewords exhibit high "presence" in the quantumly corrupted codeword. Efficient quantum list decoders have been used to prove a quantum hardcore property of classical codes. However, the code rates of all known families of efficiently quantum list-decodable codes are, unfortunately, too small for other practical applications. To improve those known code rates, we prove that a specific code family of polynomially small code rate over a fixed code alphabet, obtained by concatenating generalized Reed-Solomon codes as outer codes with Hadamard codes as inner codes, has an efficient quantum list-decoding algorithm if its codewords have relatively high codeword presence in a given quantumly corrupted codeword. As an immediate application, we use the quantum list decodability of this code family to solve a certain form of quantum search problems in polynomial time. When the codeword presence becomes smaller, in contrast, we show that the quantum list decodability of generalized Reed-Solomon codes with high confidence is closely related to the efficient solvability of the following two problems: the noisy polynomial interpolation problem and the bounded distance vector problem. Moreover, assuming that NP is not included in BQP, we also prove that no efficient quantum list decoder exists for the generalized Reed-Solomon codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
540,905
2112.03364
Scalable Geometric Deep Learning on Molecular Graphs
Deep learning in molecular and materials sciences is limited by the lack of integration between applied science, artificial intelligence, and high-performance computing. Bottlenecks with respect to the amount of training data, the size and complexity of model architectures, and the scale of the compute infrastructure are all key factors limiting the scaling of deep learning for molecules and materials. Here, we present $\textit{LitMatter}$, a lightweight framework for scaling molecular deep learning methods. We train four graph neural network architectures on over 400 GPUs and investigate the scaling behavior of these methods. Depending on the model architecture, training time speedups up to $60\times$ are seen. Empirical neural scaling relations quantify the model-dependent scaling and enable optimal compute resource allocation and the identification of scalable molecular geometric deep learning model implementations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
270,170
2308.00911
Optimal Sensor Deception to Deviate from an Allowed Itinerary
In this work, we study a class of deception planning problems in which an agent aims to alter a security monitoring system's sensor readings so as to disguise its adversarial itinerary as an allowed itinerary in the environment. The adversarial itinerary set and allowed itinerary set are captured by regular languages. To deviate without being detected, we investigate whether there exists a strategy for the agent to alter the sensor readings, with a minimal cost, such that for any of the paths it takes, the system thinks the agent took a path within the allowed itinerary. Our formulation assumes an offline sensor alteration where the agent determines the sensor alteration strategy, implements it, and then carries out any path in its deviation itinerary. We prove that the problem of solving the optimal sensor alteration is NP-hard, by a reduction from the directed multi-cut problem. Further, we present an exact algorithm based on integer linear programming and demonstrate the correctness and the efficacy of the algorithm in case studies.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
383,071
1810.00360
Improving Bag-of-Visual-Words Towards Effective Facial Expressive Image Classification
The Bag-of-Visual-Words (BoVW) approach has been widely used in recent years for image classification purposes. However, its limitations regarding optimal feature selection, the clustering technique, the lack of spatial organization of the data, and the weighting of visual words are crucial. These factors affect the stability of the model and reduce performance. We propose to develop an algorithm based on BoVW for facial expression analysis which goes beyond those limitations. Thus, the visual codebook is built using the k-Means++ method to avoid poor clustering. To exploit reliable low-level features, we search for the best feature detector that avoids locating a large number of keypoints which do not contribute to the classification process. Then, we propose to compute the relative conjunction matrix in order to preserve the spatial order of the data by coding the relationships among visual words. In addition, a weighting scheme that reflects how important a visual word is with respect to a given image is introduced. We speed up the learning process by using the histogram intersection kernel with a Support Vector Machine to learn a discriminative classifier. The efficiency of the proposed algorithm is compared with the standard bag-of-visual-words method and with the bag-of-visual-words method with spatial pyramid. Extensive experiments on the CK+, the MMI and the JAFFE databases show good average recognition rates. Likewise, the ability to recognize spontaneous and non-basic expressive states is investigated using the DynEmo database.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
109,162
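The pipeline in the abstract above (k-Means++ codebook, per-image word histograms, histogram intersection kernel SVM) can be sketched with scikit-learn on synthetic local descriptors. The spatial conjunction matrix and the word-weighting scheme are omitted, and all data shapes and parameters below are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-count histogram for the image."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def intersection_kernel(A, B):
    """Histogram intersection kernel: sum of element-wise minima."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

# Synthetic "local descriptors": two expression classes drawn around
# different centers (a stand-in for real keypoint descriptors).
def fake_image(cls):
    center = np.zeros(8); center[cls] = 3.0
    return center + rng.normal(size=(30, 8))

train = [(fake_image(c), c) for c in [0, 1] * 20]

# Visual codebook via k-means++ initialization, as in the abstract.
codebook = KMeans(n_clusters=16, init="k-means++", n_init=3,
                  random_state=0).fit(np.vstack([d for d, _ in train]))

X = np.array([bovw_histogram(d, codebook) for d, _ in train])
y = np.array([c for _, c in train])
clf = SVC(kernel=intersection_kernel).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Passing a callable as `kernel` makes `SVC` compute the Gram matrix with the intersection kernel, which is what makes this classifier histogram-aware.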
2309.09582
Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs
Most NLP tasks are modeled as supervised learning and thus require labeled training data to train effective models. However, manually producing such data at sufficient quality and quantity is known to be costly and time-intensive. Current research addresses this bottleneck by exploring a novel paradigm called zero-shot learning via dataset generation. Here, a powerful LLM is prompted with a task description to generate labeled data that can be used to train a downstream NLP model. For instance, an LLM might be prompted to "generate 500 movie reviews with positive overall sentiment, and another 500 with negative sentiment." The generated data could then be used to train a binary sentiment classifier, effectively leveraging an LLM as a teacher to a smaller student model. With this demo, we introduce Fabricator, an open-source Python toolkit for dataset generation. Fabricator implements common dataset generation workflows, supports a wide range of downstream NLP tasks (such as text classification, question answering, and entity recognition), and is integrated with well-known libraries to facilitate quick experimentation. With Fabricator, we aim to support researchers in conducting reproducible dataset generation experiments using LLMs and help practitioners apply this approach to train models for downstream tasks.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
392,670
2303.04381
Automatically Auditing Large Language Models via Discrete Optimization
Auditing large language models for unexpected behaviors is critical to preempt catastrophic deployments, yet remains challenging. In this work, we cast auditing as an optimization problem, where we automatically search for input-output pairs that match a desired target behavior. For example, we might aim to find a non-toxic input that starts with "Barack Obama" that a model maps to a toxic output. This optimization problem is difficult to solve as the set of feasible points is sparse, the space is discrete, and the language models we audit are non-linear and high-dimensional. To combat these challenges, we introduce a discrete optimization algorithm, ARCA, that jointly and efficiently optimizes over inputs and outputs. Our approach automatically uncovers derogatory completions about celebrities (e.g. "Barack Obama is a legalized unborn" -> "child murderer"), produces French inputs that complete to English outputs, and finds inputs that generate a specific name. Our work offers a promising new tool to uncover models' failure-modes before deployment.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
350,067
2312.13528
DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video
Neural Radiance Fields (NeRF), initially developed for static scenes, have inspired many video novel view synthesis techniques. However, the challenge for video view synthesis arises from motion blur, a consequence of object or camera movement during exposure, which hinders the precise synthesis of sharp spatio-temporal views. In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage. Our DyBluRF is the first that handles the novel view synthesis for blurry monocular video with a novel two-stage framework. In the BRI stage, we coarsely reconstruct dynamic 3D scenes and jointly initialize the base ray, which is further used to predict latent sharp rays, using the inaccurate camera pose information from the given blurry frames. In the MDD stage, we introduce a novel Incremental Latent Sharp-rays Prediction (ILSP) approach for the blurry monocular video frames by decomposing the latent sharp rays into global camera motion and local object motion components. We further propose two loss functions for effective geometry regularization and decomposition of static and dynamic scene components without any mask supervision. Experiments show that DyBluRF qualitatively and quantitatively outperforms the SOTA methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
417,339
2309.08302
T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point Clouds
Deep perception models have to reliably cope with an open-world setting of domain shifts induced by different geographic regions, sensor properties, mounting positions, and several other reasons. Since covering all domains with annotated data is technically intractable due to the endless possible variations, researchers focus on unsupervised domain adaptation (UDA) methods that adapt models trained on one (source) domain with annotations available to another (target) domain for which only unannotated data are available. Current predominant methods either leverage semi-supervised approaches, e.g., teacher-student setup, or exploit privileged data, such as other sensor modalities or temporal data consistency. We introduce a novel domain adaptation method that leverages the best of both trends. Our approach combines input data's temporal and cross-sensor geometric consistency with the mean teacher method. Dubbed T-UDA for "temporal UDA", such a combination yields massive performance gains for the task of 3D semantic segmentation of driving scenes. Experiments are conducted on Waymo Open Dataset, nuScenes and SemanticKITTI, for two popular 3D point cloud architectures, Cylinder3D and MinkowskiNet. Our code is publicly available at https://github.com/ctu-vras/T-UDA.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
392,118
2407.10828
Towards Enhanced Classification of Abnormal Lung sound in Multi-breath: A Light Weight Multi-label and Multi-head Attention Classification Method
This study aims to develop an auxiliary diagnostic system for classifying abnormal lung respiratory sounds, enhancing the accuracy of automatic abnormal breath sound classification through an innovative multi-label learning approach and multi-head attention mechanism. Addressing the issue of class imbalance and lack of diversity in existing respiratory sound datasets, our study employs a lightweight and highly accurate model, using a two-dimensional label set to represent multiple respiratory sound characteristics. Our method achieved a 59.2% ICBHI score in the four-category task on the ICBHI2017 dataset, demonstrating its advantages in terms of lightweight and high accuracy. This study not only improves the accuracy of automatic diagnosis of lung respiratory sound abnormalities but also opens new possibilities for clinical applications.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
473,146
1507.03698
Lifting GIS Maps into Strong Geometric Context for Scene Understanding
Contextual information can have a substantial impact on the performance of visual tasks such as semantic segmentation, object detection, and geometric estimation. Data stored in Geographic Information Systems (GIS) offers a rich source of contextual information that has been largely untapped by computer vision. We propose to leverage such information for scene understanding by combining GIS resources with large sets of unorganized photographs using Structure from Motion (SfM) techniques. We present a pipeline to quickly generate strong 3D geometric priors from 2D GIS data using SfM models aligned with minimal user input. Given an image resectioned against this model, we generate robust predictions of depth, surface normals, and semantic labels. We show that the predicted geometry is substantially more accurate than that of other single-image depth estimation methods. We then demonstrate the utility of these contextual constraints for re-scoring pedestrian detections, and use these GIS contextual features alongside object detection score maps to improve a CRF-based semantic segmentation framework, boosting accuracy over baseline models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
45,095
2401.03588
Gate-Level Statistical Timing Analysis: Exact Solutions, Approximations and Algorithms
In this paper, the Statistical Static Timing Analysis (SSTA) is considered within the block-based approach. The statistical model of the logic gate delay propagation is systematically studied and the exact analytical solution is obtained, which is strongly non-Gaussian. The procedure of handling such (non-Gaussian) distributions is described and the corresponding algorithm for the critical path delay is outlined. Finally, the proposed approach is tested and compared with Monte Carlo simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
420,162
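As a minimal numeric companion to the abstract above: the maximum of Gaussian path delays is non-Gaussian (its mean sits above the nominal value), which a small Monte Carlo makes visible. The two-path circuit and all delay values below are invented for illustration, not the paper's model:

```python
import random
import statistics

def mc_critical_delay(paths, sigma=0.1, trials=20000, rng=None):
    """Monte Carlo SSTA: each gate delay is an independent Gaussian around
    its nominal value; the circuit delay is the max over path sums, which
    is why the resulting distribution is non-Gaussian."""
    rng = rng or random.Random(1)
    samples = []
    for _ in range(trials):
        path_delays = [sum(rng.gauss(d, sigma) for d in path) for path in paths]
        samples.append(max(path_delays))
    return statistics.mean(samples), statistics.stdev(samples)

# Two balanced 3-gate paths with identical nominal delay 3.0. The max of two
# i.i.d. Gaussians N(mu, s) has mean mu + s/sqrt(pi) > mu, so the Monte Carlo
# mean lands above the nominal 3.0 even though every gate is centered there.
paths = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
mean, std = mc_critical_delay(paths)
print(f"critical delay ~ {mean:.3f} +/- {std:.3f}")
```

This is the "compared with Monte Carlo simulations" side of the abstract; the analytical solution it derives is what such sampling is checked against.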
2410.22134
ProMoE: Fast MoE-based LLM Serving using Proactive Caching
The promising applications of large language models are often limited by the constrained GPU memory capacity available on edge devices. Mixture-of-Experts (MoE) models help address this issue by activating only a subset of the model's parameters during computation. This approach allows the unused parameters to be offloaded to host memory, thereby reducing the overall GPU memory demand. However, existing cache-based offloading solutions handle cache misses reactively, which significantly impacts system performance. In this paper, we introduce ProMoE, a novel proactive caching system that utilizes intermediate results to predict subsequent expert usage. By proactively fetching experts in advance, ProMoE eliminates passive cache misses, removes loading time from the critical path, and reduces the performance overhead associated with offloading. Our evaluations demonstrate that ProMoE achieves an average speedup of 2.20x (up to 3.21x) and 2.07x (up to 5.02x) in the prefill and decode stages, respectively, compared to existing offloading solutions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
503,516
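The caching idea in the abstract above can be made concrete with a toy sketch. This is not ProMoE's implementation; the LRU policy, the one-step-ahead "perfect predictor", and the access trace are all hypothetical, but they illustrate why prefetching predicted experts removes misses from the decoding critical path:

```python
from collections import OrderedDict

class ExpertCache:
    """Toy LRU cache of MoE experts. Reactive loads count as critical-path
    misses; proactive prefetches (issued ahead of time) do not."""
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.critical_misses = 0

    def _load(self, expert):
        self.cache[expert] = True
        self.cache.move_to_end(expert)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def prefetch(self, predicted):
        for e in predicted:
            if e not in self.cache:
                self._load(e)                # loaded off the critical path

    def access(self, expert):
        if expert not in self.cache:
            self.critical_misses += 1        # reactive load stalls decoding
        self._load(expert)

# Hypothetical expert-usage trace of a decode loop.
trace = [0, 1, 2, 3, 0, 4, 1, 5, 2, 6]

reactive = ExpertCache(capacity=4)
for e in trace:
    reactive.access(e)

proactive = ExpertCache(capacity=4)
for i, e in enumerate(trace):
    proactive.prefetch(trace[i + 1:i + 2])   # assume a perfect next-expert predictor
    proactive.access(e)

print(reactive.critical_misses, proactive.critical_misses)
```

Even with this crude one-step predictor the proactive cache takes far fewer critical-path misses than the reactive one on the same trace.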
2409.00851
Dissecting Temporal Understanding in Text-to-Audio Retrieval
Recent advancements in machine learning have fueled research on multimodal tasks, such as text-to-video and text-to-audio retrieval. These tasks require models to understand the semantic content of video and audio data, including objects and characters. The models also need to learn spatial arrangements and temporal relationships. In this work, we analyse the temporal ordering of sounds, which is an understudied problem in the context of text-to-audio retrieval. In particular, we dissect the temporal understanding capabilities of a state-of-the-art model for text-to-audio retrieval on the AudioCaps and Clotho datasets. Additionally, we introduce a synthetic text-audio dataset that provides a controlled setting for evaluating temporal capabilities of recent models. Lastly, we present a loss function that encourages text-audio models to focus on the temporal ordering of events. Code and data are available at https://www.robots.ox.ac.uk/~vgg/research/audio-retrieval/dtu/.
false
false
true
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
485,097
2408.08925
Retail-GPT: leveraging Retrieval Augmented Generation (RAG) for building E-commerce Chat Assistants
This work presents Retail-GPT, an open-source RAG-based chatbot designed to enhance user engagement in retail e-commerce by guiding users through product recommendations and assisting with cart operations. The system is cross-platform and adaptable to various e-commerce domains, avoiding reliance on specific chat applications or commercial activities. Retail-GPT engages in human-like conversations, interprets user demands, checks product availability, and manages cart operations, aiming to serve as a virtual sales agent and test the viability of such assistants across different retail businesses.
true
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
481,215
2404.00390
Learning truly monotone operators with applications to nonlinear inverse problems
This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss. The proposed method is particularly effective in solving classes of variational problems, specifically monotone inclusion problems, commonly encountered in image processing tasks. The Forward-Backward-Forward (FBF) algorithm is employed to address these problems, offering a solution even when the Lipschitz constant of the neural network is unknown. Notably, the FBF algorithm provides convergence guarantees under the condition that the learned operator is monotone. Building on plug-and-play methodologies, our objective is to apply these newly learned operators to solving non-linear inverse problems. To achieve this, we initially formulate the problem as a variational inclusion problem. Subsequently, we train a monotone neural network to approximate an operator that may not inherently be monotone. Leveraging the FBF algorithm, we then show simulation examples where the non-linear inverse problem is successfully solved.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
442,899
2501.13397
ExLM: Rethinking the Impact of [MASK] Tokens in Masked Language Models
Masked Language Models (MLMs) have achieved remarkable success in many self-supervised representation learning tasks. MLMs are trained by randomly masking portions of the input sequences with [MASK] tokens and learning to reconstruct the original content based on the remaining context. This paper explores the impact of [MASK] tokens on MLMs. Analytical studies show that masking tokens can introduce the corrupted semantics problem, wherein the corrupted context may convey multiple, ambiguous meanings. This problem is also a key factor affecting the performance of MLMs on downstream tasks. Based on these findings, we propose a novel enhanced-context MLM, ExLM. Our approach expands [MASK] tokens in the input context and models the dependencies between these expanded states. This enhancement increases context capacity and enables the model to capture richer semantic information, effectively mitigating the corrupted semantics problem during pre-training. Experimental results demonstrate that ExLM achieves significant performance improvements in both text modeling and SMILES modeling tasks. Further analysis confirms that ExLM enriches semantic representations through context enhancement, and effectively reduces the semantic multimodality commonly observed in MLMs.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
526,670
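To make the [MASK] corruption discussed above concrete, here is a schematic sketch of standard MLM masking plus an ExLM-style expansion of each mask into several slots. The masking rate, the toy sentence, and the expansion factor `k` are illustrative assumptions; this is not the authors' implementation:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, rate=0.15, rng=None):
    """Standard MLM corruption: hide a fraction of tokens behind [MASK]
    and keep their original values as reconstruction targets."""
    rng = rng or random.Random(1)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            corrupted.append(MASK)
            targets[i] = tok         # position -> original token
        else:
            corrupted.append(tok)
    return corrupted, targets

def expand_masks(corrupted, k=2):
    """ExLM-style context expansion (schematic): each [MASK] becomes k
    mask slots, giving the model more capacity at corrupted positions."""
    out = []
    for tok in corrupted:
        out.extend([MASK] * k if tok == MASK else [tok])
    return out

tokens = "the cat sat on the mat and purred softly today".split()
corrupted, targets = mask_tokens(tokens)
expanded = expand_masks(corrupted, k=2)
print(corrupted)
print(expanded, targets)
```

The masked positions are the ones where the "corrupted semantics" ambiguity the abstract describes can arise; the expansion step only illustrates where ExLM adds capacity, not how its dependency modeling works.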
2404.05183
Progressive Alignment with VLM-LLM Feature to Augment Defect Classification for the ASE Dataset
Traditional defect classification approaches face two barriers. (1) Insufficient training data and unstable data quality. Collecting sufficient defective samples is expensive and time-consuming, consequently leading to dataset variance. This makes recognition and learning difficult. (2) Over-dependence on the visual modality. When the image pattern and texture is monotonic for all defect classes in a given dataset, the performance of a conventional AOI system cannot be guaranteed. In scenarios where image quality is compromised due to mechanical failures or when defect information is inherently difficult to discern, the performance of deep models cannot be guaranteed. A main question is, "how to solve those two problems when they occur at the same time?" A feasible strategy is to explore another feature within the dataset and combine an eminent vision-language model (VLM) and large language model (LLM) with their astonishing zero-shot capability. In this work, we first propose the special ASE dataset, which includes rich data descriptions recorded with each image, for defect classification, although its defect features are difficult to learn directly. Second, we present prompting for the VLM-LLM against defect classification with the proposed ASE dataset to activate extra-modality features from images to enhance performance. Then, we design a novel progressive feature alignment (PFA) block to refine image-text features and alleviate the difficulty of alignment under the few-shot scenario. Finally, the proposed cross-modality attention fusion (CMAF) module effectively fuses features from different modalities. Experimental results demonstrate our method's effectiveness over several defect classification methods on the ASE dataset.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
444,972
2003.06068
Snapshot Samplings of the Bitcoin Transaction Network and Analysis of Cryptocurrency Growth
The purpose of this work was to perform a network analysis on the rapidly growing bitcoin transaction network. Using a web-socket API, we collected data on all transactions occurring during a six-hour window. Sender and receiver addresses as well as the amount of bitcoin exchanged were recorded. Graphs were generated, using R and Gephi, in which nodes represent addresses and edges represent the exchange of bitcoin. The six-hour data set was subset into one- and two-hour sampling snapshots of the network. We performed comparisons and analysis on all subsets of the data in an effort to determine the minimum sampling length that represented the network as a whole. Our results suggest that the six-hour sampling was the minimum limit with respect to the sampling time needed to accurately characterize the bitcoin transaction network. Anonymity is a desired feature of the blockchain and bitcoin network; however, it limited our analysis, and the conclusions we drew from our results were mostly inferred. Future work is needed, and being done, to gather more comprehensive data so that the bitcoin transaction network can be better analyzed.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
168,017
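The graph construction described above (addresses as nodes, bitcoin flows as weighted directed edges) can be sketched in a few lines. The snapshot below is a hypothetical stand-in for the web-socket data, not the authors' dataset:

```python
from collections import Counter, defaultdict

def build_tx_graph(transactions):
    """Directed multigraph of bitcoin flows: nodes are addresses, and an
    edge sender -> receiver carries the amount of BTC exchanged."""
    out_edges = defaultdict(list)
    for sender, receiver, amount in transactions:
        out_edges[sender].append((receiver, amount))
    return out_edges

def degree_stats(out_edges):
    """Out-degree per address and total BTC volume in a sampling snapshot."""
    out_deg = Counter({a: len(es) for a, es in out_edges.items()})
    volume = sum(amt for es in out_edges.values() for _, amt in es)
    return out_deg, volume

# Hypothetical snapshot of (sender, receiver, BTC) records.
snapshot = [("A", "B", 0.5), ("A", "C", 1.2), ("B", "C", 0.3),
            ("D", "A", 2.0), ("C", "D", 0.7)]
out_deg, volume = degree_stats(build_tx_graph(snapshot))
print(out_deg.most_common(1), round(volume, 2))
```

Comparing such statistics across one-, two-, and six-hour snapshots is the kind of subset comparison the abstract describes.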
1707.03350
MovePattern: Interactive Framework to Provide Scalable Visualization of Movement Patterns
The rapid growth of movement data sources such as GPS traces, traffic networks and social media have provided analysts with the opportunity to explore collective patterns of geographical movements in a nearly real-time fashion. A fast and interactive visualization framework can help analysts to understand these massive and dynamically changing datasets. However, previous studies on movement visualization either ignore the unique properties of geographical movement or are unable to handle today's massive data. In this paper, we develop MovePattern, a novel framework to 1) efficiently construct a concise multi-level view of movements using a scalable and spatially-aware MapReduce-based approach and 2) present a fast and highly interactive web-based environment which engages vector-based visualization to include on-the-fly customization and the ability to enhance analytical functions by storing metadata for both places and movements. We evaluate the framework using the movements of Twitter users captured from geo-tagged tweets. The experiments confirmed that our framework is able to aggregate close to 180 million movements in a few minutes. In addition, we run a series of stress tests on the front-end of the framework to ensure that simultaneous user queries do not lead to long latency in the user response.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
true
76,848
2502.13407
JL1-CD: A New Benchmark for Remote Sensing Change Detection and a Robust Multi-Teacher Knowledge Distillation Framework
Deep learning has achieved significant success in the field of remote sensing image change detection (CD), yet two major challenges remain: the scarcity of sub-meter, all-inclusive open-source CD datasets, and the difficulty of achieving consistent and satisfactory detection results across images with varying change areas. To address these issues, we introduce the JL1-CD dataset, which contains 5,000 pairs of 512 x 512 pixel images with a resolution of 0.5 to 0.75 meters. Additionally, we propose a multi-teacher knowledge distillation (MTKD) framework for CD. Experimental results on the JL1-CD and SYSU-CD datasets demonstrate that the MTKD framework significantly improves the performance of CD models with various network architectures and parameter sizes, achieving new state-of-the-art results. The code is available at https://github.com/circleLZY/MTKD-CD.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
535,352
1901.01985
Combining Unsupervised and Supervised Learning for Asset Class Failure Prediction in Power Systems
In power systems, an asset class is a group of power equipment that has the same function and shares similar electrical or mechanical characteristics. Predicting failures for different asset classes is critical for electric utilities towards developing cost-effective asset management strategies. Previously, the physical-age-based Weibull distribution has been widely used for failure prediction. However, this mathematical model cannot incorporate asset condition data such as inspection or testing results. As a result, the prediction cannot be very specific and accurate for individual assets. To solve this important problem, this paper proposes a novel and comprehensive data-driven approach based on asset condition data: K-means clustering, as an unsupervised learning method, is used to analyze the inner structure of historical asset condition data and produce the asset conditional ages; logistic regression, as a supervised learning method, takes in both asset physical ages and conditional ages to classify and predict asset statuses. Furthermore, an index called the average aging rate is defined to quantify, track and estimate the relationship between asset physical age and conditional age. This approach was applied to an urban distribution system in West Canada to predict medium-voltage cable failures. Case studies and comparison with the standard Weibull distribution are provided. The proposed approach demonstrates superior performance and practicality for predicting asset class failures in power systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
118,094
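A hedged sketch of the two-stage pipeline the abstract describes, on synthetic data: K-means produces ordered "conditional ages" from condition scores, and logistic regression combines them with physical age to predict failure. All data, cluster counts, and thresholds below are invented for illustration and are not the utility's data or the authors' code:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic asset condition data: inspection scores degrade with age.
n = 300
physical_age = rng.uniform(0, 40, n)
condition = physical_age / 40 + rng.normal(0, 0.15, n)   # inspection/testing score
failed = (physical_age / 40 + condition
          + rng.normal(0, 0.3, n) > 1.4).astype(int)     # toy failure rule

# Unsupervised step: cluster condition scores, then rank the clusters by
# their centers so labels become ordered "conditional ages" 0..3.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(condition.reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())
conditional_age = np.argsort(order)[km.labels_]

# Supervised step: physical age + conditional age -> failure status.
X = np.column_stack([physical_age, conditional_age])
clf = LogisticRegression().fit(X, failed)
print("training accuracy:", clf.score(X, failed))
```

The ranking trick (`np.argsort(order)`) is only there so the cluster labels are monotone in condition severity, matching the "conditional age" idea.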
2011.09270
Respiratory Distress Detection from Telephone Speech using Acoustic and Prosodic Features
With the widespread use of telemedicine services, automatic assessment of health conditions via telephone speech can significantly impact public health. This work summarizes our preliminary findings on automatic detection of respiratory distress using well-known acoustic and prosodic features. Speech samples are collected from de-identified telemedicine phone calls from a healthcare provider in Bangladesh. The recordings include conversational speech samples of patients talking to doctors showing mild or severe respiratory distress or asthma symptoms. We hypothesize that respiratory distress may alter speech features such as voice quality, speaking pattern, loudness, and speech-pause duration. To capture these variations, we utilize a set of well-known acoustic and prosodic features with a Support Vector Machine (SVM) classifier for detecting the presence of respiratory distress. Experimental evaluations are performed using a 3-fold cross-validation scheme, ensuring patient-independent data splits. We obtained an overall accuracy of 86.4\% in detecting respiratory distress from the speech recordings using the acoustic feature set. Correlation analysis reveals that the top-performing features include loudness, voice rate, voice duration, and pause duration.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
207,139
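The patient-independent 3-fold evaluation described above can be sketched with scikit-learn's `GroupKFold`, which guarantees no patient appears in both the train and test folds. The features and labels below are synthetic stand-ins for the acoustic/prosodic features (loudness, voice rate, durations), not the telemedicine data:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: 4 acoustic/prosodic features per call, shifted for
# the "distress" class; several calls per patient.
n, n_patients = 180, 30
patients = rng.integers(0, n_patients, n)
labels = (patients % 2 == 0).astype(int)      # distress assigned per patient
X = rng.normal(size=(n, 4)) + labels[:, None] * 1.5

# Patient-independent 3-fold CV: folds are split by patient, not by sample.
cv = GroupKFold(n_splits=3)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, groups=patients, cv=cv)
print("fold accuracies:", scores.round(3))
```

Splitting by sample instead of by patient would leak a patient's voice into both folds and inflate accuracy, which is exactly what the abstract's "patient-independent" protocol avoids.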
1605.01435
A Fast Lightweight Time-Series Store for IoT Data
With the advent of the Internet-of-Things (IoT), handling large volumes of time-series data has become a growing concern. Data, generated from millions of Internet-connected sensors, will drive new IoT applications and services. A key requirement is the ability to aggregate, preprocess, index, store and analyze data with minimal latency so that time-to-insight can be reduced. In the future, we expect real-time data collection and analysis to be performed both on small devices (e.g., in hubs and appliances) as well in server-based infrastructure. The ability to localize sensitive data to the home, and thus preserve privacy, is a key driver for small-device deployment. In this paper, we present an efficient architecture for time-series data management that provides a high data ingestion rate, while still being sufficiently lightweight that it can be deployed in embedded environments or small virtual machines. Our solution strives to minimize overhead and explores what can be done without complex indexing schemes that typically, for performance reasons, must be held in main memory. We combine a simple in-memory hierarchical index, log-structured store and in-flight sort, with a high-performance data pipeline architecture that is optimized for multicore platforms. We show that our solution is able to handle streaming insertions at over 4 million records per second (on a single x86 server) while still retaining SQL query performance better than or comparable to existing RDBMS.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
55,480
2303.16956
FeDiSa: A Semi-asynchronous Federated Learning Framework for Power System Fault and Cyberattack Discrimination
With growing security and privacy concerns in the Smart Grid domain, intrusion detection on critical energy infrastructure has become a high priority in recent years. To remedy the challenges of privacy preservation and decentralized power zones with strategic data owners, Federated Learning (FL) has contemporarily surfaced as a viable privacy-preserving alternative which enables collaborative training of attack detection models without requiring the sharing of raw data. To address some of the technical challenges associated with conventional synchronous FL, this paper proposes FeDiSa, a novel Semi-asynchronous Federated learning framework for power system faults and cyberattack Discrimination which takes into account communication latency and stragglers. Specifically, we propose a collaborative training of a deep auto-encoder by Supervisory Control and Data Acquisition sub-systems which upload their local model updates to a control centre, which then performs a semi-asynchronous model aggregation to obtain new global model parameters based on a buffer system and a preset cut-off time. Experiments on the proposed framework using publicly available industrial control systems datasets reveal superior attack detection accuracy whilst preserving data confidentiality and minimizing the adverse effects of communication latency and stragglers. Furthermore, we see a 35% improvement in training time, thus validating the robustness of our proposed method.
false
false
false
false
false
false
true
false
false
false
true
false
true
false
false
false
false
true
355,054
0812.1557
To Cooperate, or Not to Cooperate in Imperfectly-Known Fading Channels
In this paper, communication over imperfectly-known fading channels with different degrees of cooperation is studied. The three-node relay channel is considered. It is assumed that communication starts with the network training phase in which the receivers estimate the fading coefficients of their respective channels. In the data transmission phase, amplify-and-forward and decode-and-forward relaying schemes are employed. For different cooperation protocols, achievable rate expressions are obtained. These achievable rate expressions are then used to find the optimal resource allocation strategies. In particular, the fraction of total time or bandwidth that needs to be allocated to the relay for best performance is identified. Under a total power constraint, optimal allocation of power between the source and relay is investigated. Finally, bit energy requirements in the low-power regime are studied.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,763
1602.01569
Unraveling the Rank-One Solution Mystery of Robust MISO Downlink Transmit Optimization: A Verifiable Sufficient Condition via a New Duality Result
This paper concentrates on a robust transmit optimization problem for the multiuser multi-input single-output (MISO) downlink scenario and under inaccurate channel state information (CSI). This robust problem deals with a general-rank transmit covariance design, and it follows a safe rate-constrained formulation under spherically bounded CSI uncertainties. Curiously, simulation results in previous works suggested that the robust problem admits rank-one optimal transmit covariances in most cases. Such a numerical finding is appealing because transmission with rank-one covariances can be easily realized by single-stream transmit beamforming. This gives rise to a fundamentally important question, namely, whether we can theoretically identify conditions under which the robust problem admits a rank-one solution. In this paper, we identify one such condition. Simply speaking, we show that the robust problem is guaranteed to admit a rank-one solution if the CSI uncertainties are not too large and the multiuser channel is not too poorly conditioned. To establish the aforementioned condition, we develop a novel duality framework, through which an intimate relationship between the robust problem and a related maximin problem is revealed. Our condition involves only a simple expression with respect to the multiuser channel and other system parameters. In particular, unlike other sufficient rank-one conditions that have appeared in the literature, ours is verifiable. The application of our analysis framework to several other CSI uncertainty models is also discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,717
2301.03047
Large-scale Global Low-rank Optimization for Computational Compressed Imaging
Computational reconstruction plays a vital role in computer vision and computational photography. Most of the conventional optimization and deep learning techniques explore local information for reconstruction. Recently, nonlocal low-rank (NLR) reconstruction has achieved remarkable success in improving accuracy and generalization. However, the computational cost has inhibited NLR from seeking global structural similarity, which consequentially keeps it trapped in the tradeoff between accuracy and efficiency and prevents it from high-dimensional large-scale tasks. To address this challenge, we report here the global low-rank (GLR) optimization technique, realizing highly-efficient large-scale reconstruction with global self-similarity. Inspired by the self-attention mechanism in deep learning, GLR extracts exemplar image patches by feature detection instead of conventional uniform selection. This directly produces key patches using structural features to avoid burdensome computational redundancy. Further, it performs patch matching across the entire image via neural-based convolution, which produces the global similarity heat map in parallel, rather than conventional sequential block-wise matching. As such, GLR improves patch grouping efficiency by more than one order of magnitude. We experimentally demonstrate GLR's effectiveness on temporal, frequency, and spectral dimensions, including different computational imaging modalities of compressive temporal imaging, magnetic resonance imaging, and multispectral filter array demosaicing. This work presents the superiority of inherent fusion of deep learning strategies and iterative optimization, and breaks the persistent dilemma of the tradeoff between accuracy and efficiency for various large-scale reconstruction tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
339,679
2404.05187
LGSDF: Continual Global Learning of Signed Distance Fields Aided by Local Updating
Implicit reconstruction of ESDF (Euclidean Signed Distance Field) involves training a neural network to regress the signed distance from any point to the nearest obstacle, which has the advantages of lightweight storage and continuous querying. However, existing algorithms usually rely on conflicting raw observations as training data, resulting in poor map performance. In this paper, we propose LGSDF, an ESDF continual Global learning algorithm aided by Local updating. At the front end, axis-aligned grids are dynamically updated by pre-processed sensor observations, where incremental fusion alleviates estimation error caused by limited viewing directions. At the back end, a randomly initialized implicit ESDF neural network performs continual self-supervised learning guided by these grids to generate smooth and continuous maps. The results on multiple scenes show that LGSDF can construct more accurate ESDF maps and meshes compared with SOTA (State Of The Art) explicit and implicit mapping algorithms. The source code of LGSDF is publicly available at https://github.com/BIT-DYN/LGSDF.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
true
444,975
2412.09424
Slope Considered Online Nonlinear Trajectory Planning with Differential Energy Model for Autonomous Driving
Achieving energy-efficient trajectory planning for autonomous driving remains a challenge due to the limitations of model-agnostic approaches. This study addresses this gap by introducing an online nonlinear programming trajectory optimization framework that integrates a differentiable energy model into autonomous systems. By leveraging traffic and slope profile predictions within a safety-critical framework, the proposed method enhances fuel efficiency for both sedans and diesel trucks by 3.71\% and 7.15\%, respectively, when compared to traditional model-agnostic quadratic programming techniques. These improvements translate to a potential \$6.14 billion economic benefit for the U.S. trucking industry. This work bridges the gap between model-agnostic autonomous driving and model-aware ECO-driving, highlighting a practical pathway for integrating energy efficiency into real-time trajectory planning.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
516,479
2206.11752
CLAMP: Prompt-based Contrastive Learning for Connecting Language and Animal Pose
Animal pose estimation is challenging for existing image-based methods because of limited training data and large intra- and inter-species variances. Motivated by the progress of visual-language research, we propose that pre-trained language models (e.g., CLIP) can facilitate animal pose estimation by providing rich prior knowledge for describing animal keypoints in text. However, we found that building effective connections between pre-trained language models and visual animal keypoints is non-trivial since the gap between text-based descriptions and keypoint-based visual features about animal pose can be significant. To address this issue, we introduce a novel prompt-based Contrastive learning scheme for connecting Language and AniMal Pose (CLAMP) effectively. The CLAMP attempts to bridge the gap by adapting the text prompts to the animal keypoints during network training. The adaptation is decomposed into spatial-aware and feature-aware processes, and two novel contrastive losses are devised correspondingly. In practice, the CLAMP enables the first cross-modal animal pose estimation paradigm. Experimental results show that our method achieves state-of-the-art performance under the supervised, few-shot, and zero-shot settings, outperforming image-based methods by a large margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
304,359
2205.08675
Addressing Resource and Privacy Constraints in Semantic Parsing Through Data Augmentation
We introduce a novel setup for low-resource task-oriented semantic parsing which incorporates several constraints that may arise in real-world scenarios: (1) lack of similar datasets/models from a related domain, (2) inability to sample useful logical forms directly from a grammar, and (3) privacy requirements for unlabeled natural utterances. Our goal is to improve a low-resource semantic parser using utterances collected through user interactions. In this highly challenging but realistic setting, we investigate data augmentation approaches involving generating a set of structured canonical utterances corresponding to logical forms, before simulating corresponding natural language and filtering the resulting pairs. We find that such approaches are effective despite our restrictive setup: in a low-resource setting on the complex SMCalFlow calendaring dataset (Andreas et al., 2020), we observe 33% relative improvement over a non-data-augmented baseline in top-1 match.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
297,016
2207.01230
Intelligent Reflecting Surface Enabled Multi-Target Sensing
Besides improving communication performance, intelligent reflecting surfaces (IRSs) are also promising enablers for achieving larger sensing coverage and enhanced sensing quality. Nevertheless, in the absence of a direct path between the base station (BS) and the targets, multi-target sensing is generally very difficult, since IRSs are incapable of proactively transmitting sensing beams or analyzing target information. Moreover, the echoes of different targets reflected via the IRS-established virtual links share the same directionality at the BS. In this paper, we study a wireless system comprising a multi-antenna BS and an IRS for multi-target sensing, where the beamforming vector and the IRS phase shifts are jointly optimized to improve the sensing performance. To meet the different sensing requirements, such as a minimum received power and a minimum sensing frequency, we propose three novel IRS-assisted sensing schemes: Time division (TD) sensing, signature sequence (SS) sensing, and hybrid TD-SS sensing. First, for TD sensing, the sensing tasks are performed in sequence over time. Subsequently, a novel SS sensing scheme is proposed to improve sensing efficiency by establishing a relationship between directions and SSs. To strike a flexible balance between the beam pattern gain and sensing efficiency, we also propose a general hybrid TD-SS sensing scheme with target grouping, where targets belonging to the same group are sensed simultaneously via SS sensing, while the targets in different groups are assigned to orthogonal time slots. By controlling the number of groups, the hybrid TD-SS sensing scheme can provide a more flexible balance between beam pattern gain and sensing frequency. Moreover, ...
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
306,093
1703.01135
Deep Learning with Domain Adaptation for Accelerated Projection-Reconstruction MR
Purpose: The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. Methods: The proposed deep network removes the streaking artifacts from the artifact corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of x-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. Results: The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. Conclusion: We demonstrate the possibility of a domain-adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of the image quality and computation time.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,300
0911.4230
Introduction to Bioinformatics
Bioinformatics is a new discipline that addresses the need to manage and interpret the data that in the past decade was massively generated by genomic research. This discipline represents the convergence of genomics, biotechnology and information technology, and encompasses analysis and interpretation of data, modeling of biological phenomena, and development of algorithms and statistics. This article presents an introduction to bioinformatics.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
4,994
1901.04962
Multihop Routing for Data Delivery in V2X Networks
Data delivery relying on the carry-and-forward strategy of vehicle-to-vehicle (V2V) communications is of significant importance, however highly challenging due to frequent connection disruption. Fortunately, incorporating vehicle-to-infrastructure (V2I) communications, motivated by its availability in bridging long-range vehicular connectivity, dramatically improves delivery opportunity. Nevertheless, the cooperation of V2V and V2I communications, known as vehicle-to-everything (V2X) communications, necessitates a specific design of multihop routing for enhancing data delivery performance. To address this issue, this paper provides a mathematical framework to investigate the data delivery performance in V2X networks in terms of both delivery latency and data rate. With theoretical analysis, we formulate a global and a distributed optimization problem to maximize the weighted sum of delivery latency and data rate. The optimization problems are then solved by convex optimization theory and based on the solutions, we propose a global and a distributed multihop routing algorithm to select the optimal route for maximizing the weighted sum. The rigorousness of the proposed algorithms is validated by extensive simulation under a wide range of system parameters and simulation results shed insight on the design of multihop routing algorithm in V2X networks for minimizing latency and maximizing data rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
118,696
2006.04984
Making Convolutions Resilient via Algorithm-Based Error Detection Techniques
The ability of Convolutional Neural Networks (CNNs) to accurately process real-time telemetry has boosted their use in safety-critical and high-performance computing systems. As such systems require high levels of resilience to errors, CNNs must execute correctly in the presence of hardware faults. Full duplication provides the needed assurance but incurs a prohibitive 100% overhead. Algorithmic techniques are known to offer low-cost solutions, but the practical feasibility and performance of such techniques have never been studied for CNN deployment platforms (e.g., TensorFlow or TensorRT on GPUs). In this paper, we focus on algorithmically verifying Convolutions, which are the most resource-demanding operations in CNNs. We use checksums to verify convolutions, adding a small amount of redundancy, far less than full-duplication. We first identify the challenges that arise in employing Algorithm-Based Error Detection (ABED) for Convolutions in optimized inference platforms that fuse multiple network layers and use reduced-precision operations, and demonstrate how to overcome them. We propose and evaluate variations of ABED techniques that offer implementation complexity, runtime overhead, and coverage trade-offs. Results show that ABED can detect all transient hardware errors that might otherwise corrupt output and does so while incurring low runtime overheads (6-23%), offering at least 1.6X throughput to workloads compared to full duplication.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
180,882
2310.04436
Adaptive Control of an Inverted Pendulum by a Reinforcement Learning-based LQR Method
Inverted pendulums constitute one of the popular systems for benchmarking control algorithms. Several methods have been proposed for the control of this system, the majority of which rely on the availability of a mathematical model. However, deriving a mathematical model using physical parameters or system identification techniques requires manual effort. Moreover, the designed controllers may perform poorly if system parameters change. To mitigate these problems, recently, some studies used Reinforcement Learning (RL) based approaches for the control of inverted pendulum systems. Unfortunately, these methods suffer from slow convergence and local minimum problems. Moreover, they may require hyperparameter tuning which complicates the design process significantly. To alleviate these problems, the present study proposes an LQR-based RL method for adaptive balancing control of an inverted pendulum. As shown by numerical experiments, the algorithm stabilizes the system very fast without requiring a mathematical model or extensive hyperparameter tuning. In addition, it can adapt to parametric changes online.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
397,650
2112.15550
Improving Baselines in the Wild
We share our experience with the recently released WILDS benchmark, a collection of ten datasets dedicated to developing models and training strategies which are robust to domain shifts. Several experiments yield a couple of critical observations which we believe are of general interest for any future work on WILDS. Our study focuses on two datasets: iWildCam and FMoW. We show that (1) Conducting separate cross-validation for each evaluation metric is crucial for both datasets, (2) A weak correlation between validation and test performance might make model development difficult for iWildCam, (3) Minor changes in the training of hyper-parameters improve the baseline by a relatively large margin (mainly on FMoW), (4) There is a strong correlation between certain domains and certain target labels (mainly on iWildCam). To the best of our knowledge, no prior work on these datasets has reported these observations despite their obvious importance. Our code is public.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
273,820
2501.09064
Generative diffusion model with inverse renormalization group flows
Diffusion models represent a class of generative models that produce data by denoising a sample corrupted by white noise. Despite the success of diffusion models in computer vision, audio synthesis, and point cloud generation, so far they overlook inherent multiscale structures in data and have a slow generation process due to many iteration steps. In physics, the renormalization group offers a fundamental framework for linking different scales and giving an accurate coarse-grained model. Here we introduce a renormalization group-based diffusion model that leverages multiscale nature of data distributions for realizing a high-quality data generation. In the spirit of renormalization group procedures, we define a flow equation that progressively erases data information from fine-scale details to coarse-grained structures. Through reversing the renormalization group flows, our model is able to generate high-quality samples in a coarse-to-fine manner. We validate the versatility of the model through applications to protein structure prediction and image generation. Our model consistently outperforms conventional diffusion models across standard evaluation metrics, enhancing sample quality and/or accelerating sampling speed by an order of magnitude. The proposed method alleviates the need for data-dependent tuning of hyperparameters in the generative diffusion models, showing promise for systematically increasing sample efficiency based on the concept of the renormalization group.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,005
2304.05310
Neural Delay Differential Equations: System Reconstruction and Image Classification
Neural Ordinary Differential Equations (NODEs), a framework of continuous-depth neural networks, have been widely applied, showing exceptional efficacy in coping with representative datasets. Recently, an augmented framework has been developed to overcome some limitations that emerged in the application of the original framework. In this paper, we propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs). To compute the corresponding gradients, we use the adjoint sensitivity method to obtain the delayed dynamics of the adjoint. Differential equations with delays are typically seen as dynamical systems of infinite dimension that possess more fruitful dynamics. Compared to NODEs, NDDEs have a stronger capacity of nonlinear representations. We use several illustrative examples to demonstrate this outstanding capacity. Firstly, we successfully model the delayed dynamics where the trajectories in the lower-dimensional phase space could be mutually intersected and even chaotic in a model-free or model-based manner. Traditional NODEs, without any augmentation, are not directly applicable to such modeling. Secondly, we achieve lower loss and higher accuracy not only for the data produced synthetically by complex models but also for the CIFAR10, a well-known image dataset. Our results on the NDDEs demonstrate that appropriately articulating the elements of dynamical systems into the network design is truly beneficial in promoting network performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
357,574
2105.00363
RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users
Object detection using automotive radars has not been explored with deep learning models in comparison to the camera based approaches. This can be attributed to the lack of public radar datasets. In this paper, we collect a novel radar dataset that contains radar data in the form of Range-Azimuth-Doppler tensors along with the bounding boxes on the tensor for dynamic road users, category labels, and 2D bounding boxes on the Cartesian Bird-Eye-View range map. To build the dataset, we propose an instance-wise auto-annotation method. Furthermore, a novel Range-Azimuth-Doppler based multi-class object detection deep learning model is proposed. The algorithm is a one-stage anchor-based detector that generates both 3D bounding boxes and 2D bounding boxes on Range-Azimuth-Doppler and Cartesian domains, respectively. Our proposed algorithm achieves 56.3% AP with IOU of 0.3 on 3D bounding box predictions, and 51.6% with IOU of 0.5 on 2D bounding box prediction. Our dataset and the code can be found at https://github.com/ZhangAoCanada/RADDet.git.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
233,192
2406.12676
Systematic equation formulation for simulation of power electronic circuits using explicit methods
Use of explicit integration methods for power electronic circuits with ideal switch models significantly improves simulation speed. The PLECS package [1] has effectively used this idea; however, the implementation details involved in PLECS are not available in the public domain. Recently, a basic framework, called the ``ELEX'' scheme, for implementing explicit methods has been described [2]. A few modifications of the ELEX scheme for efficient handling of inductors and switches have been presented in [3]. In this paper, the approach presented in [3] is further augmented with robust schemes that enable systematic equation formulation for circuits involving switches, inductors, and transformers. Several examples are presented to illustrate the proposed schemes.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
465,525
2204.00255
NC-DRE: Leveraging Non-entity Clue Information for Document-level Relation Extraction
Document-level relation extraction (RE), which requires reasoning on multiple entities in different sentences to identify complex inter-sentence relations, is more challenging than sentence-level RE. To extract the complex inter-sentence relations, previous studies usually employ graph neural networks (GNN) to perform inference upon heterogeneous document-graphs. Despite their great successes, these graph-based methods, which normally only consider the words within the mentions in the process of building graphs and reasoning, tend to ignore the non-entity clue words that are not in the mentions but provide important clue information for relation reasoning. To alleviate this problem, we treat graph-based document-level RE models as an encoder-decoder framework, which typically uses a pre-trained language model as the encoder and a GNN model as the decoder, and propose a novel graph-based model NC-DRE that introduces decoder-to-encoder attention mechanism to leverage Non-entity Clue information for Document-level Relation Extraction.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
289,192
2201.09919
Faithful Embeddings for EL++ Knowledge Bases
Recently, increasing efforts are put into learning continual representations for symbolic knowledge bases (KBs). However, these approaches either only embed the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs. We present BoxEL, a geometric KB embedding approach that allows for better capturing the logical structure (i.e., ABox and TBox axioms) in the description logic EL++. BoxEL models concepts in a KB as axis-parallel boxes that are suitable for modeling concept intersection, entities as points inside boxes, and relations between concepts/entities as affine transformations. We show theoretical guarantees (soundness) of BoxEL for preserving logical structure. Namely, the learned model of BoxEL embedding with loss 0 is a (logical) model of the KB. Experimental results on (plausible) subsumption reasonings and a real-world application for protein-protein prediction show that BoxEL outperforms traditional knowledge graph embedding methods as well as state-of-the-art EL++ embedding approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
276,815
2402.02317
INViT: A Generalizable Routing Problem Solver with Invariant Nested View Transformer
Recently, deep reinforcement learning has shown promising results for learning fast heuristics to solve routing problems. Meanwhile, most of the solvers suffer from generalizing to an unseen distribution or distributions with different scales. To address this issue, we propose a novel architecture, called Invariant Nested View Transformer (INViT), which is designed to enforce a nested design together with invariant views inside the encoders to promote the generalizability of the learned solver. It applies a modified policy gradient algorithm enhanced with data augmentations. We demonstrate that the proposed INViT achieves a dominant generalization performance on both TSP and CVRP problems with various distributions and different problem scales.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
426,488
1602.02282
Ladder Variational Autoencoders
Variational Autoencoders are powerful models for unsupervised learning. However, deep models with several layers of dependent stochastic variables are difficult to train, which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state of the art predictive log-likelihood and tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper more distributed hierarchy of latent variables. Finally, we observe that batch normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
51,831
1504.03655
Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients
Nonlinear component analysis methods such as kernel Principal Component Analysis (KPCA) and kernel Canonical Correlation Analysis (KCCA) are widely used in machine learning, statistics, and data analysis, but they cannot scale up to big datasets. Recent attempts have employed random feature approximations to convert the problem to the primal form for linear computational complexity. However, to obtain high-quality solutions, the number of random features should be of the same order of magnitude as the number of data points, making such approaches not directly applicable to the regime with millions of data points. We propose a simple, computationally efficient, and memory-friendly algorithm based on "doubly stochastic gradients" to scale up a range of kernel nonlinear component analysis methods, such as kernel PCA, CCA, and SVD. Despite the \emph{non-convex} nature of these problems, our method enjoys theoretical guarantees that it converges at the rate $\tilde{O}(1/t)$ to the global optimum, even for the top $k$ eigen subspace. Unlike many alternatives, our algorithm does not require explicit orthogonalization, which is infeasible on big datasets. We demonstrate the effectiveness and scalability of our algorithm on large-scale synthetic and real-world datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
42,056
2105.09859
An examination of local strain fields evolution in ductile cast iron through micromechanical simulations based on 3D imaging
Microscopic digital volume correlation (DVC) and finite element pre-coalescence strain evaluations are compared for two nodular cast iron specimens. Displacement fields from \textit{in-situ} 3D synchrotron laminography images are obtained by DVC. Subsequently, the microstructure is explicitly meshed from the images, considering nodules as voids. Boundary conditions are applied from the DVC measurement. Image segmentation-related uncertainties are taken into account and observed to be negligible with respect to the differences between strain levels. Macroscopic as well as local strain levels in coalescing ligaments between voids nucleated at large graphite nodules are compared. Macroscopic strain levels are consistently predicted. A very good agreement is observed for one of the specimens, while the strain levels for the second specimen present some discrepancies. Limitations of the modeling and numerical framework are discussed in light of these differences. A discussion of the use of strain as a coalescence indicator is initiated.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
236,187
2410.04722
A Strategy for Label Alignment in Deep Neural Networks
A recent study demonstrated a successful application of the label alignment property for unsupervised domain adaptation in a linear regression setting. Instead of regularizing representation learning to be domain invariant, the study proposed to regularize the linear regression model to align with the top singular vectors of the data matrix from the target domain. In this work, we expand upon this idea and generalize it to the case of deep learning, where we derive an alternative formulation of the original adaptation algorithm exploiting label alignment suitable for deep neural networks. We also perform experiments to demonstrate that our approach achieves performance comparable to mainstream unsupervised domain adaptation methods while having more stable convergence. All experiments and implementations in our work can be found at the following codebase: \url{https://github.com/xuanrui-work/DeepLabelAlignment}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
495,413
2401.12205
Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization
Logic synthesis, a pivotal stage in chip design, entails optimizing chip specifications encoded in hardware description languages like Verilog into highly efficient implementations using Boolean logic gates. The process involves a sequential application of logic minimization heuristics (a "synthesis recipe"), with their arrangement significantly impacting crucial metrics such as area and delay. Addressing the challenge posed by the broad spectrum of design complexities - from variations of past designs (e.g., adders and multipliers) to entirely novel configurations (e.g., innovative processor instructions) - requires a nuanced synthesis recipe guided by human expertise and intuition. This study conducts a thorough examination of learning and search techniques for logic synthesis, unearthing a surprising revelation: pre-trained agents, when confronted with entirely novel designs, may veer off course, detrimentally affecting the search trajectory. We present ABC-RL, which uses a meticulously tuned $\alpha$ parameter to adeptly adjust recommendations from pre-trained agents during the search process. Computed based on similarity scores through nearest-neighbor retrieval from the training dataset, ABC-RL yields superior synthesis recipes tailored for a wide array of hardware designs. Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques. Furthermore, ABC-RL achieves an impressive up to 9x reduction in runtime (iso-QoR) when compared to current state-of-the-art methodologies.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
423,291
2005.13282
Simulation of the COVID-19 pandemic on the social network of Slovenia: estimating the intrinsic forecast uncertainty
In this article, a virus transmission model is constructed on a simplified social network. The social network consists of more than 2 million nodes, each representing an inhabitant of Slovenia. The nodes are organised and interconnected according to the real household and elderly-care center distribution, while their connections outside these clusters are semi-randomly distributed and fully linked. The virus spread model is coupled to the disease progression model. The ensemble approach with perturbed transmission and disease parameters is used to quantify the ensemble spread, a proxy for the forecast uncertainty. The presented ongoing forecasts of the COVID-19 epidemic in Slovenia are compared with the collected Slovenian data. Results show that infection is currently twice as likely to transmit within households/elderly-care centers as outside them. We use an ensemble of simulations (N = 1000) to inversely obtain posterior distributions of model parameters and to estimate the COVID-19 forecast uncertainty. We found that in the uncontrolled epidemic, the intrinsic uncertainty mostly originates from the uncertainty of the virus biology, i.e. its reproductive number. In the controlled epidemic with a low ratio of infected population, the randomness of the social network becomes the major source of forecast uncertainty, particularly for the short-range forecasts. Social-network-based models are thus essential for improving epidemics forecasting.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
178,969
1410.0443
Strong Converse for a Degraded Wiretap Channel via Active Hypothesis Testing
We establish an upper bound on the rate of codes for a wiretap channel with public feedback for a fixed probability of error and secrecy parameter. As a corollary, we obtain a strong converse for the capacity of a degraded wiretap channel with public feedback. Our converse proof is based on a reduction of active hypothesis testing for discriminating between two channels to coding for a wiretap channel with feedback.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,468
2501.03162
Deep-Relative-Trust-Based Diffusion for Decentralized Deep Learning
Decentralized learning strategies allow a collection of agents to learn efficiently from local data sets without the need for central aggregation or orchestration. Current decentralized learning paradigms typically rely on an averaging mechanism to encourage agreement in the parameter space. We argue that in the context of deep neural networks, which are often over-parameterized, encouraging consensus of the neural network outputs, as opposed to their parameters, can be more appropriate. This motivates the development of a new decentralized learning algorithm, termed DRT diffusion, based on deep relative trust (DRT), a recently introduced similarity measure for neural networks. We provide convergence analysis for the proposed strategy, and numerically establish its benefit to generalization, especially with sparse topologies, in an image classification task.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
522,780
1909.01409
Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making
It is widely believed that one's peers influence product adoption behaviors. This relationship has been linked to the number of signals a decision-maker receives in a social network. But it is unclear whether these same principles hold when the pattern by which they receive these signals varies and when peer influence is directed towards choices that are not optimal. To investigate this, we manipulate social signal exposure in an online controlled experiment using a game with human participants. Each participant in the game makes a decision among choices with differing utilities. We observe the following: (1) even in the presence of monetary risks and previously acquired knowledge of the choices, decision-makers tend to deviate from the obvious optimal decision when their peers make a similar decision, which we call the influence decision; (2) when the quantity of social signals varies over time, the forwarding probability of the influence decision, and therefore responsiveness to social influence, does not necessarily correlate proportionally with the absolute quantity of signals. To better understand how these rules of peer influence could be used in modeling applications of real-world diffusion and in networked environments, we use our behavioral findings to simulate spreading dynamics in real-world case studies. We specifically examine how cumulative influence plays out in the presence of user uncertainty and measure its outcome on rumor diffusion, which we model as an example of sub-optimal choice diffusion. Together, our simulation results indicate that sequential peer effects from the influence decision overcome individual uncertainty to guide faster rumor diffusion over time. However, when the rate of diffusion is slow in the beginning, user uncertainty can have a substantial role compared to peer influence in deciding the adoption trajectory of a piece of questionable information.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
143,888
1409.8484
An agent-driven semantical identifier using radial basis neural networks and reinforcement learning
Due to the huge availability of documents in digital form, and the rise in deception possibilities bound to the nature of digital documents and the way they are spread, the authorship attribution problem has constantly increased in relevance. Nowadays, authorship attribution, for both information retrieval and analysis, has gained great importance in the context of security, trust, and copyright preservation. This work proposes an innovative multi-agent-driven machine learning technique developed for authorship attribution. By means of preprocessing for word grouping and time-period-related analysis of the common lexicon, we determine a bias reference level for the recurrence frequency of the words within the analysed texts, and then train a Radial Basis Neural Network (RBPNN)-based classifier to identify the correct author. The main advantage of the proposed approach lies in the generality of the semantic analysis, which can be applied to different contexts and lexical domains without requiring any modification. Moreover, the proposed system is able to incorporate an external input, meant to tune the classifier, and then self-adjust by means of continuous reinforcement learning.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
true
true
false
false
36,413
2107.11214
A3GC-IP: Attention-Oriented Adjacency Adaptive Recurrent Graph Convolutions for Human Pose Estimation from Sparse Inertial Measurements
Conventional methods for human pose estimation either require a high degree of instrumentation, by relying on many inertial measurement units (IMUs), or constrain the recording space, by relying on extrinsic cameras. These deficits are tackled through the approach of human pose estimation from sparse IMU data. We define attention-oriented adjacency adaptive graph convolutional long short-term memory networks (A3GC-LSTM) to tackle human pose estimation based on six IMUs, incorporating the human body graph structure directly into the network. The A3GC-LSTM combines both spatial and temporal dependency in a single network operation, more memory-efficiently than previous approaches. Recurrent graph learning on arbitrarily long sequences is made possible by equipping graph convolutions with adjacency adaptivity, which eliminates the problem of information loss in deep or recurrent graph networks, while also allowing for learning unknown dependencies between the human body joints. To further boost accuracy, a spatial attention formalism is incorporated into the recurrent LSTM cell. With our presented approach, we are able to utilize the inherent graph nature of the human body, and thus can outperform the state of the art for human pose estimation from sparse IMU data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
247,536