schema:
  id: string (length 9 to 16)
  title: string (length 4 to 278)
  abstract: string (length 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
  __index_level_0__: int64 (0 to 541k)
2309.09208
Data-driven control of nonlinear systems from input-output data
The design of controllers from data for nonlinear systems is a challenging problem. In a recent paper, De Persis, Rotulo and Tesi, "Learning controllers from data via approximate nonlinearity cancellation," IEEE Transactions on Automatic Control, 2023, a method to learn controllers that make the closed-loop system stable and dominantly linear was proposed. The approach leads to a simple solution based on data-dependent semidefinite programs. The method uses input-state measurements as data, while in a realistic setup it is more likely that only input-output measurements are available. In this note we report how the design principle of the above-mentioned paper can be adjusted to deal with input-output data and obtain dynamic output feedback controllers in a favourable setting.
labels: cs.SY
__index_level_0__: 392,513
1904.05423
Game-Theoretic Modeling of Multi-Vehicle Interactions at Uncontrolled Intersections
Motivated by the need to develop simulation tools for verification and validation of autonomous driving systems operating in traffic consisting of both autonomous and human-driven vehicles, we propose a framework for modeling vehicle interactions at uncontrolled intersections. The proposed interaction modeling approach is based on game theory with multiple concurrent leader-follower pairs, and accounts for common traffic rules. We parameterize the intersection layouts and geometries to model uncontrolled intersections with various configurations, and apply the proposed approach to model the interactive behavior of vehicles at these intersections. Based on simulation results in various traffic scenarios, we show that the model exhibits reasonable behavior expected in traffic, including the capability of reproducing scenarios extracted from real-world traffic data and reasonable performance in resolving traffic conflicts. The model is further validated based on the level-of-service traffic quality rating system and demonstrates manageable computational complexity compared to traditional multi-player game-theoretic models.
labels: cs.RO, cs.SY
__index_level_0__: 127,312
2412.18255
AdaCo: Overcoming Visual Foundation Model Noise in 3D Semantic Segmentation via Adaptive Label Correction
Recently, Visual Foundation Models (VFMs) have shown a remarkable generalization performance in 3D perception tasks. However, their effectiveness in large-scale outdoor datasets remains constrained by the scarcity of accurate supervision signals, the extensive noise caused by variable outdoor conditions, and the abundance of unknown objects. In this work, we propose a novel label-free learning method, Adaptive Label Correction (AdaCo), for 3D semantic segmentation. AdaCo first introduces the Cross-modal Label Generation Module (CLGM), providing cross-modal supervision with the formidable interpretive capabilities of the VFMs. Subsequently, AdaCo incorporates the Adaptive Noise Corrector (ANC), updating and adjusting the noisy samples within this supervision iteratively during training. Moreover, we develop an Adaptive Robust Loss (ARL) function to modulate each sample's sensitivity to noisy supervision, preventing potential underfitting issues associated with robust loss. Our proposed AdaCo can effectively mitigate the performance limitations of label-free learning networks in 3D semantic segmentation tasks. Extensive experiments on two outdoor benchmark datasets highlight the superior performance of our method.
labels: cs.CV
__index_level_0__: 520,340
2109.09363
Performance and accuracy assessments of an incompressible fluid solver coupled with a deep Convolutional Neural Network
The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers. Lately, Deep Learning, and especially Convolutional Neural Networks (CNN), has been introduced to solve this equation, leading to significant inference time reduction at the cost of a lack of guarantee on the accuracy of the solution. This drawback might lead to inaccuracies and potentially unstable simulations. It also makes a fair assessment of the CNN speedup impossible, for instance when changing the network architecture, since the architectures are evaluated at different error levels. To circumvent this issue, a hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level. The CNN hybrid method is tested on two flow cases, consisting of a variable-density plume with and without obstacles, demonstrating remarkable generalization capabilities and ensuring both the accuracy and stability of the simulations. The error distribution of the predictions using several network architectures is further investigated. Results show that defining the threshold of the hybrid strategy as the mean divergence of the velocity field ensures a consistent physical behavior of the CNN-based hybrid computational strategy. This strategy allows a systematic evaluation of the CNN performance at the same accuracy level for various network architectures. In particular, the importance of incorporating multiple scales in the network architecture is demonstrated, since it improves both the accuracy and the inference performance compared with feedforward CNN architectures: these networks can provide solutions 10-25 times faster than traditional iterative solvers.
labels: cs.LG
__index_level_0__: 256,249
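The hybrid strategy described in the abstract above, a network guess refined by a traditional iterative solver until a user-defined accuracy is met, can be sketched without the CNN itself. In this minimal 1D sketch (the function name, the Jacobi solver, and the 1D setting are illustrative assumptions, not the paper's implementation), any initial guess stands in for the network output, and a better guess simply means fewer refinement sweeps:

```python
import numpy as np

def jacobi_refine(f, u0, h, tol=1e-5, max_iter=100_000):
    """Refine an initial guess u0 for the 1D Poisson problem -u'' = f
    (with u = 0 at both ends) using Jacobi sweeps, stopping once the
    residual max|f + u''| drops below a user-defined tolerance."""
    u = u0.copy()
    for it in range(1, max_iter + 1):
        # One Jacobi sweep: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        # Residual of the discrete operator -u'' = f at interior points
        resid = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
        if np.abs(resid).max() < tol:
            break
    return u, it
```

Starting from a guess close to the solution (a perfect "CNN") reaches the tolerance in far fewer sweeps than starting from zeros, which is the point of the hybrid coupling: the network buys iterations, the solver guarantees the accuracy.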
2109.05652
Inferential Wasserstein Generative Adversarial Networks
Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats in the min-max two-player training of GANs but has other defects, such as mode collapse and the lack of a metric to detect convergence. We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs. The iWGAN model jointly learns an encoder network and a generator network motivated by the iterative primal-dual optimization process. The encoder network maps the observed samples to the latent space and the generator network maps the samples from the latent space to the data space. We establish the generalization error bound of the iWGAN to theoretically justify its performance. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. The empirical experiments show that the iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample. We illustrate the ability of the iWGAN by obtaining competitive and stable performances for benchmark datasets.
labels: cs.LG
__index_level_0__: 254,876
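The Wasserstein distance that WGAN-style models such as the iWGAN above build on has a simple closed form in one dimension: for two equal-size empirical samples it is the mean absolute difference of their sorted values. A minimal illustrative sketch (not taken from the paper):

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size 1D empirical samples:
    mean |sorted(x) - sorted(y)|, i.e. the optimal transport cost
    of matching the i-th smallest of x to the i-th smallest of y."""
    assert len(x) == len(y), "sketch assumes equal-size samples"
    return float(np.abs(np.sort(x) - np.sort(y)).mean())
```

Shifting a sample by a constant c moves its W1 distance to the original by exactly c, a property the sketch makes easy to check.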
2302.08243
New Insights on Relieving Task-Recency Bias for Online Class Incremental Learning
To imitate the human ability to keep learning, continual learning, which can learn from a never-ending data stream, has attracted more interest recently. Among all settings, online class incremental learning (OCIL), where incoming samples from the data stream can be used only once, is more challenging and is encountered more frequently in the real world. Actually, all continual learning models face a stability-plasticity dilemma, where stability means the ability to preserve old knowledge while plasticity denotes the ability to incorporate new knowledge. Although replay-based methods have shown exceptional promise, most of them concentrate on the strategy for updating and retrieving memory to keep stability at the expense of plasticity. To strike a preferable trade-off between stability and plasticity, we propose an Adaptive Focus Shifting algorithm (AFS), which dynamically adjusts focus to ambiguous samples and non-target logits in model learning. Through a deep analysis of the task-recency bias caused by class imbalance, we propose a revised focal loss to mainly keep stability. By utilizing a new weight function, the revised focal loss pays more attention to current ambiguous samples, which are the potentially valuable samples for making the model progress quickly. To promote plasticity, we introduce a virtual knowledge distillation. By designing a virtual teacher, it assigns more attention to non-target classes, which can surmount overconfidence and encourage the model to focus on inter-class information. Extensive experiments on three popular datasets for OCIL have shown the effectiveness of AFS. The code will be available at \url{https://github.com/czjghost/AFS}.
labels: cs.CV
__index_level_0__: 345,996
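The paper's revised focal loss is not specified in the abstract; the standard focal loss it builds on down-weights well-classified samples by a factor (1 - p_t)^gamma, so gamma = 0 recovers ordinary cross-entropy. A minimal sketch under that standard definition (function name and single-sample NumPy setting are illustrative assumptions):

```python
import numpy as np

def focal_loss(probs, target, gamma=2.0):
    """Focal loss for one sample: -(1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class.
    gamma = 0 reduces to plain cross-entropy; larger gamma
    down-weights well-classified (high p_t) samples."""
    p_t = float(probs[target])
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

A confidently-correct prediction contributes far less to the loss at gamma = 2 than at gamma = 0, which is what shifts the model's focus toward the ambiguous samples the abstract describes.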
2303.10276
Unleashing the Potential of Spiking Neural Networks by Dynamic Confidence
This paper presents a new methodology to alleviate the fundamental trade-off between accuracy and latency in spiking neural networks (SNNs). The approach involves decoding confidence information over time from the SNN outputs and using it to develop a decision-making agent that can dynamically determine when to terminate each inference. The proposed method, Dynamic Confidence, provides several significant benefits to SNNs. 1. It can effectively optimize latency dynamically at runtime, setting it apart from many existing low-latency SNN algorithms. Our experiments on CIFAR-10 and ImageNet datasets have demonstrated an average 40% speedup across eight different settings after applying Dynamic Confidence. 2. The decision-making agent in Dynamic Confidence is straightforward to construct and highly robust in parameter space, making it extremely easy to implement. 3. The proposed method enables visualizing the potential of any given SNN, which sets a target for current SNNs to approach. For instance, if an SNN can terminate at the most appropriate time point for each input sample, a ResNet-50 SNN can achieve an accuracy as high as 82.47% on ImageNet within just 4.71 time steps on average. Unlocking the potential of SNNs needs a highly-reliable decision-making agent to be constructed and fed with a high-quality estimation of ground truth. In this regard, Dynamic Confidence represents a meaningful step toward realizing the potential of SNNs.
labels: cs.CV, cs.NE
__index_level_0__: 352,377
2410.12163
Augmented Intelligence in Smart Intersections: Local Digital Twins-Assisted Hybrid Autonomous Driving
Vehicle-road collaboration is a promising approach for enhancing the safety and efficiency of autonomous driving by extending the intelligence of onboard systems to smart roadside infrastructures. The introduction of digital twins (DTs), particularly local DTs (LDTs) at the edge, in smart mobility presents a new embodiment of augmented intelligence, which could enhance information exchange and extract human driving expertise to improve onboard intelligence. This paper presents a novel LDT-assisted hybrid autonomous driving system for improving safety and efficiency in traffic intersections. By leveraging roadside units (RSUs) equipped with sensory and computing capabilities, the proposed system continuously monitors traffic, extracts human driving knowledge, and generates intersection-specific local driving agents through an offline reinforcement learning (RL) framework. When connected and automated vehicles (CAVs) pass through RSU-equipped intersections, RSUs can provide local agents to support safe and efficient driving in local areas. Meanwhile, they provide real-time cooperative perception (CP) to broaden onboard sensory horizons. The proposed LDT-assisted hybrid system is implemented with state-of-the-art products, e.g., CAVs and RSUs, and technologies, e.g., millimeter-wave (mmWave) communications. Hardware-in-the-loop (HiL) simulations and proof-of-concept (PoC) tests validate system performance from two standpoints: (i) The peak latency for CP and local agent downloading are 8.51 ms and 146 ms, respectively, aligning with 3GPP requirements for vehicle-to-everything (V2X) and model transfer use cases. Moreover, (ii) local driving agents can improve safety measures by 10% and reduce travel time by 15% compared with conventional onboard systems. The implemented prototype also demonstrates reliable real-time performance, fulfilling the targets of the proposed system design.
labels: cs.SY
__index_level_0__: 498,876
2209.15314
Did You Get What You Paid For? Rethinking Annotation Cost of Deep Learning Based Computer Aided Detection in Chest Radiographs
As deep networks require large amounts of accurately labeled training data, a strategy to collect sufficiently large and accurate annotations is as important as innovations in recognition methods. This is especially true for building Computer Aided Detection (CAD) systems for chest X-rays, where domain expertise of radiologists is required to annotate the presence and location of abnormalities on X-ray images. However, there is a lack of concrete evidence providing guidance on how much resource to allocate for data annotation so that the resulting CAD system reaches the desired performance. Without this knowledge, practitioners often fall back to the strategy of collecting as much detail as possible on as much data as possible, which is cost-inefficient. In this work, we investigate how the cost of data annotation ultimately impacts the CAD model performance on classification and segmentation of chest abnormalities in frontal-view X-ray images. We define the cost of annotation with respect to the following three dimensions: quantity, quality and granularity of labels. Throughout this study, we isolate the impact of each dimension on the resulting CAD model performance on detecting 10 chest abnormalities in X-rays. On a large-scale training set with over 120K X-ray images with gold-standard annotations, we find that cost-efficient annotations provide great value when collected in large amounts and lead to competitive performance when compared to models trained with only gold-standard annotations. We also find that combining large amounts of cost-efficient annotations with only small amounts of expensive labels leads to competitive CAD models at a much lower cost.
labels: cs.CV
__index_level_0__: 320,564
2209.10591
Assessing ASR Model Quality on Disordered Speech using BERTScore
Word Error Rate (WER) is the primary metric used to assess automatic speech recognition (ASR) model quality. It has been shown that ASR models tend to have much higher WER on speakers with speech impairments than typical English speakers. It is hard to determine if models can be useful at such high error rates. This study investigates the use of BERTScore, an evaluation metric for text generation, to provide a more informative measure of ASR model quality and usefulness. Both BERTScore and WER were compared to prediction errors manually annotated by Speech Language Pathologists for error type and assessment. BERTScore was found to be more correlated with human assessment of error type and assessment. BERTScore was specifically more robust to orthographic changes (contraction and normalization errors) where meaning was preserved. Furthermore, BERTScore was a better fit of error assessment than WER, as measured using an ordinal logistic regression and the Akaike Information Criterion (AIC). Overall, our findings suggest that BERTScore can complement WER when assessing ASR model performance from a practical perspective, especially for accessibility applications where models are useful even at lower accuracy than for typical speech.
labels: cs.LG, cs.CL
__index_level_0__: 318,909
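WER, the baseline metric discussed in the abstract above, is the word-level Levenshtein (edit) distance between hypothesis and reference, normalized by the reference length. BERTScore itself requires a pretrained model, so only WER is sketched here (a self-contained illustration, not the study's evaluation code):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(h) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, sub)
    return dp[-1][-1] / len(r)
```

Note that a single substitution preserving meaning ("isn't" vs "is not") already raises WER, which is exactly the weakness BERTScore is meant to address.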
1308.2696
B(eo)W(u)LF: Facilitating recurrence analysis on multi-level language
Discourse analysis may seek to characterize not only the overall composition of a given text but also the dynamic patterns within the data. This technical report introduces a data format intended to facilitate multi-level investigations, which we call the by-word long-form or B(eo)W(u)LF. Inspired by the long-form data format required for mixed-effects modeling, B(eo)W(u)LF structures linguistic data into an expanded matrix encoding any number of researcher-specified markers, making it ideal for recurrence-based analyses. While we do not necessarily claim to be the first to use methods along these lines, we have created a series of tools utilizing Python and MATLAB to enable such discourse analyses and demonstrate them using 319 lines of the Old English epic poem, Beowulf, translated into modern English.
labels: cs.CL
__index_level_0__: 26,399
2309.02912
On the Challenges of Building Datasets for Hate Speech Detection
Detection of hate speech has been formulated as a standalone application of NLP and different approaches have been adopted for identifying the target groups, obtaining raw data, defining the labeling process, choosing the detection algorithm, and evaluating the performance in the desired setting. However, unlike other downstream tasks, hate speech suffers from the lack of large-sized, carefully curated, generalizable datasets owing to the highly subjective nature of the task. In this paper, we first analyze the issues surrounding hate speech detection through a data-centric lens. We then outline a holistic framework to encapsulate the data creation pipeline across seven broad dimensions by taking the specific example of hate speech towards sexual minorities. We posit that practitioners would benefit from following this framework as a form of best practice when creating hate speech datasets in the future.
labels: cs.AI, cs.CL
__index_level_0__: 390,209
2408.06254
Data-Efficient Prediction of Minimum Operating Voltage via Inter- and Intra-Wafer Variation Alignment
Predicting the minimum operating voltage ($V_{min}$) of chips stands as a crucial technique in enhancing the speed and reliability of manufacturing testing flow. However, existing $V_{min}$ prediction methods often overlook various sources of variations in both training and deployment phases. Notably, the neglect of wafer zone-to-zone (intra-wafer) variations and wafer-to-wafer (inter-wafer) variations, compounded by process variations, diminishes the accuracy, data efficiency, and reliability of $V_{min}$ predictors. To address this gap, we introduce a novel data-efficient $V_{min}$ prediction flow, termed restricted bias alignment (RBA), which incorporates a novel variation alignment technique. Our approach concurrently estimates inter- and intra-wafer variations. Furthermore, we propose utilizing class probe data to model inter-wafer variations for the first time. We empirically demonstrate RBA's effectiveness and data efficiency on an industrial 16nm automotive chip dataset.
labels: cs.SY
__index_level_0__: 480,129
2301.05839
NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching
We present Neural Correspondence Prior (NCP), a new paradigm for computing correspondences between 3D shapes. Our approach is fully unsupervised and can lead to high-quality correspondences even in challenging cases such as sparse point clouds or non-isometric meshes, where current methods fail. Our first key observation is that, in line with neural priors observed in other domains, recent network architectures on 3D data, even without training, tend to produce pointwise features that induce plausible maps between rigid or non-rigid shapes. Secondly, we show that given a noisy map as input, training a feature extraction network with the input map as supervision tends to remove artifacts from the input and can act as a powerful correspondence denoising mechanism, both between individual pairs and within a collection. With these observations in hand, we propose a two-stage unsupervised paradigm for shape matching by (i) performing unsupervised training by adapting an existing approach to obtain an initial set of noisy matches, and (ii) using these matches to train a network in a supervised manner. We demonstrate that this approach significantly improves the accuracy of the maps, especially when trained within a collection. We show that NCP is data-efficient, fast, and achieves state-of-the-art results on many tasks. Our code can be found online: https://github.com/pvnieo/NCP.
labels: cs.CV, Other
__index_level_0__: 340,463
1901.06218
On the Achievable Rate and Capacity for a Sample-based Practical Photon-counting Receiver
We investigate the achievable rate and capacity of a non-perfect photon-counting receiver. For the case of long symbol duration, the achievable rate under on-off keying modulation is investigated based on the Kullback-Leibler (KL) divergence and Chernoff $\alpha$-divergence. We prove the tightness of the derived bounds, with an exponential convergence rate for large peak power with zero background radiation, and an order-two convergence rate for low peak power. For large peak power with fixed background radiation, and for low background radiation with fixed peak power, the gap of the proposed bounds is a small positive value, respectively. Moreover, we propose an approximation on the achievable rate in the low background radiation and long symbol duration regime, which is more accurate compared with the derived upper and lower bounds in the medium signal-to-noise ratio (SNR) regime. For symbol durations that can be sufficiently small, the capacity and the optimal duty cycle are investigated. We show that the capacity approaches the continuous Poisson capacity as $T_s=\tau\to0$. The asymptotic capacity is analyzed for low and large peak power. Compared with the continuous Poisson capacity, the capacity of a non-perfect receiver is almost lossless for low peak power given zero background radiation, and lossy with attenuation given nonzero background radiation. For large peak power, the capacity with a non-perfect receiver converges, while the continuous Poisson channel capacity increases linearly. The above theoretical results are extensively validated by numerical results.
labels: cs.IT
__index_level_0__: 118,946
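The KL divergence used for the bounds above has a closed form between two Poisson laws: KL(Pois(lam_p) || Pois(lam_q)) = lam_p * log(lam_p / lam_q) + lam_q - lam_p. A quick sketch illustrating that divergence tool (not the paper's bounds themselves):

```python
import math

def kl_poisson(lam_p, lam_q):
    """Closed-form KL divergence (in nats) between Poisson(lam_p)
    and Poisson(lam_q): E_p[log p(k)/q(k)] telescopes because the
    k! terms cancel, leaving lam_p*log(lam_p/lam_q) + lam_q - lam_p."""
    return lam_p * math.log(lam_p / lam_q) + lam_q - lam_p
```

The closed form can be sanity-checked against a truncated sum of p(k) * log(p(k)/q(k)) over the Poisson pmf, which agrees to numerical precision.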
2409.02636
Mamba as a motion encoder for robotic imitation learning
Recent advancements in imitation learning, particularly with the integration of LLM techniques, are set to significantly improve robots' dexterity and adaptability. This paper proposes using Mamba, a state-of-the-art architecture with potential applications in LLMs, for robotic imitation learning, highlighting its ability to function as an encoder that effectively captures contextual information. By reducing the dimensionality of the state space, Mamba operates similarly to an autoencoder. It effectively compresses the sequential information into state variables while preserving the essential temporal dynamics necessary for accurate motion prediction. Experimental results in tasks such as cup placing and case loading demonstrate that despite exhibiting higher estimation errors, Mamba achieves superior success rates compared to Transformers in practical task execution. This performance is attributed to Mamba's structure, which encompasses the state space model. Additionally, the study investigates Mamba's capacity to serve as a real-time motion generator with a limited amount of training data.
labels: cs.RO, cs.SY
__index_level_0__: 485,775
2209.06886
Vectorized Adjoint Sensitivity Method for Graph Convolutional Neural Ordinary Differential Equations
This document, as the title states, is meant to provide a vectorized implementation of adjoint dynamics calculation for Graph Convolutional Neural Ordinary Differential Equations (GCDE). The adjoint sensitivity method is the gradient approximation method for neural ODEs that replaces backpropagation. When implemented in libraries such as PyTorch or TensorFlow, the adjoint can be calculated by autograd functions without the need for a hand-derived formula. In applications such as edge computing and in memristor crossbars, however, autograd is not available, and therefore we need a vectorized derivation of the adjoint dynamics to efficiently map the system onto hardware. This document goes over the basics, then moves on to derive the vectorized adjoint dynamics for GCDE.
labels: cs.LG, Other
__index_level_0__: 317,536
2210.16613
Diverse Parallel Data Synthesis for Cross-Database Adaptation of Text-to-SQL Parsers
Text-to-SQL parsers typically struggle with databases unseen at training time. Adapting parsers to new databases is a challenging problem due to the lack of natural language queries in the new schemas. We present ReFill, a framework for synthesizing high-quality and textually diverse parallel datasets for adapting a Text-to-SQL parser to a target schema. ReFill learns to retrieve-and-edit text queries from the existing schemas and transfers them to the target schema. We show that retrieving diverse existing text, masking schema-specific tokens, and refilling with tokens relevant to the target schema leads to significantly more diverse text queries than achievable by standard SQL-to-Text generation methods. Through experiments spanning multiple databases, we demonstrate that fine-tuning parsers on datasets synthesized using ReFill consistently outperforms prior data-augmentation methods.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 327,400
2312.09203
Measurement in the Age of LLMs: An Application to Ideological Scaling
Much of social science is centered around terms like ``ideology'' or ``power'', which generally elude precise definition, and whose contextual meanings are trapped in surrounding language. This paper explores the use of large language models (LLMs) to flexibly navigate the conceptual clutter inherent to social scientific measurement tasks. We rely on LLMs' remarkable linguistic fluency to elicit ideological scales of both legislators and text, which accord closely to established methods and our own judgement. A key aspect of our approach is that we elicit such scores directly, instructing the LLM to furnish numeric scores itself. This approach affords a great deal of flexibility, which we showcase through a variety of different case studies. Our results suggest that LLMs can be used to characterize highly subtle and diffuse manifestations of political ideology in text.
labels: cs.CL
__index_level_0__: 415,630
1303.5759
An Efficient Implementation of Belief Function Propagation
The local computation technique (Shafer et al. 1987, Shafer and Shenoy 1988, Shenoy and Shafer 1986) is used for propagating belief functions in a so-called Markov tree. In this paper, we describe an efficient implementation of belief function propagation on the basis of the local computation technique. The presented method avoids all the redundant computations in the propagation process, and so makes the computational complexity decrease with respect to other existing implementations (Hsia and Shenoy 1989, Zarley et al. 1988). We also give a combined algorithm for both propagation and re-propagation, which makes the re-propagation process more efficient when one or more of the prior belief functions are changed.
labels: cs.AI
__index_level_0__: 23,207
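Belief-function propagation rests on Dempster's rule of combination: focal sets are intersected, the products of their masses accumulated, and the mass lost to empty intersections (the conflict) renormalized away. A minimal sketch of the rule itself, with mass functions as dicts from frozensets to masses (an illustration, not the paper's local-computation scheme):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: for every pair of focal sets, intersect and
    accumulate the product of masses; mass falling on the empty set
    is the conflict, which is renormalized away at the end."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    # Renormalize by the total non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

On a two-element frame, combining m1 = {{a}: 0.6, {a,b}: 0.4} with m2 = {{b}: 0.5, {a,b}: 0.5} yields conflict 0.3 and renormalized masses 3/7, 2/7, 2/7 on {a}, {b}, {a,b}.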
2004.03176
Machine Translation with Unsupervised Length-Constraints
We have seen significant improvements in machine translation due to the usage of deep learning. While the improvements in translation quality are impressive, the encoder-decoder architecture enables many more possibilities. In this paper, we explore one of these: the generation of constrained translations. We focus on length constraints, which are essential if the translation should be displayed in a given format. In this work, we propose an end-to-end approach for this task. Compared to a traditional method that first translates and then performs sentence compression, the text compression is learned completely unsupervised. By combining the idea with zero-shot multilingual machine translation, we are also able to perform unsupervised monolingual sentence compression. In order to fulfill the length constraints, we investigated several methods to integrate the constraints into the model. Using the presented technique, we are able to significantly improve the translation quality under constraints. Furthermore, we are able to perform unsupervised monolingual sentence compression.
labels: cs.CL
__index_level_0__: 171,475
2502.00585
Converting Transformers into DGNNs Form
Recent advances in deep learning have established Transformer architectures as the predominant modeling paradigm. Central to the success of Transformers is the self-attention mechanism, which scores the similarity between query and key matrices to modulate a value matrix. This operation bears striking similarities to digraph convolution, prompting an investigation into whether digraph convolution could serve as an alternative to self-attention. In this study, we formalize this concept by introducing a synthetic unitary digraph convolution based on the digraph Fourier transform. The resulting model, which we term Converter, effectively converts a Transformer into a Directed Graph Neural Network (DGNN) form. We have tested Converter on the Long-Range Arena benchmark, long document classification, and DNA sequence-based taxonomy classification. Our experimental results demonstrate that Converter achieves superior performance while maintaining computational efficiency and architectural simplicity, which establishes it as a lightweight yet powerful Transformer variant.
labels: cs.LG, cs.CL
__index_level_0__: 529,448
2104.07713
Contrastive Learning with Stronger Augmentations
Representation learning has been significantly developed with the advance of contrastive learning methods. Most of those methods have benefited from various data augmentations that are carefully designed to maintain their identities so that the images transformed from the same instance can still be retrieved. However, those carefully designed transformations limit further exploration of the novel patterns exposed by other transformations. Meanwhile, as found in our experiments, strong augmentations distort the images' structures, resulting in difficult retrieval. Thus, we propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches. Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries from a pool of instances. Experiments on the ImageNet dataset and downstream datasets showed that the information from the strongly augmented images can significantly boost the performance. For example, CLSA achieves a top-1 accuracy of 76.2% on ImageNet with a standard ResNet-50 architecture with a single-layer classifier fine-tuned, which is almost the same level as the 76.5% of supervised results. The code and pre-trained models are available at https://github.com/maple-research-lab/CLSA.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
230,516
2401.13977
Evaluating the Determinants of Mode Choice Using Statistical and Machine Learning Techniques in the Indian Megacity of Bengaluru
The decision making behind mode choice is critical for transportation planning. While statistical learning techniques like discrete choice models have been used traditionally, machine learning (ML) models have gained traction recently among transportation planners due to their higher predictive performance. However, the black box nature of ML models poses significant interpretability challenges, limiting their practical application in decision and policy making. This study utilised a dataset of $1350$ households belonging to the low and low-middle income brackets in the city of Bengaluru to investigate mode choice decision making behaviour using a Multinomial logit model and ML classifiers like decision trees, random forests, extreme gradient boosting and support vector machines. In terms of accuracy, the random forest model performed the best ($0.788$ on training data and $0.605$ on testing data) compared to all the other models. This research has adopted modern interpretability techniques like feature importance and individual conditional expectation plots to explain the decision making behaviour using ML models. Higher travel costs significantly reduce the predicted probability of bus usage compared to other modes (a $0.66\%$ and $0.34\%$ reduction using the Random Forest and XGBoost models for a $10\%$ increase in travel cost). However, reducing travel time by $10\%$ increases the preference for the metro ($0.16\%$ in Random Forests and $0.42\%$ in XGBoost). This research augments the ongoing research on mode choice analysis using machine learning techniques, which would help in improving the understanding of the performance of these models with real-world data in terms of both accuracy and interpretability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
423,923
2001.06680
Tree-Structured Policy based Progressive Reinforcement Learning for Temporally Language Grounding in Video
Temporally language grounding in untrimmed videos is a newly-raised task in video understanding. Most of the existing methods suffer from inferior efficiency, lacking interpretability, and deviating from the human perception mechanism. Inspired by human's coarse-to-fine decision-making paradigm, we formulate a novel Tree-Structured Policy based Progressive Reinforcement Learning (TSP-PRL) framework to sequentially regulate the temporal boundary by an iterative refinement process. The semantic concepts are explicitly represented as the branches in the policy, which contributes to efficiently decomposing complex policies into an interpretable primitive action. Progressive reinforcement learning provides correct credit assignment via two task-oriented rewards that encourage mutual promotion within the tree-structured policy. We extensively evaluate TSP-PRL on the Charades-STA and ActivityNet datasets, and experimental results show that TSP-PRL achieves competitive performance over existing state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
160,858
1910.09114
Using machine learning and information visualisation for discovering latent topics in Twitter news
We propose a method to discover latent topics and visualise large collections of tweets for easy identification and interpretation of topics, and exemplify its use with tweets from a Colombian mass media giant in the period 2014--2019. The latent topic analysis is performed in two ways: with the training of a Latent Dirichlet Allocation model, and with the combination of the FastText unsupervised model to represent tweets as vectors and the implementation of K-means clustering to group tweets into topics. Using a classification task, we found that people respond differently according to the various news topics. The classification task consists of the following: given a reply to a news tweet, we train a supervised algorithm to predict the topic of the news tweet solely from the reply. Furthermore, we show how the Colombian peace treaty has had a profound impact on Colombian society, as it is the topic in which most people engage to show their opinions.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
150,080
2410.10843
Adaptive Data Transport Mechanism for UAV Surveillance Missions in Lossy Environments
Unmanned Aerial Vehicles (UAVs) play an increasingly critical role in Intelligence, Surveillance, and Reconnaissance (ISR) missions such as border patrolling and criminal detection, thanks to their ability to access remote areas and transmit real-time imagery to processing servers. However, UAVs are highly constrained by payload size, power limits, and communication bandwidth, necessitating the development of highly selective and efficient data transmission strategies. This has driven the development of various compression and optimal transmission technologies for UAVs. Nevertheless, most methods strive to preserve maximal information in transferred video frames, missing the fact that only certain parts of images/video frames might offer meaningful contributions to the ultimate mission objectives in ISR scenarios involving moving object detection and tracking (OD/OT). This paper adopts a different perspective, and offers an alternative AI-driven scheduling policy that prioritizes selecting regions of the image that significantly contribute to the mission objective. The key idea is tiling the image into small patches and developing a deep reinforcement learning (DRL) framework that assigns higher transmission probabilities to patches that present higher overlaps with the detected object of interest, while penalizing sharp transitions over consecutive frames to promote smooth scheduling shifts. Although we used YOLOv8 object detection and the UDP transmission protocol as a benchmark testing scenario, the idea is general and applicable to different transmission protocols and OD/OT methods. To further boost the system's performance and avoid OD errors for cluttered image patches, we integrate it with interframe interpolations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
498,264
2111.14106
Enhancing Keyphrase Extraction from Academic Articles with their Reference Information
With the development of Internet technology, the phenomenon of information overload is becoming more and more obvious. It takes a lot of time for users to obtain the information they need. However, keyphrases that highly summarize document information are helpful for users to quickly obtain and understand documents. For academic resources, most existing studies extract keyphrases from the title and abstract of papers. We find that title information in references also contains author-assigned keyphrases. Therefore, this article uses reference information and applies two typical unsupervised extraction methods (TF*IDF and TextRank), two representative traditional supervised learning algorithms (Na\"ive Bayes and Conditional Random Field) and a supervised deep learning model (BiLSTM-CRF), to analyze the specific performance of reference information on keyphrase extraction. It is expected to improve the quality of keyphrase recognition from the perspective of expanding the source text. The experimental results show that reference information can increase the precision, recall, and F1 of automatic keyphrase extraction to a certain extent. This indicates the usefulness of reference information for keyphrase extraction from academic papers and provides a new idea for subsequent research on automatic keyphrase extraction.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
true
268,495
2402.13210
Bayesian Reward Models for LLM Alignment
To ensure that large language model (LLM) responses are helpful and non-toxic, a reward model trained on human preference data is usually used. LLM responses with high rewards are then selected through best-of-$n$ (BoN) sampling or the LLM is further optimized to produce responses with high rewards through reinforcement learning from human feedback (RLHF). However, these processes are susceptible to reward overoptimization or `hacking', where responses receive high rewards due to imperfections in the reward model rather than true preference, particularly as prompts or responses deviate from the training data. To address these challenges, we propose to train a Bayesian reward model, which signals higher uncertainty further from the training data distribution. We trained Bayesian reward models using Laplace approximation on LoRA weights, and found that the resulting uncertainty estimates can effectively mitigate reward overoptimization in BoN sampling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
431,151
1802.06807
On the Complexity of Opinions and Online Discussions
In an increasingly polarized world, demagogues who reduce complexity down to simple arguments based on emotion are gaining in popularity. Are opinions and online discussions falling into demagoguery? In this work, we aim to provide computational tools to investigate this question and, by doing so, explore the nature and complexity of online discussions and their space of opinions, uncovering where each participant lies. More specifically, we present a modeling framework to construct latent representations of opinions in online discussions which are consistent with human judgements, as measured by online voting. If two opinions are close in the resulting latent space of opinions, it is because humans think they are similar. Our modeling framework is theoretically grounded and establishes a surprising connection between opinions and voting models and the sign-rank of a matrix. Moreover, it also provides a set of practical algorithms to both estimate the dimension of the latent space of opinions and infer where opinions expressed by the participants of an online discussion lie in this space. Experiments on a large dataset from Yahoo! News, Yahoo! Finance, Yahoo! Sports, and the Newsroom app suggest that unidimensional opinion models may often be unable to accurately represent online discussions, provide insights into human judgements and opinions, and show that our framework is able to circumvent language nuances such as sarcasm or humor by relying on human judgements instead of textual analysis.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
90,746
2302.11349
Steerable Equivariant Representation Learning
Pre-trained deep image representations are useful for post-training tasks such as classification through transfer learning, image retrieval, and object detection. Data augmentations are a crucial aspect of pre-training robust representations in both supervised and self-supervised settings. Data augmentations explicitly or implicitly promote invariance in the embedding space to the input image transformations. This invariance reduces generalization to those downstream tasks which rely on sensitivity to these particular data augmentations. In this paper, we propose a method of learning representations that are instead equivariant to data augmentations. We achieve this equivariance through the use of steerable representations. Our representations can be manipulated directly in embedding space via learned linear maps. We demonstrate that our resulting steerable and equivariant representations lead to better performance on transfer learning and robustness: e.g. we improve linear probe top-1 accuracy by between 1% to 3% for transfer; and ImageNet-C accuracy by upto 3.4%. We further show that the steerability of our representations provides significant speedup (nearly 50x) for test-time augmentations; by applying a large number of augmentations for out-of-distribution detection, we significantly improve OOD AUC on the ImageNet-C dataset over an invariant representation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
347,179
2007.12668
KPRNet: Improving projection-based LiDAR semantic segmentation
Semantic segmentation is an important component in the perception systems of autonomous vehicles. In this work, we adopt recent advances in both image and point cloud segmentation to achieve a better accuracy in the task of segmenting LiDAR scans. KPRNet improves the convolutional neural network architecture of 2D projection methods and utilizes KPConv to replace the commonly used post-processing techniques with a learnable point-wise component which allows us to obtain more accurate 3D labels. With these improvements our model outperforms the current best method on the SemanticKITTI benchmark, reaching an mIoU of 63.1.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
188,886
2408.02897
A Metric Driven Approach to Mixed Precision Training
As deep learning methodologies have developed, it has been generally agreed that increasing neural network size improves model quality. However, this is at the expense of memory and compute requirements, which also need to be increased. Various efficiency techniques have been proposed to rein in hardware costs, one being the use of low precision numerics. Recent accelerators have introduced several different 8-bit data types to help accommodate DNNs in terms of numerics. In this paper, we identify a metric driven methodology to aid in the choice of numerics. We demonstrate how such a methodology can help scale training of a language representation model. The technique can be generalized to other model architectures.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
478,806
2308.03805
Weakly Supervised Multi-Task Representation Learning for Human Activity Analysis Using Wearables
Sensor data streams from wearable devices and smart environments are widely studied in areas like human activity recognition (HAR), person identification, or health monitoring. However, most of the previous works in activity and sensor stream analysis have been focusing on one aspect of the data, e.g. only recognizing the type of the activity or only identifying the person who performed the activity. We instead propose an approach that uses a weakly supervised multi-output siamese network that learns to map the data into multiple representation spaces, where each representation space focuses on one aspect of the data. The representation vectors of the data samples are positioned in the space such that the data with the same semantic meaning in that aspect are closely located to each other. Therefore, as demonstrated with a set of experiments, the trained model can provide metrics for clustering data based on multiple aspects, allowing it to address multiple tasks simultaneously and even to outperform single task supervised methods in many situations. In addition, further experiments are presented that in more detail analyze the effect of the architecture and of using multiple tasks within this framework, that investigate the scalability of the model to include additional tasks, and that demonstrate the ability of the framework to combine data for which only partial relationship information with respect to the target tasks is available.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
384,174
1811.12028
MOBIUS: Model-Oblivious Binarized Neural Networks
A privacy-preserving framework in which a computational resource provider receives encrypted data from a client and returns prediction results without decrypting the data, i.e., oblivious neural network or encrypted prediction, has been studied in machine learning that provides prediction services. In this work, we present MOBIUS (Model-Oblivious BInary neUral networkS), a new system that combines Binarized Neural Networks (BNNs) and secure computation based on secret sharing as tools for scalable and fast privacy-preserving machine learning. BNNs improve computational performance by binarizing values in training to $-1$ and $+1$, while secure computation based on secret sharing provides fast and various computations under encrypted forms via modulo operations with a short bit length. However, combining these tools is not trivial because their operations have different algebraic structures and the use of BNNs downgrades prediction accuracy in general. MOBIUS uses improved procedures of BNNs and secure computation that have compatible algebraic structures without downgrading prediction accuracy. We created an implementation of MOBIUS in C++ using the ABY library (NDSS 2015). We then conducted experiments using the MNIST dataset, and the results show that MOBIUS can return a prediction within 0.76 seconds, which is six times faster than SecureML (IEEE S\&P 2017). MOBIUS allows a client to request for encrypted prediction and allows a trainer to obliviously publish an encrypted model to a cloud provided by a computational resource provider, i.e., without revealing the original model itself to the provider.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
114,917
2309.00705
Indexing Irises by Intrinsic Dimension
28,000+ high-quality iris images of 1350 distinct eyes from 650+ different individuals from a relatively diverse university town population were collected. A small defined unobstructed portion of the normalized iris image is selected as a key portion for quickly identifying an unknown individual when submitting an iris image to be matched to a database of enrolled irises of the 1350 distinct eyes. The intrinsic dimension of a set of these key portions of the 1350 enrolled irises is measured to be about four (4). This set is mapped to a four-dimensional intrinsic space by principal components analysis (PCA). When an iris image is presented to the iris database for identification, the search begins in the neighborhood of the location of its key portion in the 4D intrinsic space, typically finding a correct identifying match after comparison to only a few percent of the database.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,391
2501.12557
Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review
Large language models (LLMs) have been positioned to revolutionize HCI, by reshaping not only the interfaces, design patterns, and sociotechnical systems that we study, but also the research practices we use. To-date, however, there has been little understanding of LLMs' uptake in HCI. We address this gap via a systematic literature review of 153 CHI papers from 2020-24 that engage with LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in HCI projects; (3) contribution types; and (4) acknowledged limitations and risks. We find LLM work in 10 diverse domains, primarily via empirical and artifact contributions. Authors use LLMs in five distinct roles, including as research tools or simulated users. Still, authors often raise validity and reproducibility concerns, and overwhelmingly study closed models. We outline opportunities to improve HCI research with and on LLMs, and provide guiding questions for researchers to consider the validity and appropriateness of LLM-related work.
true
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
526,357
2401.10967
HOSC: A Periodic Activation Function for Preserving Sharp Features in Implicit Neural Representations
Recently proposed methods for implicitly representing signals such as images, scenes, or geometries using coordinate-based neural network architectures often do not leverage the choice of activation functions, or do so only to a limited extent. In this paper, we introduce the Hyperbolic Oscillation function (HOSC), a novel activation function with a controllable sharpness parameter. Unlike any previous activations, HOSC has been specifically designed to better capture sudden changes in the input signal, and hence sharp or acute features of the underlying data, as well as smooth low-frequency transitions. Due to its simplicity and modularity, HOSC offers a plug-and-play functionality that can be easily incorporated into any existing method employing a neural network as a way of implicitly representing a signal. We benchmark HOSC against other popular activations in an array of general tasks, empirically showing an improvement in the quality of obtained representations, provide the mathematical motivation behind the efficacy of HOSC, and discuss its limitations.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
true
422,836
2407.13537
GlobalPointer: Large-Scale Plane Adjustment with Bi-Convex Relaxation
Plane adjustment (PA) is crucial for many 3D applications, involving simultaneous pose estimation and plane recovery. Despite recent advancements, it remains a challenging problem in the realm of multi-view point cloud registration. Current state-of-the-art methods can achieve globally optimal convergence only with good initialization. Furthermore, their high time complexity renders them impractical for large-scale problems. To address these challenges, we first exploit a novel optimization strategy termed \textit{Bi-Convex Relaxation}, which decouples the original problem into two simpler sub-problems, reformulates each sub-problem using a convex relaxation technique, and alternately solves each one until the original problem converges. Building on this strategy, we propose two algorithmic variants for solving the plane adjustment problem, namely \textit{GlobalPointer} and \textit{GlobalPointer++}, based on point-to-plane and plane-to-plane errors, respectively. Extensive experiments on both synthetic and real datasets demonstrate that our method can perform large-scale plane adjustment with linear time complexity, larger convergence region, and robustness to poor initialization, while achieving similar accuracy as prior methods. The code is available at https://github.com/wu-cvgl/GlobalPointer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,415
2406.03065
Decision Boundary-aware Knowledge Consolidation Generates Better Instance-Incremental Learner
Instance-incremental learning (IIL) focuses on learning continually with data of the same classes. Compared to class-incremental learning (CIL), IIL is seldom explored because it suffers less from catastrophic forgetting (CF). However, besides retaining knowledge, in real-world deployment scenarios where the class space is always predefined, continual and cost-effective model promotion with the potential unavailability of previous data is a more essential demand. Therefore, we first define a new and more practical IIL setting as promoting the model's performance besides resisting CF with only new observations. Two issues have to be tackled in the new IIL setting: 1) the notorious catastrophic forgetting due to no access to old data, and 2) broadening the existing decision boundary to new observations due to concept drift. To tackle these problems, our key insight is to moderately broaden the decision boundary to cover fail cases while retaining the old boundary. Hence, we propose a novel decision boundary-aware distillation method that consolidates knowledge into the teacher to ease the student's learning of new knowledge. We also establish benchmarks on the existing datasets Cifar-100 and ImageNet. Notably, extensive experiments demonstrate that the teacher model can be a better incremental learner than the student model, which overturns previous knowledge distillation-based methods that treat the student as the main role.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
461,074
2501.14622
ACT-JEPA: Joint-Embedding Predictive Architecture Improves Policy Representation Learning
Learning efficient representations for decision-making policies is a challenge in imitation learning (IL). Current IL methods require expert demonstrations, which are expensive to collect. Consequently, they often have underdeveloped world models. Self-supervised learning (SSL) offers an alternative by allowing models to learn from diverse, unlabeled data, including failures. However, SSL methods often operate in raw input space, making them inefficient. In this work, we propose ACT-JEPA, a novel architecture that integrates IL and SSL to enhance policy representations. We train a policy to predict (1) action sequences and (2) abstract observation sequences. The first objective uses action chunking to improve action prediction and reduce compounding errors. The second objective extends this idea of chunking by predicting abstract observation sequences. We utilize Joint-Embedding Predictive Architecture to predict in abstract representation space, allowing the model to filter out irrelevant details, improve efficiency, and develop a robust world model. Our experiments show that ACT-JEPA improves the quality of representations by learning temporal environment dynamics. Additionally, the model's ability to predict abstract observation sequences results in representations that effectively generalize to action sequence prediction. ACT-JEPA performs on par with established baselines across a range of decision-making tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
527,190
2402.05455
Large Language Models for Psycholinguistic Plausibility Pretesting
In psycholinguistics, the creation of controlled materials is crucial to ensure that research outcomes are solely attributed to the intended manipulations and not influenced by extraneous factors. To achieve this, psycholinguists typically pretest linguistic materials, where a common pretest is to solicit plausibility judgments from human evaluators on specific sentences. In this work, we investigate whether Language Models (LMs) can be used to generate these plausibility judgements. We investigate a wide range of LMs across multiple linguistic structures and evaluate whether their plausibility judgements correlate with human judgements. We find that GPT-4 plausibility judgements highly correlate with human judgements across the structures we examine, whereas other LMs correlate well with humans on commonly used syntactic structures. We then test whether this correlation implies that LMs can be used instead of humans for pretesting. We find that when coarse-grained plausibility judgements are needed, this works well, but when fine-grained judgements are necessary, even GPT-4 does not provide satisfactory discriminative power.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
427,870
2306.11838
Efficient Machine Translation Corpus Generation
This paper proposes an efficient and semi-automated method for human-in-the-loop post-editing for machine translation (MT) corpus generation. The method is based on online training of a custom MT quality estimation metric on-the-fly as linguists perform post-edits. The online estimator is used to prioritize worse hypotheses for post-editing, and auto-close best hypotheses without post-editing. This way, significant improvements can be achieved in the resulting quality of post-edits at a lower cost due to reduced human involvement. The trained estimator can also provide an online sanity check mechanism for post-edits and remove the need for additional linguists to review them or work on the same hypotheses. In this paper, the effect of prioritizing with the proposed method on the resulting MT corpus quality is presented versus scheduling hypotheses randomly. As demonstrated by experiments, the proposed method improves the lifecycle of MT models by focusing the linguist effort on production samples and hypotheses, which matter most for expanding MT corpora to be used for re-training them.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
374,727
2501.11512
Multitask Auxiliary Network for Perceptual Quality Assessment of Non-Uniformly Distorted Omnidirectional Images
Omnidirectional image quality assessment (OIQA) has been widely investigated in the past few years and achieved much success. However, most existing studies are dedicated to solving the uniform distortion problem in OIQA, which has a natural gap with the non-uniform distortion problem, and their ability to capture non-uniform distortion is far from satisfactory. To narrow this gap, in this paper, we propose a multitask auxiliary network for non-uniformly distorted omnidirectional images, where the parameters are optimized by jointly training the main task and other auxiliary tasks. The proposed network mainly consists of three parts: a backbone for extracting multiscale features from the viewport sequence, a multitask feature selection module for dynamically allocating specific features to different tasks, and auxiliary sub-networks for guiding the proposed model to capture local distortion and global quality change. Extensive experiments conducted on two large-scale OIQA databases demonstrate that the proposed model outperforms other state-of-the-art OIQA metrics, and these auxiliary sub-networks contribute to improving the performance of the proposed model. The source code is available at https://github.com/RJL2000/MTAOIQA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,950
2103.01648
Solving Inverse Problems by Joint Posterior Maximization with Autoencoding Prior
In this work we address the problem of solving ill-posed inverse problems in imaging where the prior is a variational autoencoder (VAE). Specifically we consider the decoupled case where the prior is trained once and can be reused for many different log-concave degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP that naturally leads to alternate optimization algorithms and to the use of a stochastic encoder to accelerate computations. The resulting technique (JPMAP) performs Joint Posterior Maximization using an Autoencoding Prior. We show theoretical and experimental evidence that the proposed objective function is quite close to bi-convex. Indeed it satisfies a weak bi-convexity property which is sufficient to guarantee that our optimization scheme converges to a stationary point. We also highlight the importance of correctly training the VAE using a denoising criterion, in order to ensure that the encoder generalizes well to out-of-distribution images, without affecting the quality of the generative model. This simple modification is key to providing robustness to the whole procedure. Finally we show how our joint MAP methodology relates to more common MAP approaches, and we propose a continuation scheme that makes use of our JPMAP algorithm to provide more robust MAP estimates. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches which more often get stuck in spurious local optima.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
222,702
2501.04826
Intelligent Gradient Boosting Algorithms for Estimating Strength of Modified Subgrade Soil
The performance of pavement under loading depends on the strength of the subgrade. However, experimental estimation of properties of pavement strengths such as California bearing ratio (CBR), unconfined compressive strength (UCS) and resistance value (R) are often tedious, time-consuming and costly, thereby inspiring a growing interest in machine learning based tools which are simple, cheap and fast alternatives. Thus, the potential application of two boosting techniques; categorical boosting (CatBoost) and extreme gradient boosting (XGBoost) and support vector regression (SVR), is similarly explored in this study for estimation of properties of subgrade soil modified with hydrated lime activated rice husk ash (HARSH). Using 121 experimental data samples of varying proportions of HARSH, plastic limit, liquid limit, plasticity index, clay activity, optimum moisture content, and maximum dry density as input for CBR, UCS and R estimation, four evaluation metrics namely coefficient of determination (R2), root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) are used to evaluate the models' performance. The results indicate that XGBoost outperformed CatBoost and SVR in estimating these properties, yielding R2 of 0.9994, 0.9995 and 0.9999 in estimating the CBR, UCS and R respectively. Also, SVR outperformed CatBoost in estimating the CBR and R with R2 of 0.9997 respectively. On the other hand, CatBoost outperformed SVR in estimating the UCS with R2 of 0.9994. Feature sensitivity analysis shows that the three machine learning techniques are unanimous that increasing the HARSH proportion affects the values of the estimated properties. A comparison with previous results also shows the superiority of XGBoost in estimating subgrade properties.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
523,356
2401.02296
Training Single-Layer Morphological Perceptron Using Convex-Concave Programming
This paper concerns the training of a single-layer morphological perceptron using disciplined convex-concave programming (DCCP). We introduce an algorithm referred to as K-DDCCP, which combines the existing single-layer morphological perceptron (SLMP) model proposed by Ritter and Urcid with the weighted disciplined convex-concave programming (WDCCP) algorithm by Charisopoulos and Maragos. The proposed training algorithm leverages the disciplined convex-concave procedure (DCCP) and formulates a non-convex optimization problem for binary classification. To tackle this problem, the constraints are expressed as differences of convex functions, enabling the application of the DCCP package. The experimental results confirm the effectiveness of the K-DDCCP algorithm in solving binary classification problems. Overall, this work contributes to the field of morphological neural networks by proposing an algorithm that extends the capabilities of the SLMP model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
419,663
2102.05951
Text Compression-aided Transformer Encoding
Text encoding is one of the most important steps in Natural Language Processing (NLP). It has been done well by the self-attention mechanism in the current state-of-the-art Transformer encoder, which has brought about significant improvements in the performance of many NLP tasks. Though the Transformer encoder may effectively capture general information in its resulting representations, the backbone information, meaning the gist of the input text, is not specifically focused on. In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily. Our explicit text compression approaches use dedicated models to compress text, while our implicit text compression approach simply adds an additional module to the main model to handle text compression. We propose three ways of integration, namely backbone source-side fusion, target-side fusion, and both-side fusion, to integrate the backbone information into Transformer-based models for various downstream tasks. Our evaluation on benchmark datasets shows that the proposed explicit and implicit text compression approaches improve results in comparison to strong baselines. We therefore conclude, when comparing the encodings to the baseline models, text compression helps the encoders to learn better language representations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
219,588
2005.03860
Where am I looking at? Joint Location and Orientation Estimation by Cross-View Matching
Cross-view geo-localization is the problem of estimating the position and orientation (latitude, longitude and azimuth angle) of a camera at ground level given a large-scale database of geo-tagged aerial (e.g., satellite) images. Existing approaches treat the task as a pure location estimation problem by learning discriminative feature descriptors, but neglect orientation alignment. It is well-recognized that knowing the orientation between ground and aerial images can significantly reduce matching ambiguity between these two views, especially when the ground-level images have a limited Field of View (FoV) instead of a full field-of-view panorama. Therefore, we design a Dynamic Similarity Matching network to estimate cross-view orientation alignment during localization. In particular, we address the cross-view domain gap by applying a polar transform to the aerial images to approximately align the images up to an unknown azimuth angle. Then, a two-stream convolutional network is used to learn deep features from the ground and polar-transformed aerial images. Finally, we obtain the orientation by computing the correlation between cross-view features, which also provides a more accurate measure of feature similarity, improving location recall. Experiments on standard datasets demonstrate that our method significantly improves state-of-the-art performance. Remarkably, we improve the top-1 location recall rate on the CVUSA dataset by a factor of 1.5x for panoramas with known orientation, by a factor of 3.3x for panoramas with unknown orientation, and by a factor of 6x for 180-degree FoV images with unknown orientation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
176,284
2409.00884
A Novel Hybrid Parameter-Efficient Fine-Tuning Approach for Hippocampus Segmentation and Alzheimer's Disease Diagnosis
Deep learning methods have significantly advanced medical image segmentation, yet their success hinges on large volumes of manually annotated data, which require specialized expertise for accurate labeling. Additionally, these methods often demand substantial computational resources, particularly for three-dimensional medical imaging tasks. Consequently, applying deep learning techniques for medical image segmentation with limited annotated data and computational resources remains a critical challenge. In this paper, we propose a novel parameter-efficient fine-tuning strategy, termed HyPS, which employs a hybrid parallel and serial architecture. HyPS updates a minimal subset of model parameters, thereby retaining the pre-trained model's original knowledge structure while enhancing its ability to learn specific features relevant to downstream tasks. We apply this strategy to the state-of-the-art SwinUNETR model for medical image segmentation. Initially, the model is pre-trained on the BraTs2021 dataset, after which the HyPS method is employed to transfer it to three distinct hippocampus datasets. Extensive experiments demonstrate that HyPS outperforms baseline methods, especially in scenarios with limited training samples. Furthermore, based on the segmentation results, we calculated the hippocampal volumes of subjects from the ADNI dataset and combined these with metadata to classify disease types. In distinguishing Alzheimer's disease (AD) from cognitively normal (CN) individuals, as well as early mild cognitive impairment (EMCI) from late mild cognitive impairment (LMCI), HyPS achieved classification accuracies of 83.78% and 64.29%, respectively. These findings indicate that the HyPS method not only facilitates effective hippocampal segmentation using pre-trained models but also holds potential for aiding Alzheimer's disease detection. Our code is publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
485,116
2305.18310
Motion-Scenario Decoupling for Rat-Aware Video Position Prediction: Strategy and Benchmark
Recently, significant progress has been made in human action recognition and behavior prediction using deep learning techniques, leading to improved vision-based semantic understanding. However, there is still a lack of high-quality motion datasets for small bio-robotics, which presents more challenging scenarios for long-term movement prediction and behavior control based on third-person observation. In this study, we introduce RatPose, a bio-robot motion prediction dataset constructed by considering the influence factors of individuals and environments based on predefined annotation rules. To enhance the robustness of motion prediction against these factors, we propose a Dual-stream Motion-Scenario Decoupling (\textit{DMSD}) framework that effectively separates scenario-oriented and motion-oriented features and designs a scenario contrast loss and motion clustering loss for overall training. With such a distinctive architecture, the dual-branch feature flow information is interacted and compensated in a decomposition-then-fusion manner. Moreover, we demonstrate significant performance improvements of the proposed \textit{DMSD} framework on different difficulty-level tasks. We also implement long-term discretized trajectory prediction tasks to verify the generalization ability of the proposed dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
368,950
1801.05948
Uplink Coverage Performance of an Underlay Drone Cell for Temporary Events
Using a drone as an aerial base station (ABS) to provide coverage to users on the ground is envisaged as a promising solution for beyond fifth generation (beyond-5G) wireless networks. While the literature to date has examined downlink cellular networks with ABSs, we consider an uplink cellular network with an ABS. Specifically, we analyze the use of an underlay ABS to provide coverage for a temporary event, such as a sporting event or a concert in a stadium. Using stochastic geometry, we derive the analytical expressions for the uplink coverage probability of the terrestrial base station (TBS) and the ABS. The results are expressed in terms of (i) the Laplace transforms of the interference power distribution at the TBS and the ABS and (ii) the distance distribution between the ABS and an independently and uniformly distributed (i.u.d.) ABS-supported user equipment and between the ABS and an i.u.d. TBS-supported user equipment. The accuracy of the analytical results is verified by Monte Carlo simulations. Our results show that varying the ABS height leads to a trade-off between the uplink coverage probability of the TBS and the ABS. In addition, assuming a quality of service of 90% at the TBS, an uplink coverage probability of the ABS of over 85% can be achieved, with the ABS deployed at or below its optimal height of typically between 250-500 m for the considered setup.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
88,541
2407.17073
Contrastive Learning Is Not Optimal for Quasiperiodic Time Series
Despite recent advancements in Self-Supervised Learning (SSL) for time series analysis, a noticeable gap persists between the anticipated achievements and actual performance. While these methods have demonstrated formidable generalization capabilities with minimal labels in various domains, their effectiveness in distinguishing between different classes based on a limited number of annotated records is notably lacking. Our hypothesis attributes this bottleneck to the prevalent use of Contrastive Learning, a shared training objective in previous state-of-the-art (SOTA) methods. By mandating distinctiveness between representations for negative pairs drawn from separate records, this approach compels the model to encode unique record-based patterns but simultaneously neglects changes occurring across the entire record. To overcome this challenge, we introduce Distilled Embedding for Almost-Periodic Time Series (DEAPS) in this paper, offering a non-contrastive method tailored for quasiperiodic time series, such as electrocardiogram (ECG) data. By avoiding the use of negative pairs, we not only mitigate the model's blindness to temporal changes but also enable the integration of a "Gradual Loss (Lgra)" function. This function guides the model to effectively capture dynamic patterns evolving throughout the record. The outcomes are promising, as DEAPS demonstrates a notable improvement of +10% over existing SOTA methods when just a few annotated records are presented to fit a Machine Learning (ML) model based on the learned representation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
475,840
2210.02693
Focal and Global Spatial-Temporal Transformer for Skeleton-based Action Recognition
Despite great progress achieved by transformers in various vision tasks, they are still underexplored for skeleton-based action recognition, with only a few attempts. Besides, these methods directly calculate the pair-wise global self-attention equally for all the joints in both the spatial and temporal dimensions, undervaluing the effect of discriminative local joints and the short-range temporal dynamics. In this work, we propose a novel Focal and Global Spatial-Temporal Transformer network (FG-STFormer), which is equipped with two key components: (1) FG-SFormer: focal joints and global parts coupling spatial transformer. It forces the network to focus on modelling correlations for both the learned discriminative spatial joints and human body parts respectively. The selective focal joints eliminate the negative effect of non-informative ones during accumulating the correlations. Meanwhile, the interactions between the focal joints and body parts are incorporated to enhance the spatial dependencies via mutual cross-attention. (2) FG-TFormer: focal and global temporal transformer. Dilated temporal convolution is integrated into the global self-attention mechanism to explicitly capture the local temporal motion patterns of joints or body parts, which is found to be vitally important to make the temporal transformer work. Extensive experimental results on three benchmarks, namely NTU-60, NTU-120 and NW-UCLA, show our FG-STFormer surpasses all existing transformer-based methods, and compares favourably with state-of-the-art GCN-based methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
321,749
1912.03430
An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness
Deep neural networks (DNNs) have had many successes, but they suffer from two major issues: (1) a vulnerability to adversarial examples and (2) a tendency to elude human interpretation. Interestingly, recent empirical and theoretical evidence suggests these two seemingly disparate issues are actually connected. In particular, robust models tend to provide more interpretable gradients than non-robust models. However, whether this relationship works in the opposite direction remains obscure. With this paper, we seek empirical answers to the following question: can models acquire adversarial robustness when they are trained to have interpretable gradients? We introduce a theoretically inspired technique called Interpretation Regularization (IR), which encourages a model's gradients to (1) match the direction of interpretable target salience maps and (2) have small magnitude. To assess model performance and tease apart factors that contribute to adversarial robustness, we conduct extensive experiments on MNIST and CIFAR-10 with both $\ell_2$ and $\ell_\infty$ attacks. We demonstrate that training the networks to have interpretable gradients improves their robustness to adversarial perturbations. Applying the network interpretation technique SmoothGrad yields additional performance gains, especially in cross-norm attacks and under heavy perturbations. The results indicate that the interpretability of the model gradients is a crucial factor for adversarial robustness. Code for the experiments can be found at https://github.com/a1noack/interp_regularization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
156,584
1606.01245
Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization
The Schatten-p quasi-norm $(0<p<1)$ is usually used to replace the standard nuclear norm in order to approximate the rank function more accurately. However, existing Schatten-p quasi-norm minimization algorithms involve singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each iteration, and thus may become very slow and impractical for large-scale problems. In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, which lead to the design of very efficient algorithms that only need to update two much smaller factor matrices. We also design two efficient proximal alternating linearized minimization algorithms for solving representative matrix completion problems. Finally, we provide the global convergence and performance guarantees for our algorithms, which have better convergence properties than existing algorithms. Experimental results on synthetic and real-world data show that our algorithms are more accurate than the state-of-the-art methods, and are orders of magnitude faster.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
56,773
2208.05627
SignalKG: Towards Reasoning about the Underlying Causes of Sensor Observations
This paper demonstrates our vision for knowledge graphs that assist machines to reason about the cause of signals observed by sensors. We show how the approach allows for constructing smarter surveillance systems that reason about the most likely cause (e.g., an attacker breaking a window) of a signal rather than acting directly on the received signal without consideration for how it was produced.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
true
false
312,447
2307.04978
Diffusion idea exploration for art generation
Cross-modal learning tasks have picked up pace in recent times. With a plethora of applications in diverse areas, generation of novel content using multiple modalities of data has remained a challenging problem. To address the same, various generative modelling techniques have been proposed for specific tasks. Novel and creative image generation is one important aspect for industrial application which could help as an arm for novel content generation. Techniques proposed previously used Generative Adversarial Networks (GAN), autoregressive models and Variational Autoencoders (VAE) for accomplishing similar tasks. These approaches are limited in their capability to produce images guided by either text instructions or rough sketch images, decreasing the overall performance of the image generator. We used state-of-the-art diffusion models to generate creative art by primarily leveraging text with additional support of rough sketches. Diffusion starts with a pattern of random dots and slowly converts that pattern into a design image using the guiding information fed into the model. Diffusion models have recently outperformed other generative models in image generation tasks using cross-modal data as guiding information. The initial experiments for this task of novel image generation demonstrated promising qualitative results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
378,571
2308.13910
Exploring Human Crowd Patterns and Categorization in Video Footage for Enhanced Security and Surveillance using Computer Vision and Machine Learning
Computer vision and machine learning have brought revolutionary shifts in perception for researchers, scientists, and the general populace. Once thought to be unattainable, these technologies have achieved the seemingly impossible. Their exceptional applications in diverse fields like security, agriculture, and education are a testament to their impact. However, the full potential of computer vision remains untapped. This paper explores computer vision's potential in security and surveillance, presenting a novel approach to track motion in videos. By categorizing motion into Arcs, Lanes, Converging/Diverging, and Random/Block motions using Motion Information Images and Blockwise dominant motion data, the paper examines different optical flow techniques, CNN models, and machine learning models. Successfully achieving its objectives with promising accuracy, the results can train anomaly-detection models, provide behavioral insights based on motion, and enhance scene comprehension.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
388,106
1906.04225
Label-Agnostic Sequence Labeling by Copying Nearest Neighbors
Retrieve-and-edit based approaches to structured prediction, where structures associated with retrieved neighbors are edited to form new structures, have recently attracted increased interest. However, much recent work merely conditions on retrieved structures (e.g., in a sequence-to-sequence framework), rather than explicitly manipulating them. We show we can perform accurate sequence labeling by explicitly (and only) copying labels from retrieved neighbors. Moreover, because this copying is label-agnostic, we can achieve impressive performance when transferring to new sequence-labeling tasks without retraining. We additionally consider a dynamic programming approach to sequence labeling in the presence of retrieved neighbors, which allows for controlling the number of distinct (copied) segments used to form a prediction, and leads to both more interpretable and accurate predictions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
134,636
2003.03572
Efficient Nonnegative Tensor Factorization via Saturating Coordinate Descent
With the advancements in computing technology and web-based applications, data is increasingly generated in multi-dimensional form. This data is usually sparse due to the presence of a large number of users and fewer user interactions. To deal with this, the Nonnegative Tensor Factorization (NTF) based methods have been widely used. However, existing factorization algorithms are not suitable to process in all three conditions of size, density, and rank of the tensor. Consequently, their applicability becomes limited. In this paper, we propose a novel fast and efficient NTF algorithm using the element selection approach. We calculate the element importance using Lipschitz continuity and propose a saturation point based element selection method that chooses a set of elements column-wise for updating to solve the optimization problem. Empirical analysis reveals that the proposed algorithm is scalable in terms of tensor size, density, and rank in comparison to the relevant state-of-the-art algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
167,276
1705.11159
Reinforcement Learning for Learning Rate Control
Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. We propose an algorithm to automatically learn learning rates using neural network based actor-critic methods from deep reinforcement learning (RL). In particular, we train a policy network called actor to decide the learning rate at each step during training, and a value network called critic to give feedback about the quality of the decision (e.g., the goodness of the learning rate outputted by the actor) that the actor made. The introduction of auxiliary actor and critic networks helps the main network achieve better performance. Experiments on different datasets and network architectures show that our approach leads to better convergence of SGD than human-designed competitors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
74,535
2412.06219
Data Free Backdoor Attacks
Backdoor attacks aim to inject a backdoor into a classifier such that it predicts any input with an attacker-chosen backdoor trigger as an attacker-chosen target class. Existing backdoor attacks require either retraining the classifier with some clean data or modifying the model's architecture. As a result, they are 1) not applicable when clean data is unavailable, 2) less efficient when the model is large, and 3) less stealthy due to architecture changes. In this work, we propose DFBA, a novel retraining-free and data-free backdoor attack without changing the model architecture. Technically, our proposed method modifies a few parameters of a classifier to inject a backdoor. Through theoretical analysis, we verify that our injected backdoor is provably undetectable and unremovable by various state-of-the-art defenses under mild assumptions. Our evaluation on multiple datasets further demonstrates that our injected backdoor: 1) incurs negligible classification loss, 2) achieves 100% attack success rates, and 3) bypasses six existing state-of-the-art defenses. Moreover, our comparison with a state-of-the-art non-data-free backdoor attack shows our attack is more stealthy and effective against various defenses while achieving less classification accuracy loss.
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
false
515,158
1412.7888
Accurate Distributed Time Synchronization in Mobile Wireless Sensor Networks from Noisy Difference Measurements
We propose a distributed algorithm for time synchronization in mobile wireless sensor networks. Each node can employ the algorithm to estimate the global time based on its local clock time. The problem of time synchronization is formulated as nodes estimating their skews and offsets from noisy difference measurements of offsets and logarithm of skews; the measurements acquired by time-stamped message exchanges between neighbors. A distributed stochastic approximation based algorithm is proposed to ensure that the estimation error is mean square convergent (variance converging to 0) under certain conditions. A sequence of scheduled update instants is used to meet the requirement of decreasing time-varying gains that need to be synchronized across nodes with unsynchronized clocks. Moreover, a modification on the algorithm is also presented to improve the initial convergence speed. Simulations indicate that highly accurate global time estimates can be achieved with the proposed algorithm for long time durations, while the errors in competing algorithms increase over time.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
38,859
1808.07301
Deep Association Learning for Unsupervised Video Person Re-identification
Deep learning methods have started to dominate the research progress of video-based person re-identification (re-id). However, existing methods mostly consider supervised learning, which requires exhaustive manual efforts for labelling cross-view pairwise data. Therefore, they severely lack scalability and practicality in real-world video surveillance applications. In this work, to address the video person re-id task, we formulate a novel Deep Association Learning (DAL) scheme, the first end-to-end deep learning method using none of the identity labels in model initialisation and training. DAL learns a deep re-id matching model by jointly optimising two margin-based association losses in an end-to-end manner, which effectively constrains the association of each frame to the best-matched intra-camera representation and cross-camera representation. Existing standard CNNs can be readily employed within our DAL scheme. Experiment results demonstrate that our proposed DAL significantly outperforms current state-of-the-art unsupervised video person re-id methods on three benchmarks: PRID 2011, iLIDS-VID and MARS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
105,715
2307.00984
Predicting beauty, liking, and aesthetic quality: A comparative analysis of image databases for visual aesthetics research
In the fields of Experimental and Computational Aesthetics, numerous image datasets have been created over the last two decades. In the present work, we provide a comparative overview of twelve image datasets that include aesthetic ratings (beauty, liking or aesthetic quality) and investigate the reproducibility of results across different datasets. Specifically, we examine how consistently the ratings can be predicted by using either (A) a set of 20 previously studied statistical image properties, or (B) the layers of a convolutional neural network developed for object recognition. Our findings reveal substantial variation in the predictability of aesthetic ratings across the different datasets. However, consistent similarities were found for datasets containing either photographs or paintings, suggesting different relevant features in the aesthetic evaluation of these two image genres. To our surprise, statistical image properties and the convolutional neural network predict aesthetic ratings with similar accuracy, highlighting a significant overlap in the image information captured by the two methods. Nevertheless, the discrepancies between the datasets call into question the generalizability of previous research findings on single datasets. Our study underscores the importance of considering multiple datasets to improve the validity and generalizability of research results in the fields of experimental and computational aesthetics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
377,199
2011.09222
Prognostic and Health Management (PHM) tool for Robot Operating System (ROS)
Nowadays, prognostics-aware systems are increasingly used, and prognostics is critical for sustaining autonomy. No engineering system, especially a robot, is perfect; a system that never fails within a given time would be ideal, but this is practically impossible. In all engineering work, we must try to predict and minimize or prevent failures in the system. Failures in a system are generally unknown in advance, so the prediction of these failures and the reliability of the system are estimated through a prediction process. Reliability analysis is important for improving system performance, extending system lifetime, etc. Prognostic and Health Management (PHM) includes reliability, safety, predictive fault detection / isolation, advanced diagnostics / prognostics, component lifecycle tracking, health reporting and information management, etc. This study proposes an open source robot prognostic and health management tool using a model-based methodology, namely "Prognostics and Health Management tool for ROS". This tool is generic and can be used with any kind of robot (mobile robot, robot arm, drone, etc.) compatible with ROS. Some features of this tool are managing / monitoring robots' health, RUL, probability of task completion (PoTC), etc. The user is able to enter the necessary equations and component information (hazard rates, robot configuration, etc.) into the PHM tool, as well as other sensory data like temperature, humidity, pressure, load, etc. In addition to these, a case study is conducted for the mobile robots (OTA) using this tool.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
207,127
2310.19066
Gauge-optimal approximate learning for small data classification problems
Small data learning problems are characterized by a significant discrepancy between the limited amount of response variable observations and the large feature space dimension. In this setting, common learning tools struggle to identify the features important for the classification task from those that bear no relevant information, and cannot derive an appropriate learning rule which allows them to discriminate between different classes. As a potential solution to this problem, here we exploit the idea of reducing and rotating the feature space in a lower-dimensional gauge and propose the Gauge-Optimal Approximate Learning (GOAL) algorithm, which provides an analytically tractable joint solution to the dimension reduction, feature segmentation and classification problems for small data learning problems. We prove that the optimal solution of the GOAL algorithm consists of piecewise-linear functions in the Euclidean space, and that it can be approximated through a monotonically convergent algorithm which presents -- under the assumption of a discrete segmentation of the feature space -- a closed-form solution for each optimization substep and an overall linear iteration cost scaling. The GOAL algorithm has been compared to other state-of-the-art machine learning (ML) tools on both synthetic data and challenging real-world applications from climate science and bioinformatics (i.e., prediction of the El Nino Southern Oscillation and inference of epigenetically-induced gene-activity networks from limited experimental data). The experimental results show that the proposed algorithm outperforms the reported best competitors for these problems both in learning performance and computational cost.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
403,838
2402.15328
Towards Principled Task Grouping for Multi-Task Learning
This paper presents a novel approach to task grouping in Multitask Learning (MTL), advancing beyond existing methods by addressing key theoretical and practical limitations. Unlike prior studies, our approach offers a more theoretically grounded method that does not rely on restrictive assumptions for constructing transfer gains. We also propose a flexible mathematical programming formulation which can accommodate a wide spectrum of resource constraints, thus enhancing its versatility. Experimental results across diverse domains, including computer vision datasets, combinatorial optimization benchmarks and time series tasks, demonstrate the superiority of our method over extensive baselines, validating its effectiveness and general applicability in MTL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
432,096
1704.00051
Reading Wikipedia to Answer Open-Domain Questions
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
71,014
1907.07911
Locality-constrained Spatial Transformer Network for Video Crowd Counting
Compared with single image based crowd counting, video provides the spatial-temporal information of the crowd that would help improve the robustness of crowd counting. But translation, rotation and scaling of people lead to the change of density map of heads between neighbouring frames. Meanwhile, people walking in/out or being occluded in dynamic scenes leads to the change of head counts. To alleviate these issues in video crowd counting, a Locality-constrained Spatial Transformer Network (LSTN) is proposed. Specifically, we first leverage a Convolutional Neural Network to estimate the density map for each frame. Then to relate the density maps between neighbouring frames, a Locality-constrained Spatial Transformer (LST) module is introduced to estimate the density map of next frame with that of current frame. To facilitate the performance evaluation, a large-scale video crowd counting dataset is collected, which contains 15K frames with about 394K annotated heads captured from 13 different scenes. As far as we know, it is the largest video crowd counting dataset. Extensive experiments on our dataset and other crowd counting datasets validate the effectiveness of our LSTN for crowd counting.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
138,998
1910.11236
Interpretable Text Classification Using CNN and Max-pooling
Deep neural networks have been widely used in text classification. However, it is hard to interpret the neural models due to their complicated mechanisms. In this work, we study the interpretability of a variant of the typical text classification model which is based on a convolutional operation and a max-pooling layer. Two mechanisms, convolution attribution and n-gram feature analysis, are proposed to analyse the processing procedure of the CNN model. The interpretability of the model is reflected by providing posterior interpretation for neural network predictions. Besides, a multi-sentence strategy is proposed to enable the model to be used in multi-sentence situations without loss of performance and interpretability. We evaluate the performance of the model on several classification tasks and justify its interpretable performance with some case studies.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
150,719
1609.06018
Deep CTR Prediction in Display Advertising
Click through rate (CTR) prediction of image ads is the core task of online display advertising systems, and logistic regression (LR) has been frequently applied as the prediction model. However, the LR model lacks the ability to extract complex and intrinsic nonlinear features from handcrafted high-dimensional image features, which limits its effectiveness. To solve this issue, in this paper, we introduce a novel deep neural network (DNN) based model that directly predicts the CTR of an image ad based on raw image pixels and other basic features in one step. The DNN model employs convolution layers to automatically extract representative visual features from images, and nonlinear CTR features are then learned from visual features and other contextual features by using fully-connected layers. Empirical evaluations on a real world dataset with over 50 million records demonstrate the effectiveness and efficiency of this method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
61,228
2401.05827
Hallucination Benchmark in Medical Visual Question Answering
The recent success of large language and vision models (LLVMs) on vision question answering (VQA), particularly their applications in medicine (Med-VQA), has shown a great potential of realizing effective visual assistants for healthcare. However, these models are not extensively tested on the hallucination phenomenon in clinical settings. Here, we created a hallucination benchmark of medical images paired with question-answer sets and conducted a comprehensive evaluation of the state-of-the-art models. The study provides an in-depth analysis of current models' limitations and reveals the effectiveness of various prompting strategies.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
420,927
1902.07623
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
advertorch is a toolbox for adversarial robustness research. It contains various implementations for attacks, defenses and robust training methods. advertorch is built on PyTorch (Paszke et al., 2017), and leverages the advantages of the dynamic computational graph to provide concise and efficient reference implementations. The code is licensed under the LGPL license and is open sourced at https://github.com/BorealisAI/advertorch .
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
122,023
2102.05212
Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior
This paper is concerned with polarimetric dense map reconstruction based on a polarization camera with the help of relative depth information as a prior. In general, polarization imaging is able to reveal information about surface normals such as azimuth and zenith angles, which can support the development of solutions to the problem of dense reconstruction, especially in texture-poor regions. However, polarimetric shape cues are ambiguous due to two types of polarized reflection (specular/diffuse). Although methods have been proposed to address this issue, they either are offline and therefore not practical in robotics applications, or use incomplete polarimetric cues, leading to sub-optimal performance. In this paper, we propose an online reconstruction method that uses full polarimetric cues available from the polarization camera. With our online method, we can propagate sparse depth values both along and perpendicular to iso-depth contours. Through comprehensive experiments on challenging image sequences, we demonstrate that our method is able to significantly improve the accuracy of the depth map as well as increase its density, especially in regions of poor texture.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
219,358
2501.11003
Building low-resource African language corpora: A case study of Kidawida, Kalenjin and Dholuo
Natural Language Processing is a crucial frontier in artificial intelligence, with broad applications in many areas, including public health, agriculture, education, and commerce. However, due to the lack of substantial linguistic resources, many African languages remain underrepresented in this digital transformation. This paper presents a case study on the development of linguistic corpora for three under-resourced Kenyan languages, Kidaw'ida, Kalenjin, and Dholuo, with the aim of advancing natural language processing and linguistic research in African communities. Our project, which lasted one year, employed a selective crowd-sourcing methodology to collect text and speech data from native speakers of these languages. Data collection involved (1) recording conversations and translation of the resulting text into Kiswahili, thereby creating parallel corpora, and (2) reading and recording written texts to generate speech corpora. We made these resources freely accessible via open-research platforms, namely Zenodo for the parallel text corpora and Mozilla Common Voice for the speech datasets, thus facilitating ongoing contributions and access for developers to train models and develop Natural Language Processing applications. The project demonstrates how grassroots efforts in corpus building can support the inclusion of African languages in artificial intelligence innovations. In addition to filling resource gaps, these corpora are vital in promoting linguistic diversity and empowering local communities by enabling Natural Language Processing applications tailored to their needs. As African countries like Kenya increasingly embrace digital transformation, developing indigenous language resources becomes essential for inclusive growth. We encourage continued collaboration from native speakers and developers to expand and utilize these corpora.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
525,749
1410.6339
Constructions and Properties of Linear Locally Repairable Codes
In this paper, locally repairable codes with all-symbol locality are studied. Methods to modify already existing codes are presented. Also, it is shown that with high probability, a random matrix with a few extra columns guaranteeing the locality property, is a generator matrix for a locally repairable code with a good minimum distance. The proof of this also gives a constructive method to find locally repairable codes. Constructions are given of three infinite classes of optimal vector-linear locally repairable codes over an alphabet of small size, not depending on the size of the code.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,976
1603.08212
Human Pose Estimation using Deep Consensus Voting
In this paper we consider the problem of human pose estimation from a single still image. We propose a novel approach where each location in the image votes for the position of each keypoint using a convolutional neural net. The voting scheme allows us to utilize information from the whole image, rather than rely on a sparse set of keypoint locations. Using dense, multi-target votes, not only produces good keypoint predictions, but also enables us to compute image-dependent joint keypoint probabilities by looking at consensus voting. This differs from most previous methods where joint probabilities are learned from relative keypoint locations and are independent of the image. We finally combine the keypoints votes and joint probabilities in order to identify the optimal pose configuration. We show our competitive performance on the MPII Human Pose and Leeds Sports Pose datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
53,746
2403.15726
Convection-Diffusion Equation: A Theoretically Certified Framework for Neural Networks
In this paper, we study the partial differential equation models of neural networks. A neural network can be viewed as a map from a simple base model to a complicated function. Based on solid analysis, we show that this map can be formulated by a convection-diffusion equation. This theoretically certified framework gives a mathematical foundation and more understanding of neural networks. Moreover, based on the convection-diffusion equation model, we design a novel network structure, which incorporates a diffusion mechanism into the network architecture. Extensive experiments on both benchmark datasets and real-world applications validate the performance of the proposed model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
440,715
2002.10365
The Early Phase of Neural Network Training
Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here, we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state during these early iterations of training and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this behavior, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are not inherently label-dependent, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
165,380
2309.06783
Ungar – A C++ Framework for Real-Time Optimal Control Using Template Metaprogramming
We present Ungar, an open-source library to aid the implementation of high-dimensional optimal control problems (OCPs). We adopt modern template metaprogramming techniques to enable the compile-time modeling of complex systems while retaining maximum runtime efficiency. Our framework provides syntactic sugar to allow for expressive formulations of a rich set of structured dynamical systems. While the core modules depend only on the header-only Eigen and Boost.Hana libraries, we bundle our codebase with optional packages and custom wrappers for automatic differentiation, code generation, and nonlinear programming. Finally, we demonstrate the versatility of Ungar in various model predictive control applications, namely, four-legged locomotion and collaborative loco-manipulation with multiple one-armed quadruped robots. Ungar is available under the Apache License 2.0 at https://github.com/fdevinc/ungar.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
391,543
2207.06661
Deep Point-to-Plane Registration by Efficient Backpropagation for Error Minimizing Function
Traditional algorithms of point set registration minimizing point-to-plane distances often achieve a better estimation of rigid transformation than those minimizing point-to-point distances. Nevertheless, recent deep-learning-based methods minimize the point-to-point distances. In contrast to these methods, this paper proposes the first deep-learning-based approach to point-to-plane registration. A challenging part of this problem is that a typical solution for point-to-plane registration requires an iterative process of accumulating small transformations obtained by minimizing a linearized energy function. The iteration significantly increases the size of the computation graph needed for backpropagation and can slow down both forward and backward network evaluations. To solve this problem, we consider the estimated rigid transformation as a function of input point clouds and derive its analytic gradients using the implicit function theorem. The analytic gradient that we introduce is independent of how the error minimizing function (i.e., the rigid transformation) is obtained, thus allowing us to calculate both the rigid transformation and its gradient efficiently. We implement the proposed point-to-plane registration module over several previous methods that minimize point-to-point distances and demonstrate that the extensions outperform the base methods even with point clouds with noise and low-quality point normals estimated with local point distributions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
307,951
2103.14216
Which Parts Determine the Impression of the Font?
Various fonts give different impressions, such as legible, rough, and comic-text. This paper aims to analyze the correlation between the local shapes, or parts, and the impression of fonts. By focusing on local shapes instead of the whole letter shape, we can realize letter-shape independent and more general analysis. The analysis is performed by newly combining SIFT and DeepSets, to extract an arbitrary number of essential parts from a particular font and aggregate them to infer the font impressions by nonlinear regression. Our qualitative and quantitative analyses prove that (1) fonts with similar parts have similar impressions, (2) many impressions, such as legible and rough, largely depend on specific parts, (3) several impressions are very irrelevant to parts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
226,771
2401.14612
On Inhomogeneous Infinite Products of Stochastic Matrices and Applications
With the growth of magnitude of multi-agent networks, distributed optimization holds considerable significance within complex systems. Convergence, a pivotal goal in this domain, is contingent upon the analysis of infinite products of stochastic matrices (IPSMs). In this work, convergence properties of inhomogeneous IPSMs are investigated. The convergence rate of inhomogeneous IPSMs towards an absolute probability sequence $\pi$ is derived. We also show that the convergence rate is nearly exponential, which coincides with existing results on ergodic chains. The methodology employed relies on delineating the interrelations among Sarymsakov matrices, scrambling matrices, and positive-column matrices. Based on the theoretical results on inhomogeneous IPSMs, we propose a decentralized projected subgradient method for time-varying multi-agent systems with graph-related stretches in (sub)gradient descent directions. The convergence of the proposed method is established for convex objective functions, and extended to non-convex objectives that satisfy Polyak-Lojasiewicz conditions. To corroborate the theoretical findings, we conduct numerical simulations, aligning the outcomes with the established theoretical framework.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
424,157
2312.05978
Neural Architecture Codesign for Fast Bragg Peak Analysis
We develop an automated pipeline to streamline neural architecture codesign for fast, real-time Bragg peak analysis in high-energy diffraction microscopy. Traditional approaches, notably pseudo-Voigt fitting, demand significant computational resources, prompting interest in deep learning models for more efficient solutions. Our method employs neural architecture search and AutoML to enhance these models, including hardware costs, leading to the discovery of more hardware-efficient neural architectures. Our results match the performance, while achieving a 13$\times$ reduction in bit operations compared to the previous state-of-the-art. We show further speedup through model compression techniques such as quantization-aware-training and neural network pruning. Additionally, our hierarchical search space provides greater flexibility in optimization, which can easily extend to other tasks and domains.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
414,322
2301.06889
Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Non-decomposable Shared Global State
Mean Field Control (MFC) is a powerful approximation tool to solve large-scale Multi-Agent Reinforcement Learning (MARL) problems. However, the success of MFC relies on the presumption that given the local states and actions of all the agents, the next (local) states of the agents evolve conditionally independent of each other. Here we demonstrate that even in a MARL setting where agents share a common global state in addition to their local states evolving conditionally independently (thus introducing a correlation between the state transition processes of individual agents), the MFC can still be applied as a good approximation tool. The global state is assumed to be non-decomposable i.e., it cannot be expressed as a collection of local states of the agents. We compute the approximation error as $\mathcal{O}(e)$ where $e=\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|} +\sqrt{|\mathcal{U}|}\right]$. The size of the agent population is denoted by the term $N$, and $|\mathcal{X}|, |\mathcal{U}|$ respectively indicate the sizes of (local) state and action spaces of individual agents. The approximation error is found to be independent of the size of the shared global state space. We further demonstrate that in a special case if the reward and state transition functions are independent of the action distribution of the population, then the error can be improved to $e=\frac{\sqrt{|\mathcal{X}|}}{\sqrt{N}}$. Finally, we devise a Natural Policy Gradient based algorithm that solves the MFC problem with $\mathcal{O}(\epsilon^{-3})$ sample complexity and obtains a policy that is within $\mathcal{O}(\max\{e,\epsilon\})$ error of the optimal MARL policy for any $\epsilon>0$.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
340,771
2008.05217
Large-Scale Analysis of Iliopsoas Muscle Volumes in the UK Biobank
Psoas muscle measurements are frequently used as markers of sarcopenia and predictors of health. Manually measured cross-sectional areas are most commonly used, but there is a lack of consistency regarding the position of the measurement, and manual annotations are not practical for large population studies. We have developed a fully automated method to measure iliopsoas muscle volume (comprised of the psoas and iliacus muscles) using a convolutional neural network. Magnetic resonance images were obtained from the UK Biobank for 5,000 male and female participants, balanced for age, gender and BMI. Ninety manual annotations were available for model training and validation. The model showed excellent performance against out-of-sample data (dice score coefficient of 0.912 +/- 0.018). Iliopsoas muscle volumes were successfully measured in all 5,000 participants. Iliopsoas volume was greater in male compared with female subjects. There was a small but significant asymmetry between left and right iliopsoas muscle volumes. We also found that iliopsoas volume was significantly related to height, BMI and age, and that there was an acceleration in muscle volume decrease in men with age. Our method provides a robust technique for measuring iliopsoas muscle volume that can be applied to large cohorts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
191,451
2105.14267
Information Directed Sampling for Sparse Linear Bandits
Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure. In this work we explore the use of information-directed sampling (IDS), which naturally balances the information-regret trade-off. We develop a class of information-theoretic Bayesian regret bounds that nearly match existing lower bounds on a variety of problem instances, demonstrating the adaptivity of IDS. To efficiently implement sparse IDS, we propose an empirical Bayesian approach for sparse posterior sampling using a spike-and-slab Gaussian-Laplace prior. Numerical results demonstrate significant regret reductions by sparse IDS relative to several baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
237,607
2402.12066
Max-Min Fairness for Uplink Rate-Splitting Multiple Access with Finite Blocklength
In this letter, we investigate the performance of Max-Min Fairness (MMF) for uplink Rate-Splitting Multiple Access (RSMA) in short-packet communications. Specifically, considering a Single-Input Single-Output (SISO) Multiple Access Channel (MAC), we optimize the transmit power allocation between the splitting user messages to maximize the minimum rate among users with Finite Blocklength (FBL) constraints. To tackle this problem, we propose a Successive Convex Approximation (SCA)-based approach. Additionally, we introduce a low-complexity scheme to design the decoding order at the receiver. Numerical results show that RSMA outperforms conventional transmission schemes such as Non-orthogonal Multiple Access (NOMA) in terms of MMF.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
430,701
1507.04204
Smart Pilot Assignment for Massive MIMO
A massive multiple-input multiple-output (MIMO) system, which utilizes a large number of antennas at the base station (BS) to serve multiple users, suffers from pilot contamination due to inter-cell interference. A smart pilot assignment (SPA) scheme is proposed in this letter to improve the performance of users with severe pilot contamination. Specifically, by exploiting the large-scale characteristics of fading channels, the BS firstly measures the inter-cell interference of each pilot sequence caused by the users with the same pilot sequence in other adjacent cells. Then, in contrast to the conventional schemes which assign the pilot sequences to the users randomly, the proposed SPA method assigns the pilot sequence with the smallest inter-cell interference to the user having the worst channel quality in a sequential way to improve its performance. Simulation results verify the performance gain of the proposed scheme in typical massive MIMO systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
45,147
1306.1586
Strong converse for the classical capacity of entanglement-breaking and Hadamard channels via a sandwiched Renyi relative entropy
A strong converse theorem for the classical capacity of a quantum channel states that the probability of correctly decoding a classical message converges exponentially fast to zero in the limit of many channel uses if the rate of communication exceeds the classical capacity of the channel. Along with a corresponding achievability statement for rates below the capacity, such a strong converse theorem enhances our understanding of the capacity as a very sharp dividing line between achievable and unachievable rates of communication. Here, we show that such a strong converse theorem holds for the classical capacity of all entanglement-breaking channels and all Hadamard channels (the complementary channels of the former). These results follow by bounding the success probability in terms of a "sandwiched" Renyi relative entropy, by showing that this quantity is subadditive for all entanglement-breaking and Hadamard channels, and by relating this quantity to the Holevo capacity. Prior results regarding strong converse theorems for particular covariant channels emerge as a special case of our results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
25,058
1509.06420
A Multi-Agent System Approach to Load-Balancing and Resource Allocation for Distributed Computing
In this research we use a decentralized computing approach to allocate and schedule tasks on a massively distributed grid. Using emergent properties of multi-agent systems, the algorithm dynamically creates and dissociates clusters to serve the changing resource demands of a global task queue. The algorithm is compared to a standard First-in First-out (FIFO) scheduling algorithm. Experiments done on a simulator show that the distributed resource allocation protocol (dRAP) algorithm outperforms the FIFO scheduling algorithm on time to empty queue, average waiting time and CPU utilization. Such a decentralized computing approach holds promise for massively distributed processing scenarios like SETI@home and Google MapReduce.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
47,151
2303.04502
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed. Existing adversarial defenses primarily aim at preventing adversarial examples from attacking DNNs successfully, rather than preventing their generation. If the generation of adversarial examples is unregulated, images within reach are no longer secure and pose a threat to non-robust DNNs. Although gradient obfuscation attempts to address this issue, it has been shown to be circumventable. Therefore, we propose a novel adversarial defense mechanism, which is referred to as immune defense and is an example-based pre-defense. This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples for the raw images, thereby protecting both images and DNNs. These perturbed images are referred to as Immune Examples (IEs). In the white-box immune defense, we provide a gradient-based and an optimization-based approach, respectively. Additionally, the more complex black-box immune defense is taken into consideration. We propose Masked Gradient Sign Descent (MGSD) to reduce approximation error and stabilize the update to improve the transferability of IEs and thereby ensure their effectiveness against black-box adversarial attacks. The experimental results demonstrate that the optimization-based approach has superior performance and better visual quality in white-box immune defense. In contrast, the gradient-based approach has stronger transferability, and the proposed MGSD significantly improves the transferability of baselines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
350,113
2112.13926
Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity
Federated learning (FL) has emerged as a popular technique for distributing machine learning across wireless edge devices. We examine FL under two salient properties of contemporary networks: device-server communication delays and device computation heterogeneity. Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the FL synchronization step. We theoretically characterize the convergence behavior of StoFedDelAv and obtain the optimal combiner weights, which consider the global model delay and expected local gradient error at each device. We then formulate a network-aware optimization problem which tunes the minibatch sizes of the devices to jointly minimize energy consumption and machine learning training loss, and solve the non-convex problem through a series of convex approximations. Our simulations reveal that StoFedDelAv outperforms the current art in FL, evidenced by the obtained improvements in the optimization objective.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
273,394
1509.08309
Energy Efficiency in MIMO Underlay and Overlay Device-to-Device Communications and Cognitive Radio Systems
This paper addresses the problem of resource allocation for systems in which a primary and a secondary link share the available spectrum by an underlay or overlay approach. After observing that such a scenario models both cognitive radio and D2D communications, we formulate the problem as the maximization of the secondary energy efficiency subject to a minimum rate requirement for the primary user. This leads to challenging non-convex, fractional problems. In the underlay scenario, we obtain the global solution by means of a suitable reformulation. In the overlay scenario, two algorithms are proposed. The first one yields a resource allocation fulfilling the first-order optimality conditions of the resource allocation problem, by solving a sequence of easier fractional problems. The second one enjoys a weaker optimality claim, but an even lower computational complexity. Numerical results demonstrate the merits of the proposed algorithms both in terms of energy-efficient performance and complexity, also showing that the two proposed algorithms for the overlay scenario perform very similarly, despite the different complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,353
2404.12416
Full Shot Predictions for the DIII-D Tokamak via Deep Recurrent Networks
Although tokamaks are one of the most promising devices for realizing nuclear fusion as an energy source, there are still key obstacles when it comes to understanding the dynamics of the plasma and controlling it. As such, it is crucial that high quality models are developed to assist in overcoming these obstacles. In this work, we take an entirely data driven approach to learn such a model. In particular, we use historical data from the DIII-D tokamak to train a deep recurrent network that is able to predict the full time evolution of plasma discharges (or "shots"). Following this, we investigate how different training and inference procedures affect the quality and calibration of the shot predictions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
447,884
2301.00786
Sparse Array Design for Dual-Function Radar-Communications System
The problem of sparse array design for dual-function radar-communications is investigated. Our goal is to design a sparse array which can simultaneously shape desired beam responses and serve multiple downlink users with the required signal-to-interference-plus-noise ratio levels. Besides, we also take into account the limitation of the radiated power by each antenna. The problem is formulated as a quadratically constrained quadratic program with a joint-sparsity-promoting regularization, which is NP-hard. The resulting problem is solved by the consensus alternating direction method of multipliers, which enjoys parallel implementation. Numerical simulations exhibit the effectiveness and superiority of the proposed method which leads to a more power-efficient solution.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
339,022
2312.13737
An Approach to Abstract Multi-stage Cyberattack Data Generation for ML-Based IDS in Smart Grids
Power grids are becoming more digitized, resulting in new opportunities for the grid operation but also new challenges, such as new threats from the cyber-domain. To address these challenges, cybersecurity solutions are being considered in the form of preventive, detective, and reactive measures. Machine learning-based intrusion detection systems are used as part of detection efforts to detect and defend against cyberattacks. However, training and testing data for these systems are often not available or suitable for use in machine learning models for detecting multi-stage cyberattacks in smart grids. In this paper, we propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids. We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network. Within the selected scenarios, we observed promising results, but a larger number of scenarios need to be studied to draw a more informed conclusion about the suitability of synthesized data.
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
417,402
1807.06036
Pangloss: Fast Entity Linking in Noisy Text Environments
Entity linking is the task of mapping potentially ambiguous terms in text to their constituent entities in a knowledge base like Wikipedia. This is useful for organizing content, extracting structured data from textual documents, and in machine learning relevance applications like semantic search, knowledge graph construction, and question answering. Traditionally, this work has focused on text that has been well-formed, like news articles, but in common real world datasets such as messaging, resumes, or short-form social media, non-grammatical, loosely-structured text adds a new dimension to this problem. This paper presents Pangloss, a production system for entity disambiguation on noisy text. Pangloss combines a probabilistic linear-time key phrase identification algorithm with a semantic similarity engine based on context-dependent document embeddings to achieve better than state-of-the-art results (>5% in F1) compared to other research or commercially available systems. In addition, Pangloss leverages a local embedded database with a tiered architecture to house its statistics and metadata, which allows rapid disambiguation in streaming contexts and on-device disambiguation in low-memory environments such as mobile phones.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
103,039