Dataset schema:
id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (range 0 to 541k)

2403.18579
On Optimizing Hyperparameters for Quantum Neural Networks
The increasing capabilities of Machine Learning (ML) models go hand in hand with an immense amount of data and computational power required for training. Therefore, training is usually outsourced to HPC facilities, where we have started to experience the limits of scaling conventional HPC hardware anticipated by the slowing of Moore's law. Despite heavy parallelization and optimization efforts, current state-of-the-art ML models require weeks of training, which is associated with an enormous CO$_2$ footprint. Quantum Computing, and specifically Quantum Machine Learning (QML), can offer significant theoretical speed-ups and enhanced expressive power. However, training QML models requires tuning various hyperparameters, which is a nontrivial task, and suboptimal choices can strongly affect the trainability and performance of the models. In this study, we identify the most impactful hyperparameters and collect data about the performance of QML models. We compare different configurations and provide researchers with performance data and concrete suggestions for hyperparameter selection.
Labels: cs.LG, Other
__index_level_0__: 441,998

2002.08537
Adaptive Temporal Difference Learning with Linear Function Approximation
This paper revisits the temporal difference (TD) learning algorithm for policy evaluation tasks in reinforcement learning. Typically, the performance of TD(0) and TD($\lambda$) is very sensitive to the choice of stepsizes. Oftentimes, TD(0) suffers from slow convergence. Motivated by the tight link between the TD(0) learning algorithm and stochastic gradient methods, we develop a provably convergent adaptive projected variant of the TD(0) learning algorithm with linear function approximation that we term AdaTD(0). In contrast to TD(0), AdaTD(0) is robust, i.e., less sensitive, to the choice of stepsizes. Analytically, we establish that to reach an $\epsilon$ accuracy, the number of iterations needed is $\tilde{O}(\epsilon^{-2}\ln^4\frac{1}{\epsilon}/\ln^4\frac{1}{\rho})$ in the general case, where $\rho$ represents the speed at which the underlying Markov chain converges to the stationary distribution. This implies that the iteration complexity of AdaTD(0) is no worse than that of TD(0) in the worst case. When the stochastic semi-gradients are sparse, we provide theoretical acceleration of AdaTD(0). Going beyond TD(0), we develop an adaptive variant of TD($\lambda$), which we refer to as AdaTD($\lambda$). Empirically, we evaluate the performance of AdaTD(0) and AdaTD($\lambda$) on several standard reinforcement learning tasks, and the results demonstrate the effectiveness of our new approaches.
Labels: cs.LG
__index_level_0__: 164,784

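The adaptive-stepsize idea behind AdaTD(0) can be illustrated with a minimal sketch: TD(0) with linear function approximation, where each coordinate's stepsize is scaled by accumulated squared semi-gradients, AdaGrad-style. This is an illustrative variant under assumed details; the function names, the toy chain environment, and the exact adaptation rule are not from the paper:

```python
import numpy as np

def adaptive_td0(env_step, phi, n_features, episodes=200,
                 eta=0.5, eps=1e-8, gamma=0.9, seed=0):
    """TD(0) with an AdaGrad-style per-coordinate stepsize.

    A sketch of the adaptive-stepsize idea, not the authors' exact AdaTD(0).
    `env_step(s, rng)` returns (next_state, reward, done); `phi(s)` returns
    the feature vector of state s.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    g2 = np.zeros(n_features)              # accumulated squared semi-gradients
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            s2, r, done = env_step(s, rng)
            # TD error and stochastic semi-gradient for linear V(s) = phi(s).theta
            target = r + (0.0 if done else gamma * (phi(s2) @ theta))
            delta = target - phi(s) @ theta
            grad = delta * phi(s)
            g2 += grad ** 2
            theta += eta * grad / np.sqrt(g2 + eps)   # adaptive stepsize
            s = s2
    return theta
```

On a simple deterministic 5-state chain with one-hot features and reward 1 at the end, the learned weights approach the discounted values $\gamma^{4-s}$ without any stepsize tuning.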
2412.14446
VLM-AD: End-to-End Autonomous Driving through Vision-Language Model Supervision
Human drivers rely on commonsense reasoning to navigate diverse and dynamic real-world scenarios. Existing end-to-end (E2E) autonomous driving (AD) models are typically optimized to mimic driving patterns observed in data, without capturing the underlying reasoning processes. This limitation constrains their ability to handle challenging driving scenarios. To close this gap, we propose VLM-AD, a method that leverages vision-language models (VLMs) as teachers to enhance training by providing additional supervision that incorporates unstructured reasoning information and structured action labels. Such supervision enhances the model's ability to learn richer feature representations that capture the rationale behind driving patterns. Importantly, our method does not require a VLM during inference, making it practical for real-time deployment. When integrated with state-of-the-art methods, VLM-AD achieves significant improvements in planning accuracy and reduced collision rates on the nuScenes dataset.
Labels: cs.LG, cs.CV
__index_level_0__: 518,699

1204.1277
Mouse Simulation Using Two Coloured Tapes
In this paper, we present a novel approach for Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show how it can perform everything that current mouse devices can. The software is developed in the Java language. Recognition and pose estimation in this system are user-independent and robust, as we use coloured tapes on our fingers to perform actions. The software can be used as an intuitive input interface to applications that require multi-dimensional control, e.g. computer games.
Labels: cs.AI, cs.CV
__index_level_0__: 15,306

2407.21046
Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines
Autoregressive language models are the currently dominant paradigm for text generation, but they have some fundamental limitations that cannot be remedied by scale, for example inherently sequential and unidirectional generation. While alternative classes of models have been explored, we have limited mathematical understanding of their fundamental power and limitations. In this paper we focus on Generative Masked Language Models (GMLMs), a non-autoregressive paradigm in which we train a model to fit conditional probabilities of the data distribution via masking; these probabilities are subsequently used as inputs to a Markov chain to draw samples from the model. These models empirically strike a promising speed-quality trade-off, as each step can typically be parallelized by decoding the entire sequence in parallel. We develop a mathematical framework for analyzing and improving such models which sheds light on questions of sample complexity and inference speed and quality. Empirically, we adapt the T5 model for iteratively-refined parallel decoding, achieving a 2-3x speedup in machine translation with minimal sacrifice in quality compared with autoregressive models. We run careful ablation experiments to give recommendations on key design choices, and make fine-grained observations on the common error modes in connection with our theory. Our mathematical analyses and empirical observations characterize both the potential and the limitations of this approach, and can be applied to future work on improving the understanding and performance of GMLMs. Our code is released at https://github.com/google-research/google-research/tree/master/padir
Labels: cs.LG, cs.CL
__index_level_0__: 477,378

1201.3059
Delay Sensitive Communications over Cognitive Radio Networks
Supporting the quality of service of unlicensed users in cognitive radio networks is very challenging, mainly due to dynamic resource availability because of the licensed users' activities. In this paper, we study the optimal admission control and channel allocation decisions in cognitive overlay networks in order to support delay-sensitive communications of unlicensed users. We formulate it as a Markov decision process problem, and solve it by transforming the original formulation into a stochastic shortest path problem. We then propose a simple heuristic control policy, which includes a threshold-based admission control scheme and a largest-delay-first channel allocation scheme, and prove the optimality of the largest-delay-first channel allocation scheme. We further propose an improved policy using the rollout algorithm. By comparing the performance of both proposed policies with the upper bound of the maximum revenue, we show that our policies achieve close-to-optimal performance with low complexities.
Labels: cs.SY, Other
__index_level_0__: 13,820

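The largest-delay-first allocation rule, whose optimality the paper proves, is simple to state: whenever channels become available, assign them to the unlicensed users with the largest accumulated delays. A minimal sketch (the function name and data layout are assumptions, not from the paper):

```python
def largest_delay_first(delays, n_channels):
    """Allocate available channels to the users with the largest delays.

    `delays` maps user id -> accumulated head-of-line delay; returns the
    ids of the users granted a channel this slot (a sketch of the
    largest-delay-first rule described in the abstract).
    """
    ranked = sorted(delays, key=delays.get, reverse=True)
    return ranked[:n_channels]
```

For example, with delays `{'a': 3, 'b': 7, 'c': 5}` and two free channels, users `b` and `c` are served first.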
2210.16451
Robust Boosting Forests with Richer Deep Feature Hierarchy
We propose a robust variant of boosting forests for use with various adversarial defense methods, and apply it to enhance the robustness of deep neural networks. We retain the deep network architecture, weights, and middle-layer features, then install a gradient boosting forest to select features from each layer of the deep network and predict the target. For training each decision tree, we propose a novel conservative-greedy trade-off: the splitting criterion favors fewer mispredictions over pure gain functions, making it deliberately suboptimal and conservative, while we actively increase tree depth to recover accuracy through splits over more features, making the depth growth greedy. We propose a new task on 3D face models, whose robustness has not been carefully studied, despite the great security and privacy concerns related to face analytics. We tried a simple attack method on a pure convolutional neural network (CNN) face shape estimator, making it degenerate to outputting only the average face shape under an invisible perturbation. Our conservative-greedy boosting forest (CGBF) on face landmark datasets showed a great improvement over the original pure deep learning methods under adversarial attacks.
Labels: cs.LG, cs.CV
__index_level_0__: 327,333

2008.10779
Continuous Authentication of Wearable Device Users from Heart Rate, Gait, and Breathing Data
The security of private information is becoming the bedrock of an increasingly digitized society. While users are flooded with passwords and PINs, these gold-standard explicit authentications are becoming less popular and valuable. Recent biometric-based authentication methods, such as facial or fingerprint recognition, are getting popular due to their higher accuracy. However, these hard-biometric-based systems require dedicated devices with powerful sensors and authentication models, which most market wearables lack. Still, market wearables collect various private information of a user and are becoming an integral part of life: accessing cars, bank accounts, etc. Therefore, there is a pressing need for a burden-free implicit authentication mechanism for wearables using the less-informative soft-biometric data that are easily obtainable from modern market wearables. In this work, we present a context-dependent soft-biometric-based authentication system for wearable devices using heart rate, gait, and breathing audio signals. From our detailed analysis using "leave-one-out" validation, we find that a lighter $k$-Nearest Neighbor ($k$-NN) model with $k = 2$ can obtain an average accuracy of $0.93 \pm 0.06$, $F_1$ score $0.93 \pm 0.03$, and {\em false positive rate} (FPR) below $0.08$ at the 50\% level of confidence, which shows the promise of this work.
Labels: cs.LG
__index_level_0__: 193,086

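The evaluation protocol named in the abstract, leave-one-out validation of a small $k$-NN classifier, can be sketched as follows on synthetic data (the helper names and the synthetic blobs are assumptions; the paper's actual features are heart rate, gait, and breathing signals):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=2):
    """Majority vote among the k nearest neighbors (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return np.bincount(y_train[nearest]).argmax()

def leave_one_out_accuracy(X, y, k=2):
    """Leave-one-out validation: classify each sample with a k-NN model
    fit on all remaining samples, and report the fraction correct."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += knn_predict(X[mask], y[mask], X[i], k) == y[i]
    return hits / len(X)
```

On two well-separated synthetic user clusters this protocol reaches near-perfect accuracy; on real soft-biometric data the separation is of course far weaker, which is why the paper reports $0.93 \pm 0.06$.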
1309.0040
Enhanced Flow in Small-World Networks
The small-world property is known to have a profound effect on the navigation efficiency of complex networks [J. M. Kleinberg, Nature 406, 845 (2000)]. Accordingly, the proper addition of shortcuts to a regular substrate can lead to the formation of a highly efficient structure for information propagation. Here we show that enhanced flow properties can also be observed in these complex topologies. Precisely, our model is a network built from an underlying regular lattice over which long-range connections are randomly added according to the probability, $P_{ij}\sim r_{ij}^{-\alpha}$, where $r_{ij}$ is the Manhattan distance between nodes $i$ and $j$, and the exponent $\alpha$ is a controlling parameter. The mean two-point global conductance of the system is computed by considering that each link has a local conductance given by $g_{ij}\propto r_{ij}^{-\delta}$, where $\delta$ determines the extent of the geographical limitations (costs) on the long-range connections. Our results show that the best flow conditions are obtained for $\delta=0$ with $\alpha=0$, while for $\delta \gg 1$ the overall conductance always increases with $\alpha$. For $\delta\approx 1$, $\alpha=d$ becomes the optimal exponent, where $d$ is the topological dimension of the substrate. Interestingly, this exponent is identical to the one obtained for optimal navigation in small-world networks using decentralized algorithms.
Labels: cs.SI
__index_level_0__: 26,753

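The network construction described in the abstract, adding long-range links with probability $P_{ij}\sim r_{ij}^{-\alpha}$ where $r_{ij}$ is the Manhattan distance, can be sketched with a brute-force sampler for small lattices (function name and signature are assumptions):

```python
import numpy as np

def add_shortcut(node, size, alpha, rng):
    """Sample the endpoint of one long-range link for `node` on a
    size x size lattice, with P_ij ~ r_ij**(-alpha) and r_ij the
    Manhattan distance (a sketch of the construction in the abstract)."""
    i, j = node
    coords = [(x, y) for x in range(size) for y in range(size) if (x, y) != node]
    r = np.array([abs(x - i) + abs(y - j) for x, y in coords], dtype=float)
    p = r ** (-alpha)
    p /= p.sum()                       # normalize to a probability distribution
    k = rng.choice(len(coords), p=p)
    return coords[k]
```

Larger $\alpha$ concentrates shortcuts on nearby nodes, while $\alpha=0$ makes them uniform over the lattice, which is the trade-off the paper's conductance analysis explores.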
2406.08478
What If We Recaption Billions of Web Images with LLaMA-3?
Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching textual descriptions of these pairs can significantly enhance model training across various vision-language tasks, particularly text-to-image generation. However, large-scale investigations in this area remain predominantly closed-source. Our paper aims to bridge this community effort, leveraging the powerful and \textit{open-sourced} LLaMA-3, a GPT-4 level LLM. Our recaptioning pipeline is simple: first, we fine-tune a LLaMA-3-8B powered LLaVA-1.5 and then employ it to recaption 1.3 billion images from the DataComp-1B dataset. Our empirical results confirm that this enhanced dataset, Recap-DataComp-1B, offers substantial benefits in training advanced vision-language models. For discriminative models like CLIP, we observe enhanced zero-shot performance in cross-modal retrieval tasks. For generative models like text-to-image Diffusion Transformers, the generated images exhibit a significant improvement in alignment with users' text instructions, especially in following complex queries. Our project page is https://www.haqtu.me/Recap-Datacomp-1B/
Labels: cs.CL, cs.CV
__index_level_0__: 463,511

2408.15695
G-Style: Stylized Gaussian Splatting
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as -- compared to other approaches based on Neural Radiance Fields -- it provides fast scene renderings and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
Labels: cs.AI, cs.CV, Other
__index_level_0__: 484,042

1906.07789
SEN12MS -- A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion
The availability of curated large-scale training data is a crucial factor for the development of well-generalizing deep learning methods for the extraction of geoinformation from multi-sensor remote sensing imagery. While a number of datasets have already been published by the community, most of them suffer from rather strong limitations, e.g. regarding spatial coverage, diversity, or simply the number of available samples. Exploiting the freely available data acquired by the Sentinel satellites of the Copernicus program implemented by the European Space Agency, as well as the cloud computing facilities of Google Earth Engine, we provide a dataset consisting of 180,662 triplets of dual-pol synthetic aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches, and MODIS land cover maps. With all patches being fully georeferenced at a 10 m ground sampling distance and covering all inhabited continents during all meteorological seasons, we expect the dataset to support the community in developing sophisticated deep learning-based approaches for common tasks such as scene classification or semantic segmentation for land cover mapping.
Labels: cs.CV
__index_level_0__: 135,682

1901.10895
Generative Adversarial Network with Multi-Branch Discriminator for Cross-Species Image-to-Image Translation
Current approaches have made great progress on image-to-image translation tasks benefiting from the success of image synthesis methods especially generative adversarial networks (GANs). However, existing methods are limited to handling translation tasks between two species while keeping the content matching on the semantic level. A more challenging task would be the translation among more than two species. To explore this new area, we propose a simple yet effective structure of a multi-branch discriminator for enhancing an arbitrary generative adversarial architecture (GAN), named GAN-MBD. It takes advantage of the boosting strategy to break a common discriminator into several smaller ones with fewer parameters, which can enhance the generation and synthesis abilities of GANs efficiently and effectively. Comprehensive experiments show that the proposed multi-branch discriminator can dramatically improve the performance of popular GANs on cross-species image-to-image translation tasks while reducing the number of parameters for computation. The code and some datasets are attached as supplementary materials for reference.
Labels: cs.LG, cs.CV
__index_level_0__: 120,130

2401.07890
A Strategy for Implementing Description Temporal Dynamic Algorithms in Dynamic Knowledge Graphs by SPIN
Planning and reasoning about actions and processes, in addition to reasoning about propositions, are important issues in recent logical and computer science studies. The widespread use of actions in everyday life, such as in IoT and semantic web services, and the limitations and issues in existing action formalisms are two factors that lead us to study how actions are represented. Since 2007, there have been some ideas to integrate Description Logic (DL) and action formalisms for representing both static and dynamic knowledge. Meanwhile, time is an important factor in dynamic situations, and actions change states over time. In this study, on the one hand, we examined related logical structures such as extensions of description logics (DLs), temporal formalisms, and action formalisms. On the other hand, we analyzed possible tools for designing and developing the Knowledge and Action Base (KAB). For representation and reasoning about actions, we embedded actions into DLs (such as Dynamic-ALC and its extensions). We propose a terminating algorithm for action projection, planning, checking the satisfiability, consistency, realizability, and executability, as well as querying the KAB. Actions in this framework were modeled with SPIN and added to the state space. This framework has also been implemented as a plugin for the Prot\'eg\'e ontology editor. During the last two decades, various algorithms have been presented, but due to high computational complexity, we face many problems in implementing dynamic ontologies. In addition, an algorithm to detect inconsistency in actions' effects has not been explicitly stated. In the proposed strategy, the interactions of actions with other parts of the modeled knowledge and a method to check consistency between the effects of actions are presented. With this framework, the ramification problem can be well handled in future works.
Labels: cs.AI, Other
__index_level_0__: 421,693

0904.2482
Good Concatenated Code Ensembles for the Binary Erasure Channel
In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland) consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent code oriented decoding using maximum a posteriori erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance hmin, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance dmin, whose growth rate coefficient increases with the number of accumulate codes, the hmin growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic hmin does not grow linearly with the block length, contrary to the asymptotic dmin, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare the convergence properties.
Labels: cs.IT
__index_level_0__: 3,550

2412.06694
Digital Transformation in the Water Distribution System based on the Digital Twins Concept
Digital Twins (DT) have emerged as a disruptive technology with great potential; they can enhance water distribution systems (WDS) by offering real-time monitoring, predictive maintenance, and optimization capabilities. This paper describes the development of a state-of-the-art DT platform for WDS, introducing advanced technologies such as the Internet of Things, Artificial Intelligence, and Machine Learning models. This paper provides insight into the architecture of the proposed platform, CAUCCES, which, informed by both historical and meteorological data, effectively deploys AI/ML models such as LSTM networks, Prophet, LightGBM, and XGBoost to predict water consumption patterns. Furthermore, we delve into how optimization in the maintenance of WDS can be achieved by formulating a Constraint Programming problem for scheduling, hence minimizing the operational cost efficiently with reduced environmental impact. The platform also focuses on cybersecurity and protection to ensure the integrity and reliability of the DT platform. In this view, the system will contribute to improvements in decision-making capabilities, operational efficiency, and system reliability, with reassurance drawn from the important role it can play toward sustainable management of water resources.
Labels: cs.AI, cs.CY
__index_level_0__: 515,334

2202.11784
Design and experimental investigation of a vibro-impact self-propelled capsule robot with orientation control
This paper presents a novel design and experimental investigation of a self-propelled capsule robot that can be used for painless colonoscopy during a retrograde progression from the patient's rectum. The steerable robot is driven forward and backward via its internal vibration and impact, with orientation control provided by an electromagnetic actuator. The actuator contains four sets of coils and a shaft made of a permanent magnet. The shaft can be excited linearly at a controllable tilt angle, thus guiding the progression orientation of the robot. Two control strategies are studied in this work and compared via simulation and experiment. Extensive results are presented to demonstrate the progression efficiency of the robot and its potential for robotic colonoscopy.
Labels: cs.RO, cs.SY
__index_level_0__: 281,996

1711.00941
Deep Active Learning over the Long Tail
This paper is concerned with pool-based active learning for deep neural networks. Motivated by coreset dataset compression ideas, we present a novel active learning algorithm that queries consecutive points from the pool using farthest-first traversals in the space of neural activation over a representation layer. We show consistent and overwhelming improvement in sample complexity over passive learning (random sampling) for three datasets: MNIST, CIFAR-10, and CIFAR-100. In addition, our algorithm outperforms the traditional uncertainty sampling technique (obtained using softmax activations), and we identify cases where uncertainty sampling is only slightly better than random sampling.
Labels: cs.LG
__index_level_0__: 83,802

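The core selection rule in the abstract, a farthest-first traversal in the space of neural activations, is the greedy k-center heuristic: repeatedly query the pool point farthest from everything selected so far. A minimal sketch over precomputed embeddings (function and parameter names are assumptions):

```python
import numpy as np

def farthest_first(embeddings, n_queries, start=0):
    """Greedy farthest-first (k-center) traversal over embeddings.

    Returns the indices of `n_queries` pool points, each chosen as the
    point farthest from all previously selected points (a sketch of the
    selection rule described in the abstract)."""
    selected = [start]
    # distance of every pool point to its nearest selected point
    d = np.linalg.norm(embeddings - embeddings[start], axis=1)
    for _ in range(n_queries - 1):
        nxt = int(np.argmax(d))
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected
```

On 1-D embeddings 0 through 10, starting from point 0, the traversal picks the far endpoint next and then the midpoint, spreading queries evenly over the pool.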
2301.05646
Data-Assisted Control -- A Framework Development by Exploiting NASA GTM Platform
Today's focus on expanding the capabilities of control systems, resulting from the abundance of data and computational resources, favors data-based alternatives over model-based ones. These alternatives may become the sole tool for analysis and synthesis. Nevertheless, mathematical models are available to some extent, especially for air and space vehicles. Hypothetically, data assistance would be the approach to meet the requirements in collaboration with the model. In this paper, a framework of Data-Assisted Control (DAC) for aerospace vehicles is proposed. The NASA Generic Transport Model (GTM) is the platform for the study, and the data support the model-based controller in extending performance over a damage event. The framework requires real-time decisions to override the control law with the information obtained from the data whenever the model-based controller fails to maintain nominal performance. The closed-loop system is shown to be stable in the transition phase between the data and the model. The fixed dynamic parameters are estimated using the Dual Unscented Kalman Filter (DUKF), and the evolution of the generalized force moments is estimated using the Koopman estimator. Simulations have shown that purely model-based robust control leads to degradation of closed-loop performance in case of damage, suggesting the need for data assistance.
Labels: cs.SY
__index_level_0__: 340,413

2410.09740
Gaussian Splatting Visual MPC for Granular Media Manipulation
Recent advancements in learned 3D representations have enabled significant progress in solving complex robotic manipulation tasks, particularly for rigid-body objects. However, manipulating granular materials such as beans, nuts, and rice, remains challenging due to the intricate physics of particle interactions, high-dimensional and partially observable state, inability to visually track individual particles in a pile, and the computational demands of accurate dynamics prediction. Current deep latent dynamics models often struggle to generalize in granular material manipulation due to a lack of inductive biases. In this work, we propose a novel approach that learns a visual dynamics model over Gaussian splatting representations of scenes and leverages this model for manipulating granular media via Model-Predictive Control. Our method enables efficient optimization for complex manipulation tasks on piles of granular media. We evaluate our approach in both simulated and real-world settings, demonstrating its ability to solve unseen planning tasks and generalize to new environments in a zero-shot transfer. We also show significant prediction and manipulation performance improvements compared to existing granular media manipulation methods.
Labels: cs.RO
__index_level_0__: 497,749

2311.14927
View-Based Luminance Mapping in Open Workplace
This paper introduces a novel computational method for mapping indoor luminance values on the facade of an open workplace to improve its daylight performance. 180-degree fisheye renderings from different indoor locations, view positions, and times of the year are created. These renderings are then transformed from two-dimensional (2D) images into three-dimensional (3D) hemispheres. High luminance values are filtered and projected from the hemisphere to the facade surface. This framework will highlight the areas of the facade that allow too much light penetration into the interior environment. The flexible workflow allows occupant centric lighting analysis that computes multiple design parameters and synthesizes results for localized facade optimization and daylight design.
Labels: cs.CV
__index_level_0__: 410,315

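The 2D-to-3D step described in the abstract, transforming a 180-degree fisheye rendering into a hemisphere, amounts to mapping each image point to a direction on the unit hemisphere. A sketch assuming an equidistant fisheye projection (the projection model and the function name are assumptions, not stated in the abstract):

```python
import math

def fisheye_to_hemisphere(u, v):
    """Map normalized fisheye image coords (u, v in [-1, 1]) to a 3D
    direction on the unit hemisphere, assuming an equidistant 180-degree
    fisheye projection (a sketch of the 2D-to-3D step in the abstract)."""
    r = math.hypot(u, v)
    if r > 1:
        return None                    # outside the fisheye image circle
    theta = r * math.pi / 2            # angle from the optical axis
    phi = math.atan2(v, u)             # azimuth around the axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

The image center maps to the view direction (0, 0, 1) and the rim of the fisheye circle maps to the horizon; high-luminance pixels can then be projected along these directions onto the facade surface.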
1907.13100
On the Robustness of Median Sampling in Noisy Evolutionary Optimization
Evolutionary algorithms (EAs) are a class of nature-inspired metaheuristics with wide applications in various practical optimization problems. In these problems, objective evaluations are usually inaccurate, because noise is almost inevitable in the real world, and it is a crucial issue to weaken the negative effect caused by noise. Sampling is a popular strategy that evaluates the objective multiple times and employs the mean of these evaluation results as an estimate of the objective value. In this work, we introduce a novel sampling method, median sampling, into EAs, and illustrate its properties and usefulness theoretically by solving OneMax, the problem of maximizing the number of 1s in a bit string. Instead of the mean, median sampling employs the median of the evaluation results as an estimate. Through rigorous theoretical analysis on OneMax under the commonly used one-bit noise, we show that median sampling reduces the expected runtime exponentially. Next, through two special noise models, we show that when the 2-quantile of the noisy fitness increases with the true fitness, median sampling can be better than mean sampling; otherwise, it may fail and mean sampling can be better. These results may guide us to employ median sampling properly in practical applications.
Labels: cs.NE, Other
__index_level_0__: 140,297

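Median sampling versus mean sampling is easy to sketch on OneMax under one-bit noise, where with probability $p$ a uniformly chosen bit is flipped before evaluation (helper names are assumptions, not from the paper):

```python
import random

def onemax(x):
    """True fitness: number of 1-bits."""
    return sum(x)

def noisy_eval(x, p=0.3, rng=random):
    """OneMax under one-bit noise: with probability p, a uniformly chosen
    bit is flipped before evaluation (the noise model in the abstract)."""
    x = list(x)
    if rng.random() < p:
        i = rng.randrange(len(x))
        x[i] ^= 1
    return onemax(x)

def median_sample(x, m=5, rng=random):
    """Median of m independent noisy evaluations, the estimator studied
    in the paper (mean sampling would average the same m values)."""
    vals = sorted(noisy_eval(x, rng=rng) for _ in range(m))
    return vals[m // 2]
```

For a bit string at the optimum, a single noisy evaluation under-reports the fitness with probability $p$, while the median of $m$ evaluations is wrong only when a majority of them are corrupted; for $p < 1/2$ this probability vanishes as $m$ grows.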
2208.09500
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
As two sides of the same coin, causality and explainable artificial intelligence (xAI) were initially proposed and developed with different goals. However, the latter can only be complete when seen through the lens of the causality framework. As such, we propose a novel causality-inspired framework for xAI that creates an environment for the development of xAI approaches. To show its applicability, biometrics was used as a case study. For this, we analysed 81 research papers on a myriad of biometric modalities and different tasks. We categorised each of these methods according to our novel xAI Ladder and discussed the future directions of the field.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
313,720
2411.13874
Next-Generation Phishing: How LLM Agents Empower Cyber Attackers
The escalating threat of phishing emails has become increasingly sophisticated with the rise of Large Language Models (LLMs). As attackers exploit LLMs to craft more convincing and evasive phishing emails, it is crucial to assess the resilience of current phishing defenses. In this study we conduct a comprehensive evaluation of traditional phishing detectors, such as Gmail Spam Filter, Apache SpamAssassin, and Proofpoint, as well as machine learning models like SVM, Logistic Regression, and Naive Bayes, in identifying both traditional and LLM-rephrased phishing emails. We also explore the emerging role of LLMs as phishing detection tools, a method already adopted by companies like NTT Security Holdings and JPMorgan Chase. Our results reveal notable declines in detection accuracy for rephrased emails across all detectors, highlighting critical weaknesses in current phishing defenses. As the threat landscape evolves, our findings underscore the need for stronger security controls and regulatory oversight on LLM-generated content to prevent its misuse in creating advanced phishing attacks. This study contributes to the development of more effective Cyber Threat Intelligence (CTI) by leveraging LLMs to generate diverse phishing variants that can be used for data augmentation, harnessing the power of LLMs to enhance phishing detection, and paving the way for more robust and adaptable threat detection systems.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
509,952
2010.01359
Perplexity-free Parametric t-SNE
The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method. Its non-parametric nature and impressive efficacy motivated its parametric extension. It is, however, bound to a user-defined perplexity parameter, restricting its DR quality compared to recently developed multi-scale perplexity-free approaches. This paper hence proposes a multi-scale parametric t-SNE scheme, relieved from the perplexity tuning and with a deep neural network implementing the mapping. It produces reliable embeddings with out-of-sample extensions, competitive with the best perplexity adjustments in terms of neighborhood preservation on multiple data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
198,620
2209.12043
Unsupervised domain adaptation for speech recognition with unsupervised error correction
The transcription quality of automatic speech recognition (ASR) systems degrades significantly when transcribing audio coming from unseen domains. We propose an unsupervised error correction method for unsupervised ASR domain adaptation, aiming to recover transcription errors caused by domain mismatch. Unlike existing correction methods that rely on transcribed audio for training, our approach requires only unlabeled data of the target domains, in which a pseudo-labeling technique is applied to generate correction training samples. To reduce over-fitting to the pseudo data, we also propose an encoder-decoder correction model that can take into account additional information such as dialogue context and acoustic features. Experiment results show that our method obtains a significant word error rate (WER) reduction over non-adapted ASR systems. The correction model can also be applied on top of other adaptation approaches to bring an additional relative improvement of 10%.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
319,394
2502.08692
Efficient Split Learning LSTM Models for FPGA-based Edge IoT Devices
Split Learning (SL) recently emerged as an efficient paradigm for distributed Machine Learning (ML) suitable for the Internet Of Things (IoT)-Cloud systems. However, deploying SL on resource-constrained edge IoT platforms poses a significant challenge in terms of balancing the model performance against the processing, memory, and energy resources. In this work, we present a practical study of deploying SL framework on a real-world Field-Programmable Gate Array (FPGA)-based edge IoT platform. We address the SL framework applied to a time-series processing model based on Recurrent Neural Networks (RNNs). Set in the context of river water quality monitoring and using real-world data, we train, optimize, and deploy a Long Short-Term Memory (LSTM) model on a given edge IoT FPGA platform in different SL configurations. Our results demonstrate the importance of aligning design choices with specific application requirements, whether it is maximizing speed, minimizing power, or optimizing for resource constraints.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
533,134
2209.08615
Membership Inference Attacks and Generalization: A Causal Perspective
Membership inference (MI) attacks highlight a privacy weakness in present stochastic training methods for neural networks. It is not well understood, however, why they arise. Are they a natural consequence of imperfect generalization only? Which underlying causes should we address during training to mitigate these attacks? Towards answering such questions, we propose the first approach to explain MI attacks and their connection to generalization based on principled causal reasoning. We offer causal graphs that quantitatively explain the observed MI attack performance achieved for $6$ attack variants. We refute several prior non-quantitative hypotheses that over-simplify or over-estimate the influence of underlying causes, thereby failing to capture the complex interplay between several factors. Our causal models also show a new connection between generalization and MI attacks via their shared causal factors. Our causal models have high predictive power ($0.90$), i.e., their analytical predictions match with observations in unseen experiments often, which makes analysis via them a pragmatic alternative.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
318,191
1409.5224
Plug-and-play fault diagnosis and control-reconfiguration for a class of nonlinear large-scale constrained systems
This paper deals with a novel Plug-and-Play (PnP) architecture for the control and monitoring of Large-Scale Systems (LSSs). The proposed approach integrates a distributed Model Predictive Control (MPC) strategy with a distributed Fault Detection (FD) architecture and methodology in a PnP framework. The basic concept is to use the FD scheme as an autonomous decision support system: once a fault is detected, the faulty subsystem can be unplugged to avoid the propagation of the fault in the interconnected LSS. Analogously, once the issue has been solved, the disconnected subsystem can be re-plugged-in. PnP design of local controllers and detectors allow these operations to be performed safely, i.e. without spoiling stability and constraint satisfaction for the whole LSS. The PnP distributed MPC is derived for a class of nonlinear LSS and an integrated PnP distributed FD architecture is proposed. Simulation results show the effectiveness and the potential of the general methodology.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
36,147
2304.00260
Gaussian Mechanism Design for Prescribed Privacy Sets in Data Releasing Systems
The data transmitted by cyber-physical systems can be intercepted and exploited by malicious individuals to infer privacy-sensitive information regarding the physical system. This motivates us to study the problem of preserving privacy in data releasing of linear dynamical systems using stochastic perturbation. In this study, the privacy-sensitive quantity is the initial state value of the system. For protecting its privacy, we directly design the covariance matrix of a Gaussian output noise to achieve a prescribed uncertainty set in the form of hyper-ellipsoids. This is done with correlated noise and through a convex optimization problem by considering the utility of released signals. Compared to other available methods, our proposed technique for designing the Gaussian output noise provides enhanced flexibility for system designers. As a case study, the results are applied to a heating, ventilation, and air conditioning system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
355,622
2501.00924
On the Low-Complexity of Fair Learning for Combinatorial Multi-Armed Bandit
Combinatorial Multi-Armed Bandit with fairness constraints is a framework where multiple arms form a super arm and can be pulled in each round under uncertainty to maximize cumulative rewards while ensuring the minimum average reward required by each arm. The existing pessimistic-optimistic algorithm linearly combines virtual queue-lengths (tracking the fairness violations) and Upper Confidence Bound estimates as a weight for each arm and selects a super arm with the maximum total weight. The number of super arms could be exponential in the number of arms in many scenarios. In wireless networks, interference constraints can cause the number of super arms to grow exponentially with the number of arms. Evaluating all the feasible super arms to find the one with the maximum total weight can incur extremely high computational complexity in the pessimistic-optimistic algorithm. To avoid this, we develop a low-complexity fair learning algorithm based on the so-called pick-and-compare approach that involves randomly picking $M$ feasible super arms to evaluate. By setting $M$ to a constant, the number of comparison steps in the pessimistic-optimistic algorithm can be reduced to a constant, thereby significantly reducing the computational complexity. Our theoretical proof shows this low-complexity design incurs only a slight sacrifice in fairness and regret performance. Finally, we validate the theoretical result by extensive simulations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
521,859
1509.04904
Causal Model Analysis using Collider v-structure with Negative Percentage Mapping
A major problem of causal inference is the arrangement of dependent nodes in a directed acyclic graph (DAG) with path coefficients and observed confounders. Path coefficients do not provide the units to measure the strength of information flowing from one node to the other. Here we propose the method of causal structure learning using collider v-structures (CVS) with Negative Percentage Mapping (NPM) to obtain selective thresholds of information strength and to direct the edges and subjective confounders in a DAG. The NPM is used to scale the strength of information passed through nodes as a percentage on the interval from 0 to 1. The causal structures are constructed by a bottom-up approach using path coefficients, causal directions and confounders, derived by implementing collider v-structures and NPM. The method is self-sufficient to observe all the latent confounders present in the causal model and capable of detecting every responsible causal direction. The results are tested on simulated datasets of non-Gaussian distributions and compared with DirectLiNGAM and ICA-LiNGAM to check the efficiency of the proposed method.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
46,985
0801.3986
New Lower Bounds on Sizes of Permutation Arrays
A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. This correspondence focuses on the lower bound on $P(n,d)$. First, we give three improvements over the Gilbert-Varshamov lower bounds on $P(n,d)$ by applying the graph theorem framework presented by Jiang and Vardy. Next, we show two more improved bounds by considering the covered balls intersections. Finally, some new lower bounds for certain values of $n$ and $d$ are given.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,213
2311.16143
Ransomware Detection and Classification using Machine Learning
Vicious assaults, malware, and various ransomware pose a cybersecurity threat, causing considerable damage to computer structures, servers, and mobile and web apps across various industries and businesses. These safety concerns are important and must be addressed immediately. Ransomware detection and classification are critical for guaranteeing rapid reaction and prevention. This study uses the XGBoost classifier and Random Forest (RF) algorithms to detect and classify ransomware attacks. This approach involves analyzing the behaviour of ransomware and extracting relevant features that can help distinguish between different ransomware families. The models are evaluated on a dataset of ransomware attacks and demonstrate their effectiveness in accurately detecting and classifying ransomware. The results show that the XGBoost and Random Forest classifiers can effectively detect and classify different ransomware attacks with high accuracy, thereby providing a valuable tool for enhancing cybersecurity.
false
false
false
false
true
false
true
false
false
false
false
true
true
false
false
false
false
false
410,791
1805.04836
Building Language Models for Text with Named Entities
Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequently in the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% on code generation than the state-of-the-art language models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
97,322
2306.15876
Hybrid Distillation: Connecting Masked Autoencoders with Contrastive Learners
Representation learning has been evolving from traditional supervised training to Contrastive Learning (CL) and Masked Image Modeling (MIM). Previous works have demonstrated their pros and cons in specific scenarios, i.e., CL and supervised pre-training excel at capturing longer-range global patterns and enabling better feature discrimination, while MIM can introduce more local and diverse attention across all transformer layers. In this paper, we explore how to obtain a model that combines their strengths. We start by examining previous feature distillation and mask feature reconstruction methods and identify their limitations. We find that their increasing diversity mainly derives from the asymmetric designs, but these designs may in turn compromise the discrimination ability. In order to better obtain both discrimination and diversity, we propose a simple but effective Hybrid Distillation strategy, which utilizes both the supervised/CL teacher and the MIM teacher to jointly guide the student model. Hybrid Distill imitates the token relations of the MIM teacher to alleviate attention collapse, as well as distills the feature maps of the supervised/CL teacher to enable discrimination. Furthermore, a progressive redundant token masking strategy is also utilized to reduce the distilling costs and avoid falling into local optima. Experiment results prove that Hybrid Distill can achieve superior performance on different benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
376,184
2111.10339
Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial. In particular, coping with severe illumination changes is an impelling need, as models trained on daylight data will perform poorly at nighttime. In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model with a labeled daytime dataset and an unlabeled dataset, including coarsely aligned day-night image pairs. To this end, we propose a novel Bidirectional Mixing (Bi-Mix) framework for DANSS, which can contribute to both image translation and segmentation adaptation processes. Specifically, in the image translation stage, Bi-Mix leverages the knowledge of day-night image pairs to improve the quality of nighttime image relighting. On the other hand, in the segmentation adaptation stage, Bi-Mix effectively bridges the distribution gap between day and night domains for adapting the model to the night domain. In both processes, Bi-Mix simply operates by mixing two samples without extra hyper-parameters, thus it is easy to implement. Extensive experiments on Dark Zurich and Nighttime Driving datasets demonstrate the advantage of the proposed Bi-Mix and show that our approach obtains state-of-the-art performance in DANSS. Our code is available at https://github.com/ygjwd12345/BiMix.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
267,290
0812.0070
An Integrated Software-based Solution for Modular and Self-independent Networked Robot
An integrated software-based solution for a modular and self-independent networked robot is introduced. The wirelessly operable robot has been developed mainly for autonomous monitoring work with full control over the web. The integrated software solution covers three components: a) the digital signal processing unit for data retrieval and monitoring; b) the externally executable codes for the control system; and c) the web programming for interfacing the end-users with the robot. It is argued that this integrated software-based approach is crucial to realize a flexible, modular and low development cost mobile monitoring apparatus.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
2,727
2208.00904
Revisiting Information Cascades in Online Social Networks
It's by now folklore that to understand the activity pattern of a user in an online social network (OSN) platform, one needs to look at his friends or the ones he follows. The common perception is that these friends exert influence on the user, affecting his decision whether to re-share content or not. Hinging upon this intuition, a variety of models were developed to predict how information propagates in OSN, similar to the way infection spreads in the population. In this paper, we revisit this world view and arrive at new conclusions. Given a set of users $V$, we study the task of predicting whether a user $u \in V$ will re-share content by some $v \in V$ at the following time window given the activity of all the users in $V$ in the previous time window. We design several algorithms for this task, ranging from a simple greedy algorithm that only learns $u$'s conditional probability distribution, ignoring the rest of $V$, to a convolutional neural network-based algorithm that receives the activity of all of $V$, but does not receive explicitly the social link structure. We tested our algorithms on four datasets that we collected from Twitter, each revolving around a different popular topic in 2020. The best performance, average F1-score of 0.86 over the four datasets, was achieved by the convolutional neural network. The simple, social-link ignorant, algorithm achieved an average F1-score of 0.78.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
true
311,013
2304.09840
Optimum Output Long Short-Term Memory Cell for High-Frequency Trading Forecasting
High-frequency trading requires fast data processing without information lags for precise stock price forecasting. This high-paced stock price forecasting is usually based on vectors that need to be treated as sequential and time-independent signals due to the time irregularities that are inherent in high-frequency trading. A well-documented and tested method that considers these time-irregularities is a type of recurrent neural network, named long short-term memory neural network. This type of neural network is formed based on cells that perform sequential and stale calculations via gates and states without knowing whether their order, within the cell, is optimal. In this paper, we propose a revised and real-time adjusted long short-term memory cell that selects the best gate or state as its final output. Our cell is running under a shallow topology, has a minimal look-back period, and is trained online. This revised cell achieves lower forecasting error compared to other recurrent neural networks for online high-frequency trading forecasting tasks such as the limit order book mid-price prediction as it has been tested on two high-liquid US and two less-liquid Nordic stocks.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
359,194
1801.04263
Efficient Probabilistic Model Checking of Smart Building Maintenance using Fault Maintenance Trees
Cyber-physical systems, like Smart Buildings and power plants, have to meet high standards, both in terms of reliability and availability. Such metrics are typically evaluated using Fault Trees (FTs) and do not consider maintenance strategies, which can significantly improve lifespan and reliability. Fault Maintenance Trees (FMTs) -- an extension of FTs that also incorporates maintenance and degradation models -- are a novel technique that serves as a good planning platform for balancing the total costs and dependability of a system. In this work, we apply the FMT formalism to a Smart Building application. We propose a framework for modelling FMTs using probabilistic model checking and present an algorithm for performing abstraction of the FMT in order to reduce the size of its equivalent Continuous Time Markov Chain. This allows us to apply the probabilistic model checking more efficiently. We demonstrate the applicability of our proposed approach by evaluating various dependability metrics and maintenance strategies of a Heating, Ventilation and Air-Conditioning system's FMT.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
88,240
2410.11417
VidCompress: Memory-Enhanced Temporal Compression for Video Understanding in Large Language Models
Video-based multimodal large language models (Video-LLMs) possess significant potential for video understanding tasks. However, most Video-LLMs treat videos as a sequential set of individual frames, which results in insufficient temporal-spatial interaction that hinders fine-grained comprehension and difficulty in processing longer videos due to limited visual token capacity. To address these challenges, we propose VidCompress, a novel Video-LLM featuring memory-enhanced temporal compression. VidCompress employs a dual-compressor approach: a memory-enhanced compressor captures both short-term and long-term temporal relationships in videos and compresses the visual tokens using a multiscale transformer with a memory-cache mechanism, while a text-perceived compressor generates condensed visual tokens by utilizing Q-Former and integrating temporal contexts into query embeddings with cross attention. Experiments on several VideoQA datasets and comprehensive benchmarks demonstrate that VidCompress efficiently models complex temporal-spatial relations and significantly outperforms existing Video-LLMs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
498,554
2211.10999
LA-VocE: Low-SNR Audio-visual Speech Enhancement using Neural Vocoders
Audio-visual speech enhancement aims to extract clean speech from a noisy environment by leveraging not only the audio itself but also the target speaker's lip movements. This approach has been shown to yield improvements over audio-only speech enhancement, particularly for the removal of interfering speech. Despite recent advances in speech synthesis, most audio-visual approaches continue to use spectral mapping/masking to reproduce the clean audio, often resulting in visual backbones added to existing speech enhancement architectures. In this work, we propose LA-VocE, a new two-stage approach that predicts mel-spectrograms from noisy audio-visual speech via a transformer-based architecture, and then converts them into waveform audio using a neural vocoder (HiFi-GAN). We train and evaluate our framework on thousands of speakers and 11+ different languages, and study our model's ability to adapt to different levels of background noise and speech interference. Our experiments show that LA-VocE outperforms existing methods according to multiple metrics, particularly under very noisy scenarios.
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
331,536
cs/0610067
Language, logic and ontology: uncovering the structure of commonsense knowledge
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method to the discovery of the structure of commonsense knowledge, the method we propose seems to also provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious, and it is no less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
539,781
2409.01646
BEVNav: Robot Autonomous Navigation Via Spatial-Temporal Contrastive Learning in Bird's-Eye View
Goal-driven mobile robot navigation in map-less environments requires effective state representations for reliable decision-making. Inspired by the favorable properties of Bird's-Eye View (BEV) in point clouds for visual perception, this paper introduces a novel navigation approach named BEVNav. It employs deep reinforcement learning to learn BEV representations and enhance decision-making reliability. First, we propose a self-supervised spatial-temporal contrastive learning approach to learn BEV representations. Spatially, two randomly augmented views from a point cloud predict each other, enhancing spatial features. Temporally, we combine the current observation with consecutive frames' actions to predict future features, establishing the relationship between observation transitions and actions to capture temporal cues. Then, incorporating this spatial-temporal contrastive learning in the Soft Actor-Critic reinforcement learning framework, our BEVNav offers a superior navigation policy. Extensive experiments demonstrate BEVNav's robustness in environments with dense pedestrians, outperforming state-of-the-art methods across multiple benchmarks. The code will be made publicly available at https://github.com/LanrenzzzZ/BEVNav.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
485,422
2202.05998
What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
Self-supervised learning establishes a new paradigm of learning representations with much fewer or even no label annotations. Recently there has been remarkable progress on large-scale contrastive learning models which require substantial computing resources, yet such models are not practically optimal for small-scale tasks. To fill the gap, we aim to study contrastive learning on the wearable-based activity recognition task. Specifically, we conduct an in-depth study of contrastive learning from both algorithmic-level and task-level perspectives. For algorithmic-level analysis, we decompose contrastive models into several key components and conduct rigorous experimental evaluations to better understand the efficacy and rationale behind contrastive learning. More importantly, for task-level analysis, we show that the wearable-based signals bring unique challenges and opportunities to existing contrastive models, which cannot be readily solved by existing algorithms. Our thorough empirical studies suggest important practices and shed light on future research challenges. In the meantime, this paper presents an open-source PyTorch library \texttt{CL-HAR}, which can serve as a practical tool for researchers. The library is highly modularized and easy to use, which opens up avenues for exploring novel contrastive models quickly in the future.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
280,056
2103.02405
Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG
Understanding relationships between feature variables is one important way humans use to make decisions. However, state-of-the-art deep learning studies either focus on task-agnostic statistical dependency learning or do not model explicit feature dependencies during prediction. We propose a deep neural network framework, dGAP, to learn neural dependency Graph and optimize structure-Aware target Prediction simultaneously. dGAP trains towards a structure self-supervision loss and a target prediction loss jointly. Our method leads to an interpretable model that can disentangle sparse feature relationships, informing the user how relevant dependencies impact the target task. We empirically evaluate dGAP on multiple simulated and real datasets. dGAP is not only more accurate, but can also recover correct dependency structure.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
222,956
2402.12887
The practice of qualitative parameterisation in the development of Bayesian networks
The typical phases of Bayesian network (BN) structured development include specification of purpose and scope, structure development, parameterisation and validation. Structure development is typically focused on qualitative issues and parameterisation on quantitative issues; however, qualitative and quantitative issues arise in both phases. A common step that occurs after the initial structure has been developed is to perform a rough parameterisation that only captures and illustrates the intended qualitative behaviour of the model. This is done prior to a more rigorous parameterisation, ensuring that the structure is fit for purpose, as well as supporting later development and validation. In our collective experience and in discussions with other modellers, this step is an important part of the development process, but is under-reported in the literature. Since the practice focuses on qualitative issues, despite being quantitative in nature, we call this step qualitative parameterisation and provide an outline of its role in the BN development process.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
431,037
2107.07788
Reinforcement Learning for Adaptive Optimal Stationary Control of Linear Stochastic Systems
This paper studies the adaptive optimal stationary control of continuous-time linear stochastic systems with both additive and multiplicative noises, using reinforcement learning techniques. Based on policy iteration, a novel off-policy reinforcement learning algorithm, named optimistic least-squares-based policy iteration, is proposed which is able to iteratively find near-optimal policies of the adaptive optimal stationary control problem directly from input/state data without explicitly identifying any system matrices, starting from an initial admissible control policy. The solutions given by the proposed optimistic least-squares-based policy iteration are proved to converge to a small neighborhood of the optimal solution with probability one, under mild conditions. The application of the proposed algorithm to a triple inverted pendulum example validates its feasibility and effectiveness.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
246,530
2111.13621
An Optimal Algorithm for Finding Champions in Tournament Graphs
A tournament graph is a complete directed graph, which can be used to model a round-robin tournament between $n$ players. In this paper, we address the problem of finding a champion of the tournament, also known as Copeland winner, which is a player that wins the highest number of matches. In detail, we aim to investigate algorithms that find the champion by playing a low number of matches. Solving this problem allows us to speed up several Information Retrieval and Recommender System applications, including question answering, conversational search, etc. Indeed, these applications often search for the champion inducing a round-robin tournament among the players by employing a machine learning model to estimate who wins each pairwise comparison. Our contribution, thus, allows finding the champion by performing a low number of model inferences. We prove that any deterministic or randomized algorithm finding a champion with constant success probability requires $\Omega(\ell n)$ comparisons, where $\ell$ is the number of matches lost by the champion. We then present an asymptotically-optimal deterministic algorithm matching this lower bound without knowing $\ell$, and we extend our analysis to three variants of the problem. Lastly, we conduct a comprehensive experimental assessment of the proposed algorithms on a question answering task on public data. Results show that our proposed algorithms speed up the retrieval of the champion up to $13\times$ with respect to the state-of-the-art algorithm that performs the full tournament.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
268,339
2110.00731
Learning Region of Attraction for Nonlinear Systems
Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis. Leveraging the universal approximation property of neural networks, in this paper, we propose a counterexample-guided method to estimate the ROA of general nonlinear dynamical systems provided that they can be approximated by piecewise linear neural networks and that the approximation error can be bounded. Specifically, our method searches for robust Lyapunov functions using counterexamples, i.e., the states at which the Lyapunov conditions fail. We generate the counterexamples using Mixed-Integer Quadratic Programming. Our method is guaranteed to find a robust Lyapunov function in the parameterized function class, if one exists, after collecting a finite number of counterexamples. We illustrate our method through numerical examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
258,510
2205.08754
Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs), and have been widely used in a variety of PDE problems. However, there still remain some challenges in the application of PINNs: 1) the mechanism of PINNs is unsuitable for (at least cannot be directly applied to) exploiting a small size of (usually very few) extra informative samples to refine the networks; and 2) the efficiency of training PINNs often becomes low for some complicated PDEs. In this paper, we propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs, to improve the performance of PINNs by exploiting only a small size of exact solutions to the PDEs. Inspired by the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs, where the weight of each sample point is adaptively updated at each training iteration. The numerical experiments show that GA-PINNs outperform PINNs in many well-known PDEs and the PW method also improves the efficiency of training PINNs and GA-PINNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
297,044
2408.06447
S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation
Medical image segmentation has been traditionally approached by training or fine-tuning the entire model to cater to any new modality or dataset. However, this approach often requires tuning a large number of parameters during training. With the introduction of the Segment Anything Model (SAM) for prompted segmentation of natural images, many efforts have been made towards adapting it efficiently for medical imaging, thus reducing the training time and resources. However, these methods still require expert annotations for every image in the form of point prompts or bounding box prompts during training and inference, making it tedious to employ them in practice. In this paper, we propose an adaptation technique, called S-SAM, that only trains parameters equal to 0.4% of SAM's parameters and at the same time simply uses the label names as prompts for producing precise masks. This not only makes tuning SAM more efficient than the existing adaptation methods but also removes the burden of providing expert prompts. We call this modified version S-SAM and evaluate it on five different modalities including endoscopic images, x-ray, ultrasound, CT, and histology images. Our experiments show that S-SAM outperforms state-of-the-art methods as well as existing SAM adaptation methods while tuning significantly fewer parameters. We release the code for S-SAM at https://github.com/JayParanjape/SVDSAM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
480,208
2411.04279
Novel Non-Prehensile Rolling Problem: Modelling and Balance Control of Pendulum-Driven Reconfigurable Disks Motion with Magnetic Coupling in Simulation
This paper presents a novel type of mobile rolling robot designed as a modular platform for non-prehensile manipulation, highlighting the associated control challenges in achieving balancing control of the robotic system. The developed rolling disk modules incorporate an innovative internally actuated magnetic-pendulum coupling mechanism, which introduces a compelling control problem due to the frictional and sliding interactions, as well as the magnetic effects between each module. In this paper, we derive the nonlinear dynamics of the robot using the Euler-Lagrange formulation. Then, through simulation, the motion behavior of the system is studied and analyzed, providing critical insights for future investigations into control methods for complex non-prehensile motion between robotic modules. Also, we study the balancing of this new platform and introduce a new motion pattern of lifting. This research aims to enhance the understanding and implementation of modular self-reconfigurable robots in various scenarios for future applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
506,204
2302.00275
Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization
Image geolocalization is the challenging task of predicting the geographic coordinates of origin for a given photo. It is an unsolved problem relying on the ability to combine visual clues with general knowledge about the world to make accurate predictions across geographies. We present $\href{https://huggingface.co/geolocal/StreetCLIP}{\text{StreetCLIP}}$, a robust, publicly available foundation model not only achieving state-of-the-art performance on multiple open-domain image geolocalization benchmarks but also doing so in a zero-shot setting, outperforming supervised models trained on more than 4 million images. Our method introduces a meta-learning approach for generalized zero-shot learning by pretraining CLIP from synthetic captions, grounding CLIP in a domain of choice. We show that our method effectively transfers CLIP's generalized zero-shot capabilities to the domain of image geolocalization, improving in-domain generalized zero-shot performance without finetuning StreetCLIP on a fixed set of classes.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
343,167
2306.01757
State estimation for one-dimensional agro-hydrological processes with model mismatch
The importance of accurate soil moisture data for the development of modern closed-loop irrigation systems cannot be overstated. Due to the diversity of soil, it is difficult to obtain an accurate model for agro-hydrological systems. This study focuses on soil moisture estimation in 1D agro-hydrological systems with model mismatch. To address the problem of model mismatch, a nonlinear state-space model derived from the Richards equation is utilized, along with additive unknown inputs. The determination of the number of sensors required is achieved through sensitivity analysis and the orthogonalization projection method. To estimate states and unknown inputs in real-time, a recursive expectation maximization (EM) algorithm derived from the conventional EM algorithm is employed. During the E-step, the extended Kalman filter (EKF) is used to compute states and covariance in the recursive Q-function, while in the M-step, unknown inputs are updated by locally maximizing the recursive Q-function. The estimation performance is evaluated using comprehensive simulations. Through this method, accurate soil moisture estimation can be obtained, even in the presence of model mismatch.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
370,589
2307.14453
Predictive Maintenance of Armoured Vehicles using Machine Learning Approaches
Armoured vehicles are specialized and complex pieces of machinery designed to operate in high-stress environments, often in combat or tactical situations. This study proposes a predictive maintenance-based ensemble system that aids in predicting potential maintenance needs based on sensor data collected from these vehicles. The proposed model's architecture involves various models such as Light Gradient Boosting, Random Forest, Decision Tree, Extra Tree Classifier and Gradient Boosting to predict the maintenance requirements of the vehicles accurately. In addition, K-fold cross validation, along with TOPSIS analysis, is employed to evaluate the proposed ensemble model's stability. The results indicate that the proposed system achieves an accuracy of 98.93%, precision of 99.80% and recall of 99.03%. The algorithm can effectively predict maintenance needs, thereby reducing vehicle downtime and improving operational efficiency. Through comparisons between various algorithms and the suggested ensemble, this study highlights the potential of machine learning-based predictive maintenance solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
381,936
1809.10330
Variance reduction properties of the reparameterization trick
The reparameterization trick is widely used in variational inference as it yields more accurate estimates of the gradient of the variational objective than alternative approaches such as the score function method. Although there is overwhelming empirical evidence in the literature showing its success, there is relatively little research exploring why the reparameterization trick is so effective. We explore this under the idealized assumptions that the variational approximation is a mean-field Gaussian density and that the log of the joint density of the model parameters and the data is a quadratic function that depends on the variational mean. From this, we show that the marginal variances of the reparameterization gradient estimator are smaller than those of the score function gradient estimator. We apply the result of our idealized analysis to real-world examples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
108,890
2003.03220
Deep Active Inference for Autonomous Robot Navigation
Active inference is a theory that underpins the way biological agents perceive and act in the real world. At its core, active inference is based on the principle that the brain is an approximate Bayesian inference engine, building an internal generative model to drive agents towards minimal surprise. Although this theory has shown interesting results with grounding in cognitive neuroscience, its application remains limited to simulations with small, predefined sensor and state spaces. In this paper, we leverage recent advances in deep learning to build more complex generative models that can work without a predefined state space. State representations are learned end-to-end from real-world, high-dimensional sensory data such as camera frames. We also show that these generative models can be used to engage in active inference. To the best of our knowledge this is the first application of deep active inference for a real-world robot navigation task.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
167,162
2307.06060
Interpreting deep embeddings for disease progression clustering
We propose a novel approach for interpreting deep embeddings in the context of patient clustering. We evaluate our approach on a dataset of participants with type 2 diabetes from the UK Biobank, and demonstrate clinically meaningful insights into disease progression patterns.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
378,953
1911.11365
ATCSpeech: a multilingual pilot-controller speech corpus from real Air Traffic Control environment
Automatic Speech Recognition (ASR) has developed greatly in recent years, which expedites many applications in other fields. For ASR research, a speech corpus is always an essential foundation, especially for a vertical industry such as Air Traffic Control (ATC). There are some speech corpora for common applications, public or paid. However, for ATC, it is difficult to collect raw speech from real systems due to safety issues. More importantly, for a supervised learning task like ASR, annotating the transcription is even more laborious work, which hugely restricts the prospect of ASR application. In this paper, a multilingual speech corpus (ATCSpeech) from real ATC systems, including accented Mandarin Chinese and English, is built and released to encourage non-commercial ASR research in the ATC domain. The corpus is described in detail from the perspective of data amount, speaker gender and role, speech quality and other attributes. In addition, the performance of our baseline ASR models is also reported. A community edition of our speech database can be applied for and used under a special contract. To the best of our knowledge, this is the first work that aims at building a real, multilingual ASR corpus for air-traffic-related research.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
155,099
1608.00339
Crowd-sourcing NLG Data: Pictures Elicit Better Data
Recent advances in corpus-based Natural Language Generation (NLG) hold the promise of being easily portable across domains, but require costly training data, consisting of meaning representations (MRs) paired with Natural Language (NL) utterances. In this work, we propose a novel framework for crowdsourcing high quality NLG training data, using automatic quality control measures and evaluating different MRs with which to elicit data. We show that pictorial MRs result in better NL data being collected than logic-based MRs: utterances elicited by pictorial MRs are judged as significantly more natural, more informative, and better phrased, with a significant increase in average quality ratings (around 0.5 points on a 6-point scale), compared to using the logical MRs. As the MR becomes more complex, the benefits of pictorial stimuli increase. The collected data will be released as part of this submission.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
59,274
2212.05602
ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals
Federated learning enables cooperative training among massively distributed clients by sharing their learned local model parameters. However, with increasing model size, deploying federated learning requires a large communication bandwidth, which limits its deployment in wireless networks. To address this bottleneck, we introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted in communication networks for training. In particular, we integrate two pairs of shared predictors for the model prediction in both server-to-client and client-to-server communication. By employing a common prediction rule, both locally and globally updated models are always fully recoverable in clients and the server. We highlight that the residuals only indicate the quasi-update of a model in a single inter-round, and hence contain denser information and have lower entropy than model weights and gradients. Based on this property, we further conduct lossy compression of the residuals by sparsification and quantization and encode them for efficient communication. The experimental evaluation shows that our ResFed needs remarkably less communication costs and achieves better accuracy by leveraging less sensitive residuals, compared to standard federated learning. For instance, to train a 4.08 MB CNN model on CIFAR-10 with 10 clients under non-independent and identically distributed (Non-IID) setting, our approach achieves a compression ratio over 700X in each communication round with minimum impact on the accuracy. To reach an accuracy of 70%, it saves around 99% of the total communication volume from 587.61 Mb to 6.79 Mb in up-streaming and to 4.61 Mb in down-streaming on average for all clients.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
335,832
2210.15279
On the Approximation and Complexity of Deep Neural Networks to Invariant Functions
Recent years have witnessed a hot wave of deep neural networks in various domains; however, they are not yet well understood theoretically. A theoretical characterization of deep neural networks should point out their approximation ability and complexity, i.e., showing which architecture and size are sufficient to handle the concerned tasks. This work takes one step in this direction by theoretically studying the approximation and complexity of deep neural networks with respect to invariant functions. We first prove that invariant functions can be universally approximated by deep neural networks. Then we show that a broad range of invariant functions can be asymptotically approximated by various types of neural network models, including complex-valued neural networks, convolutional neural networks, and Bayesian neural networks, using a polynomial number of parameters or optimization iterations. We also provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions. The empirical results obtained on simulation experiments demonstrate the effectiveness of our method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
326,885
2201.01745
Atomized Search Length: Beyond User Models
We argue that current IR metrics, modeled on optimizing user experience, measure too narrow a portion of the IR space. If IR systems are weak, these metrics undersample or completely filter out the deeper documents that need improvement. If IR systems are relatively strong, these metrics undersample deeper relevant documents that could underpin even stronger IR systems, ones that could present content from tens or hundreds of relevant documents in a user-digestible hierarchy or text summary. We reanalyze over 70 TREC tracks from the past 28 years, showing that roughly half undersample top ranked documents and nearly all undersample tail documents. We show that in the 2020 Deep Learning tracks, neural systems were actually near-optimal at top-ranked documents, compared to only modest gains over BM25 on tail documents. Our analysis is based on a simple new systems-oriented metric, 'atomized search length', which is capable of accurately and evenly measuring all relevant documents at any depth.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
274,337
2211.11130
Safe Stabilization for Stochastic Time-Delay Systems
This paper addresses the safe stabilization problem of stochastic nonlinear time-delay systems. Based on the Krasovskii approach, we first propose a stochastic control Lyapunov-Krasovskii functional to guarantee the stabilization objective and a stochastic control barrier-Krasovskii functional to ensure the safety objective. Both functionals are developed for their respective control objectives for the first time. Since the optimization problem is not easy to solve for stochastic time-delay systems, we derive a sliding-mode-based approach to combine the two proposed functionals and to mediate between the stabilization and safety objectives, which allows the stabilization objective to be achieved under the safety requirement. The proposed approach is illustrated via a numerical example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
331,592
1811.02949
Instance Retrieval at Fine-grained Level Using Multi-Attribute Recognition
In this paper, we present a method for instance ranking and retrieval at fine-grained level based on the global features extracted from a multi-attribute recognition model which is not dependent on landmarks information or part-based annotations. Further, we make this architecture suitable for mobile-device application by adopting the bilinear CNN to make the multi-attribute recognition model smaller (in terms of the number of parameters). The experiments run on the Dress category of DeepFashion In-Shop Clothes Retrieval and CUB200 datasets show that the results of instance retrieval at fine-grained level are promising for these datasets, especially in terms of texture and color.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
112,728
1301.4432
Language learning from positive evidence, reconsidered: A simplicity-based approach
Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes are corrected. Such learning from positive evidence has been viewed as raising logical problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this logical problem can be dissolved. Here, we review recent formal results showing that the learner has sufficient data to learn successfully from positive evidence, if it favours the simplest encoding of the linguistic input. Results include the ability to learn linguistic prediction, grammaticality judgements, language production, and form-meaning mappings. The simplicity approach can also be scaled down to analyse the ability to learn specific linguistic constructions, and is amenable to empirical test as a framework for describing human language acquisition.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
21,248
2502.08414
Sparse Estimation of Inverse Covariance and Partial Correlation Matrices via Joint Partial Regression
We present a new method for estimating high-dimensional sparse partial correlation and inverse covariance matrices, which exploits the connection between the inverse covariance matrix and linear regression. The method is a two-stage estimation method wherein each individual feature is regressed on all other features while positive semi-definiteness is enforced simultaneously. We provide statistical rates of convergence for the proposed method which match, and improve upon, the state-of-the-art for inverse covariance and partial correlation matrix estimation, respectively. We also propose an efficient proximal splitting algorithm for numerically computing the estimate. The effectiveness of the proposed method is demonstrated on both synthetic and real-world data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
533,005
2009.05554
Synthesis of Run-To-Completion Controllers for Discrete Event Systems
A controller for a Discrete Event System must achieve its goals despite its environment being capable of resolving race conditions between controlled and uncontrolled events. Assuming that the controller loses all races is sometimes unrealistic. In many cases, a realistic assumption is that the controller sometimes wins races and is fast enough to perform multiple actions without being interrupted. However, modelling this scenario using control of DES requires introducing foreign assumptions about scheduling that are hard to figure out correctly. We propose a more balanced control problem, named run-to-completion (RTC), to alleviate this issue. RTC naturally supports an execution assumption in which both the controller and the environment are guaranteed to initiate and perform sequences of actions, without flooding or delaying each other indefinitely. We consider control of DES in the context where specifications are given in the form of linear temporal logic. We formalize the RTC control problem and show how it can be reduced to a standard control problem.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
195,352
1608.07468
On Mathematical structures on pairwise comparisons matrices with coefficients in a group arising from quantum gravity
We describe the mathematical properties of pairwise comparisons matrices with coefficients in an arbitrary group. We provide a vocabulary adapted for the description of the main algebraic properties of inconsistency maps, and describe an example where the use of a non-abelian group is necessary. Algebraic, topological, geometric and probabilistic aspects are considered.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
60,234
2402.18124
Dark energy reconstruction analysis with artificial neural networks: Application on simulated Supernova Ia data from Rubin Observatory
In this paper, we present an analysis of Supernova Ia (SNIa) distance moduli $\mu(z)$ and dark energy using an Artificial Neural Network (ANN) reconstruction based on LSST simulated three-year SNIa data. The ANNs employed in this study utilize genetic algorithms for hyperparameter tuning and Monte Carlo Dropout for predictions. Our ANN reconstruction architecture is capable of modeling both the distance moduli and their associated statistical errors given redshift values. We compare the performance of the ANN-based reconstruction with two theoretical dark energy models: $\Lambda$CDM and Chevallier-Linder-Polarski (CPL). Bayesian analysis is conducted for these theoretical models using the LSST simulations and compared with observations from Pantheon and Pantheon+ SNIa real data. We demonstrate that our model-independent ANN reconstruction is consistent with both theoretical models. Performance metrics and statistical tests reveal that the ANN produces distance modulus estimates that align well with the LSST dataset and exhibit only minor discrepancies with $\Lambda$CDM and CPL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
433,295
2410.07388
On Densest $k$-Subgraph Mining and Diagonal Loading
The Densest $k$-Subgraph (D$k$S) problem aims to find a subgraph comprising $k$ vertices with the maximum number of edges between them. A continuous reformulation of the binary quadratic D$k$S problem is considered, which incorporates a diagonal loading term. It is shown that this non-convex, continuous relaxation is tight for a range of diagonal loading parameters, and the impact of the diagonal loading parameter on the optimization landscape is studied. On the algorithmic side, two projection-free algorithms are proposed to tackle the relaxed problem, based on Frank-Wolfe and explicit constraint parametrization, respectively. Experiments suggest that both algorithms have merits relative to the state-of-art, while the Frank-Wolfe-based algorithm stands out in terms of subgraph density, computational complexity, and ability to scale up to very large datasets.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
496,598
2308.14423
GADePo: Graph-Assisted Declarative Pooling Transformers for Document-Level Relation Extraction
Document-level relation extraction typically relies on text-based encoders and hand-coded pooling heuristics to aggregate information learned by the encoder. In this paper, we leverage the intrinsic graph processing capabilities of the Transformer model and propose replacing hand-coded pooling methods with new tokens in the input, which are designed to aggregate information via explicit graph relations in the computation of attention weights. We introduce a joint text-graph Transformer model and a graph-assisted declarative pooling (GADePo) specification of the input, which provides explicit and high-level instructions for information aggregation. GADePo allows the pooling process to be guided by domain-specific knowledge or desired outcomes but still learned by the Transformer, leading to more flexible and customisable pooling strategies. We evaluate our method across diverse datasets and models and show that our approach yields promising results that are consistently better than those achieved by the hand-coded pooling functions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
388,329
2411.13683
Extending Video Masked Autoencoders to 128 frames
Video understanding has witnessed significant progress with recent video foundation models demonstrating strong performance owing to self-supervised pre-training objectives; Masked Autoencoders (MAE) being the design of choice. Nevertheless, the majority of prior works that leverage MAE pre-training have focused on relatively short video representations (16 / 32 frames in length) largely due to hardware memory and compute limitations that scale poorly with video length due to the dense memory-intensive self-attention decoding. One natural strategy to address these challenges is to subsample tokens to reconstruct during decoding (or decoder masking). In this work, we propose an effective strategy for prioritizing tokens which allows training on longer video sequences (128 frames) and gets better performance than, more typical, random and uniform masking strategies. The core of our approach is an adaptive decoder masking strategy that prioritizes the most important tokens and uses quantized tokens as reconstruction objectives. Our adaptive strategy leverages a powerful MAGVIT-based tokenizer that jointly learns the tokens and their priority. We validate our design choices through exhaustive ablations and observe improved performance of the resulting long-video (128 frames) encoders over short-video (32 frames) counterparts. With our long-video masked autoencoder (LVMAE) strategy, we surpass state-of-the-art on Diving48 by 3.9 points and EPIC-Kitchens-100 verb classification by 2.5 points while relying on a simple core architecture and video-only pre-training (unlike some of the prior works that require millions of labeled video-text pairs or specialized encoders).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
509,881
2409.08249
Quantifying Aleatoric and Epistemic Dynamics Uncertainty via Local Conformal Calibration
Whether learned, simulated, or analytical, approximations of a robot's dynamics can be inaccurate when encountering novel environments. Many approaches have been proposed to quantify the aleatoric uncertainty of such methods, i.e. uncertainty resulting from stochasticity, however these estimates alone are not enough to properly estimate the uncertainty of a model in a novel environment, where the actual dynamics can change. Such changes can induce epistemic uncertainty, i.e. uncertainty due to a lack of information/data. Accounting for both epistemic and aleatoric dynamics uncertainty in a theoretically-grounded way remains an open problem. We introduce Local Uncertainty Conformal Calibration (LUCCa), a conformal prediction-based approach that calibrates the aleatoric uncertainty estimates provided by dynamics models to generate probabilistically-valid prediction regions of the system's state. We account for both epistemic and aleatoric uncertainty non-asymptotically, without strong assumptions about the form of the true dynamics or how it changes. The calibration is performed locally in the state-action space, leading to uncertainty estimates that are useful for planning. We validate our method by constructing probabilistically-safe plans for a double-integrator under significant changes in dynamics.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
487,827
1610.02707
Multi-Objective Deep Reinforcement Learning
We propose Deep Optimistic Linear Support Learning (DOL) to solve high-dimensional multi-objective decision problems where the relative importances of the objectives are not known a priori. Using features from the high-dimensional inputs, DOL computes the convex coverage set containing all potential optimal solutions of the convex combinations of the objectives. To our knowledge, this is the first time that deep reinforcement learning has succeeded in learning multi-objective policies. In addition, we provide a testbed with two experiments to be used as a benchmark for deep multi-objective reinforcement learning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
62,144
2312.14262
Exploring the intersection of Generative AI and Software Development
In the ever-evolving landscape of Artificial Intelligence (AI), the synergy between generative AI and Software Engineering emerges as a transformative frontier. This whitepaper delves into the unexplored realm, elucidating how generative AI techniques can revolutionize software development. Spanning from project management to support and updates, we meticulously map the demands of each development stage and unveil the potential of generative AI in addressing them. Techniques such as zero-shot prompting, self-consistency, and multimodal chain-of-thought are explored, showcasing their unique capabilities in enhancing generative AI models. The significance of vector embeddings, context, plugins, tools, and code assistants is underscored, emphasizing their role in capturing semantic information and amplifying generative AI capabilities. Looking ahead, this intersection promises to elevate productivity, improve code quality, and streamline the software development process. This whitepaper serves as a guide for stakeholders, urging discussions and experiments in the application of generative AI in Software Engineering, fostering innovation and collaboration for a qualitative leap in the efficiency and effectiveness of software development.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
417,569
2408.01251
NeRFoot: Robot-Footprint Estimation for Image-Based Visual Servoing
This paper investigates the utility of Neural Radiance Fields (NeRF) models in extending the regions of operation of a mobile robot, controlled by Image-Based Visual Servoing (IBVS) via static CCTV cameras. Using NeRF as a 3D-representation prior, the robot's footprint may be extrapolated geometrically and used to train a CNN-based network to extract it online from the robot's appearance alone. The resulting footprint results in a tighter bound than a robot-wide bounding box, allowing the robot's controller to prescribe more optimal trajectories and expand its safe operational floor area.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
478,156
1503.00923
An Interoperable Realization of Smart Cities with Plug and Play based Device Management
The primary problem with Internet of Things (IoT) solutions for smart cities is the lack of interoperability at various levels, and more predominately at the device level. While there exist a multitude of platforms from multiple manufacturers, the existing ecosystem still remains highly closed. In this paper, we propose SNaaS or Sensor/Network as a Service: a service layer that enables the creation of the plug-n-play infrastructure, across platforms from multiple vendors, necessary for interoperability and successful deployment of large-scale city wide systems. In order to correctly position the new service layer, we present a high level reference IoT architecture for smart city implementations, and follow it up with the workflow details of SNaaS along with preliminary microbenchmarks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
40,768
2005.05286
From industry-wide parameters to aircraft-centric on-flight inference: improving aeronautics performance prediction with machine learning
Aircraft performance models play a key role in airline operations, especially in planning a fuel-efficient flight. In practice, manufacturers provide guidelines which are slightly modified throughout the aircraft life cycle via the tuning of a single factor, enabling better fuel predictions. However this has limitations, in particular they do not reflect the evolution of each feature impacting the aircraft performance. Our goal here is to overcome this limitation. The key contribution of the present article is to foster the use of machine learning to leverage the massive amounts of data continuously recorded during flights performed by an aircraft and provide models reflecting its actual and individual performance. We illustrate our approach by focusing on the estimation of the drag and lift coefficients from recorded flight data. As these coefficients are not directly recorded, we resort to aerodynamics approximations. As a safety check, we provide bounds to assess the accuracy of both the aerodynamics approximation and the statistical performance of our approach. We provide numerical results on a collection of machine learning algorithms. We report excellent accuracy on real-life data and exhibit empirical evidence to support our modelling, in coherence with aerodynamics principles.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
176,693
1706.01177
PReP: Path-Based Relevance from a Probabilistic Perspective in Heterogeneous Information Networks
As a powerful representation paradigm for networked and multi-typed data, the heterogeneous information network (HIN) is ubiquitous. Meanwhile, defining proper relevance measures has always been a fundamental problem and of great pragmatic importance for network mining tasks. Inspired by our probabilistic interpretation of existing path-based relevance measures, we propose to study HIN relevance from a probabilistic perspective. We also identify, from real-world data, and propose to model cross-meta-path synergy, which is a characteristic important for defining path-based HIN relevance and has not been modeled by existing methods. A generative model is established to derive a novel path-based relevance measure, which is data-driven and tailored for each HIN. We develop an inference algorithm to find the maximum a posteriori (MAP) estimate of the model parameters, which entails non-trivial tricks. Experiments on two real-world datasets demonstrate the effectiveness of the proposed model and relevance measure.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
74,764
2201.09521
Problife: a Probabilistic Game of Life
This paper presents a probabilistic extension of the well-known cellular automaton, Game of Life. In Game of Life, cells are placed in a grid and then watched as they evolve throughout subsequent generations, as dictated by the rules of the game. In our extension, called ProbLife, these rules now have probabilities associated with them. Instead of cells being either dead or alive, they are denoted by their chance to live. After presenting the rules of ProbLife and its underlying characteristics, we show a concrete implementation in ProbLog, a probabilistic logic programming system. We use this to generate different images, as a form of rule-based generative art.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
276,700
2004.01857
Weighted Fisher Discriminant Analysis in the Input and Feature Spaces
Fisher Discriminant Analysis (FDA) is a subspace learning method which minimizes and maximizes the intra- and inter-class scatters of data, respectively. Although, in FDA, all the pairs of classes are treated the same way, some classes are closer than the others. Weighted FDA assigns weights to the pairs of classes to address this shortcoming of FDA. In this paper, we propose a cosine-weighted FDA as well as an automatically weighted FDA in which weights are found automatically. We also propose a weighted FDA in the feature space to establish a weighted kernel FDA for both existing and newly proposed weights. Our experiments on the ORL face recognition dataset show the effectiveness of the proposed weighting schemes.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
171,031
1811.03494
Testing SPARUS II AUV, an open platform for industrial, scientific and academic applications
This paper describes the experience of preparing and testing the SPARUS II AUV in different applications. The AUV was designed as a lightweight vehicle combining the classical torpedo-shape features with the hovering capability. The robot has a payload area to allow the integration of different equipment depending on the application. The software architecture is based on ROS, an open framework that allows an easy integration of many devices and systems. Its flexibility, easy operation and openness make the SPARUS II AUV a multipurpose platform that can adapt to industrial, scientific and academic applications. Five units were developed in 2014, and different teams used and adapted the platform for different applications. The paper describes some of the experiences in preparing and testing this open platform in different applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
112,852
2205.04166
Residue-based Label Protection Mechanisms in Vertical Logistic Regression
Federated learning (FL) enables distributed participants to collaboratively learn a global model without revealing their private data to each other. Recently, vertical FL, where the participants hold the same set of samples but with different features, has received increased attention. This paper first presents one label inference attack method to investigate the potential privacy leakages of the vertical logistic regression model. Specifically, we discover that the attacker can utilize the residue variables, which are calculated by solving the system of linear equations constructed from the local dataset and the received decrypted gradients, to infer the privately owned labels. To deal with this, we then propose three protection mechanisms, i.e., an additive noise mechanism, a multiplicative noise mechanism, and a hybrid mechanism which leverages local differential privacy and homomorphic encryption techniques, to prevent the attack and improve the robustness of the vertical logistic regression model. Experimental results show that both the additive noise mechanism and the multiplicative noise mechanism can achieve efficient label protection with only a slight drop in model testing accuracy; furthermore, the hybrid mechanism can achieve label protection without any testing accuracy degradation, which demonstrates the effectiveness and efficiency of our protection techniques.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
295,558
2205.06779
Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations
Recently, weakly-supervised image segmentation using weak annotations like scribbles has gained great attention, since such annotations are much easier to obtain compared to time-consuming and label-intensive labeling at the pixel/voxel level. However, because scribbles lack structure information of region of interest (ROI), existing scribble-based methods suffer from poor boundary localization. Furthermore, most current methods are designed for 2D image segmentation, which do not fully leverage the volumetric information if directly applied to image slices. In this paper, we propose a scribble-based volumetric image segmentation, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles and a combination of static and active boundary prediction to learn ROI's boundary and regularize its shape. Extensive experiments on three public datasets demonstrate Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully-supervised ones. Our code is available online.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,350
2212.02448
The Multi-cluster Fluctuating Two-Ray Fading Model
We introduce a new class of fading channels, built as the superposition of two fluctuating specular components with random phases, plus a clustering of scattered waves: the Multi-cluster Fluctuating Two-Ray (MFTR) fading channel. The MFTR model emerges as a natural generalization of both the fluctuating two-ray (FTR) and the $\kappa$-$\mu$ shadowed fading models through a more general yet equally mathematically tractable model. This generalization enables the presence of additional multipath clusters in the purely ray-based FTR model, and the convenience of the new underlying fading channel model is discussed in depth. Then, we derive all the chief probability functions of the MFTR model (e.g., probability density function (PDF), cumulative density function (CDF), and moment generation function) in closed-form, having a mathematical complexity similar to other fading models in the state-of-the-art. We also provide two additional analytical formulations for the PDF and the CDF: (i) in terms of a continuous mixture of $\kappa$-$\mu$ shadowed distributions, and (ii) as an infinite discrete mixture of Gamma distributions. Such expressions enable to conduct performance analysis under MFTR fading by directly leveraging readily available results for the $\kappa$-$\mu$ shadowed or Nakagami-$m$ cases, respectively. The performance of wireless communications systems undergoing MFTR fading is exemplified in terms of a classical benchmarking metric like the outage probability, both in exact and asymptotic forms, and the amount of fading.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
334,790
1806.07351
Opportunistic Scheduling in Underlay Cognitive Radio based Systems: User Selection Probability Analysis
In this paper, an underlay cognitive radio (CR) system is considered with multiple cognitive or secondary users contending to transmit their information to the cognitive destination (e.g., eNodeB) using the spectral resource of a primary user. The novel closed-form expressions are derived for the selection probabilities of cognitive users with opportunistic scheduling wherein an optimal metric is employed for opportunistic transmission. The analytical results corroborated by the Monte Carlo simulations, can be used to demonstrate the fairness achieved in opportunistic scheduling. It is shown that the fairness in terms of equal chance for transmission amongst all cognitive users can only be seen for the scenarios when the fraction of distances between the cognitive transmitter and cognitive receiver, and cognitive transmitter and primary receiver is identical for each of the cognitive transmitters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
100,901
2103.08862
Gumbel-Attention for Multi-modal Machine Translation
Multi-modal machine translation (MMT) improves translation quality by introducing visual information. However, the existing MMT model ignores the problem that the image will bring information irrelevant to the text, causing much noise to the model and affecting the translation quality. This paper proposes a novel Gumbel-Attention for multi-modal machine translation, which selects the text-related parts of the image features. Specifically, different from the previous attention-based method, we first use a differentiable method to select the image information and automatically remove the useless parts of the image features. Experiments prove that our method retains the image features related to the text, and the remaining parts help the MMT model generate better translations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
225,008
2302.12552
Deep Learning for Video-Text Retrieval: a Review
Video-Text Retrieval (VTR) aims to search for the most relevant video related to the semantics in a given sentence, and vice versa. In general, this retrieval task is composed of four successive steps: video and textual feature representation extraction, feature embedding and matching, and objective functions. In the last step, a list of samples retrieved from the dataset is ranked based on their matching similarities to the query. In recent years, significant and flourishing progress has been achieved by deep learning techniques; however, VTR is still a challenging task due to problems like how to learn an efficient spatial-temporal video feature and how to narrow the cross-modal gap. In this survey, we review and summarize over 100 research papers related to VTR, demonstrate state-of-the-art performance on several commonly benchmarked datasets, and discuss potential challenges and directions, with the expectation to provide some insights for researchers in the field of video-text retrieval.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
347,610
2410.13765
Knowledge-Aware Query Expansion with Large Language Models for Textual and Relational Retrieval
Large language models (LLMs) have been used to generate query expansions augmenting original queries for improving information search. Recent studies also explore providing LLMs with initial retrieval results to generate query expansions more grounded to document corpus. However, these methods mostly focus on enhancing textual similarities between search queries and target documents, overlooking document relations. For queries like "Find me a highly rated camera for wildlife photography compatible with my Nikon F-Mount lenses", existing methods may generate expansions that are semantically similar but structurally unrelated to user intents. To handle such semi-structured queries with both textual and relational requirements, in this paper we propose a knowledge-aware query expansion framework, augmenting LLMs with structured document relations from knowledge graph (KG). To further address the limitation of entity-based scoring in existing KG-based methods, we leverage document texts as rich KG node representations and use document-based relation filtering for our Knowledge-Aware Retrieval (KAR). Extensive experiments on three datasets of diverse domains show the advantages of our method compared against state-of-the-art baselines on textual and relational semi-structured retrieval.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
499,674
2109.13081
Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills
In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor. In particular, we assume that the target object is located in a cluttered environment where both prehensile grasping and non-prehensile manipulation are combined for efficient teleoperation. A trajectory-based reinforcement learning is utilized for learning the non-prehensile manipulation to rearrange the objects for enabling direct grasping. From the depth image of the cluttered environment and the location of the goal object, the learned policy can provide multiple options of non-prehensile manipulation to the human operator. We carefully design a reward function for the rearranging task where the policy is trained in a simulated environment. Then, the trained policy is transferred to the real world and evaluated in a number of real-world experiments with varying numbers of objects, where we show that the proposed method outperforms manual keyboard control in terms of the time duration for grasping.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
257,518
2409.10829
ReXErr: Synthesizing Clinically Meaningful Errors in Diagnostic Radiology Reports
Accurately interpreting medical images and writing radiology reports is a critical but challenging task in healthcare. Both human-written and AI-generated reports can contain errors, ranging from clinical inaccuracies to linguistic mistakes. To address this, we introduce ReXErr, a methodology that leverages Large Language Models to generate representative errors within chest X-ray reports. Working with board-certified radiologists, we developed error categories that capture common mistakes in both human and AI-generated reports. Our approach uses a novel sampling scheme to inject diverse errors while maintaining clinical plausibility. ReXErr demonstrates consistency across error categories and produces errors that closely mimic those found in real-world scenarios. This method has the potential to aid in the development and evaluation of report correction algorithms, potentially enhancing the quality and reliability of radiology reporting.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
488,895
1712.02121
A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network
In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. Experiments show that ConvKB achieves better link prediction performance than previous state-of-the-art embedding models on two benchmark datasets WN18RR and FB15k-237.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
86,242
2410.05684
Copiloting Diagnosis of Autism in Real Clinical Scenarios via LLMs
Autism spectrum disorder (ASD) is a pervasive developmental disorder that significantly impacts the daily functioning and social participation of individuals. Despite the abundance of research focused on supporting the clinical diagnosis of ASD, there is still a lack of systematic and comprehensive exploration in the field of methods based on Large Language Models (LLMs), particularly regarding the real-world clinical diagnostic scenarios based on Autism Diagnostic Observation Schedule, Second Edition (ADOS-2). Therefore, we have proposed a framework called ADOS-Copilot, which strikes a balance between scoring and explanation, and explored the factors that influence the performance of LLMs in this task. The experimental results indicate that our proposed framework is competitive with the diagnostic results of clinicians, with a minimum MAE of 0.4643, binary classification F1-score of 81.79\%, and ternary classification F1-score of 78.37\%. Furthermore, we have systematically elucidated the strengths and limitations of current LLMs in this task from the perspectives of ADOS-2, LLMs' capabilities, language, and model scale, aiming to inspire and guide the future application of LLMs in broader fields of mental health disorders. We hope for more research to be transferred into real clinical practice, opening a window of kindness to the world for eccentric children.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
495,860
2205.15924
Continuous Temporal Graph Networks for Event-Based Graph Data
There has been an increasing interest in modeling continuous-time dynamics of temporal graph data. Previous methods encode time-evolving relational information into a low-dimensional representation by specifying discrete layers of neural networks, while real-world dynamic graphs often vary continuously over time. Hence, we propose Continuous Temporal Graph Networks (CTGNs) to capture the continuous dynamics of temporal graph data. We use both the link starting timestamps and link duration as evolving information to model the continuous dynamics of nodes. The key idea is to use neural ordinary differential equations (ODE) to characterize the continuous dynamics of node representations over dynamic graphs. We parameterize ordinary differential equations using a novel graph neural network. The existing dynamic graph networks can be considered as a specific discretization of CTGNs. Experiment results on both transductive and inductive tasks demonstrate the effectiveness of our proposed approach over competitive baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
299,918
2009.07828
Human biases in body measurement estimation
Body measurements, including weight and height, are key indicators of health. Being able to visually assess body measurements reliably is a step towards increased awareness of overweight and obesity and is thus important for public health. Nevertheless it is currently not well understood how accurately humans can assess weight and height from images, and when and how they fail. To bridge this gap, we start from 1,682 images of persons collected from the Web, each annotated with the true weight and height, and ask crowd workers to estimate the weight and height for each image. We conduct a faceted analysis taking into account characteristics of the images as well as the crowd workers assessing the images, revealing several novel findings: (1) Even after aggregation, the crowd's accuracy is overall low. (2) We find strong evidence of contraction bias toward a reference value, such that the weight (height) of light (short) people is overestimated, whereas that of heavy (tall) people is underestimated. (3) We estimate workers' individual reference values using a Bayesian model, finding that reference values strongly correlate with workers' own height and weight, indicating that workers are better at estimating people similar to themselves. (4) The weight of tall people is underestimated more than that of short people; yet, knowing the height decreases the weight error only mildly. (5) Accuracy is higher on images of females than of males, but female and male workers are no different in terms of accuracy. (6) Crowd workers improve over time if given feedback on previous guesses. Finally, we explore various bias correction models for improving the crowd's accuracy, but find that this only leads to modest gains. Overall, this work provides important insights on biases in body measurement estimation as obesity related conditions are on the rise.
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
196,060
2010.14995
Accelerated Probabilistic Power Flow in Electrical Distribution Networks via Model Order Reduction and Neumann Series Expansion
This paper develops a computationally efficient algorithm which speeds up the probabilistic power flow (PPF) problem by exploiting the inherently low-rank nature of the voltage profile in electrical power distribution networks. The algorithm is accordingly termed the Accelerated-PPF (APPF), since it can accelerate "any" sampling-based PPF solver. As the APPF runs, it concurrently generates a low-dimensional subspace of orthonormalized solution vectors. This subspace is used to construct and update a reduced order model (ROM) of the full nonlinear system, resulting in a highly efficient simulation for future voltage profiles. When constructing and updating the subspace, the power flow problem must still be solved on the full nonlinear system. In order to accelerate the computation of these solutions, a Neumann expansion of a modified power flow Jacobian is implemented. Applicable when load bus injections are small, this Neumann expansion allows for a considerable speed up of Jacobian system solves during the standard Newton iterations. APPF test results, from experiments run on the full IEEE 8500-node test feeder, are finally presented.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
203,639
1801.00708
Restricted Deformable Convolution based Road Scene Semantic Segmentation Using Surround View Cameras
Understanding the surrounding environment of the vehicle is still one of the challenges for autonomous driving. This paper addresses 360-degree road scene semantic segmentation using surround view cameras, which are widely equipped in existing production cars. First, in order to address large distortion problem in the fisheye images, Restricted Deformable Convolution (RDC) is proposed for semantic segmentation, which can effectively model geometric transformations by learning the shapes of convolutional filters conditioned on the input feature map. Second, in order to obtain a large-scale training set of surround view images, a novel method called zoom augmentation is proposed to transform conventional images to fisheye images. Finally, an RDC based semantic segmentation model is built; the model is trained for real-world surround view images through a multi-task learning architecture by combining real-world images with transformed images. Experiments demonstrate the effectiveness of the RDC to handle images with large distortions, and that the proposed approach shows a good performance using surround view cameras with the help of the transformed images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
87,614