Dataset schema:

    id                 string, 9–16 chars     (arXiv identifier)
    title              string, 4–278 chars
    abstract           string, 3–4.08k chars
    cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
    cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool, 2 classes        (one boolean column per category label)
    __index_level_0__  int64, 0–541k          (original row index)

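The boolean columns above are a one-hot encoding of the arXiv categories: a record's label set is simply the columns that are true. A minimal pandas sketch of how such rows collapse into label lists — the two rows are hypothetical stand-ins mirroring the first two records, and only three of the 18 category columns are shown:

```python
import pandas as pd

# In the full dataset this list would contain all 18 category columns.
label_cols = ["cs.CV", "cs.AI", "Other"]

df = pd.DataFrame(
    {
        "id": ["2310.09632", "2312.15361"],
        "cs.CV": [True, False],
        "cs.AI": [False, True],
        "Other": [False, True],
        "__index_level_0__": [399863, 417986],
    }
)

# Collapse the one-hot boolean columns into a per-row list of active labels.
df["labels"] = df[label_cols].apply(
    lambda row: [c for c in label_cols if row[c]], axis=1
)
print(df[["id", "labels"]].to_string(index=False))
```

The same per-row scan is what the `labels:` lines in the records below denote: every category column whose value is true for that record.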
2310.09632
Time-based Mapping of Space Using Visual Motion Invariants
This paper focuses on visual motion-based invariants that result in a representation of 3D points in which the stationary environment remains invariant, ensuring shape constancy. This is achieved even as the images undergo constant change due to camera motion. Nonlinear functions of measurable optical flow, which are related to geometric 3D invariants, are utilized to create a novel representation. We refer to the resulting optical flow-based invariants as 'Time-Clearance' and the well-known 'Time-to-Contact' (TTC). Since these invariants remain constant over time, it becomes straightforward to detect moving points that do not adhere to the expected constancy. We present simulations of a camera moving relative to a 3D object, snapshots of its projected images captured by a rectilinearly moving camera, and the object as it appears unchanged in the new domain over time. In addition, Unity-based simulations demonstrate color-coded transformations of a projected 3D scene, illustrating how moving objects can be readily identified. This representation is straightforward, relying on simple optical flow functions. It requires only one camera, and there is no need to determine the magnitude of the camera's velocity vector. Furthermore, the representation is pixel-based, making it suitable for parallel processing.
labels: cs.CV
__index_level_0__: 399,863

2312.15361
Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading
While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructures, preventing them from getting access to the associated data-driven services. In this paper, we propose a ground-to-satellite cooperative federated learning (FL) methodology to facilitate machine learning service management over remote regions. Our methodology orchestrates satellite constellations to provide the following key functions during FL: (i) processing data offloaded from ground devices, (ii) aggregating models within device clusters, and (iii) relaying models/data to other satellites via inter-satellite links (ISLs). Due to the limited coverage time of each satellite over a particular remote area, we facilitate satellite transmission of trained models and acquired data to neighboring satellites via ISL, so that the incoming satellite can continue conducting FL for the region. We theoretically analyze the convergence behavior of our algorithm, and develop a training latency minimizer which optimizes over satellite-specific network resources, including the amount of data to be offloaded from ground devices to satellites and satellites' computation speeds. Through experiments on three datasets, we show that our methodology can significantly speed up the convergence of FL compared with terrestrial-only and other satellite baseline approaches.
labels: cs.AI, Other
__index_level_0__: 417,986

2209.03090
Modular Federated Learning
Federated learning is an approach to training machine learning models at the edge of the network, as close as possible to where the data is produced, motivated by the emerging problem of the inability to stream and centrally store the large amounts of data produced by edge devices, as well as by data privacy concerns. This learning paradigm needs algorithms that are robust to device heterogeneity and data heterogeneity. This paper proposes ModFL, a federated learning framework that splits models into a configuration module and an operation module, enabling federated learning of the individual modules. This modular approach makes it possible to extract knowledge from a group of heterogeneous devices as well as from the non-IID data produced by their users. The approach can be viewed as an extension of FedPer, the federated learning framework with personalisation layers that addresses data heterogeneity. We show that ModFL outperforms FedPer on non-IID data partitions of CIFAR-10 and STL-10 using CNNs. Our results on time-series data with the HAPT, RWHAR, and WISDM datasets using RNNs remain inconclusive; we argue that the chosen datasets do not highlight the advantages of ModFL, but that in the worst case it performs as well as FedPer.
labels: cs.LG
__index_level_0__: 316,398

2305.01146
RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models
We systematically investigate lightweight strategies to adapt large language models (LLMs) for the task of radiology report summarization (RRS). Specifically, we focus on domain adaptation via pretraining (on natural language, biomedical text, or clinical text) and via discrete prompting or parameter-efficient fine-tuning. Our results consistently achieve best performance by maximally adapting to the task via pretraining on clinical text and fine-tuning on RRS examples. Importantly, this method fine-tunes a mere 0.32% of parameters throughout the model, in contrast to end-to-end fine-tuning (100% of parameters). Additionally, we study the effect of in-context examples and out-of-distribution (OOD) training before concluding with a radiologist reader study and qualitative analysis. Our findings highlight the importance of domain adaptation in RRS and provide valuable insights toward developing effective natural language processing solutions for clinical tasks.
labels: cs.CL
__index_level_0__: 361,571

1303.5313
Incremental Maintenance for Leapfrog Triejoin
We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.
labels: cs.DB, Other
__index_level_0__: 23,074

1706.03341
Group-Server Queues
By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting {\it Group-Server Queues}, and establishes two representative group-server queues through loss networks and impatient customers, respectively. We give model descriptions and the necessary interpretation for these two group-server queues, provide a simple mathematical discussion, and run simulations to study the expected queue lengths, the expected sojourn times and the expected virtual service times. In addition, this paper shows that this class of group-server queues is often encountered in many other practical areas, including communication networks, manufacturing systems, transportation networks, financial networks and healthcare systems. Note that group-server queues can be used to design effective dynamic control mechanisms by regrouping and recombining the many servers of a large-scale service system, for example through bilateral threshold control and the transfer of customers to buffer or server groups. The large-scale service system is thereby divided into several adaptive and self-organizing subsystems through the scheduling of batch customers and the regrouping of service resources, which makes the middle layer of the service system more effectively managed and strengthened under a dynamic, real-time and even reward-optimal framework. On this basis, the performance of such a large-scale service system may be greatly improved by introducing and analyzing group-server queues. Therefore, not only is the analysis of group-server queues a new and interesting research direction, but there also exist many theoretical challenges, basic difficulties and open problems in the area of queueing networks.
labels: cs.IT, Other
__index_level_0__: 75,150

2310.05478
Grease the gears for a steady microfluidic flow
Pumps are indispensable for analytical applications and ensure controlled fluid movement. Syringe pumps are among today's most prevalent liquid delivery systems, especially for high-pressure, stable, low-flow-rate microfluidic applications. Because of the moving mechanical parts of the assembly, regular maintenance is essential to ensure reliable operation and flow rates. However, lubrication of the mechanics is easily overlooked because research focuses on novel analytical applications rather than on the maintenance of pumps. Here, we investigate the lubrication of syringe pump guide rods and its effect on flow rate stability after regular cleaning of the pump from contamination. The guide rods of syringe pumps were thoroughly cleaned of any lubricant, and the flow rate for setpoints between 5 and 30 uL/min was measured, revealing tremendous flow rate fluctuations with a coefficient of variation (CV) of up to 0.34. In contrast, flow rate measurements of syringe pumps with lubricated guide rods show five-fold smaller flow rate fluctuations, depending on the specified flow rate, with CV values below 0.07. In summary, we emphasize the importance of lubricating the moving parts of syringe pumps to achieve constant flow rates, minimize wear, and ensure the reliable operation of, for instance, accurate lab-on-a-chip workflows.
labels: cs.SY
__index_level_0__: 398,168

1910.09594
Federated Neuromorphic Learning of Spiking Neural Networks for Low-Power Edge Intelligence
Spiking Neural Networks (SNNs) offer a promising alternative to conventional Artificial Neural Networks (ANNs) for the implementation of on-device low-power online learning and inference. On-device training is, however, constrained by the limited amount of data available at each device. In this paper, we propose to mitigate this problem via cooperative training through Federated Learning (FL). To this end, we introduce an online FL-based learning rule for networked on-device SNNs, which we refer to as FL-SNN. FL-SNN leverages local feedback signals within each SNN, in lieu of backpropagation, and global feedback through communication via a base station. The scheme demonstrates significant advantages over separate training and features a flexible trade-off between communication load and accuracy via the selective exchange of synaptic weights.
labels: cs.LG, cs.NE
__index_level_0__: 150,236

2107.03451
Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans. However, these models are often trained on large datasets from the internet, and as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language. Researchers must thus wrestle with the issue of how and when to release these models. In this paper, we survey the problem landscape for safety for end-to-end conversational AI and discuss recent and related work. We highlight tensions between values, potential positive impact and potential harms, and provide a framework for making decisions about whether and how to release these models, following the tenets of value-sensitive design. We additionally provide a suite of tools to enable researchers to make better-informed decisions about training and releasing end-to-end conversational AI models.
labels: cs.AI, cs.CL
__index_level_0__: 245,165

1603.04530
Object Contour Detection with a Fully Convolutional Encoder-Decoder Network
We develop a deep learning algorithm for contour detection with a fully convolutional encoder-decoder network. Different from previous low-level edge detection, our algorithm focuses on detecting higher-level object contours. Our network is trained end-to-end on PASCAL VOC with refined ground truth from inaccurate polygon annotations, yielding much higher precision in object contour detection than previous methods. We find that the learned model generalizes well to unseen object classes from the same super-categories on MS COCO and can match state-of-the-art edge detection on BSDS500 with fine-tuning. By combining with the multiscale combinatorial grouping algorithm, our method can generate high-quality segmented object proposals, which significantly advance the state-of-the-art on PASCAL VOC (improving average recall from 0.62 to 0.67) with a relatively small amount of candidates ($\sim$1660 per image).
labels: cs.LG, cs.CV
__index_level_0__: 53,253

2209.00849
Robustifying Event-Triggered Control to Measurement Noise
While many event-triggered control strategies are available in the literature, most of them are designed ignoring the presence of measurement noise. Measurement noise is omnipresent in practice and can have detrimental effects, for instance by inducing Zeno behavior in the closed-loop system and, with that, the lack of a positive lower bound on the inter-event times, rendering the event-triggered control design practically useless; it is therefore of great importance to address this gap in the literature. To do so, we present a general framework for set stabilization of (distributed) event-triggered control systems affected by additive measurement noise. It is shown that, under general conditions, Zeno-free static as well as dynamic triggering rules can be designed such that the closed-loop system satisfies an input-to-state practical set stability property. We ensure Zeno-freeness by proving the existence of a uniform, strictly positive lower bound on the minimum inter-event time. The general framework is applied to point stabilization and consensus problems as particular cases, where we show that, under similar assumptions as in the original work, existing schemes can be redesigned to robustify them to measurement noise. Consequently, using this framework, noise-robust triggering conditions can be designed both from the ground up and by simple redesign of several important existing schemes. Simulation results are provided that illustrate the strengths of this novel approach.
labels: cs.SY
__index_level_0__: 315,699

2007.16149
HMCNAS: Neural Architecture Search using Hidden Markov Chains and Bayesian Optimization
Neural Architecture Search has achieved state-of-the-art performance in a variety of tasks, out-performing human-designed networks. However, many human-defined assumptions related to the problems being solved or the models generated are still needed: final model architectures, the number of layers to be sampled, forced operations, and small search spaces. These ultimately contribute to models with higher performance at the cost of inducing bias into the system. In this paper, we propose HMCNAS, which is composed of two novel components: i) a method that leverages information about human-designed models to autonomously generate a complex search space, and ii) an Evolutionary Algorithm with Bayesian Optimization that is capable of generating competitive CNNs from scratch, without relying on human-defined parameters or small search spaces. The experimental results show that the proposed approach yields competitive architectures obtained in a very short time. HMCNAS provides a step towards generalizing NAS, by providing a way to create competitive models without requiring any human knowledge about the specific task.
labels: cs.LG, cs.CV, cs.NE
__index_level_0__: 189,860

2308.03240
Carbon-Aware Optimal Power Flow
To facilitate effective decarbonization of the electric power sector, this paper introduces the generic Carbon-aware Optimal Power Flow (C-OPF) method for power system decision-making that considers demand-side carbon accounting and emission management. Built upon the classic optimal power flow (OPF) model, the C-OPF method incorporates carbon emission flow equations and constraints, as well as carbon-related objectives, to jointly optimize power flow and carbon flow. In particular, this paper establishes the feasibility and solution uniqueness of the carbon emission flow equations, and proposes modeling and linearization techniques to address the issues of undetermined power flow directions and bilinear terms in the C-OPF model. Additionally, two novel carbon emission models, together with the carbon accounting schemes, for energy storage systems are developed and integrated into the C-OPF model. Numerical simulations demonstrate the characteristics and effectiveness of the C-OPF method, in comparison with OPF solutions.
labels: cs.SY
__index_level_0__: 383,951

2306.02259
Predicting Information Pathways Across Online Communities
The problem of community-level information pathway prediction (CLIPP) aims at predicting the transmission trajectory of content across online communities. A successful solution to CLIPP holds significance as it facilitates the distribution of valuable information to a larger audience and prevents the proliferation of misinformation. Notably, solving CLIPP is non-trivial as inter-community relationships and influence are unknown, information spread is multi-modal, and new content and new communities appear over time. In this work, we address CLIPP by collecting large-scale, multi-modal datasets to examine the diffusion of online YouTube videos on Reddit. We analyze these datasets to construct community influence graphs (CIGs) and develop a novel dynamic graph framework, INPAC (Information Pathway Across Online Communities), which incorporates CIGs to capture the temporal variability and multi-modal nature of video propagation across communities. Experimental results in both warm-start and cold-start scenarios show that INPAC outperforms seven baselines in CLIPP.
labels: cs.SI, cs.CY
__index_level_0__: 370,822

1702.07619
Fast and robust curve skeletonization for real-world elongated objects
We consider the problem of extracting curve skeletons of three-dimensional, elongated objects given a noisy surface, which has applications in agricultural contexts such as extracting the branching structure of plants. We describe an efficient and robust method based on breadth-first search that can determine curve skeletons in these contexts. Our approach is capable of automatically detecting junction points as well as spurious segments and loops. All of that is accomplished with only one user-adjustable parameter. The run time of our method ranges from hundreds of milliseconds to less than four seconds on large, challenging datasets, which makes it appropriate for situations where real-time decision making is needed. Experiments on synthetic models as well as on data from real world objects, some of which were collected in challenging field conditions, show that our approach compares favorably to classical thinning algorithms as well as to recent contributions to the field.
labels: cs.CV, Other
__index_level_0__: 68,811

2407.02538
CGRclust: Chaos Game Representation for Twin Contrastive Clustering of Unlabelled DNA Sequences
This study proposes CGRclust, a novel combination of unsupervised twin contrastive clustering of Chaos Game Representations (CGR) of DNA sequences with convolutional neural networks (CNNs). To the best of our knowledge, CGRclust is the first method to use unsupervised learning for image classification (herein applied to two-dimensional CGR images) for clustering datasets of DNA sequences. CGRclust overcomes the limitations of traditional sequence classification methods by leveraging unsupervised twin contrastive learning to detect distinctive sequence patterns, without requiring DNA sequence alignment or biological/taxonomic labels. CGRclust accurately clustered twenty-five diverse datasets, with sequence lengths ranging from 664 bp to 100 kbp, including mitochondrial genomes of fish, fungi, and protists, as well as viral whole genome assemblies and synthetic DNA sequences. Compared with three recent clustering methods for DNA sequences (DeLUCS, iDeLUCS, and MeShClust v3.0), CGRclust is the only method that surpasses 81.70% accuracy across all four taxonomic levels tested for mitochondrial DNA genomes of fish. Moreover, CGRclust also consistently demonstrates superior performance across all the viral genomic datasets. The high clustering accuracy of CGRclust on these twenty-five datasets, which vary significantly in terms of sequence length, number of genomes, number of clusters, and level of taxonomy, demonstrates its robustness, scalability, and versatility.
labels: cs.LG
__index_level_0__: 469,784

2111.04497
An Approach for Combining Multimodal Fusion and Neural Architecture Search Applied to Knowledge Tracing
Knowledge Tracing is the process of tracking mastery level of different skills of students for a given learning domain. It is one of the key components for building adaptive learning systems and has been investigated for decades. In parallel with the success of deep neural networks in other fields, we have seen researchers take similar approaches in the learning science community. However, most existing deep learning based knowledge tracing models either: (1) only use the correct/incorrect response (ignoring useful information from other modalities) or (2) design their network architectures through domain expertise via trial and error. In this paper, we propose a sequential model based optimization approach that combines multimodal fusion and neural architecture search within one framework. The commonly used neural architecture search technique could be considered as a special case of our proposed approach when there is only one modality involved. We further propose to use a new metric called time-weighted Area Under the Curve (weighted AUC) to measure how a sequence model performs with time. We evaluate our methods on two public real datasets showing the discovered model is able to achieve superior performance. Unlike most existing works, we conduct McNemar's test on the model predictions and the results are statistically significant.
labels: cs.LG, cs.CY
__index_level_0__: 265,500

2411.00783
From chalkboards to chatbots: SELAR assists teachers in embracing AI in the curriculum
This paper introduces SELAR, a framework designed to help teachers effectively integrate artificial intelligence (AI) into their curriculum. The framework was designed through workshops organized to gather lecturers' feedback. In this paper, we assess the effectiveness of the framework through additional workshops with lecturers from The Hague University of Applied Sciences. The workshops tested the application of the framework to adapting existing courses to leverage generative AI technology. Each participant was tasked with applying SELAR to one of their learning goals in order to evaluate the potential for AI integration and, if successful, to update the teaching methods accordingly. Findings show that teachers were able to use SELAR effectively to integrate generative AI into their courses. Future work will focus on providing additional guidance and examples for using the framework more effectively.
labels: cs.HC, cs.AI, cs.CY
__index_level_0__: 504,744

2203.03564
TIGGER: Scalable Generative Modelling for Temporal Interaction Graphs
There has been a recent surge in learning generative models for graphs. While impressive progress has been made on static graphs, work on generative modeling of temporal graphs is at a nascent stage with significant scope for improvement. First, existing generative models do not scale with either the time horizon or the number of nodes. Second, existing techniques are transductive in nature and thus do not facilitate knowledge transfer. Finally, due to relying on one-to-one node mapping from source to the generated graph, existing models leak node identity information and do not allow up-scaling/down-scaling the source graph size. In this paper, we bridge these gaps with a novel generative model called TIGGER. TIGGER derives its power through a combination of temporal point processes with auto-regressive modeling enabling both transductive and inductive variants. Through extensive experiments on real datasets, we establish TIGGER generates graphs of superior fidelity, while also being up to 3 orders of magnitude faster than the state-of-the-art.
labels: cs.SI, cs.AI, cs.IR, cs.LG
__index_level_0__: 284,135

2410.20965
Simultaneous Unlearning of Multiple Protected User Attributes From Variational Autoencoder Recommenders Using Adversarial Training
In widely used neural network-based collaborative filtering models, users' history logs are encoded into latent embeddings that represent the users' preferences. In this setting, the models are capable of mapping users' protected attributes (e.g., gender or ethnicity) from these user embeddings even without explicit access to them, resulting in models that may treat specific demographic user groups unfairly and raise privacy issues. While prior work has approached the removal of a single protected attribute of a user at a time, multiple attributes might come into play in real-world scenarios. In the work at hand, we present AdvXMultVAE which aims to unlearn multiple protected attributes (exemplified by gender and age) simultaneously to improve fairness across demographic user groups. For this purpose, we couple a variational autoencoder (VAE) architecture with adversarial training (AdvMultVAE) to support simultaneous removal of the users' protected attributes with continuous and/or categorical values. Our experiments on two datasets, LFM-2b-100k and Ml-1m, from the music and movie domains, respectively, show that our approach can yield better results than its singular removal counterparts (based on AdvMultVAE) in effectively mitigating demographic biases whilst improving the anonymity of latent embeddings.
labels: cs.IR, cs.LG
__index_level_0__: 503,030

1905.02850
Understanding Attention and Generalization in Graph Neural Networks
We aim to better understand attention over nodes in graph neural networks (GNNs) and identify factors influencing its effectiveness. We particularly focus on the ability of attention GNNs to generalize to larger, more complex or noisy graphs. Motivated by insights from the work on Graph Isomorphism Networks, we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but under certain conditions it provides an exceptional gain in performance of more than 60% in some of our classification tasks. Satisfying these conditions in practice is challenging and often requires optimal initialization or supervised training of attention. We propose an alternative recipe and train attention in a weakly-supervised fashion that approaches the performance of supervised models, and, compared to unsupervised models, improves results on several synthetic as well as real datasets. Source code and datasets are available at https://github.com/bknyaz/graph_attention_pool.
labels: cs.AI, cs.LG
__index_level_0__: 130,063

2003.13120
Defect segmentation: Mapping tunnel lining internal defects with ground penetrating radar data using a convolutional neural network
This research proposes a Ground Penetrating Radar (GPR) data processing method for non-destructive detection of tunnel lining internal defects, called defect segmentation. To perform this critical step of automatic tunnel lining detection, the method uses a CNN called SegNet combined with the Lov\'asz softmax loss function to map the internal defect structure with GPR synthetic data, which improves the accuracy, automation and efficiency of defect detection. The novel method we present overcomes several difficulties of traditional GPR data interpretation, as demonstrated by an evaluation on both synthetic and real data -- to verify the method on real data, a test model containing a known defect was designed and built, and GPR data was obtained and analyzed.
labels: cs.LG, cs.CV
__index_level_0__: 170,113

2303.07558
Mitigating the Impact of Uncertain Wildfire Risk on Power Grids through Topology Control
Wildfires pose a significant threat to the safe and reliable operation of the electric grid. To mitigate wildfire risk, system operators resort to public safety power shutoffs (PSPS), which shed load for a subset of customers. As wildfire risk forecasts are stochastic, such decision-making may often be sub-optimal. This paper proposes a two-stage topology control problem that jointly minimizes generation and load-shedding costs in the face of uncertain fire risk. Compared to existing work, we include pre- and post-event topology control actions and consider scenarios where the wildfire risk is known with low and high confidence. The effectiveness of the proposed approach is demonstrated on a benchmark test system, artificially geo-located in Southern California, using stochastic wildfire risk data that exists in the literature. Our work provides a crucial study of the comparative benefits of pre-event versus post-event control and of the effects of wildfire risk accuracy on each control strategy.
labels: cs.SY
__index_level_0__: 351,292

2310.16804
Learning COVID-19 Regional Transmission Using Universal Differential Equations in a SIR model
Highly interconnected societies make it difficult to model the spread of infectious diseases such as COVID-19. Single-region SIR models fail to account for incoming forces of infection, and expanding them to a large number of interacting regions involves many assumptions that do not hold in the real world. We propose using Universal Differential Equations (UDEs) to capture the influence of neighboring regions and improve the model's predictions in a combined SIR+UDE model. UDEs are differential equations totally or partially defined by a deep neural network (DNN). We include in the SIR equations an additive term, composed of a DNN, that learns the incoming force of infection from the other regions. The learning is performed using automatic differentiation and gradient descent to approximate the change in the target system caused by the state of the neighboring regions. We compared the proposed model, on a simulated COVID-19 outbreak, against a single-region SIR model and a fully data-driven model composed only of a DNN. The proposed SIR+UDE model generates predictions that capture the outbreak dynamics more accurately, although a decay in performance is observed in the last stages of the outbreak. The single-region SIR model and the fully data-driven approach do not capture the proper dynamics accurately. Once the predictions were obtained, we employed the SINDy algorithm to substitute the DNN with a regression, removing the black-box element of the model with no considerable increase in the error levels.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
402,883
1909.05742
Rethinking the CSC Model for Natural Images
Sparse representation with respect to an overcomplete dictionary is often used when regularizing inverse problems in signal and image processing. In recent years, the Convolutional Sparse Coding (CSC) model, in which the dictionary consists of shift-invariant filters, has gained renewed interest. While this model has been successfully used in some image processing problems, it still falls behind traditional patch-based methods on simple tasks such as denoising. In this work we provide new insights regarding the CSC model and its capability to represent natural images, and suggest a Bayesian connection between this model and its patch-based ancestor. Armed with these observations, we suggest a novel feed-forward network that follows an MMSE approximation process to the CSC model, using strided convolutions. The performance of this supervised architecture is shown to be on par with state of the art methods while using much fewer parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
145,193
2204.12160
A Reduced Order Model for Joint Assemblies by Hyper-Reduction and Model-Driven Sampling
The dynamic behavior of jointed assemblies exhibiting friction nonlinearities features amplitude-dependent dissipation and stiffness. To develop numerical simulations for predictive and design purposes, macro-scale High Fidelity Models (HFMs) of the contact interfaces are required. However, the high computational cost of such HFMs impedes the feasibility of the simulations. To this end, we propose a model-driven method for constructing hyper-reduced order models of such assemblies. Focusing on steady-state analysis, we use the Multi-Harmonic Balance Method (MHBM) to formulate the equations of motion in the frequency domain. The reduction basis is constructed through solving a set of vibration problems corresponding to fictitious interface conditions. Subsequently, a Galerkin projection reduces the order of the model. Nonetheless, the necessary fine discretization of the interfaces represents a bottleneck for achieving high speedups. For this reason, we implement an adapted Energy Conserving Sampling and Weighting (ECSW) technique for Hyper Reduction (HR), thereby allowing significant speedups for meshes of arbitrary fineness. This feature is particularly advantageous since analysts typically encounter a trade-off between accuracy and computational cost when deciding on the mesh size, whose estimation is particularly challenging for problems of this type. To assess the accuracy of our method without resorting to the HF solution, we propose an error indicator with thresholds that have proven reliable in our analyses. Finally, the accuracy and efficiency of the method are demonstrated by two case studies.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
293,394
2011.02177
Diversity Aware Relevance Learning for Argument Search
In this work, we focus on the problem of retrieving relevant arguments for a query claim covering diverse aspects. State-of-the-art methods rely on explicit mappings between claims and premises, and thus are unable to utilize large available collections of premises without laborious and costly manual annotation. Their diversity approach relies on removing duplicates via clustering which does not directly ensure that the selected premises cover all aspects. This work introduces a new multi-step approach for the argument retrieval problem. Rather than relying on ground-truth assignments, our approach employs a machine learning model to capture semantic relationships between arguments. Beyond that, it aims to cover diverse facets of the query, instead of trying to identify duplicates explicitly. Our empirical evaluation demonstrates that our approach leads to a significant improvement in the argument retrieval task even though it requires less data.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
204,851
1804.03943
VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning
In this paper, we propose a novel virtual reality image quality assessment (VR IQA) method with adversarial learning for omnidirectional images. To take into account the characteristics of the omnidirectional image, we devise deep networks including a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and position features. The proposed human perception guider criticizes the predicted quality score of the predictor against the human perceptual score using adversarial learning. For evaluation, we conducted extensive subjective experiments with an omnidirectional image dataset. Experimental results show that the proposed VR IQA metric outperforms 2-D IQA and state-of-the-art VR IQA methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,727
1803.03317
Analysis of Hand Segmentation in the Wild
A large number of works in egocentric vision have concentrated on action and object recognition. Detection and segmentation of hands in first-person videos, however, has been explored less. For many applications in this domain, it is necessary to accurately segment not only the hands of the camera wearer but also the hands of others with whom they are interacting. Here, we take an in-depth look at the hand segmentation problem. In the quest for robust hand segmentation methods, we evaluated the performance of state-of-the-art semantic segmentation methods, off the shelf and fine-tuned, on existing datasets. We fine-tune RefineNet, a leading semantic segmentation method, for hand segmentation and find that it does much better than the best contenders. Existing hand segmentation datasets are collected in laboratory settings. To overcome this limitation, we contribute by collecting two new datasets: a) EgoYouTubeHands, including egocentric videos containing hands in the wild, and b) HandOverFace, to analyze the performance of our models in the presence of similar-appearance occlusions. We further explore whether conditional random fields can help refine generated hand segmentations. To demonstrate the benefit of accurate hand maps, we train a CNN for hand-based activity recognition and achieve higher accuracy when the CNN is trained using hand maps produced by the fine-tuned RefineNet. Finally, we annotate a subset of the EgoHands dataset for fine-grained action recognition and show that an accuracy of 58.6% can be achieved by just looking at a single hand pose, which is much better than the chance level (12.5%).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
92,223
1004.2102
Distributed anonymous discrete function computation
We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes. In this model, each node has bounded computation and storage capabilities that do not grow with the network size. Furthermore, each node only knows its neighbors, not the entire graph. Our goal is to characterize the class of functions that can be computed within this model. In our main result, we provide a necessary condition for computability which we show to be nearly sufficient, in the sense that every function that satisfies this condition can at least be approximated. The problem of computing suitably rounded averages in a distributed manner plays a central role in our development; we provide an algorithm that solves it in time that grows quadratically with the size of the network.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
6,152
1906.02929
Coding Theorems for Asynchronous Slepian-Wolf Coding Systems
The Slepian-Wolf (SW) coding system is a source coding system with two encoders and a decoder, where these encoders independently encode source sequences from two correlated sources into codewords, and the decoder reconstructs both source sequences from the codewords. In this paper, we consider the situation in which the SW coding system is asynchronous, i.e., each encoder samples a source sequence with some unknown delay. We assume that delays are unknown but maximum and minimum values of possible delays are known to encoders and the decoder. We also assume that sources are discrete stationary memoryless and the probability mass function (PMF) of the sources is unknown but the system knows that it belongs to a certain set of PMFs. For this asynchronous SW coding system, we clarify the achievable rate region which is the set of rate pairs of encoders such that the decoding error probability vanishes as the blocklength tends to infinity. We show that this region does not always coincide with that of the synchronous SW coding system in which each encoder samples a source sequence without any delay.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
134,231
2104.01856
Direction-Based Jamming Detection and Suppression in mmWave Massive MIMO Networks
In this paper, we study the problem of physical layer security in the uplink of millimeter-wave massive multiple-input multiple-output (MIMO) networks and propose a jamming detection and suppression method. The proposed method is based on directional information of the received signals at the base station antenna array. The proposed jamming detection method can accurately detect both the existence and the direction of the jammer using the received pilot signals in the training phase. The obtained information is then exploited to develop a channel estimator that excludes the jammer's angular subspace from received training signals. The estimated channel information is then used for designing a combiner at the base station that is able to effectively cancel out the deliberate interference of the jammer. By numerical simulations, we evaluate the performance of the proposed jamming detection method in terms of correct detection probability and false alarm probability and show its effectiveness when the jammer's power is substantially lower than the user's power. Also, our results show that the proposed jamming suppression method can achieve a spectral efficiency very close to the case of no jamming in the network.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
228,505
1906.05201
Task Agnostic Continual Learning via Meta Learning
While neural networks are powerful function approximators, they suffer from catastrophic forgetting when the data distribution is not stationary. One particular formalism that studies learning under non-stationary distributions is provided by continual learning, where the non-stationarity is imposed by a sequence of distinct tasks. Most methods in this space assume, however, knowledge of task boundaries, and focus on alleviating catastrophic forgetting. In this work, we depart from this view and move the focus towards faster remembering -- i.e., measuring how quickly the network recovers performance rather than measuring the network's performance without any adaptation. We argue that in many settings this can be more effective and that it opens the door to combining meta-learning and continual learning techniques, leveraging their complementary advantages. We propose a framework specific to the scenario where no information about task boundaries or task identity is given. It relies on a separation of concerns into what task is being solved and how the task should be solved. This framework is implemented by differentiating task-specific parameters from task-agnostic parameters, where the latter are optimized in a continual meta-learning fashion, without access to multiple tasks at the same time. We showcase this framework in a supervised learning scenario and discuss the implications of the proposed formalism.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
134,953
2304.13672
FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain Adaptation of Medical Image Segmentation
Medical image segmentation methods normally perform poorly when there is a domain shift between training and testing data. Unsupervised Domain Adaptation (UDA) addresses the domain shift problem by training the model using both labeled data from the source domain and unlabeled data from the target domain. Source-Free UDA (SFUDA) was recently proposed for UDA without requiring the source data during the adaptation, due to data privacy or data transmission issues, and normally adapts the pre-trained deep model in the testing stage. However, in real clinical scenarios of medical image segmentation, the trained model is normally frozen in the testing stage. In this paper, we propose Fourier Visual Prompting (FVP) for SFUDA of medical image segmentation. Inspired by prompt learning in natural language processing, FVP steers the frozen pre-trained model to perform well in the target domain by adding a visual prompt to the input target data. In FVP, the visual prompt is parameterized using only a small amount of low-frequency learnable parameters in the input frequency space, and is learned by minimizing the segmentation loss between the predicted segmentation of the prompted target image and the reliable pseudo segmentation label of the target image under the frozen model. To our knowledge, FVP is the first work to apply visual prompts to SFUDA for medical image segmentation. The proposed FVP is validated using three public datasets, and experiments demonstrate that FVP yields better segmentation results compared with various existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
360,663
1704.06962
Coherent multiple-antenna block-fading channels at finite blocklength
In this paper we consider a channel model that is often used to describe the mobile wireless scenario: multiple-antenna additive white Gaussian noise channels subject to random (fading) gain with full channel state information at the receiver. Dynamics of the fading process are approximated by a piecewise-constant process (frequency non-selective isotropic block fading). This work addresses the finite blocklength fundamental limits of this channel model. Specifically, we give a formula for the channel dispersion -- a quantity governing the delay required to achieve capacity. Multiplicative nature of the fading disturbance leads to a number of interesting technical difficulties that required us to enhance traditional methods for finding channel dispersion. Alas, one difficulty remains: the converse (impossibility) part of our result holds under an extra constraint on the growth of the peak-power with blocklength. Our results demonstrate, for example, that while capacities of $n_t\times n_r$ and $n_r \times n_t$ antenna configurations coincide (under fixed received power), the coding delay can be quite sensitive to this switch. For example, at the received SNR of $20$ dB the $16\times 100$ system achieves capacity with codes of length (delay) which is only $60\%$ of the length required for the $100\times 16$ system. Another interesting implication is that for the MISO channel, the dispersion-optimal coding schemes require employing orthogonal designs such as Alamouti's scheme -- a surprising observation considering the fact that Alamouti's scheme was designed for reducing demodulation errors, not improving coding rate. Finding these dispersion-optimal coding schemes naturally gives a criteria for producing orthogonal design-like inputs in dimensions where orthogonal designs do not exist.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
72,267
2303.05737
Clinical BERTScore: An Improved Measure of Automatic Speech Recognition Performance in Clinical Settings
Automatic Speech Recognition (ASR) in medical contexts has the potential to save time, cut costs, increase report accuracy, and reduce physician burnout. However, the healthcare industry has been slower to adopt this technology, in part due to the importance of avoiding medically-relevant transcription mistakes. In this work, we present the Clinical BERTScore (CBERTScore), an ASR metric that penalizes clinically-relevant mistakes more than others. We demonstrate that this metric more closely aligns with clinician preferences on medical sentences as compared to other metrics (WER, BLEU, METEOR, etc.), sometimes by wide margins. We collect a benchmark of 18 clinician preferences on 149 realistic medical sentences called the Clinician Transcript Preference benchmark (CTP) and make it publicly available for the community to further develop clinically-aware ASR metrics. To our knowledge, this is the first public dataset of its kind. We demonstrate that CBERTScore more closely matches what clinicians prefer.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
350,581
1302.4928
Graphical Models for Preference and Utility
Probabilistic independence can dramatically simplify the task of eliciting, representing, and computing with probabilities in large domains. A key technique in achieving these benefits is the idea of graphical modeling. We survey existing notions of independence for utility functions in a multi-attribute space, and suggest that these can be used to achieve similar advantages. Our new results concern conditional additive independence, which we show always has a perfect representation as separation in an undirected graph (a Markov network). Conditional additive independencies entail a particular functional for the utility function that is analogous to a product decomposition of a probability function, and confers analogous benefits. This functional form has been utilized in the Bayesian network and influence diagram literature, but generally without an explanation in terms of independence. The functional form yields a decomposition of the utility function that can greatly speed up expected utility calculations, particularly when the utility graph has a similar topology to the probabilistic network being used.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
22,202
2004.14649
Capsule-Transformer for Neural Machine Translation
Transformer hugely benefits from its key design of the multi-head self-attention network (SAN), which extracts information from various perspectives by transforming the given input into different subspaces. However, its simple linear transformation aggregation strategy may still fail to fully capture deeper contextualized information. In this paper, we thus propose the capsule-Transformer, which extends the linear transformation into a more general capsule routing algorithm by taking SAN as a special case of a capsule network. As a result, the capsule-Transformer is capable of obtaining a better attention distribution representation of the input sequence via information aggregation among different heads and words. Specifically, we see groups of attention weights in SAN as low-layer capsules. By applying the iterative capsule routing algorithm, they can be further aggregated into high-layer capsules which contain deeper contextualized information. Experimental results on widely-used machine translation datasets show our proposed capsule-Transformer outperforms a strong Transformer baseline significantly.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
174,965
1802.02229
Axiomatic Foundations and Algorithms for Deciding Semantic Equivalences of SQL Queries
Deciding the equivalence of SQL queries is a fundamental problem in data management. As prior work has mainly focused on studying the theoretical limitations of the problem, very few implementations for checking such equivalences exist. In this paper, we present a new formalism and implementation for reasoning about the equivalences of SQL queries. Our formalism, U-semiring, extends SQL's semiring semantics with unbounded summation and duplicate elimination. U-semiring is defined using only a few axioms and can thus be easily implemented using proof assistants such as Coq for automated query reasoning. Yet, the axioms are sufficient to enable us to reason about sophisticated SQL queries that are evaluated over bags and sets, along with various integrity constraints. To evaluate the effectiveness of U-semiring, we have used it to formally verify 39 query rewrite rules from both classical data management research papers and real-world SQL engines, many of which have never been proven correct before.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
89,739
2005.00147
Interpretable Entity Representations through Large-Scale Typing
In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model's decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
175,141
2208.04620
Cascade-based Echo Chamber Detection
Although echo chambers in social media have been under considerable scrutiny, general models for their detection and analysis are missing. In this work, we aim to fill this gap by proposing a probabilistic generative model that explains social media footprints -- i.e., social network structure and propagations of information -- through a set of latent communities, characterized by a degree of echo-chamber behavior and by an opinion polarity. Specifically, echo chambers are modeled as communities that are permeable to pieces of information with similar ideological polarity, and impermeable to information of opposed leaning: this allows discriminating echo chambers from communities that lack a clear ideological alignment. To learn the model parameters we propose a scalable, stochastic adaptation of the Generalized Expectation Maximization algorithm, which optimizes the joint likelihood of observing social connections and information propagation. Experiments on synthetic data show that our algorithm is able to correctly reconstruct ground-truth latent communities with their degree of echo-chamber behavior and opinion polarity. Experiments on real-world data about polarized social and political debates, such as the Brexit referendum or the COVID-19 vaccine campaign, confirm the effectiveness of our proposal in detecting echo chambers. Finally, we show how our model can improve accuracy in auxiliary predictive tasks, such as stance detection and prediction of future propagations.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
312,177
1706.09553
Transforming Musical Signals through a Genre Classifying Convolutional Neural Network
Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
76,153
2104.07235
Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification
Developing a robust algorithm to diagnose and quantify the severity of COVID-19 using Chest X-ray (CXR) requires a large number of well-curated COVID-19 datasets, which is difficult to collect under the global COVID-19 pandemic. On the other hand, CXR data with other findings are abundant. This situation is ideally suited for the Vision Transformer (ViT) architecture, where a lot of unlabeled data can be used through structural modeling by the self-attention mechanism. However, the use of existing ViT is not optimal, since feature embedding through direct patch flattening or ResNet backbone in the standard ViT is not intended for CXR. To address this problem, here we propose a novel Vision Transformer that utilizes low-level CXR feature corpus obtained from a backbone network that extracts common CXR findings. Specifically, the backbone network is first trained with large public datasets to detect common abnormal findings such as consolidation, opacity, edema, etc. Then, the embedded features from the backbone network are used as corpora for a Transformer model for the diagnosis and the severity quantification of COVID-19. We evaluate our model on various external test datasets from totally different institutions to evaluate the generalization capability. The experimental results confirm that our model can achieve the state-of-the-art performance in both diagnosis and severity quantification tasks with superior generalization capability, which are sine qua non of widespread deployment.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
230,345
2311.11592
Predicting urban tree cover from incomplete point labels and limited background information
Trees inside cities are important for the urban microclimate, contributing positively to the physical and mental health of urban dwellers. Despite their importance, often only limited information about city trees is available. Therefore, in this paper, we propose a method for mapping urban trees in high-resolution aerial imagery using limited datasets and deep learning. Deep learning has become best practice for this task; however, existing approaches rely on large and accurately labelled training datasets, which can be difficult and expensive to obtain. Often, though, noisy and incomplete data may be available that can be combined and utilized to solve more difficult tasks than those datasets were intended for. This paper studies how to combine accurate point labels of urban trees along streets with crowd-sourced annotations from an open geographic database to delineate city trees in remote sensing images, a task which is challenging even for humans. To that end, we perform semantic segmentation of very high resolution aerial imagery using a fully convolutional neural network. The main challenge is that our segmentation maps are sparsely annotated and incomplete. Small areas around the point labels of the street trees, coming from official and crowd-sourced data, are marked as the foreground class. Crowd-sourced annotations of streets, buildings, etc. define the background class. Since the tree data is incomplete, we introduce a masking step to avoid class confusion. Our experiments in Hamburg, Germany, showed that the system is able to produce tree cover maps, not limited to trees along streets, without providing tree delineations. We evaluated the method on manually labelled trees and show that performance deteriorates drastically if the open geographic database is not used.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,008
2202.02880
Continuous-Time Channel Gain Control for Minimum-Information Kalman-Bucy Filtering
We consider the problem of estimating a continuous-time Gauss-Markov source process observed through a vector Gaussian channel with an adjustable channel gain matrix. For a given (generally time-varying) channel gain matrix, we provide formulas to compute (i) the mean-square estimation error attainable by the classical Kalman-Bucy filter, and (ii) the mutual information between the source process and its Kalman-Bucy estimate. We then formulate a novel "optimal channel gain control problem" where the objective is to control the channel gain matrix strategically to minimize the weighted sum of these two performance metrics. To develop insights into the optimal solution, we first consider the problem of controlling a time-varying channel gain over a finite time interval. A necessary optimality condition is derived based on Pontryagin's minimum principle. For a scalar system, we show that the optimal channel gain is a piece-wise constant signal with at most two switches. We also consider the problem of designing the optimal time-invariant gain to minimize the average cost over an infinite time horizon. A novel semidefinite programming (SDP) heuristic is proposed and the exactness of the solution is discussed.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
278,983
2408.02348
Earth System Data Cubes: Avenues for advancing Earth system research
Recent advancements in Earth system science have been marked by the exponential increase in the availability of diverse, multivariate datasets characterised by moderate to high spatio-temporal resolutions. Earth System Data Cubes (ESDCs) have emerged as one suitable solution for transforming this flood of data into a simple yet robust data structure. ESDCs achieve this by organising data into an analysis-ready format aligned with a spatio-temporal grid, facilitating user-friendly analysis and diminishing the need for extensive technical data processing knowledge. Despite these significant benefits, the completion of the entire ESDC life cycle remains a challenging task. Obstacles are not only of a technical nature but also relate to domain-specific problems in Earth system research. There exist barriers to realising the full potential of data collections in light of novel cloud-based technologies, particularly in curating data tailored for specific application domains. These include transforming data to conform to a spatio-temporal grid with minimum distortions and managing complexities such as spatio-temporal autocorrelation issues. Addressing these challenges is pivotal for the effective application of Artificial Intelligence (AI) approaches. Furthermore, adhering to open science principles for data dissemination, reproducibility, visualisation, and reuse is crucial for fostering sustainable research. Overcoming these challenges offers a substantial opportunity to advance data-driven Earth system research, unlocking the full potential of an integrated, multidimensional view of Earth system processes. This is particularly true when such research is coupled with innovative research paradigms and technological progress.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
478,607
1511.07237
Predicting Relevance based on Assessor Disagreement: Analysis and Practical Applications for Search Evaluation
Evaluation of search engines relies on assessments of search results for selected test queries, from which we would ideally like to draw conclusions in terms of the relevance of the results for general (e.g., future, unknown) users. In practice, however, most evaluation scenarios only allow us to conclusively determine the relevance towards the particular assessor that provided the judgments. A factor that cannot be ignored when extending conclusions made from assessors towards users is the possible disagreement on relevance, assuming that a single gold-truth label does not exist. This paper presents and analyzes the Predicted Relevance Model (PRM), which allows predicting a particular result's relevance for a random user, based on an observed assessment and knowledge of the average disagreement between assessors. With the PRM, existing evaluation metrics designed to measure binary assessor relevance can be transformed into more robust and effectively graded measures that evaluate relevance towards a random user. It also leads to a principled way of quantifying multiple graded or categorical relevance levels for use as gains in established graded relevance measures, such as normalized discounted cumulative gain (nDCG), which nowadays often use heuristic and data-independent gain values. Given a set of test topics with graded relevance judgments, the PRM allows evaluating systems on different scenarios, such as their capability of retrieving top results, or how well they are able to filter out non-relevant ones. Its use in actual evaluation scenarios is illustrated on several information retrieval test collections.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
49,399
2502.11094
SyncSpeech: Low-Latency and Efficient Dual-Stream Text-to-Speech based on Temporal Masked Transformer
This paper presents a dual-stream text-to-speech (TTS) model, SyncSpeech, capable of receiving streaming text input from upstream models while simultaneously generating streaming speech, facilitating seamless interaction with large language models. SyncSpeech has the following advantages: low latency, as it begins generating streaming speech upon receiving the second text token; and high efficiency, as it decodes all speech tokens corresponding to each arriving text token in one step. To achieve this, we propose a temporal masked transformer as the backbone of SyncSpeech, combined with token-level duration prediction to predict speech tokens and the duration for the next step. Additionally, we design a two-stage training strategy to improve training efficiency and the quality of the generated speech. We evaluated SyncSpeech on both English and Mandarin datasets. Compared to recent dual-stream TTS models, SyncSpeech significantly reduces the first-packet delay of speech tokens and accelerates the real-time factor. Moreover, with the same data scale, SyncSpeech achieves performance comparable to that of traditional autoregressive TTS models in terms of both speech quality and robustness. Speech samples are available at https://SyncSpeech.github.io/.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
534,187
2207.02307
Variational energy based XPINNs for phase field analysis in brittle fracture
Modeling fracture is computationally expensive, even in computational simulations of two-dimensional problems. Hence, scaling up the available approaches to be directly applied to large components or systems crucial for real applications becomes challenging. In this work, we propose a domain decomposition framework for variational physics-informed neural networks to accurately approximate the crack path defined using the phase field approach. We show that coupling domain decomposition and adaptive refinement schemes permits focusing the numerical effort where it is most needed: around the zones where the crack propagates. No a priori knowledge of the damage pattern is required. The ability to use numerous deep or shallow neural networks in the smaller subdomains gives the proposed method the ability to be parallelized. Additionally, the framework is integrated with adaptive non-linear activation functions, which enhance the learning ability of the networks and result in faster convergence. The efficiency of the proposed approach is demonstrated numerically with three examples relevant to engineering fracture mechanics. Upon acceptance of the manuscript, all the codes associated with it will be made available on GitHub.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
306,472
2410.15320
Amortized Probabilistic Conditioning for Optimization, Simulation and Inference
Amortized meta-learning methods based on pre-training have propelled fields like natural language processing and vision. Transformer-based neural processes and their variants are leading models for probabilistic meta-learning with a tractable objective. Often trained on synthetic data, these models implicitly capture essential latent information in the data-generation process. However, existing methods do not allow users to flexibly inject (condition on) and extract (predict) this probabilistic latent information at runtime, which is key to many tasks. We introduce the Amortized Conditioning Engine (ACE), a new transformer-based meta-learning model that explicitly represents latent variables of interest. ACE affords conditioning on both observed data and interpretable latent variables, the inclusion of priors at runtime, and outputs predictive distributions for discrete and continuous data and latents. We show ACE's modeling flexibility and performance in diverse tasks such as image completion and classification, Bayesian optimization, and simulation-based inference.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,482
1609.03552
Generative Visual Manipulation on the Natural Image Manifold
Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to "fall off" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on a user's scribbles.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
60,905
2010.09768
Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research
Social media has shaken the foundations of our society, unlikely as it may seem. Many of the popular tools used to moderate harmful digital content, however, have received widespread criticism from both the academic community and the public sphere for middling performance and lack of accountability. Though social media research is thought to center primarily on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators when different modalities of information are provided and discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we discuss the consequences of those biases through auditing the performance of a toxicity detector called Perspective API on the language of Twitter users across a variety of demographic categories.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
201,652
2210.17238
Pneg: Prompt-based Negative Response Generation for Dialogue Response Selection Task
In retrieval-based dialogue systems, a response selection model acts as a ranker to select the most appropriate response among several candidates. However, such selection models tend to rely on context-response content similarity, which makes models vulnerable to adversarial responses that are semantically similar but not relevant to the dialogue context. Recent studies have shown that leveraging these adversarial responses as negative training samples is useful for improving the discriminating power of the selection model. Nevertheless, collecting human-written adversarial responses is expensive, and existing synthesizing methods often have limited scalability. To overcome these limitations, this paper proposes a simple but efficient method for generating adversarial negative responses leveraging a large-scale language model. Experimental results on dialogue selection tasks show that our method outperforms other methods of synthesizing adversarial negative responses. These results suggest that our method can be an effective alternative to human annotators in generating adversarial responses. Our dataset and generation code are available at https://github.com/leenw23/generating-negatives-by-gpt3.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
327,617
2109.13910
Online Object Model Reconstruction and Reuse for Lifelong Improvement of Robot Manipulation
This work proposes a robotic pipeline for picking and constrained placement of objects without geometric shape priors. Compared to recent efforts developed for similar tasks, where every object was assumed to be novel, the proposed system recognizes previously manipulated objects and performs online model reconstruction and reuse. Over a lifelong manipulation process, the system keeps learning features of objects it has interacted with and updates their reconstructed models. Whenever an instance of a previously manipulated object reappears, the system aims to first recognize it and then register its previously reconstructed model given the current observation. This step greatly reduces object shape uncertainty allowing the system to even reason for parts of objects, which are currently not observable. This also results in better manipulation efficiency as it reduces the need for active perception of the target object during manipulation. To get a reusable reconstructed model, the proposed pipeline adopts: i) TSDF for object representation, and ii) a variant of the standard particle filter algorithm for pose estimation and tracking of the partial object model. Furthermore, an effective way to construct and maintain a dataset of manipulated objects is presented. A sequence of real-world manipulation experiments is performed. They show how future manipulation tasks become more effective and efficient by reusing reconstructed models of previously manipulated objects, which were generated during their prior manipulation, instead of treating objects as novel every time.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
257,791
2106.09316
Optimized Power Control Design for Over-the-Air Federated Edge Learning
This paper investigates the transmission power control in over-the-air federated edge learning (Air-FEEL) system. Different from conventional power control designs (e.g., to minimize the individual mean squared error (MSE) of the over-the-air aggregation at each round), we consider a new power control design aiming at directly maximizing the convergence speed. Towards this end, we first analyze the convergence behavior of Air-FEEL (in terms of the optimality gap) subject to aggregation errors at different communication rounds. It is revealed that if the aggregation estimates are unbiased, then the training algorithm would converge exactly to the optimal point with mild conditions; while if they are biased, then the algorithm would converge with an error floor determined by the accumulated estimate bias over communication rounds. Next, building upon the convergence results, we optimize the power control to directly minimize the derived optimality gaps under both biased and unbiased aggregations, subject to a set of average and maximum power constraints at individual edge devices. We transform both problems into convex forms, and obtain their structured optimal solutions, both appearing in a form of regularized channel inversion, by using the Lagrangian duality method. Finally, numerical results show that the proposed power control policies achieve significantly faster convergence for Air-FEEL, as compared with benchmark policies with fixed power transmission or conventional MSE minimization.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
241,625
2206.02059
Empowering GNNs via Edge-Aware Weisfeiler-Leman Algorithm
Message passing graph neural networks (GNNs) are known to have their expressiveness upper-bounded by 1-dimensional Weisfeiler-Leman (1-WL) algorithm. To achieve more powerful GNNs, existing attempts either require ad hoc features, or involve operations that incur high time and space complexities. In this work, we propose a general and provably powerful GNN framework that preserves the scalability of the message passing scheme. In particular, we first propose to empower 1-WL for graph isomorphism test by considering edges among neighbors, giving rise to NC-1-WL. The expressiveness of NC-1-WL is shown to be strictly above 1-WL and below 3-WL theoretically. Further, we propose the NC-GNN framework as a differentiable neural version of NC-1-WL. Our simple implementation of NC-GNN is provably as powerful as NC-1-WL. Experiments demonstrate that our NC-GNN performs effectively and efficiently on various benchmarks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
300,727
1802.02813
Archetypal Analysis for Sparse Representation-based Hyperspectral Sub-pixel Quantification
The estimation of land cover fractions from remote sensing images is a frequently used indicator of the environmental quality. This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30m$\times$30m. We use constrained sparse representation, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. We automatically determine the elementary spectra from image reference data using archetypal analysis by simplex volume maximization, and combine it with reversible jump Markov chain Monte Carlo method. In our experiments, the estimation of the automatically derived elementary spectra is compared to the estimation obtained by a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions, $R^2$, and the number of used elementary spectra. The experiments show that a collection of archetypes can be an adequate and efficient alternative to the manually designed spectral library with respect to the mentioned criteria.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,844
2006.06676
Training Generative Adversarial Networks with Limited Data
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
181,521
1412.0436
An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R
This document describes an infra-structure provided by the R package performanceEstimation that allows to estimate the predictive performance of different approaches (workflows) to predictive tasks. The infra-structure is generic in the sense that it can be used to estimate the values of any performance metrics, for any workflow on different predictive tasks, namely, classification, regression and time series tasks. The package also includes several standard workflows that allow users to easily set up their experiments limiting the amount of work and information they need to provide. The overall goal of the infra-structure provided by our package is to facilitate the task of estimating the predictive performance of different modeling approaches to predictive tasks in the R environment.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
38,023
1905.02882
Frame-Recurrent Video Inpainting by Robust Optical Flow Inference
In this paper, we present a new inpainting framework for recovering missing regions of video frames. Compared with image inpainting, performing this task on video presents new challenges, such as how to preserve temporal consistency and spatial details, and how to handle videos of arbitrary size and length quickly and efficiently. Towards this end, we propose a novel deep learning architecture that incorporates ConvLSTM and optical flow for modeling the spatio-temporal consistency in videos. It also saves considerable computational resources, so that our method can handle videos with larger frame sizes and arbitrary length in a streaming fashion in real time. Furthermore, to generate an accurate optical flow from corrupted frames, we propose a robust flow generation module, where two sources of flow are fed in and a flow blending network is trained to fuse them. We conduct extensive experiments to evaluate our method in various scenarios and on different datasets, both qualitatively and quantitatively. The experimental results demonstrate the superiority of our method compared with the state-of-the-art inpainting approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
130,070
2409.05977
Mathematical Formalized Problem Solving and Theorem Proving in Different Fields in Lean 4
Formalizing mathematical proofs using computerized verification languages like Lean 4 has the potential to significantly impact the field of mathematics, as it offers prominent capabilities for advancing mathematical reasoning. However, existing efforts are largely limited to creating formalized versions of proofs from extensive online mathematical corpora, struggling to keep pace with the rapidly evolving nature of mathematics. To bridge the gap between traditional and computerized proof techniques, this paper explores the use of Large Language Models (LLMs) to generate formal proof steps and complete formalized proofs. By converting natural language (NL) mathematical proofs into formalized versions, this work introduces the basic structure and tactics of the Lean 4 language. The goal is to determine how AI can be leveraged to assist the mathematical formalization process and improve its performance. Several examples are provided that demonstrate solving problems using both traditional and Lean 4-based approaches. Ultimately, this paper presents an explanation of the foundations of Lean 4 and comparative analyses of the mathematical formalization process using traditional and AI-augmented techniques. The findings indicate that AI-powered tools have significant potential to accelerate and enhance the formalization of mathematical proofs, paving the way for more efficient and reliable theorem proving for AI for Math in the future.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
486,957
2106.01228
Metaphor Generation with Conceptual Mappings
Generating metaphors is a difficult task as it requires understanding nuanced relationships between abstract concepts. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions. To achieve this, we develop two methods: 1) using FrameNet-based embeddings to learn mappings between domains and applying them at the lexical level (CM-Lex), and 2) deriving source/target pairs to train a controlled seq-to-seq generation model (CM-BART). We assess our methods through automatic and human evaluation for basic metaphoricity and conceptual metaphor presence. We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems, and CM-BART outperforms all other models both in automatic and human evaluations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
238,434
2309.04382
Emergent learning in physical systems as feedback-based aging in a glassy landscape
By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resemble an aging process, where the system relaxes in response to repeated application of the feedback boundary forces in the presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, which is indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems. This physical interpretation suggests that by encoding more detailed information into input and feedback boundary forces, the process of emergent learning can be rather ubiquitous and, thus, serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
390,710
2010.09426
LANNS: A Web-Scale Approximate Nearest Neighbor Lookup System
Nearest neighbor search (NNS) has a wide range of applications in information retrieval, computer vision, machine learning, databases, and other areas. The existing state-of-the-art algorithm for nearest neighbor search, Hierarchical Navigable Small World Networks (HNSW), is unable to scale to large datasets of 100M records in high dimensions. In this paper, we propose LANNS, an end-to-end platform for Approximate Nearest Neighbor Search, which scales to web-scale datasets. Library for Large Scale Approximate Nearest Neighbor Search (LANNS) is deployed in multiple production systems for identifying topK ($100 \leq topK \leq 200$) approximate nearest neighbors with a latency of a few milliseconds per query, high throughput of 2.5k Queries Per Second (QPS) on a single node, on large ($\sim$180M data points) high dimensional (50-2048 dimensional) datasets.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
201,532
2308.09041
A Mathematical Characterization of Minimally Sufficient Robot Brains
This paper addresses the lower limits of encoding and processing the information acquired through interactions between an internal system (robot algorithms or software) and an external system (robot body and its environment) in terms of action and observation histories. Both are modeled as transition systems. We want to know the weakest internal system that is sufficient for achieving passive (filtering) and active (planning) tasks. We introduce the notion of an information transition system for the internal system which is a transition system over a space of information states that reflect a robot's or other observer's perspective based on limited sensing, memory, computation, and actuation. An information transition system is viewed as a filter and a policy or plan is viewed as a function that labels the states of this information transition system. Regardless of whether internal systems are obtained by learning algorithms, planning algorithms, or human insight, we want to know the limits of feasibility for given robot hardware and tasks. We establish, in a general setting, that minimal information transition systems exist up to reasonable equivalence assumptions, and are unique under some general conditions. We then apply the theory to generate new insights into several problems, including optimal sensor fusion/filtering, solving basic planning tasks, and finding minimal representations for modeling a system given input-output relations.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
386,129
1906.11973
Harnessing Fluctuations in Thermodynamic Computing via Time-Reversal Symmetries
We experimentally demonstrate that highly structured distributions of work emerge during even the simple task of erasing a single bit. These are signatures of a refined suite of time-reversal symmetries in distinct functional classes of microscopic trajectories. As a consequence, we introduce a broad family of conditional fluctuation theorems that the component work distributions must satisfy. Since they identify entropy production, the component work distributions encode both the frequency of various mechanisms of success and failure during computing, as well as giving improved estimates of the total irreversibly-dissipated heat. This new diagnostic tool provides strong evidence that thermodynamic computing at the nanoscale can be constructively harnessed. We experimentally verify this functional decomposition and the new class of fluctuation theorems by measuring transitions between flux states in a superconducting circuit.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
136,810
2406.05791
OD-DETR: Online Distillation for Stabilizing Training of Detection Transformer
DEtection TRansformer (DETR) has become a dominant paradigm, mainly due to its common architecture with high accuracy and no post-processing. However, DETR suffers from unstable training dynamics: it consumes more data and epochs to converge compared with CNN-based detectors. This paper aims to stabilize DETR training through online distillation. It utilizes a teacher model, accumulated by Exponential Moving Average (EMA), and distills its knowledge into the online model in the following three aspects. First, the matching relation between object queries and ground-truth (GT) boxes in the teacher is employed to guide the student, so queries within the student are not only assigned labels based on their own predictions but also refer to the matching results from the teacher. Second, the teacher's initial query is given to the online student, and its prediction is directly constrained by the corresponding output from the teacher. Finally, the object queries from the teacher's different decoding stages are used to build auxiliary groups to accelerate convergence. For each GT, the two queries with the least matching costs are selected into this extra group, and they predict the GT box and participate in the optimization. Extensive experiments show that the proposed OD-DETR successfully stabilizes training and significantly increases performance without bringing in more parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
462,292
2308.10529
SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
Large language models (LLMs) have shown impressive ability for open-domain NLP tasks. However, LLMs are sometimes too footloose for natural language understanding (NLU) tasks, which always have restricted output and input formats. Their performance on NLU tasks is highly related to prompts or demonstrations, and they are shown to be poor at performing several representative NLU tasks, such as event extraction and entity typing. To this end, we present SeqGPT, a bilingual (i.e., English and Chinese) open-source autoregressive model specially enhanced for open-domain natural language understanding. We express all NLU tasks with two atomic tasks, which define fixed instructions to restrict the input and output format but remain "open" for arbitrarily varied label sets. The model is first instruction-tuned with extremely fine-grained labeled data synthesized by ChatGPT and then further fine-tuned on 233 different atomic tasks from 152 datasets across various domains. The experimental results show that SeqGPT has decent classification and extraction ability, and is capable of performing language understanding tasks on unseen domains. We also conduct empirical studies on the scaling of data and model size as well as on the transfer across tasks. Our model is accessible at https://github.com/Alibaba-NLP/SeqGPT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
386,774
2210.08490
STAR: Zero-Shot Chinese Character Recognition with Stroke- and Radical-Level Decompositions
Zero-shot Chinese character recognition has attracted rising attention in recent years. Existing methods for this problem are mainly based on either certain low-level stroke-based decomposition or medium-level radical-based decomposition. Considering that the stroke- and radical-level decompositions can provide different levels of information, we propose an effective zero-shot Chinese character recognition method by combining them. The proposed method consists of a training stage and an inference stage. In the training stage, we adopt two similar encoder-decoder models to yield the estimates of stroke and radical encodings, which together with the true encodings are then used to formalize the associated stroke and radical losses for training. A similarity loss is introduced to regularize stroke and radical encoders to yield features of the same characters with high correlation. In the inference stage, two key modules, i.e., the stroke screening module (SSM) and feature matching module (FMM) are introduced to tackle the deterministic and confusing cases respectively. In particular, we introduce an effective stroke rectification scheme in FMM to enlarge the candidate set of characters for final inference. Numerous experiments over three benchmark datasets covering the handwritten, printed artistic and street view scenarios are conducted to demonstrate the effectiveness of the proposed method. Numerical results show that the proposed method outperforms the state-of-the-art methods in both character and radical zero-shot settings, and maintains competitive performance in the traditional seen character setting.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,174
1811.07732
Sensorless Control of the Levitated Ball
One of the most widely studied dynamical systems in nonlinear control theory is the levitated ball. Several full-state feedback controllers that ensure asymptotic regulation of the ball position have been reported in the literature. However, to the best of our knowledge, the design of a stabilizing law measuring only the current and the voltage - so-called sensorless control - is conspicuous by its absence. Besides its unquestionable theoretical interest, the high cost and poor reliability of position sensors for magnetic levitated systems makes the problem of great practical importance. Our main contribution is to provide the first solution to this problem. Instrumental for the development of the theory is the use of parameter estimation-based observers, which combined with the dynamic regressor extension and mixing parameter estimation technique, allow the reconstruction of the magnetic flux. With the knowledge of the latter it is shown that the mechanical coordinates can be estimated with suitably tailored nonlinear observers. Replacing the observed states, in a certainty equivalent manner, with a full information asymptotically stabilizing law completes the sensorless controller design. Simulation results are used to illustrate the performance of the proposed scheme.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
113,853
2010.13285
Malicious Requests Detection with Improved Bidirectional Long Short-term Memory Neural Networks
Detecting and intercepting malicious requests is one of the most widely used defenses against attacks in network security. Most existing detection approaches, including blacklist character matching and machine learning algorithms, have been shown to be vulnerable to sophisticated attacks. To address the above issues, a more general and rigorous detection method is required. In this paper, we formulate the problem of detecting malicious requests as a temporal sequence classification problem, and propose a novel deep learning model, namely Convolutional Neural Network-Bidirectional Long Short-term Memory-Convolutional Neural Network (CNN-BiLSTM-CNN). By connecting the shallow and deep feature maps of the convolutional layers, the malicious-feature extraction ability is improved with more detailed functionality. Experimental results on the HTTP dataset CSIC 2010 have demonstrated the effectiveness of the proposed method when compared with the state of the art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
203,068
2004.03264
Inspector Gadget: A Data Programming-based Labeling System for Industrial Images
As machine learning for images becomes democratized in the Software 2.0 era, one of the serious bottlenecks is securing enough labeled data for training. This problem is especially critical in a manufacturing setting where smart factories rely on machine learning for product quality control by analyzing industrial images. Such images are typically large and may only need to be partially analyzed where only a small portion is problematic (e.g., identifying defects on a surface). Since manually labeling these images is expensive, weak supervision is an attractive alternative where the idea is to generate weak labels that are not perfect, but can be produced at scale. Data programming is a recent paradigm in this category where it uses human knowledge in the form of labeling functions and combines them into a generative model. Data programming has been successful in applications based on text or structured data and can also be applied to images if one can find a way to convert them into structured data. In this work, we expand the horizon of data programming by directly applying it to images without this conversion, which is a common scenario for industrial applications. We propose Inspector Gadget, an image labeling system that combines crowdsourcing, data augmentation, and data programming to produce weak labels at scale for image classification. We perform experiments on real industrial image datasets and show that Inspector Gadget obtains better performance than other weak-labeling techniques: Snuba, GOGGLES, and self-learning baselines using convolutional neural networks (CNNs) without pre-training.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
171,497
2109.15005
SCIMAT: Science and Mathematics Dataset
In this work, we announce a comprehensive, well-curated, and open-source dataset with millions of samples for pre-college and college level problems in mathematics and science. A preliminary set of results using a transformer architecture with character-to-character encoding is shown. The dataset identifies some challenging problems and invites research on better architectures.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
true
258,139
1805.02641
Label Refinery: Improving ImageNet Classification through Label Progression
Among the three main components (data, labels, and models) of any supervised learning system, data and models have been the main subjects of active research. However, studying labels and their properties has received very little attention. Current principles and paradigms of labeling impose several challenges to machine learning algorithms. Labels are often incomplete, ambiguous, and redundant. In this paper we study the effects of various properties of labels and introduce the Label Refinery: an iterative procedure that updates the ground truth labels after examining the entire dataset. We show significant gain using refined labels across a wide range of models. Using a Label Refinery improves the state-of-the-art top-1 accuracy of (1) AlexNet from 59.3 to 67.2, (2) MobileNet from 70.6 to 73.39, (3) MobileNet-0.25 from 50.6 to 55.59, (4) VGG19 from 72.7 to 75.46, and (5) Darknet19 from 72.9 to 74.47.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
96,894
2407.13214
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
In a recent study, we found that the publicly available BCCD and BCD datasets have significant issues such as labeling errors, insufficient sample size, and poor data quality. To address these problems, we performed sample deletion, re-labeling, and integration of these two datasets. Additionally, we introduced the PBC and Raabin-WBC datasets, and ultimately created a high-quality, sample-balanced new dataset, which we named TXL-PBC. The dataset contains 1008 training sets, 288 validation sets, and 144 test sets. Firstly, the dataset underwent strict manual annotation, automatic annotation with the YOLOv8n model, and manual audit steps to ensure the accuracy and consistency of annotations. Secondly, we address the blood cell mislabeling problem of the original datasets. The distribution of label bounding box areas and the number of labels are better than in the BCCD and BCD datasets. Moreover, we used the YOLOv8n model to train on these three datasets, and the performance on the TXL-PBC dataset surpasses that on the original two datasets. Finally, we employed the YOLOv5n, YOLOv5s, YOLOv5l, YOLOv8s, and YOLOv8m detection models as baseline models for TXL-PBC. This study not only enhances the quality of the blood cell dataset but also supports researchers in improving models for blood cell target detection. We published our freely accessible TXL-PBC dataset at https://github.com/lugan113/TXL-PBC\_Dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,287
2104.06341
Constraint-coupled Optimization with Unknown Costs: A Distributed Primal Decomposition Approach
In this paper, we present a distributed algorithm for solving convex, constraint-coupled, optimization problems over peer-to-peer networks. We consider a network of processors that aim to cooperatively minimize the sum of local cost functions, subject to individual constraints and to global coupling constraints. The major assumption of this work is that the cost functions are unknown and must be learned online. We propose a fully distributed algorithm, based on a primal decomposition approach, that uses iteratively refined data-driven estimations of the cost functions over the iterations. The algorithm is scalable and maintains private information of agents. We prove that, asymptotically, the distributed algorithm provides the optimal solution of the problem even though the true cost functions are never used within the algorithm. The analysis requires an in-depth exploration of the primal decomposition approach and shows that the distributed algorithm can be thought of as an epsilon-subgradient method applied to a suitable reformulation of the original problem. Finally, numerical computations corroborate the theoretical findings and show the efficacy of the proposed approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
230,040
2407.15770
Examining Inequality in Park Quality for Promoting Health Across 35 Global Cities
Urban parks provide significant health benefits by offering spaces and facilities for various recreational and leisure activities. However, the capacity of specific park spaces and elements to foster health remains underexamined. Traditional studies have focused on parks' size, greenery, and accessibility, often overlooking their ability to facilitate specific health-promoting activities. To address this gap, we propose a taxonomy consisting of six categories of health-promoting activities in parks: physical, mind-body, nature appreciation, environmental, social, and cultural. We estimate the capacity of parks in 35 global cities to promote health by establishing a lexicon linking park spaces and elements with specific health-promoting activities from our taxonomy. Using this lexicon, we collected data on elements and spaces in all parks in 35 cities from OpenStreetMap. Our analysis covers 23,477 parks with a total of 827,038 elements and spaces. By first comparing similarly sized parks across cities, we found that North American parks offer more spaces for physical activities, while European parks focus more on nature appreciation. Second, by scoring parks based on both elements and spaces, we investigated the variability in their health-promoting potential. We found the most uniform provision across parks for physical activities and the highest disparities regarding social activities. Additionally, parks offering a variety of activities are usually located in city centers, while offerings diminish in parks towards the suburbs. Lastly, we identified significant inequalities in park standards across cities, regardless of their continental location: Tokyo and Paris offer the most uniform park standards, while Copenhagen and Rio de Janeiro exhibit the most pronounced disparities. Our study provides insights for making urban parks more equitable, engaging, and health-promoting.
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
475,321
1607.02524
The Replica-Symmetric Prediction for Compressed Sensing with Gaussian Matrices is Exact
This paper considers the fundamental limit of compressed sensing for i.i.d. signal distributions and i.i.d. Gaussian measurement matrices. Its main contribution is a rigorous characterization of the asymptotic mutual information (MI) and minimum mean-square error (MMSE) in this setting. Under mild technical conditions, our results show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics. This resolves a well-known problem that has remained open for over a decade.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
58,361
1902.01506
Learning to Prescribe Interventions for Tuberculosis Patients Using Digital Adherence Data
Digital Adherence Technologies (DATs) are an increasingly popular method for verifying patient adherence to many medications. We analyze data from one city served by 99DOTS, a phone-call-based DAT deployed for Tuberculosis (TB) treatment in India where nearly 3 million people are afflicted with the disease each year. The data contains nearly 17,000 patients and 2.1M dose records. We lay the groundwork for learning from this real-world data, including a method for avoiding the effects of unobserved interventions in training data used for machine learning. We then construct a deep learning model, demonstrate its interpretability, and show how it can be adapted and trained in different clinical scenarios to better target and improve patient care. In the real-time risk prediction setting our model could be used to proactively intervene with 21% more patients and before 76% more missed doses than current heuristic baselines. For outcome prediction, our model performs 40% better than baseline methods, allowing cities to target more resources to clinics with a heavier burden of patients at risk of failure. Finally, we present a case study demonstrating how our model can be trained in an end-to-end decision focused learning setting to achieve 15% better solution quality in an example decision problem faced by health workers.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
120,671
2209.06418
Graph Perceiver IO: A General Architecture for Graph Structured Data
Multimodal machine learning has been widely studied for the development of general intelligence. Recently, the remarkable multimodal algorithms, the Perceiver and Perceiver IO, have shown competitive results for diverse dataset domains and tasks. However, these recent works have focused on heterogeneous modalities, including image, text, and speech, and there is little research on graph structured datasets. A graph is one of the most generalized dataset structures, and we can represent other data, including images, text, and speech, as graph structured data. A graph has an adjacency matrix, unlike other dataset domains such as text and image, and it is not trivial to handle the topological information, relational information, and canonical positional information. In this study, we provide Graph Perceiver IO, the Perceiver IO for graph structured datasets. We keep the main structure of Graph Perceiver IO the same as the Perceiver IO because the Perceiver IO already handles diverse datasets well, except for graph structured datasets. Graph Perceiver IO is a general method, and it can handle diverse datasets such as graph structured data as well as text and images. Compared to graph neural networks, Graph Perceiver IO requires lower complexity, and it can incorporate local and global information efficiently. We show that Graph Perceiver IO achieves competitive results for diverse graph-related tasks, including node classification, graph classification, and link prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
317,396
2006.03112
Embedding Directed Graphs in Potential Fields Using FastMap-D
Embedding undirected graphs in a Euclidean space has many computational benefits. FastMap is an efficient embedding algorithm that facilitates a geometric interpretation of problems posed on undirected graphs. However, Euclidean distances are inherently symmetric and, thus, Euclidean embeddings cannot be used for directed graphs. In this paper, we present FastMap-D, an efficient generalization of FastMap to directed graphs. FastMap-D embeds vertices using a potential field to capture the asymmetry between the pairwise distances in directed graphs. FastMap-D learns a potential function to define the potential field using a machine learning module. In experiments on various kinds of directed graphs, we demonstrate the advantage of FastMap-D over other approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
180,214
1203.4206
Low Complexity Turbo-Equalization: A Clustering Approach
We introduce a low complexity approach to iterative equalization and decoding, or "turbo equalization", that uses clustered models to better match the nonlinear relationship that exists between likelihood information from a channel decoder and the symbol estimates that arise in soft-input channel equalization. The introduced clustered turbo equalizer uses piecewise linear models to capture the nonlinear dependency of the linear minimum mean square error (MMSE) symbol estimate on the symbol likelihoods produced by the channel decoder and maintains a computational complexity that is only linear in the channel memory. By partitioning the space of likelihood information from the decoder, based on either hard or soft clustering, and using locally-linear adaptive equalizers within each clustered region, the performance gap between the linear MMSE equalizer and low-complexity, LMS-based linear turbo equalizers can be dramatically narrowed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
15,024
2309.06462
Action Segmentation Using 2D Skeleton Heatmaps and Multi-Modality Fusion
This paper presents a 2D skeleton-based action segmentation method with applications in fine-grained human activity recognition. In contrast with state-of-the-art methods which directly take sequences of 3D skeleton coordinates as inputs and apply Graph Convolutional Networks (GCNs) for spatiotemporal feature learning, our main idea is to use sequences of 2D skeleton heatmaps as inputs and employ Temporal Convolutional Networks (TCNs) to extract spatiotemporal features. Despite lacking 3D information, our approach yields comparable/superior performances and better robustness against missing keypoints than previous methods on action segmentation datasets. Moreover, we improve the performances further by using both 2D skeleton heatmaps and RGB videos as inputs. To our best knowledge, this is the first work to utilize 2D skeleton heatmap inputs and the first work to explore 2D skeleton+RGB fusion for action segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
391,425
2303.10401
Smart ROI Detection for Alzheimer's disease prediction using explainable AI
Purpose Predicting the progression of MCI to Alzheimer's disease is an important step in reducing the progression of the disease. Therefore, many methods have been introduced for this task based on deep learning. Among these approaches, the methods based on ROIs are in a good position in terms of accuracy and complexity. In these techniques, some specific parts of the brain are extracted as ROI manually for all of the patients. Extracting ROI manually is time-consuming and its results depend on human expertness and precision. Method To overcome these limitations, we propose a novel smart method for detecting ROIs automatically based on Explainable AI using Grad-Cam and a 3DCNN model that extracts ROIs per patient. After extracting the ROIs automatically, Alzheimer's disease is predicted using extracted ROI-based 3D CNN. Results We implement our method on 176 MCI patients of the famous ADNI dataset and obtain remarkable results compared to the state-of-the-art methods. The accuracy acquired using 5-fold cross-validation is 98.6 and the AUC is 1. We also compare the results of the ROI-based method with the whole brain-based method. The results show that the performance is impressively increased. Conclusion The experimental results show that the proposed smart ROI extraction, which extracts the ROIs automatically, performs well for Alzheimer's disease prediction. The proposed method can also be used for Alzheimer's disease classification and diagnosis.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
352,430
2305.11595
Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate
Large Language Models (LLMs) have shown impressive capabilities in various applications, but they still face various inconsistency issues. Existing works primarily focus on the inconsistency issues within a single LLM, while we complementarily explore the inter-consistency among multiple LLMs for collaboration. To examine whether LLMs can collaborate effectively to achieve a consensus for a shared goal, we focus on commonsense reasoning, and introduce a formal debate framework (FORD) to conduct a three-stage debate among LLMs with real-world scenarios alignment: fair debate, mismatched debate, and roundtable debate. Through extensive experiments on various datasets, LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs. Leveraging a more advanced LLM like GPT-4 as an authoritative judge can boost collaboration performance. Our work contributes to understanding the inter-consistency among LLMs and lays the foundation for developing future collaboration methods. Codes and data are available at https://github.com/Waste-Wood/FORD
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
365,614
2205.08615
Towards Robust Low Light Image Enhancement
In this paper, we study the problem of making brighter images from dark images found in the wild. The images are dark because they are taken in dim environments. They suffer from color shifts caused by quantization and from sensor noise. We don't know the true camera response function for such images and they are not RAW. We use a supervised learning method, relying on a straightforward simulation of an imaging pipeline to generate a usable dataset for training and testing. On a number of standard datasets, our approach outperforms the state of the art quantitatively. Qualitative comparisons suggest strong improvements in reconstruction accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,999
2109.02935
Data Driven Content Creation using Statistical and Natural Language Processing Techniques for Financial Domain
Over the years customers' expectation of getting information instantaneously has given rise to the increased usage of channels like virtual assistants. Typically, customers try to get their questions answered by low-touch channels like search and virtual assistants first, before getting in touch with a live chat agent or a phone representative. Higher usage of these low-touch systems is a win-win for both customers and the organization, since it enables organizations to attain a low cost of service while customers get served without delay. In this paper, we propose a two-part framework where the first part describes methods to combine the information from different interaction channels like call, search, and chat. We do this by summarizing (using a stacked Bi-LSTM network) the high-touch interaction channel data such as call and chat into short search-query-like customer intents and then creating an organically grown intent taxonomy from interaction data (using Hierarchical Agglomerative Clustering). The second part of the framework focuses on extracting customer questions by analyzing interaction data sources. It calculates similarity scores using TF-IDF and BERT (Devlin et al., 2019). It also maps these identified questions to the output of the first part of the framework using syntactic and semantic similarity.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
253,904
2404.05979
StoryImager: A Unified and Efficient Framework for Coherent Story Visualization and Completion
Story visualization aims to generate a series of realistic and coherent images based on a storyline. Current models adopt a frame-by-frame architecture by transforming the pre-trained text-to-image model into an auto-regressive manner. Although these models have shown notable progress, there are still three flaws. 1) The unidirectional generation of auto-regressive manner restricts the usability in many scenarios. 2) The additional introduced story history encoders bring an extremely high computational cost. 3) The story visualization and continuation models are trained and inferred independently, which is not user-friendly. To these ends, we propose a bidirectional, unified, and efficient framework, namely StoryImager. The StoryImager enhances the storyboard generative ability inherited from the pre-trained text-to-image model for a bidirectional generation. Specifically, we introduce a Target Frame Masking Strategy to extend and unify different story image generation tasks. Furthermore, we propose a Frame-Story Cross Attention Module that decomposes the cross attention for local fidelity and global coherence. Moreover, we design a Contextual Feature Extractor to extract contextual information from the whole storyline. The extensive experimental results demonstrate the excellent performance of our StoryImager. The code is available at https://github.com/tobran/StoryImager.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
445,283
2311.01052
Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis
We introduce Resilient Multiple Choice Learning (rMCL), an extension of the MCL approach for conditional distribution estimation in regression settings where multiple targets may be sampled for each training input. Multiple Choice Learning is a simple framework to tackle multimodal density estimation, using the Winner-Takes-All (WTA) loss for a set of hypotheses. In regression settings, the existing MCL variants focus on merging the hypotheses, thereby eventually sacrificing the diversity of the predictions. In contrast, our method relies on a novel learned scoring scheme underpinned by a mathematical framework based on Voronoi tessellations of the output space, from which we can derive a probabilistic interpretation. After empirically validating rMCL with experiments on synthetic data, we further assess its merits on the sound source localization problem, demonstrating its practical usefulness and the relevance of its interpretation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
404,902
2407.11054
Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations
This review introduces the transformative potential of generative Artificial Intelligence (AI) and foundation models, including large language models (LLMs), for health technology assessment (HTA). We explore their applications in four critical areas, evidence synthesis, evidence generation, clinical trials and economic modeling: (1) Evidence synthesis: Generative AI has the potential to assist in automating literature reviews and meta-analyses by proposing search terms, screening abstracts, and extracting data with notable accuracy; (2) Evidence generation: These models can potentially facilitate automating the process and analyze the increasingly available large collections of real-world data (RWD), including unstructured clinical notes and imaging, enhancing the speed and quality of real-world evidence (RWE) generation; (3) Clinical trials: Generative AI can be used to optimize trial design, improve patient matching, and manage trial data more efficiently; and (4) Economic modeling: Generative AI can also aid in the development of health economic models, from conceptualization to validation, thus streamlining the overall HTA process. Despite their promise, these technologies, while rapidly improving, are still nascent and continued careful evaluation in their applications to HTA is required. To ensure their responsible use and implementation, both developers and users of research incorporating these tools, should familiarize themselves with their current limitations, including the issues related to scientific validity, risk of bias, and consider equity and ethical implications. We also surveyed the current policy landscape and provide suggestions for HTA agencies on responsibly integrating generative AI into their workflows, emphasizing the importance of human oversight and the fast-evolving nature of these tools.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
473,274
2105.02803
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model
Deep neural networks have been shown to suffer from critical vulnerabilities under adversarial attacks. This phenomenon stimulated the creation of different attack and defense strategies similar to those adopted in cyberspace security. The dependence of such strategies on attack and defense mechanisms makes the associated algorithms on both sides appear as closely reciprocating processes. The defense strategies are particularly passive in these processes, and enhancing the initiative of such strategies can be an effective way to get out of this arms race. Inspired by the dynamic defense approach in cyberspace, this paper builds a stochastic ensemble smoothing defense based on random smoothing and model ensembling. The proposed method employs the network architecture and smoothing parameters as ensemble attributes, and dynamically changes the attribute-based ensemble model before every inference request. The proposed method handles the extreme transferability and vulnerability of ensemble models under white-box attacks. Experimental comparison of ASR-vs-distortion curves under different attack scenarios shows that even the attacker with the highest attack capability cannot easily exceed the attack success rate associated with the ensemble smoothed model, especially under untargeted attacks.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
233,935
2203.03513
Cartoon-texture evolution for two-region image segmentation
Two-region image segmentation is the process of dividing an image into two regions of interest, i.e., the foreground and the background. To this aim, Chan et al. [Chan, Esedo\=glu, Nikolova, SIAM Journal on Applied Mathematics 66(5), 1632-1648, 2006] designed a model well suited for smooth images. One drawback of this model is that it may produce a bad segmentation when the image contains oscillatory components. Based on a cartoon-texture decomposition of the image to be segmented, we propose a new model that is able to produce an accurate segmentation of images also containing noise or oscillatory information like texture. The novel model leads to a non-smooth constrained optimization problem which we solve by means of the ADMM method. The convergence of the numerical scheme is also proved. Several experiments on smooth, noisy, and textural images show the effectiveness of the proposed model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
284,111
1207.0543
Rate-splitting in the presence of multiple receivers
In the presence of multiple senders, one of the simplest decoding strategies that can be employed by a receiver is successive decoding. In a successive decoding strategy, the receiver decodes the messages one at a time using the knowledge of the previously decoded messages as side information. Recently, there have been two separate attempts to construct codes for the interference channel using successive decoding based on the idea of rate-splitting. In this note, we highlight a difficulty that arises when a rate-splitting codebook is to be decoded by multiple receivers. The main issue is that the rates of the split codebook are tightly coupled to the properties of the channel to the receiver, thus, rates chosen for one of the receivers may not be decodable for the other. We illustrate this issue by scrutinizing two recent arguments claiming to achieve the Han-Kobayashi rate region for the interference channel using rate-splitting and successive decoding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
17,174
1608.05143
A Systematic Approach for Cross-source Point Cloud Registration by Preserving Macro and Micro Structures
We propose a systematic approach for registering cross-source point clouds. The compelling need for cross-source point cloud registration is motivated by the rapid development of a variety of 3D sensing techniques, but many existing registration methods face critical challenges as a result of the large variations in cross-source point clouds. This paper therefore presents a novel registration method that successfully aligns two cross-source point clouds in the presence of significant missing data, large variations in point density, scale differences, and so on. The robustness of the method is attributed to the extraction of macro and micro structures. Our work has three main contributions: (1) a systematic pipeline to deal with cross-source point cloud registration; (2) a graph construction method to maintain macro and micro structures; (3) a new graph matching method that considers the global geometric constraint to robustly register these variable graphs. Compared to most of the related methods, the experiments show that the proposed method successfully registers cross-source datasets, while other methods have difficulty achieving satisfactory results. The proposed method also shows great ability on same-source datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,932
2108.00816
Progress and opportunities in modelling environmentally assisted cracking
Environmentally assisted cracking phenomena are widespread across the transport, defence, energy and construction sectors. However, predicting environmentally assisted fractures is a highly cross-disciplinary endeavour that requires resolving the multiple material-environment interactions taking place. In this manuscript, an overview is given of recent breakthroughs in the modelling of environmentally assisted cracking. The focus is on the opportunities created by two recent developments: phase field and multi-physics modelling. The possibilities enabled by the confluence of phase field methods and electro-chemo-mechanics modelling are discussed in the context of three environmentally assisted cracking phenomena of particular engineering interest: hydrogen embrittlement, localised corrosion and corrosion fatigue. Mechanical processes such as deformation and fracture can be coupled with chemical phenomena like local reactions, ionic transport and hydrogen uptake and diffusion. Moreover, these can be combined with the prediction of an evolving interface, such as a growing pit or a crack, as dictated by a phase field variable that evolves based on thermodynamics and local kinetics. Suitable for both microstructural and continuum length scales, this new generation of simulation-based, multi-physics phase field models can open new modelling horizons and enable Virtual Testing in harmful environments.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
248,842
2112.02954
Reinforcement Learning for Navigation of Mobile Robot with LiDAR
This paper presents a technique for navigation of a mobile robot with a Deep Q-Network (DQN) combined with a Gated Recurrent Unit (GRU). The DQN integrated with the GRU allows action skipping for improved navigation performance. This technique aims at efficient navigation of mobile robots such as autonomous parking robots. A reinforcement learning framework can be applied to the DQN combined with the GRU in a real environment, which can be modeled by a Partially Observable Markov Decision Process (POMDP). By allowing action skipping, the ability of the DQN combined with the GRU to learn key actions can be improved. The proposed algorithm is applied to explore the feasibility of the solution in a real environment via the ROS-Gazebo simulator, and the simulation results show that the proposed algorithm achieves improved performance in navigation and collision avoidance compared to the results obtained by the DQN alone and by the DQN combined with the GRU without action skipping.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
270,023
1805.09211
Constructions of LOCC indistinguishable sets of generalized Bell states
In this paper, we mainly consider the local indistinguishability of sets of mutually orthogonal bipartite generalized Bell states (GBSs). We construct small sets of GBSs with cardinality smaller than $d$ which cannot be distinguished by one-way local operations and classical communication (1-LOCC) in $d\otimes d$. The constructions, based on linear systems and Vandermonde matrices, are simple and effective. The results give a unified upper bound for the minimum cardinality of a 1-LOCC indistinguishable set of GBSs, and greatly improve previous results in [Zhang \emph{et al.}, Phys. Rev. A 91, 012329 (2015); Wang \emph{et al.}, Quantum Inf. Process. 15, 1661 (2016)]. For odd $d$, the results also show that the set of 4 GBSs in $5\otimes 5$ in [Fan, Phys. Rev. A 75, 014305 (2007)] is indeed a 1-LOCC indistinguishable set which cannot be distinguished by Fan's method.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
98,368
1712.10077
Aircraft trajectory control with feedback linearization for general nonlinear system
The feedback linearization method is further developed for controller design of general nonlinear systems. Through Lyapunov stability theory, the intractable nonlinear implicit algebraic control equations are effectively solved, and asymptotic tracking performance is guaranteed. Moreover, it is proved that the controller may be used in an inverse-free form for set-point control. With this method, a nonlinear aircraft outer-loop trajectory controller is developed. To address concerns regarding the controller's robustness, an integral control technique is incorporated to counteract the adverse effects of modeling errors. Simulation results verify the good performance of the proposed controller.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
87,447
2303.09734
The Moderating Effect of Instant Runoff Voting
Instant runoff voting (IRV) has recently gained popularity as an alternative to plurality voting for political elections, with advocates claiming a range of advantages, including that it produces more moderate winners than plurality and could thus help address polarization. However, there is little theoretical backing for this claim, with existing evidence focused on case studies and simulations. In this work, we prove that IRV has a moderating effect relative to plurality voting in a precise sense, developed in a 1-dimensional Euclidean model of voter preferences. We develop a theory of exclusion zones, derived from properties of the voter distribution, which serve to show how moderate and extreme candidates interact during IRV vote tabulation. The theory allows us to prove that if voters are symmetrically distributed and not too concentrated at the extremes, IRV cannot elect an extreme candidate over a moderate. In contrast, we show plurality can and validate our results computationally. Our methods provide new frameworks for the analysis of voting systems, deriving exact winner distributions geometrically and establishing a connection between plurality voting and stick-breaking processes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
352,165
2407.00851
SAFE: a SAR Feature Extractor based on self-supervised learning and masked Siamese ViTs
Due to its all-weather and day-and-night capabilities, Synthetic Aperture Radar imagery is essential for various applications such as disaster management, earth monitoring, change detection and target recognition. However, the scarcity of labeled SAR data limits the performance of most deep learning algorithms. To address this issue, we propose a novel self-supervised learning framework based on masked Siamese Vision Transformers to create a General SAR Feature Extractor coined SAFE. Our method leverages contrastive learning principles to train a model on unlabeled SAR data, extracting robust and generalizable features. SAFE is applicable across multiple SAR acquisition modes and resolutions. We introduce tailored data augmentation techniques specific to SAR imagery, such as sub-aperture decomposition and despeckling. Comprehensive evaluations on various downstream tasks, including few-shot classification, segmentation, visualization, and pattern detection, demonstrate the effectiveness and versatility of the proposed approach. Our network competes with or surpasses other state-of-the-art methods in few-shot classification and segmentation tasks, even without being trained on the sensors used for the evaluation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
469,032