Dataset schema (one record per paper):

  id                   string, length 9 to 16
  title                string, length 4 to 278
  abstract             string, length 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool, 2 classes each (one category flag per column)
  __index_level_0__    int64, range 0 to 541k

In the sample records below, each paper is listed as id, title, and abstract, followed by the category columns whose value is true and its __index_level_0__ value; any category flag not listed is false.
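The schema above is enough to work with the records programmatically. The sketch below is a minimal, illustrative example rather than an official loader: it assumes the table has been exported to a Parquet file named arxiv_cs_multilabel.parquet (a hypothetical file name) and uses pandas to filter on one category flag and to collapse the 18 boolean columns into a per-paper list of category names.

```python
import pandas as pd

# Hypothetical file name; adjust the reader (read_parquet / read_csv / a
# datasets loader) to match wherever your copy of the table is stored.
df = pd.read_parquet("arxiv_cs_multilabel.parquet")

# The 18 boolean label columns, in the order given by the schema above.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

# Example query: every paper tagged with cs.LG.
lg_papers = df[df["cs.LG"]]
print(lg_papers[["id", "title"]].head())

# Collapse the boolean flags into a readable list of category names per paper,
# the same compact form used for the sample records below.
df["categories"] = df[LABEL_COLS].apply(
    lambda row: [col for col in LABEL_COLS if row[col]], axis=1
)
print(df[["id", "categories"]].head())
```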
1606.07384
Robust Learning of Fixed-Structure Bayesian Networks
We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted. In this work, we study the fully observable discrete case where the structure of the network is given. Even in this basic setting, previous learning algorithms either run in exponential time or lose dimension-dependent factors in their error guarantees. We provide the first computationally efficient robust learning algorithm for this problem with dimension-independent error guarantees. Our algorithm has near-optimal sample complexity, runs in polynomial time, and achieves error that scales nearly-linearly with the fraction of adversarially corrupted samples. Finally, we show on both synthetic and semi-synthetic data that our algorithm performs well in practice.
categories: cs.AI, cs.LG, Other
__index_level_0__: 57,712

1905.01851
P-ODN: Prototype based Open Deep Network for Open Set Recognition
Most of the existing recognition algorithms are proposed for closed set scenarios, where all categories are known beforehand. However, in practice, recognition is essentially an open set problem. There are categories we know, called "knowns", and there are more we do not know, called "unknowns". Enumerating all categories beforehand is never possible; consequently, it is infeasible to prepare sufficient training samples for those unknowns. Applying closed set recognition methods will naturally lead to unseen-category errors. To address this problem, we propose the prototype based Open Deep Network (P-ODN) for open set recognition tasks. Specifically, we introduce prototype learning into open set recognition. Prototypes and prototype radii are trained jointly to guide a CNN network to derive more discriminative features. Then P-ODN detects the unknowns by applying a multi-class triplet thresholding method based on the distance metric between features and prototypes. The unknowns detected in this process are manually labeled as new categories. Predictors for the new categories are added to the classification layer to "open" the deep neural network so that it can incorporate new categories dynamically. The weights of the new predictors are initialized carefully by applying a distance-based algorithm to transfer the learned knowledge. Consequently, this initialization method speeds up the fine-tuning process and reduces the number of samples needed to train new predictors. Extensive experiments show that P-ODN can effectively detect unknowns and needs only a few samples with human intervention to recognize a new category. In real-world scenarios, our method achieves state-of-the-art performance on the UCF11, UCF50, UCF101 and HMDB51 datasets.
categories: cs.CV
__index_level_0__: 129,825

1908.02283
Triplet Based Embedding Distance and Similarity Learning for Text-independent Speaker Verification
Speaker embeddings have become increasingly popular in the text-independent speaker verification task. In this paper, we propose two improvements during the training stage. Both improvements are based on triplets, because the training stage and the evaluation stage of the baseline x-vector system focus on different aims. Firstly, we introduce a triplet loss for optimizing the Euclidean distances between embeddings while minimizing the multi-class cross entropy loss. Secondly, we design an embedding similarity measurement network for controlling the similarity between the two selected embeddings. We further jointly train the two new methods with the original network and achieve state-of-the-art performance. The multi-task training synergies are shown with a 9% reduction in equal error rate (EER) and detection cost function (DCF) on the 2016 NIST Speaker Recognition Evaluation (SRE) Test Set.
categories: cs.SD, cs.LG, cs.CL
__index_level_0__: 140,961

2007.04806
Client Adaptation improves Federated Learning with Simulated Non-IID Clients
We present a federated learning approach for learning a client adaptable, robust model when data is non-identically and non-independently distributed (non-IID) across clients. By simulating heterogeneous clients, we show that adding learned client-specific conditioning improves model performance, and the approach is shown to work on balanced and imbalanced data set from both audio and image domains. The client adaptation is implemented by a conditional gated activation unit and is particularly beneficial when there are large differences between the data distribution for each client, a common scenario in federated learning.
categories: cs.LG, cs.CV
__index_level_0__: 186,480

1708.08680
Quaternions and Attitude Representation
The attitude space has been parameterized in various ways for practical purposes. Different representations gain preferences over others based on their intuitive understanding, ease of implementation, formulaic simplicity, and physical as well as mathematical complications involved in using them. This technical note gives a brief overview and discusses the quaternions, which are fourth dimensional extended complex numbers and used to represent orientation. Their relationship to other modes of attitude representation such as Euler angles and Axis-Angle representation is also explored and conversion from one representation to another is explained. The conventions, intuitive understanding and formulas most frequently used and indispensable to any quaternion application are stated and wherever possible, derived.
categories: cs.SY
__index_level_0__: 79,670

2411.13774
Segment Any Class (SAC): Multi-Class Few-Shot Semantic Segmentation via Class Region Proposals
The Segment-Anything Model (SAM) is a vision foundation model for segmentation with a prompt-driven framework. SAM generates class-agnostic masks based on user-specified instance-referring prompts. However, adapting SAM for automated segmentation -- where manual input is absent -- of specific object classes often requires additional model training. We present Segment Any Class (SAC), a novel, training-free approach that task-adapts SAM for Multi-class segmentation. SAC generates Class-Region Proposals (CRP) on query images which allows us to automatically generate class-aware prompts on probable locations of class instances. CRPs are derived from elementary intra-class and inter-class feature distinctions without any additional training. Our method is versatile, accommodating any N-way K-shot configurations for the multi-class few-shot semantic segmentation (FSS) task. Unlike gradient-learning adaptation of generalist models which risk the loss of generalization and potentially suffer from catastrophic forgetting, SAC solely utilizes automated prompting and achieves superior results over state-of-the-art methods on the COCO-20i benchmark, particularly excelling in high N-way class scenarios. SAC is an interesting demonstration of a prompt-only approach to adapting foundation models for novel tasks with small, limited datasets without any modifications to the foundation model itself. This method offers interesting benefits such as intrinsic immunity to concept or feature loss and rapid, online task adaptation of foundation models.
categories: cs.CV
__index_level_0__: 509,914

2306.12344
An efficient, provably exact, practical algorithm for the 0-1 loss linear classification problem
Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, it has been shown that this problem, in full generality, is NP-hard. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms to obtain an exact i.e. globally optimal solution for the 0-1 loss linear classification problem with fixed dimension, remains an open problem. In research we report here, we detail the rigorous construction of a new algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in polynomial time. We prove correctness using concepts from the theory of hyperplane arrangements and oriented matroids. We demonstrate the effectiveness of this algorithm on synthetic and real-world datasets, showing optimal accuracy both in and out-of-sample, in practical computational time. We also empirically demonstrate how the use of approximate upper bound leads to polynomial time run-time improvements to the algorithm whilst retaining exactness. To our knowledge, this is the first, rigorously-proven polynomial time, practical algorithm for this long-standing problem.
categories: cs.LG, Other
__index_level_0__: 374,910

0810.2352
Interference Channels with Correlated Receiver Side Information
The problem of joint source-channel coding in transmitting independent sources over interference channels with correlated receiver side information is studied. When each receiver has side information correlated with its own desired source, it is shown that source-channel code separation is optimal. When each receiver has side information correlated with the interfering source, sufficient conditions for reliable transmission are provided based on a joint source-channel coding scheme using the superposition encoding and partial decoding idea of Han and Kobayashi. When the receiver side information is a deterministic function of the interfering source, source-channel code separation is again shown to be optimal. As a special case, for a class of Z-interference channels, when the side information of the receiver facing interference is a deterministic function of the interfering source, necessary and sufficient conditions for reliable transmission are provided in the form of single letter expressions. As a byproduct of these joint source-channel coding results, the capacity region of a class of Z-channels with degraded message sets is also provided.
categories: cs.IT
__index_level_0__: 2,498

1902.00125
US-net for robust and efficient nuclei instance segmentation
We present a novel neural network architecture, US-Net, for robust nuclei instance segmentation in histopathology images. The proposed framework integrates the nuclei detection and segmentation networks by sharing their outputs through the same foundation network, and thus enhancing the performance of both. The detection network takes into account the high-level semantic cues with contextual information, while the segmentation network focuses more on the low-level details like the edges. Extensive experiments reveal that our proposed framework can strengthen the performance of both branch networks in an integrated architecture and outperforms most of the state-of-the-art nuclei detection and segmentation networks.
categories: cs.CV
__index_level_0__: 120,322

2012.00351
HORAE: an annotated dataset of books of hours
We introduce in this paper a new dataset of annotated pages from books of hours, a type of handwritten prayer book owned and used by wealthy lay people in the late Middle Ages. The dataset was created for conducting historical research on the evolution of the religious mindset in Europe during this period, since books of hours represent one of the major sources of information, thanks both to their rich illustrations and to the different types of religious sources they contain. We first describe how the corpus was collected and manually annotated, and then present the evaluation of a state-of-the-art system for text line detection and for zone detection and typing. The corpus is freely available for research.
categories: cs.CV
__index_level_0__: 209,108

2403.05220
Synthetic Privileged Information Enhances Medical Image Representation Learning
Multimodal self-supervised representation learning has consistently proven to be a highly effective method in medical image analysis, offering strong task performance and producing biologically informed insights. However, these methods heavily rely on large, paired datasets, which is prohibitive for their use in scenarios where paired data does not exist, or there is only a small amount available. In contrast, image generation methods can work well on very small datasets, and can find mappings between unpaired datasets, meaning an effectively unlimited amount of paired synthetic data can be generated. In this work, we demonstrate that representation learning can be significantly improved by synthetically generating paired information, both compared to training on either single-modality (up to 4.4x error reduction) or authentic multi-modal paired datasets (up to 5.6x error reduction).
categories: cs.AI, cs.LG, cs.CV
__index_level_0__: 435,924

1106.4600
Perturbed and Permuted: Signal Integration in Network-Structured Dynamic Systems
Biological systems (among others) may respond to a large variety of distinct external stimuli, or signals. These perturbations will generally be presented to the system not singly, but in various combinations, so that a proper understanding of the system response requires assessment of the degree to which the effects of one signal modulate the effects of another. This paper develops a pair of structural metrics for sparse differential equation models of complex dynamic systems and demonstrates that said metrics correlate with proxies of the susceptibility of one signal-response to be altered in the context of a second signal. One of these metrics may be interpreted as a normalized arc density in the neighborhood of certain influential nodes; this metric appears to correlate with increased independence of signal response.
categories: cs.SY
__index_level_0__: 10,962

2306.06044
GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes. Our goal is to mitigate these imperfections from various sources with a joint solution: we take advantage of the ability of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs. To this end, we learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction, thus improving realism in a 3D-consistent fashion. Thereby, rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints. In addition, we condition a generator with multi-resolution NeRF renderings which is adversarially trained to further improve rendering quality. We demonstrate that our approach significantly improves rendering quality, e.g., nearly halving LPIPS scores compared to Nerfacto while at the same time improving PSNR by 1.4dB on the advanced indoor scenes of Tanks and Temples.
categories: cs.CV, Other
__index_level_0__: 372,416

1702.08258
Upper and Lower Bounds for the Ergodic Capacity of MIMO Jacobi Fading Channels
In multi-(core/mode) optical fiber communication, the transmission channel can be modeled as a complex sub-matrix of the Haar-distributed unitary matrix (complex Jacobi unitary ensemble). In this letter, we present new analytical expressions of the upper and lower bounds for the ergodic capacity of multiple-input multiple-output Jacobi-fading channels. Recent results on the determinant of the Jacobi unitary ensemble are employed to derive a tight lower bound on the ergodic capacity. We use Jensen's inequality to provide an analytical closed-form upper bound to the ergodic capacity at any signal-to-noise ratio (SNR). Closed-form expressions of the ergodic capacity, at low and high SNR regimes, are also derived. Simulation results are presented to validate the accuracy of the derived expressions.
categories: cs.IT
__index_level_0__: 68,950

1706.00530
Integrated Deep and Shallow Networks for Salient Object Detection
Deep convolutional neural network (CNN) based salient object detection methods have achieved state-of-the-art performance and outperform unsupervised methods by a wide margin. In this paper, we propose to integrate deep and unsupervised saliency for salient object detection under a unified framework. Specifically, our method takes results of unsupervised saliency (Robust Background Detection, RBD) and normalized color images as inputs, and directly learns an end-to-end mapping between inputs and the corresponding saliency maps. The color images are fed into a Fully Convolutional Neural Network (FCNN) adapted from semantic segmentation to exploit high-level semantic cues for salient object detection. Then the results from the deep FCNN and RBD are concatenated and fed into a shallow network that maps the concatenated feature maps to saliency maps. Finally, to obtain a spatially consistent saliency map with sharp object boundaries, we fuse superpixel-level saliency maps at multiple scales. Extensive experimental results on 8 benchmark datasets demonstrate that the proposed method outperforms the state-of-the-art approaches by a margin.
categories: cs.CV
__index_level_0__: 74,637

1910.09312
Trends in the optimal location and sizing of electrical units in smart grids using meta-heuristic algorithms
The development of smart grids has effectively transformed the traditional grid system. This promises numerous advantages for economic values and autonomous control of energy sources. In smart grids development, there are various objectives such as voltage stability, minimized power loss, minimized economic cost and voltage profile improvement. Thus, researchers have investigated several approaches based on meta-heuristic optimization algorithms for the optimal location and sizing of electrical units in a distribution system. Meta-heuristic algorithms have been applied to solve different problems in power systems and they have been successfully used in distribution systems. This paper presents a comprehensive review on existing methods for the optimal location and sizing of electrical units in distribution networks while considering the improvement of major objective functions. Techniques such as voltage stability index, power loss index, and loss sensitivity factors have been implemented alongside the meta-heuristic optimization algorithms to reduce the search space of solutions for objective functions. However, these techniques can cause a loss of optimality. Another perceived problem is the inappropriate handling of multiple objectives, which can also affect the optimality of results. Hence, a recent method such as Pareto fronts generation has been developed to produce non-dominating solutions. This review shows a need for more research on (i) the effective handling of multiple objective functions, (ii) more efficient meta-heuristic optimization algorithms and/or (iii) better supporting techniques.
categories: cs.SY, cs.NE
__index_level_0__: 150,146

2102.13002
Maximizing Cosine Similarity Between Spatial Features for Unsupervised Domain Adaptation in Semantic Segmentation
We propose a novel method that tackles the problem of unsupervised domain adaptation for semantic segmentation by maximizing the cosine similarity between the source and the target domain at the feature level. A segmentation network mainly consists of two parts, a feature extractor and a classification head. We expect that if we can make the two domains have a small domain gap at the feature level, they will also have small domain discrepancy at the classification head. Our method computes a cosine similarity matrix between the source feature map and the target feature map, then we maximize the elements exceeding a threshold to guide the target features to have high similarity with the most similar source feature. Moreover, we use a class-wise source feature dictionary which stores the latest features of the source domain to prevent the mismatch problem when computing the cosine similarity matrix and to allow a target feature to be compared with various source features from various images. Through extensive experiments, we verify that our method improves performance on two unsupervised domain adaptation tasks (GTA5$\to$Cityscapes and SYNTHIA$\to$Cityscapes).
categories: cs.CV
__index_level_0__: 221,917

2308.04460
The Compatibility between the Pangu Weather Forecasting Model and Meteorological Operational Data
Recently, multiple data-driven models based on machine learning for weather forecasting have emerged. These models are highly competitive in terms of accuracy compared to traditional numerical weather prediction (NWP) systems. In particular, the Pangu-Weather model, which is open source for non-commercial use, has been validated for its forecasting performance by the European Centre for Medium-Range Weather Forecasts (ECMWF) and has recently been published in the journal "Nature". In this paper, we evaluate the compatibility of the Pangu-Weather model with several commonly used NWP operational analyses through case studies. The results indicate that the Pangu-Weather model is compatible with different operational analyses from various NWP systems as the model initial conditions, and it exhibits a relatively stable forecasting capability. Furthermore, we have verified that improving the quality of global or local initial conditions significantly contributes to enhancing the forecasting performance of the Pangu-Weather model.
categories: cs.LG
__index_level_0__: 384,422

2102.12959
Bayesian OOD detection with aleatoric uncertainty and outlier exposure
Typical Bayesian approaches to OOD detection use epistemic uncertainty. Surprisingly from the Bayesian perspective, there are a number of methods that successfully use aleatoric uncertainty to detect OOD points (e.g. Hendryks et al. 2018). In addition, it is difficult to use outlier exposure to improve a Bayesian OOD detection model, as it is not clear whether it is possible or desirable to increase posterior (epistemic) uncertainty at outlier points. We show that a generative model of data curation provides a principled account of aleatoric uncertainty for OOD detection. In particular, aleatoric uncertainty signals a specific type of OOD point: one without a well-defined class-label, and our model of data curation gives a likelihood for these points, giving us a mechanism for conditioning on outlier points and thus performing principled Bayesian outlier exposure. Our principled Bayesian approach, combining aleatoric and epistemic uncertainty with outlier exposure performs better than methods using aleatoric or epistemic alone.
categories: cs.LG
__index_level_0__: 221,896

1901.05599
Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails
Robots hold promise in many scenarios involving outdoor use, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited due to a reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large amount of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95% on our synthetic data set. Next, we utilize our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning.
categories: cs.LG, cs.RO, cs.CV
__index_level_0__: 118,822

2105.13799
Inaccuracy matters: accounting for solution accuracy in event-triggered nonlinear model predictive control
We consider the effect of using approximate system predictions in event-triggered control schemes. Such approximations may result from using numerical transcription methods for solving continuous-time optimal control problems. Mesh refinement can guarantee upper bounds on the error in the differential equations which model the system dynamics. With the accuracy guarantees of a mesh refinement scheme, we show that the proposed event-triggering scheme -- which compares the measured system with approximate state predictions -- can be used with a guaranteed strictly positive inter-update time. We show that if we have knowledge of the employed transcription scheme or the approximation errors, then we can obtain better online estimates of inter-update times. We additionally detail a method of tightening constraints on the approximate system trajectory used in the nonlinear programming problem to guarantee constraint satisfaction of the continuous-time system. This is the first work to incorporate prediction accuracy in triggering metrics. Using the solution accuracy we can guarantee reliable lower bounds for inter-update times and perform solution dependent constraint tightening.
categories: cs.SY
__index_level_0__: 237,411

2411.17451
VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models
Vision-language generative reward models (VL-GenRMs) play a crucial role in aligning and evaluating multimodal AI systems, yet their own evaluation remains under-explored. Current assessment methods primarily rely on AI-annotated preference labels from traditional VL tasks, which can introduce biases and often fail to effectively challenge state-of-the-art models. To address these limitations, we introduce VL-RewardBench, a comprehensive benchmark spanning general multimodal queries, visual hallucination detection, and complex reasoning tasks. Through our AI-assisted annotation pipeline combining sample selection with human verification, we curate 1,250 high-quality examples specifically designed to probe model limitations. Comprehensive evaluation across 16 leading large vision-language models, demonstrates VL-RewardBench's effectiveness as a challenging testbed, where even GPT-4o achieves only 65.4% accuracy, and state-of-the-art open-source models such as Qwen2-VL-72B, struggle to surpass random-guessing. Importantly, performance on VL-RewardBench strongly correlates (Pearson's r > 0.9) with MMMU-Pro accuracy using Best-of-N sampling with VL-GenRMs. Analysis experiments uncover three critical insights for improving VL-GenRMs: (i) models predominantly fail at basic visual perception tasks rather than reasoning tasks; (ii) inference-time scaling benefits vary dramatically by model capacity; and (iii) training VL-GenRMs to learn to judge substantially boosts judgment capability (+14.7% accuracy for a 7B VL-GenRM). We believe VL-RewardBench along with the experimental insights will become a valuable resource for advancing VL-GenRMs.
categories: cs.CL, cs.CV
__index_level_0__: 511,450

2410.19382
Multi-Agent Reinforcement Learning with Selective State-Space Models
The Transformer model has demonstrated success across a wide range of domains, including in Multi-Agent Reinforcement Learning (MARL) where the Multi-Agent Transformer (MAT) has emerged as a leading algorithm in the field. However, a significant drawback of Transformer models is their quadratic computational complexity relative to input size, making them computationally expensive when scaling to larger inputs. This limitation restricts MAT's scalability in environments with many agents. Recently, State-Space Models (SSMs) have gained attention due to their computational efficiency, but their application in MARL remains unexplored. In this work, we investigate the use of Mamba, a recent SSM, in MARL and assess whether it can match the performance of MAT while providing significant improvements in efficiency. We introduce a modified version of MAT that incorporates standard and bi-directional Mamba blocks, as well as a novel "cross-attention" Mamba block. Extensive testing shows that our Multi-Agent Mamba (MAM) matches the performance of MAT across multiple standard multi-agent environments, while offering superior scalability to larger agent scenarios. This is significant for the MARL community, because it indicates that SSMs could replace Transformers without compromising performance, whilst also supporting more effective scaling to higher numbers of agents. Our project page is available at https://sites.google.com/view/multi-agent-mamba .
categories: cs.AI, cs.LG, cs.MA
__index_level_0__: 502,292

2308.11757
Weakly Supervised Face and Whole Body Recognition in Turbulent Environments
Face and person recognition have recently achieved remarkable success under challenging scenarios, such as off-pose and cross-spectrum matching. However, long-range recognition systems are often hindered by atmospheric turbulence, leading to spatially and temporally varying distortions in the image. Current solutions rely on generative models to reconstruct a turbulent-free image, but often preserve photo-realism instead of discriminative features that are essential for recognition. This can be attributed to the lack of large-scale datasets of turbulent and pristine paired images, necessary for optimal reconstruction. To address this issue, we propose a new weakly supervised framework that employs a parameter-efficient self-attention module to generate domain agnostic representations, aligning turbulent and pristine images into a common subspace. Additionally, we introduce a new tilt map estimator that predicts geometric distortions observed in turbulent images. This estimate is used to re-rank gallery matches, resulting in up to 13.86\% improvement in rank-1 accuracy. Our method does not require synthesizing turbulent-free images or ground-truth paired images, and requires significantly fewer annotated samples, enabling more practical and rapid utility of increasingly large datasets. We analyze our framework using two datasets -- Long-Range Face Identification Dataset (LRFID) and BRIAR Government Collection 1 (BGC1) -- achieving enhanced discriminability under varying turbulence and standoff distance.
categories: cs.CV
__index_level_0__: 387,259

2401.09267
Risk-Aware Accelerated Wireless Federated Learning with Heterogeneous Clients
Wireless Federated Learning (FL) is an emerging distributed machine learning paradigm, particularly gaining momentum in domains with confidential and private data on mobile clients. However, the location-dependent performance, in terms of transmission rates and susceptibility to transmission errors, poses major challenges for wireless FL's convergence speed and accuracy. The challenge is more acute for hostile environments without a metric that authenticates the data quality and security profile of the clients. In this context, this paper proposes a novel risk-aware accelerated FL framework that accounts for the clients heterogeneity in the amount of possessed data, transmission rates, transmission errors, and trustworthiness. Classifying clients according to their location-dependent performance and trustworthiness profiles, we propose a dynamic risk-aware global model aggregation scheme that allows clients to participate in descending order of their transmission rates and an ascending trustworthiness constraint. In particular, the transmission rate is the dominant participation criterion for initial rounds to accelerate the convergence speed. Our model then progressively relaxes the transmission rate restriction to explore more training data at cell-edge clients. The aggregation rounds incorporate a debiasing factor that accounts for transmission errors. Risk-awareness is enabled by a validation set, where the base station eliminates non-trustworthy clients at the fine-tuning stage. The proposed scheme is benchmarked against a conservative scheme (i.e., only allowing trustworthy devices) and an aggressive scheme (i.e., oblivious to the trust metric). The numerical results highlight the superiority of the proposed scheme in terms of accuracy and convergence speed when compared to both benchmarks.
categories: cs.LG
__index_level_0__: 422,207

1203.3470
ALARMS: Alerting and Reasoning Management System for Next Generation Aircraft Hazards
The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit. With the introduction of such systems, the responsibilities of the pilot are expected to dramatically increase. In the ALARMS (ALerting And Reasoning Management System) project for NASA, we focus on a key challenge of this environment, the quick and efficient handling of aircraft sensor alerts. It is infeasible to alert the pilot on the state of all subsystems at all times. Furthermore, there is uncertainty as to the true hazard state despite the evidence of the alerts, and there is uncertainty as to the effect and duration of actions taken to address these alerts. This paper reports on the first steps in the construction of an application designed to handle Next Generation alerts. In ALARMS, we have identified 60 different aircraft subsystems and 20 different underlying hazards. In this paper, we show how a Bayesian network can be used to derive the state of the underlying hazards, based on the sensor input. Then, we propose a framework whereby an automated system can plan to address these hazards in cooperation with the pilot, using a Time-Dependent Markov Process (TMDP). Different hazards and pilot states will call for different alerting automation plans. We demonstrate this emerging application of Bayesian networks and TMDPs to cockpit automation, for a use case where a small number of hazards are present, and analyze the resulting alerting automation policies.
categories: cs.AI
__index_level_0__: 14,918

1712.09300
Zero-Shot Learning via Latent Space Encoding
Zero-Shot Learning (ZSL) is typically achieved by resorting to a class semantic embedding space to transfer the knowledge from the seen classes to unseen ones. Capturing the common semantic characteristics between the visual modality and the class semantic modality (e.g., attributes or word vector) is a key to the success of ZSL. In this paper, we propose a novel encoder-decoder approach, namely Latent Space Encoding (LSE), to connect the semantic relations of different modalities. Instead of requiring a projection function to transfer information across different modalities like most previous work, LSE performs the interactions of different modalities via a feature aware latent space, which is learned in an implicit way. Specifically, different modalities are modeled separately but optimized jointly. For each modality, an encoder-decoder framework is performed to learn a feature aware latent space via jointly maximizing the recoverability of the original space from the latent space and the predictability of the latent space from the original space. To relate different modalities together, their features referring to the same concept are enforced to share the same latent codings. In this way, the common semantic characteristics of different modalities are generalized with the latent representations. Another property of the proposed approach is that it is easily extended to more modalities. Extensive experimental results on four benchmark datasets (AwA, CUB, aPY, and ImageNet) clearly demonstrate the superiority of the proposed approach on several ZSL tasks, including traditional ZSL, generalized ZSL, and zero-shot retrieval (ZSR).
categories: cs.CV
__index_level_0__: 87,330

2403.03353
Hypothesis Spaces for Deep Learning
This paper introduces a hypothesis space for deep learning that employs deep neural networks (DNNs). By treating a DNN as a function of two variables, the physical variable and parameter variable, we consider the primitive set of the DNNs for the parameter variable located in a set of the weight matrices and biases determined by a prescribed depth and widths of the DNNs. We then complete the linear span of the primitive DNN set in a weak* topology to construct a Banach space of functions of the physical variable. We prove that the Banach space so constructed is a reproducing kernel Banach space (RKBS) and construct its reproducing kernel. We investigate two learning models, regularized learning and minimum interpolation problem in the resulting RKBS, by establishing representer theorems for solutions of the learning models. The representer theorems unfold that solutions of these learning models can be expressed as linear combination of a finite number of kernel sessions determined by given data and the reproducing kernel.
categories: cs.LG
__index_level_0__: 435,155

0812.4446
The Latent Relation Mapping Engine: Algorithm and Experiments
Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.
categories: cs.AI, cs.LG, cs.CL
__index_level_0__: 2,841

2112.13444
A CNN-BiLSTM Model with Attention Mechanism for Earthquake Prediction
Earthquakes, as natural phenomena, have historically caused damage and loss of human life. Earthquake prediction is an essential aspect of any society's plans and can increase public preparedness and reduce damage to a great extent. Nevertheless, due to the stochastic character of earthquakes and the challenge of achieving an efficient and dependable model for earthquake prediction, efforts have been insufficient thus far, and new methods are required to solve this problem. Aware of these issues, this paper proposes a novel prediction method based on an attention mechanism (AM), a convolutional neural network (CNN), and bi-directional long short-term memory (BiLSTM) models, which can predict the number and maximum magnitude of earthquakes in each area of mainland China based on the earthquake catalog of the region. This model takes advantage of LSTM and CNN with an attention mechanism to better focus on effective earthquake characteristics and produce more accurate predictions. Firstly, the zero-order hold technique is applied as pre-processing on the earthquake data, making the model's input data more suitable. Secondly, to effectively use spatial information and reduce the dimensions of the input data, the CNN is used to capture the spatial dependencies between earthquake data. Thirdly, the Bi-LSTM layer is employed to capture the temporal dependencies. Fourthly, the AM layer is introduced to highlight important features and achieve better prediction performance. The results show that the proposed method has better performance and generalization ability than other prediction methods.
categories: cs.LG
__index_level_0__: 273,243

2105.10205
Rational Dynamic Price Model for Demand Response Programs in Modern Distribution Systems
Demand response (DR) refers to change in electricity consumption pattern of customers during on-peak hours in lieu of financial gains to reduce stress on distribution systems. Existing dynamic price models have not provided adequate success to price-based demand response (PBDR) programs. It happened as these models have raised typical socio-economic problems pertaining to cross-subsidy, free-riders, social inequity, assured profit of utilities, financial gains and comfort of customers, etc. This paper presents a new dynamic price model for PBDR in distribution systems which aims to overcome some of the above mentioned problems of the existing price models. The main aim of the developed price model is to overcome the problems of cross-subsidy and free-riders of the existing price models for widespread acceptance, deployment and efficient utilization of PBDR programs in contemporary distribution systems. Proposed price model generates demand-linked price signal that imposes different price signals to different customers during on-peak hours and remains static otherwise. This makes proposed model a class apart from other existing models. The novelty of the proposed model lies in the fact that the financial benefits and penalties pertaining to DR are self-adjusted among customers while preserving social equity and profit of the utility. Such an ideology has not been yet addressed in the literature. Detailed investigation of application results on a standard test bench reveals that the proposed model equally cares regarding the interests of both customers and utility. For economic assessment, a comparison of the proposed price model with the existing pricing models is also performed.
categories: cs.SY
__index_level_0__: 236,314

1908.02447
Optimization-Based Learning Control for Nonlinear Time-Varying Systems
Learning to perform perfect tracking tasks based on measurement data is desirable in the controller design of systems operating repetitively. This motivates the present paper to seek an optimization-based design approach for iterative learning control (ILC) of repetitive systems with unknown nonlinear time-varying dynamics. It is shown that perfect output tracking can be realized with updating inputs, where no explicit model knowledge but only measured input/output data are leveraged. In particular, adaptive updating strategies are proposed to obtain parameter estimations of nonlinearities. A double-dynamics analysis approach is applied to establish ILC convergence, together with boundedness of input, output, and estimated parameters, which benefits from employing properties of nonnegative matrices. Moreover, robust convergence is explored for optimization-based adaptive ILC in the presence of nonrepetitive uncertainties. Simulation tests are also implemented to verify the validity of our optimization-based adaptive ILC.
categories: cs.SY
__index_level_0__: 141,005

2411.17283
BadScan: An Architectural Backdoor Attack on Visual State Space Models
The newly introduced Visual State Space Model (VMamba), which employs \textit{State Space Mechanisms} (SSM) to interpret images as sequences of patches, has shown exceptional performance compared to Vision Transformers (ViT) across various computer vision tasks. However, recent studies have highlighted that deep models are susceptible to adversarial attacks. One common approach is to embed a trigger in the training data to retrain the model, causing it to misclassify data samples into a target class, a phenomenon known as a backdoor attack. In this paper, we first evaluate the robustness of the VMamba model against existing backdoor attacks. Based on this evaluation, we introduce a novel architectural backdoor attack, termed BadScan, designed to deceive the VMamba model. This attack utilizes bit plane slicing to create visually imperceptible backdoored images. During testing, if a trigger is detected by performing XOR operations between the $k^{th}$ bit planes of the modified triggered patches, the traditional 2D selective scan (SS2D) mechanism in the visual state space (VSS) block of VMamba is replaced with our newly designed BadScan block, which incorporates four newly developed scanning patterns. We demonstrate that the BadScan backdoor attack represents a significant threat to visual state space models and remains effective even after complete retraining from scratch. Experimental results on two widely used image classification datasets, CIFAR-10, and ImageNet-1K, reveal that while visual state space models generally exhibit robustness against current backdoor attacks, the BadScan attack is particularly effective, achieving a higher Triggered Accuracy Ratio (TAR) in misleading the VMamba model and its variants.
categories: cs.CV
__index_level_0__: 511,381

1905.07434
A Coordinated Search Strategy for Multiple Solitary Robots: An Extension
The problem of coordination without a priori information about the environment is important in robotics. Applications vary from formation control to search and rescue. This paper considers the problem of search by a group of solitary robots: self-interested robots without a priori knowledge about each other, and with restricted communication capacity. When the capacity of robots to communicate is limited, they may obliviously search in overlapping regions (i.e. be subject to interference). Interference hinders robot progress, and strategies have been proposed in the literature to mitigate interference [1], [2]. Interaction of solitary robots has attracted much interest in robotics, but the problem of mitigating interference when time for search is limited remains an important area of research. We propose a coordination strategy based on the method of cellular decomposition [3] where we employ the concept of soft obstacles: a robot considers cells assigned to other robots as obstacles. The performance of the proposed strategy is demonstrated by means of simulation experiments. Simulations indicate the utility of the strategy in situations where a known upper bound on the search time precludes search of the entire environment.
categories: cs.RO
__index_level_0__: 131,226

1911.11356
Semantic Interior Mapology: A Toolbox For Indoor Scene Description From Architectural Floor Plans
We introduce the Semantic Interior Mapology (SIM) toolbox for the conversion of a floor plan and its room contents (such as furniture) to a vectorized form. The toolbox is composed of the Map Conversion toolkit and the Map Population toolkit. The Map Conversion toolkit allows one to quickly trace the layout of a floor plan and to generate a GeoJSON file that can be rendered in 3D using web applications such as Mapbox. The Map Population toolkit takes the 3D scan of a room in the building (acquired from an RGB-D camera) and, through a semi-automatic process, populates the GeoJSON representation of the building with individual objects of interest at the correct dimension and position. SIM is easy to use and produces accurate results even in the case of complex building layouts.
categories: cs.HC, cs.CY, cs.MA
__index_level_0__: 155,093

2102.00665
Attributed Graph Alignment
Motivated by various data science applications including de-anonymizing user identities in social networks, we consider the graph alignment problem, where the goal is to identify the vertex/user correspondence between two correlated graphs. Existing work mostly recovers the correspondence by exploiting the user-user connections. However, in many real-world applications, additional information about the users, such as user profiles, might be publicly available. In this paper, we introduce the attributed graph alignment problem, where additional user information, referred to as attributes, is incorporated to assist graph alignment. We establish both the achievability and converse results on recovering vertex correspondence exactly, where the conditions match for certain parameter regimes. Our results span the full spectrum between models that only consider user-user connections and models where only attribute information is available.
categories: cs.IT
__index_level_0__: 217,870

2305.18493
Insights from the Design Space Exploration of Flow-Guided Nanoscale Localization
Nanodevices with Terahertz (THz)-based wireless communication capabilities are providing a primer for flow-guided localization within the human bloodstreams. Such localization is allowing for assigning the locations of sensed events with the events themselves, providing benefits along the lines of early and precise diagnostics, and reduced costs and invasiveness. Flow-guided localization is still in a rudimentary phase, with only a handful of works targeting the problem. Nonetheless, the performance assessments of the proposed solutions are already carried out in a non-standardized way, usually along a single performance metric, and ignoring various aspects that are relevant at such a scale (e.g., nanodevices' limited energy) and for such a challenging environment (e.g., extreme attenuation of in-body THz propagation). As such, these assessments feature low levels of realism and cannot be compared in an objective way. Toward addressing this issue, we account for the environmental and scale-related peculiarities of the scenario and assess the performance of two state-of-the-art flow-guided localization approaches along a set of heterogeneous performance metrics such as the accuracy and reliability of localization.
categories: cs.LG, Other
__index_level_0__: 369,107

1608.02611
OptMark: A Toolkit for Benchmarking Query Optimizers
Query optimizers have long been considered as among the most complex components of a database engine, while the assessment of an optimizer's quality remains a challenging task. Indeed, existing performance benchmarks for database engines (like TPC benchmarks) produce a performance assessment of the query runtime system rather than its query optimizer. To address this challenge, this paper introduces OptMark, a toolkit for evaluating the quality of a query optimizer. OptMark is designed to offer a number of desirable properties. First, it decouples the quality of an optimizer from the quality of its underlying execution engine. Second it evaluates independently both the effectiveness of an optimizer (i.e., quality of the chosen plans) and its efficiency (i.e., optimization time). OptMark includes also a generic benchmarking toolkit that is minimum invasive to the DBMS that wishes to use it. Any DBMS can provide a system-specific implementation of a simple API that allows OptMark to run and generate benchmark scores for the specific system. This paper discusses the metrics we propose for evaluating an optimizer's quality, the benchmark's design and the toolkit's API and functionality. We have implemented OptMark on the open-source MySQL engine as well as two commercial database systems. Using these implementations we are able to assess the quality of the optimizers on these three systems based on the TPC-DS benchmark queries.
categories: cs.DB
__index_level_0__: 59,574

2502.10192
A Note on "Constructing Bent Functions Outside the Maiorana-McFarland Class Using a General Form of Rothaus"
In 2017, Zhang et al. proposed a question (not open problem) and two open problems in [IEEE TIT 63 (8): 5336--5349, 2017] about constructing bent functions by using Rothaus' construction. In this note, we prove that the sufficient conditions of Rothaus' construction are also necessary, which answers their question. Besides, we demonstrate that the second open problem, which considers the iterative method of constructing bent functions by using Rothaus' construction, has only a trivial solution. It indicates that all bent functions obtained by using Rothaus' construction iteratively can be generated from the direct sum of an initial bent function and a quadratic bent function. This directly means that Zhang et al.'s construction idea makes no contribution to the construction of bent functions. To compensate the weakness of their work, we propose an iterative construction of bent functions by using a secondary construction in [DCC 88: 2007--2035, 2020].
categories: cs.IT
__index_level_0__: 533,768

1809.04191
Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference
To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here we demonstrate ResNet-18, -34, -50, -152, Inception-v3, Densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models. We also demonstrate ResNet-18, -34, -50, -152, Densenet-161, and VGG-16bn 4-bit models that match the accuracy of the full-precision baseline networks -- the highest scores to date. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary. We find that gradient noise due to quantization during training increases with reduced precision, and seek ways to overcome this noise. The number of iterations required by SGD to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. Therefore, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat gradient noise introduced by quantization by training longer and reducing learning rates. Sensitivity analysis indicates that these simple techniques, coupled with proper activation function range calibration to take full advantage of the limited precision, are sufficient to discover low-precision networks, if they exist, close to fp32 precision baseline networks. The results herein provide evidence that 4-bits suffice for classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
107,496
2402.05127
Illuminate: A novel approach for depression detection with explainable analysis and proactive therapy using prompt engineering
This paper introduces a novel paradigm for depression detection and treatment using advanced Large Language Models (LLMs): Generative Pre-trained Transformer 4 (GPT-4), Llama 2 chat, and Gemini. These LLMs are fine-tuned with specialized prompts to diagnose, explain, and suggest therapeutic interventions for depression. A unique few-shot prompting method enhances the models' ability to analyze and explain depressive symptoms based on the DSM-5 criteria. In the interaction phase, the models engage in empathetic dialogue management, drawing from resources like PsychDB and a Cognitive Behavioral Therapy (CBT) Guide, fostering supportive interactions with individuals experiencing major depressive disorders. Additionally, the research introduces the Illuminate Database, enriched with various CBT modules, aiding in personalized therapy recommendations. The study evaluates LLM performance using metrics such as F1 scores, Precision, Recall, Cosine similarity, and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) across different test sets, demonstrating their effectiveness. This comprehensive approach blends cutting-edge AI with established psychological methods, offering new possibilities in mental health care and showcasing the potential of LLMs in revolutionizing depression diagnosis and treatment strategies.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
427,735
2004.00251
Self-Augmentation: Generalizing Deep Networks to Unseen Classes for Few-Shot Learning
Few-shot learning aims to classify unseen classes with a few training examples. While recent works have shown that standard mini-batch training with a carefully designed training strategy can improve generalization ability for unseen classes, well-known problems in deep networks such as memorizing training statistics have been less explored for few-shot learning. To tackle this issue, we propose self-augmentation that consolidates self-mix and self-distillation. Specifically, we exploit a regional dropout technique called self-mix, in which a patch of an image is substituted with other values from the same image. Then, we employ a backbone network that has auxiliary branches with its own classifier to enforce knowledge sharing. Lastly, we present a local representation learner to further exploit a few training examples for unseen classes. Experimental results show that the proposed method outperforms the state-of-the-art methods for prevalent few-shot benchmarks and improves the generalization ability.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
170,571
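The self-mix operation described in the abstract above (replacing a patch of an image with other values from the same image) could be sketched roughly as below; copying a second random patch of the same image is one plausible reading, not necessarily the authors' exact procedure, and the patch size is an arbitrary assumption.

```python
import numpy as np

def self_mix(image: np.ndarray, patch_size: int = 16, rng=None) -> np.ndarray:
    """Replace one randomly chosen patch with a patch copied from elsewhere
    in the same image (a rough sketch of a 'self-mix' style regional dropout)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    out = image.copy()
    # Destination patch (the region to overwrite) and source patch.
    dy, dx = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    sy, sx = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    out[dy:dy + patch_size, dx:dx + patch_size] = \
        image[sy:sy + patch_size, sx:sx + patch_size]
    return out

# Example: augment a random 64x64 RGB image.
augmented = self_mix(np.random.rand(64, 64, 3).astype(np.float32))
print(augmented.shape)
```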
2409.02497
A Learnable Color Correction Matrix for RAW Reconstruction
Autonomous driving algorithms usually employ sRGB images as model input due to their compatibility with the human visual system. However, visually pleasing sRGB images are possibly sub-optimal for downstream tasks when compared to RAW images. The availability of RAW images is constrained by the difficulties in collecting real-world driving data and the associated challenges of annotation. To address this limitation and support research in RAW-domain driving perception, we design a novel and ultra-lightweight RAW reconstruction method. The proposed model introduces a learnable color correction matrix (CCM), which uses only a single convolutional layer to approximate the complex inverse image signal processor (ISP). Experimental results demonstrate that simulated RAW (simRAW) images generated by our method provide performance improvements equivalent to those produced by more complex inverse ISP methods when pretraining RAW-domain object detectors, which highlights the effectiveness and practicality of our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
485,729
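The core idea named in the abstract above, a single convolutional layer acting as a learnable color correction matrix, can be illustrated with a minimal PyTorch sketch; the 1x1-convolution formulation, loss, and training details below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LearnableCCM(nn.Module):
    """A single 1x1 convolution over RGB channels, i.e. a learnable 3x3
    color matrix applied per pixel (sketch of the idea, not the paper's model)."""
    def __init__(self, bias: bool = False):
        super().__init__()
        self.ccm = nn.Conv2d(3, 3, kernel_size=1, bias=bias)

    def forward(self, srgb: torch.Tensor) -> torch.Tensor:
        # srgb: (N, 3, H, W) batch; output is a simulated RAW-like image.
        return self.ccm(srgb)

# Toy training step against paired (sRGB, RAW) data, assumed to exist.
model = LearnableCCM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
srgb = torch.rand(2, 3, 64, 64)        # placeholder batch
raw_target = torch.rand(2, 3, 64, 64)  # placeholder target
loss = nn.functional.l1_loss(model(srgb), raw_target)
loss.backward()
optimizer.step()
```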
2110.00201
Error-free approximation of explicit linear MPC through lattice piecewise affine expression
In this paper, the disjunctive and conjunctive lattice piecewise affine (PWA) approximations of explicit linear model predictive control (MPC) are proposed. The training data are generated uniformly in the domain of interest, consisting of the state samples and corresponding affine control laws, based on which the lattice PWA approximations are constructed. Re-sampling of data is also proposed to guarantee that the lattice PWA approximations are identical to explicit MPC control law in the unique order (UO) regions containing the sample points as interior points. Additionally, under mild assumptions, the equivalence of the two lattice PWA approximations guarantees that the approximations are error-free in the domain of interest. The algorithms for deriving statistically error-free approximation to the explicit linear MPC are proposed and the complexity of the entire procedure is analyzed, which is polynomial with respect to the number of samples. The performance of the proposed approximation strategy is tested through two simulation examples, and the result shows that with a moderate number of sample points, we can construct lattice PWA approximations that are equivalent to optimal control law of the explicit linear MPC.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
258,314
2305.17370
Vision Transformers for Small Histological Datasets Learned through Knowledge Distillation
Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, the artifacts on the digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) may detect and exclude artifacts before running the diagnostic algorithm. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of ViT for the air bubbles detection task. ViT, trained under the student-teacher framework, boosts its performance by distilling existing knowledge from the high-capacity teacher model. Our best-performing ViT yields 0.961 and 0.911 F1-score and MCC, respectively, observing a 7% gain in MCC against stand-alone training. The proposed method presents a new perspective of leveraging knowledge distillation over transfer learning to encourage the use of customized transformers for efficient preprocessing pipelines in the CPATH systems.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
368,553
1812.09790
Exploratory Data Analysis of a Network Telescope Traffic and Prediction of Port Probing Rates
Understanding the properties exhibited by large scale network probing traffic would improve cyber threat intelligence. In addition, the prediction of probing rates is a key feature for security practitioners in their endeavors for making better operational decisions and for enhancing their defense strategy skills. In this work, we study different aspects of the traffic captured by a /20 network telescope. First, we perform an exploratory data analysis of the collected probing activities. The investigation includes probing rates at the port level, the services of most interest to top network probers, and the distribution of probing rates by geolocation. Second, we extract the network probers' exploration patterns. We model these behaviors using transition graphs decorated with probabilities of switching from one port to another. Finally, we assess the capacity of Non-stationary Autoregressive and Vector Autoregressive models in predicting port probing rates as a first step towards using more robust models for better forecasting performance.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
117,229
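The transition graphs "decorated with probabilities of switching from one port to another" mentioned in the abstract above can be illustrated by a small sketch that estimates such probabilities from per-prober port sequences; the input layout is an assumption.

```python
from collections import defaultdict

def port_transition_probabilities(sequences):
    """Estimate P(next_port | current_port) from a list of port sequences,
    one sequence per prober, e.g. [[22, 80, 443], [23, 2323, 23], ...]."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            counts[src][dst] += 1
    probs = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        probs[src] = {dst: n / total for dst, n in dsts.items()}
    return probs

# Example: two probers switching between common service ports.
print(port_transition_probabilities([[22, 80, 443, 80], [23, 2323, 23]]))
```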
2408.00496
SegStitch: Multidimensional Transformer for Robust and Efficient Medical Imaging Segmentation
Medical imaging segmentation plays a significant role in the automatic recognition and analysis of lesions. State-of-the-art methods, particularly those utilizing transformers, have been prominently adopted in 3D semantic segmentation due to their superior performance in scalability and generalizability. However, plain vision transformers encounter challenges due to their neglect of local features and their high computational complexity. To address these challenges, we introduce three key contributions: Firstly, we proposed SegStitch, an innovative architecture that integrates transformers with denoising ODE blocks. Instead of taking whole 3D volumes as inputs, we adapt axial patches and customize patch-wise queries to ensure semantic consistency. Additionally, we conducted extensive experiments on the BTCV and ACDC datasets, achieving improvements up to 11.48% and 6.71% respectively in mDSC, compared to state-of-the-art methods. Lastly, our proposed method demonstrates outstanding efficiency, reducing the number of parameters by 36.7% and the number of FLOPS by 10.7% compared to UNETR. This advancement holds promising potential for adapting our method to real-world clinical practice. The code will be available at https://github.com/goblin327/SegStitch
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
477,862
1702.01444
Printed Arabic Text Recognition using Linear and Nonlinear Regression
Arabic is one of the most widely spoken languages in the world. Hundreds of millions of people in many countries around the world speak Arabic as their native language. However, due to the complexity of the Arabic language, recognition of printed and handwritten Arabic text remained largely untouched for a very long time compared with English and Chinese. Although a significant amount of research on recognizing printed and handwritten Arabic text has been done in the last few years, it is still an open research field due to the cursive nature of Arabic script. This paper proposes an automatic printed Arabic text recognition technique based on linear and ellipse regression techniques. After collecting all possible forms of each character, a unique code is generated to represent each character form. Each code contains a sequence of lines and ellipses. To recognize fonts, a unique list of codes is identified to be used as a fingerprint of each font. The proposed technique has been evaluated using over 14000 different Arabic words with different fonts, and experimental results show that the average recognition rate of the proposed technique is 86%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
67,815
2111.15635
Improving random walk rankings with feature selection and imputation
The Science4cast Competition consists of predicting new links in a semantic network, with each node representing a concept and each edge representing a link proposed by a paper relating two concepts. This network contains information from 1994-2017, with a discretization of days (which represents the publication date of the underlying papers). Team Hash Brown's final submission, ee5a, achieved a score of 0.92738 on the test set. Our team's score ranks second place, 0.01 below the winner's score. This paper details our model, its intuition, and the performance of its variations in the test set.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
268,994
1802.03074
Mining Open Government Data Used in Scientific Research
In the following paper, we describe results from mining citations, mentions, and links to open government data (OGD) in peer-reviewed literature. We inductively develop a method for categorizing how OGD are used by different research communities, and provide descriptive statistics about the publication years, publication outlets, and OGD sources. Our results demonstrate that, 1. The use of OGD in research is steadily increasing from 2009 to 2016; 2. Researchers use OGD from 96 different open government data portals, with data.gov.uk and data.gov being the most frequent sources; and, 3.Contrary to previous findings, we provide evidence suggesting that OGD from developing nations, notably India and Kenya, are being frequently used to fuel scientific discoveries. The findings of this paper contribute to ongoing research agendas aimed at tracking the impact of open government data initiatives, and provides an initial description of how open government data are valuable to diverse scientific research communities.
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
false
89,891
2307.03445
A GPU-accelerated simulator for the DEM analysis of granular systems composed of clump-shaped elements
We discuss the use of the Discrete Element Method (DEM) to simulate the dynamics of granular systems made up of elements with nontrivial geometries. The DEM simulator is GPU accelerated and can handle elements whose shape is defined as the union with overlap of diverse sets of spheres with user-specified radii. The simulator can also handle complex materials since each sphere in an element can have its own Young's modulus $E$, Poisson ratio $\nu$, friction coefficient $\mu$, and coefficient of restitution CoR. To demonstrate the simulator, we produce a "digital simulant" (DS), a replica of the GRC-1 lunar simulant. The DS follows an element size distribution similar but not identical to that of GRC-1. We validate the predictive attributes of the simulator via several numerical experiments: repose angle, cone penetration, drawbar pull, and rover incline-climbing tests. Subsequently, we carry out a sensitivity analysis to gauge how the slope vs. slip curves change when the element shape, element size, and friction coefficient change. The paper concludes with a VIPER rover simulation that confirms a recently proposed granular scaling law. The simulation involves more than 11 million elements composed of more than 34 million spheres of different radii. The simulator works in the Chrono framework and utilizes two GPUs concurrently. The GPU code for the simulator and all numerical experiments discussed are open-source and available on GitHub for reproducibility studies and unfettered use and distribution.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
378,047
2111.06806
First-Order Rewritability and Complexity of Two-Dimensional Temporal Ontology-Mediated Queries
Aiming at ontology-based data access to temporal data, we design two-dimensional temporal ontology and query languages by combining logics from the (extended) DL-Lite family with linear temporal logic LTL over discrete time (Z,<). Our main concern is first-order rewritability of ontology-mediated queries (OMQs) that consist of a 2D ontology and a positive temporal instance query. Our target languages for FO-rewritings are two-sorted FO(<) - first-order logic with sorts for time instants ordered by the built-in precedence relation < and for the domain of individuals - its extension FOE with the standard congruence predicates t \equiv 0 mod n, for any fixed n > 1, and FO(RPR) that admits relational primitive recursion. In terms of circuit complexity, FOE- and FO(RPR)-rewritability guarantee answering OMQs in uniform AC0 and NC1, respectively. We proceed in three steps. First, we define a hierarchy of 2D DL-Lite/LTL ontology languages and investigate the FO-rewritability of OMQs with atomic queries by constructing projections onto 1D LTL OMQs and employing recent results on the FO-rewritability of propositional LTL OMQs. As the projections involve deciding consistency of ontologies and data, we also consider the consistency problem for our languages. While the undecidability of consistency for 2D ontology languages with expressive Boolean role inclusions might be expected, we also show that, rather surprisingly, the restriction to Krom and Horn role inclusions leads to decidability (and ExpSpace-completeness), even if one admits full Booleans on concepts. As a final step, we lift some of the rewritability results for atomic OMQs to OMQs with expressive positive temporal instance queries. The lifting results are based on an in-depth study of the canonical models and only concern Horn ontologies.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
266,181
2409.06683
Alignist: CAD-Informed Orientation Distribution Estimation by Fusing Shape and Correspondences
Object pose distribution estimation is crucial in robotics for better path planning and handling of symmetric objects. Recent distribution estimation approaches employ contrastive learning-based approaches by maximizing the likelihood of a single pose estimate in the absence of a CAD model. We propose a pose distribution estimation method leveraging symmetry respecting correspondence distributions and shape information obtained using a CAD model. Contrastive learning-based approaches require an exhaustive amount of training images from different viewpoints to learn the distribution properly, which is not possible in realistic scenarios. Instead, we propose a pipeline that can leverage correspondence distributions and shape information from the CAD model, which are later used to learn pose distributions. Besides, having access to a pose distribution based on correspondences before learning pose distributions conditioned on images can help formulate the loss between distributions. The prior knowledge of distribution also helps the network to focus on getting sharper modes. With the CAD prior, our approach converges much faster and learns the distribution better by focusing on learning sharper distribution near all the valid modes, unlike contrastive approaches, which focus on a single mode at a time. We achieve benchmark results on the SYMSOL-I and T-Less datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
487,219
2206.09699
FoR$^2$M: Recognition and Repair of Foldings in Mesh Surfaces. Application to 3D Object Degradation
Triangular meshes are the most popular representations of 3D objects, but many mesh surfaces contain topological singularities that represent a challenge for displaying or further processing them properly. One such singularity is the self-intersections that may be present in mesh surfaces that have been created by a scanning procedure or by a deformation transformation, such as off-setting. Mesh foldings comprise a special case of mesh surface self-intersections, where the faces of the 3D model intersect and become reversed, with respect to the unfolded part of the mesh surface. A novel method for the recognition and repair of mesh surface foldings is presented, which exploits the structural characteristics of the foldings in order to efficiently detect the folded regions. Following detection, the foldings are removed and any gaps so created are filled based on the geometry of the 3D model. The proposed method is directly applicable to simple mesh surface representations while it does not perform any embedding of the 3D mesh (i.e. voxelization, projection). Target of the proposed method is to facilitate mesh degradation procedures in a fashion that retains the original structure, given the operator, in the most efficient manner.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
303,665
1607.00715
Path planning with Inventory-driven Jump-Point-Search
In many navigational domains the traversability of cells is conditioned on the path taken. This is often the case in video-games, in which a character may need to acquire a certain object (i.e., a key or a flying suit) to be able to traverse specific locations (e.g., doors or high walls). In order for non-player characters to handle such scenarios we present invJPS, an "inventory-driven" pathfinding approach based on the highly successful grid-based Jump-Point-Search (JPS) algorithm. We show, formally and experimentally, that the invJPS preserves JPS's optimality guarantees and its symmetry breaking advantages in inventory-based variants of game maps.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
58,128
2402.00318
Analog-digital Scheduling for Federated Learning: A Communication-Efficient Approach
Over-the-air (OTA) computation has recently emerged as a communication-efficient Federated Learning (FL) paradigm to train machine learning models over wireless networks. However, its performance is limited by the device with the worst SNR, resulting in fast yet noisy updates. On the other hand, allocating orthogonal resource blocks (RB) to individual devices via digital channels mitigates the noise problem, at the cost of increased communication latency. In this paper, we address this discrepancy and present ADFL, a novel Analog-Digital FL scheme: in each round, the parameter server (PS) schedules each device to either upload its gradient via the analog OTA scheme or transmit its quantized gradient over an orthogonal RB using the ``digital" scheme. Focusing on a single FL round, we cast the optimal scheduling problem as the minimization of the mean squared error (MSE) on the estimated global gradient at the PS, subject to a delay constraint, yielding the optimal device scheduling configuration and quantization bits for the digital devices. Our simulation results show that ADFL, by scheduling most of the devices in the OTA scheme while also occasionally employing the digital scheme for a few devices, consistently outperforms OTA-only and digital-only schemes, in both i.i.d. and non-i.i.d. settings.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
425,559
1212.6325
Existence of Oscillations in Cyclic Gene Regulatory Networks with Time Delay
This paper is concerned with conditions for the existence of oscillations in gene regulatory networks with negative cyclic feedback, where time delays in transcription, translation and translocation process are explicitly considered. The primary goal of this paper is to propose systematic analysis tools that are useful for a broad class of cyclic gene regulatory networks, and to provide novel biological insights. To this end, we adopt a simplified model that is suitable for capturing the essence of a large class of gene regulatory networks. It is first shown that local instability of the unique equilibrium state results in oscillations based on a Poincare-Bendixson type theorem. Then, a graphical existence condition, which is equivalent to the local instability of a unique equilibrium, is derived. Based on the graphical condition, the existence condition is analytically presented in terms of biochemical parameters. This allows us to find the dimensionless parameters that primarily affect the existence of oscillations, and to provide biological insights. The analytic conditions and biological insights are illustrated with two existing biochemical networks, Repressilator and the Hes7 gene regulatory networks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
20,639
2101.05171
High-resolution agent-based modeling of COVID-19 spreading in a small town
Amid the ongoing COVID-19 pandemic, public health authorities and the general population are striving to achieve a balance between safety and normalcy. Ever changing conditions call for the development of theory and simulation tools to finely describe multiple strata of society while supporting the evaluation of "what-if" scenarios. Particularly important is to assess the effectiveness of potential testing approaches and vaccination strategies. Here, an agent-based modeling platform is proposed to simulate the spreading of COVID-19 in small towns and cities, with a single-individual resolution. The platform is validated on real data from New Rochelle, NY -- one of the first outbreaks registered in the United States. Supported by expert knowledge and informed by reported data, the model incorporates detailed elements of the spreading within a statistically realistic population. Along with pertinent functionality such as testing, treatment, and vaccination options, the model accounts for the burden of other illnesses with symptoms similar to COVID-19. Unique to the model is the possibility to explore different testing approaches -- in hospitals or drive-through facilities -- and vaccination strategies that could prioritize vulnerable groups. Decision making by public authorities could benefit from the model, for its fine-grain resolution, open-source nature, and wide range of features.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
215,351
1907.01308
A Direct Construction of Optimal ZCCS With Maximum Column Sequence PMEPR Two for MC-CDMA System
Multicarrier code-division multiple-access (MC-CDMA) combines an orthogonal frequency division multiplexing (OFDM) modulation and a code-division multiple-access (CDMA) scheme to exploit the benefits of both technologies. The high peak-to-mean envelope power ratio (PMEPR) is a considerable problem in MC-CDMA systems. However, the problem can be addressed by utilizing complete complementary codes (CCCs) in the MC-CDMA system. But the set size upper bound of a CCC does not allow the system to support a large number of users for a given number of subcarriers in the system. In a CCC and Z-complementary code set (ZCCS) based asynchronous MC-CDMA system, the PMEPR is determined by the column sequence PMEPR of the codes. In order to support a large number of users with low column sequence PMEPR, in this paper, we propose a new optimal ZCCS with a larger set size. The code is constructed using a Boolean function approach, i.e., by a direct construction method. The number of constituent sequences in the ZCCS is the same as the number of subcarriers in MC-CDMA. So, a large ZCCS for a large number of users in MC-CDMA can be constructed through rapid hardware generation. The proposed ZCCS has a maximum column sequence PMEPR of 2 and it achieves the theoretical upper bound of optimality. Our proposed construction can also generate an inter-group complementary (IGC) code set for MC-CDMA with the same PMEPR. This work also establishes a link from ZCCS and IGC code sets to higher-order ($\geq 2$) Reed-Muller (RM) codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
137,290
1906.01131
Hybrid Machine Learning Forecasts for the FIFA Women's World Cup 2019
In this work, we combine two different ranking methods together with several other predictors in a joint random forest approach for the scores of soccer matches. The first ranking method is based on the bookmaker consensus, the second ranking method estimates adequate ability parameters that reflect the current strength of the teams best. The proposed combined approach is then applied to the data from the two previous FIFA Women's World Cups 2011 and 2015. Finally, based on the resulting estimates, the FIFA Women's World Cup 2019 is simulated repeatedly and winning probabilities are obtained for all teams. The model clearly favors the defending champion USA before the host France.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
133,604
2307.04470
Test-Time Adaptation for Nighttime Color-Thermal Semantic Segmentation
The ability to understand scenes in adverse visual conditions, e.g., nighttime, has sparked active research on RGB-Thermal (RGB-T) semantic segmentation. However, it is essentially hampered by two critical problems: 1) the day-night gap of RGB images is larger than that of thermal images, and 2) the class-wise performance of RGB images at night is not consistently higher or lower than that of thermal images. We propose the first test-time adaptation (TTA) framework, dubbed Night-TTA, to address these problems for nighttime RGB-T semantic segmentation without access to the source (daytime) data during adaptation. Our method enjoys three key technical parts. Firstly, as one modality (e.g., RGB) suffers from a larger domain gap than that of the other (e.g., thermal), Imaging Heterogeneity Refinement (IHR) employs an interaction branch on the basis of the RGB and thermal branches to prevent cross-modal discrepancy and performance degradation. Then, Class Aware Refinement (CAR) is introduced to obtain reliable ensemble logits based on pixel-level distribution aggregation of the three branches. In addition, we also design a specific learning scheme for our TTA framework, which enables the ensemble logits and the three student logits to collaboratively learn to improve the quality of predictions during the testing phase of our Night-TTA. Extensive experiments show that our method achieves state-of-the-art (SoTA) performance with a 13.07% boost in mIoU.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
378,415
2105.08822
Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction
Automatic pain recognition is paramount for medical diagnosis and treatment. The existing works fall into three categories: assessing facial appearance changes, exploiting physiological cues, or fusing them in a multi-modal manner. However, (1) appearance changes are easily affected by subjective factors which impedes objective pain recognition. Besides, the appearance-based approaches ignore long-range spatial-temporal dependencies that are important for modeling expressions over time; (2) the physiological cues are obtained by attaching sensors on human body, which is inconvenient and uncomfortable. In this paper, we present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition. The framework is able to capture both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended physiological cues (remote photoplethysmography, rPPG) that are recovered from videos in the auxiliary task. This framework is dubbed rPPG-enriched Spatio-Temporal Attention Network (rSTAN) and allows us to establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases. It demonstrates that rPPG predictions can be used as an auxiliary task to facilitate non-contact automatic pain recognition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
235,874
1708.03698
Learning Rotation for Kernel Correlation Filter
Kernel Correlation Filters (KCF) have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, they suffer from problems that affect their performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of the HOG feature to guesstimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OBT50 dataset videos with minimal additional computation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
78,804
2312.01919
COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction
The autonomous driving community has shown significant interest in 3D occupancy prediction, driven by its exceptional geometric perception and general object recognition capabilities. To achieve this, current works try to construct a Tri-Perspective View (TPV) or Occupancy (OCC) representation extending from the Bird-Eye-View perception. However, compressed views like TPV representation lose 3D geometry information while raw and sparse OCC representation requires heavy but redundant computational costs. To address the above limitations, we propose Compact Occupancy TRansformer (COTR), with a geometry-aware occupancy encoder and a semantic-aware group decoder to reconstruct a compact 3D OCC representation. The occupancy encoder first generates a compact geometrical OCC feature through efficient explicit-implicit view transformation. Then, the occupancy decoder further enhances the semantic discriminability of the compact OCC representation by a coarse-to-fine semantic grouping strategy. Empirical experiments show that there are evident performance gains across multiple baselines, e.g., COTR outperforms baselines with a relative improvement of 8%-15%, demonstrating the superiority of our method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
412,631
2206.10160
Predicting Parking Lot Availability by Graph-to-Sequence Model: A Case Study with SmartSantander
Nowadays, so as to improve services and urban areas livability, multiple smart city initiatives are being carried out throughout the world. SmartSantander is a smart city project in Santander, Spain, which has relied on wireless sensor network technologies to deploy heterogeneous sensors within the city to measure multiple parameters, including outdoor parking information. In this paper, we study the prediction of parking lot availability using historical data from more than 300 outdoor parking sensors with SmartSantander. We design a graph-to-sequence model to capture the periodical fluctuation and geographical proximity of parking lots. For developing and evaluating our model, we use a 3-year dataset of parking lot availability in the city of Santander. Our model achieves a high accuracy compared with existing sequence-to-sequence models, which is accurate enough to provide a parking information service in the city. We apply our model to a smartphone application to be widely used by citizens and tourists.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
303,821
2101.05313
Whispered and Lombard Neural Speech Synthesis
It is desirable for a text-to-speech system to take into account the environment where synthetic speech is presented, and provide appropriate context-dependent output to the user. In this paper, we present and compare various approaches for generating different speaking styles, namely, normal, Lombard, and whisper speech, using only limited data. The following systems are proposed and assessed: 1) Pre-training and fine-tuning a model for each style. 2) Lombard and whisper speech conversion through a signal processing based approach. 3) Multi-style generation using a single model based on a speaker verification model. Our mean opinion score and AB preference listening tests show that 1) we can generate high quality speech through the pre-training/fine-tuning approach for all speaking styles. 2) Although our speaker verification (SV) model is not explicitly trained to discriminate different speaking styles, and no Lombard and whisper voice is used for pre-training this system, the SV model can be used as a style encoder for generating different style embeddings as input for the Tacotron system. We also show that the resulting synthetic Lombard speech has a significant positive impact on intelligibility gain.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
215,388
2010.09895
Multi-Window Data Augmentation Approach for Speech Emotion Recognition
We present a Multi-Window Data Augmentation (MWA-SER) approach for speech emotion recognition. MWA-SER is a unimodal approach that focuses on two key concepts: designing the speech augmentation method and building the deep learning model to recognize the underlying emotion of an audio signal. Our proposed multi-window augmentation approach generates additional data samples from the speech signal by employing multiple window sizes in the audio feature extraction process. We show that our augmentation method, combined with a deep learning model, improves speech emotion recognition performance. We evaluate the performance of our approach on three benchmark datasets: IEMOCAP, SAVEE, and RAVDESS. We show that the multi-window model improves the SER performance and outperforms a single-window model. Finding the best window size is an essential step in audio feature extraction. We perform extensive experimental evaluations to find the best window choice and explore the windowing effect for SER analysis.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
201,693
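A rough, numpy-only sketch of the multi-window idea described in the abstract above: the same waveform is framed with several window sizes, each yielding its own set of spectral frames as an extra training sample. The window sizes, hop, and log-magnitude features are assumptions, not the authors' configuration.

```python
import numpy as np

def framed_log_spectra(signal: np.ndarray, win_size: int, hop: int) -> np.ndarray:
    """Frame a 1-D signal (assumed longer than win_size) and return
    log-magnitude FFT features, one row per frame."""
    n_frames = 1 + max(0, (len(signal) - win_size) // hop)
    frames = np.stack([signal[i * hop: i * hop + win_size] * np.hanning(win_size)
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def multi_window_samples(signal, win_sizes=(256, 512, 1024), hop=128):
    """One augmented feature sample per window size, as in a multi-window scheme."""
    return [framed_log_spectra(signal, w, hop) for w in win_sizes]

# Example on a synthetic 1-second, 16 kHz tone.
sr = 16000
t = np.arange(sr) / sr
samples = multi_window_samples(np.sin(2 * np.pi * 440 * t))
print([s.shape for s in samples])
```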
2410.08226
EarthquakeNPP: Benchmark Datasets for Earthquake Forecasting with Neural Point Processes
Classical point process models, such as the epidemic-type aftershock sequence (ETAS) model, have been widely used for forecasting the event times and locations of earthquakes for decades. Recent advances have led to Neural Point Processes (NPPs), which promise greater flexibility and improvements over classical models. However, the currently-used benchmark dataset for NPPs does not represent an up-to-date challenge in the seismological community since it lacks a key earthquake sequence from the region and improperly splits training and testing data. Furthermore, initial earthquake forecast benchmarking lacks a comparison to state-of-the-art earthquake forecasting models typically used by the seismological community. To address these gaps, we introduce EarthquakeNPP: a collection of benchmark datasets to facilitate testing of NPPs on earthquake data, accompanied by a credible implementation of the ETAS model. The datasets cover a range of small to large target regions within California, dating from 1971 to 2021, and include different methodologies for dataset generation. In a benchmarking experiment, we compare three spatio-temporal NPPs against ETAS and find that none outperform ETAS in either spatial or temporal log-likelihood. These results indicate that current NPP implementations are not yet suitable for practical earthquake forecasting. However, EarthquakeNPP will serve as a platform for collaboration between the seismology and machine learning communities with the goal of improving earthquake predictability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
497,016
2110.06419
Federated Natural Language Generation for Personalized Dialogue System
Neural conversational models have long suffered from the problem of inconsistency and lacking coherent personality. To address the issue, persona-based models capturing individual characteristics have been proposed, but they still face the dilemma of model adaption and data privacy. To break this dilemma, we propose a novel Federated Natural Language Generation (FedNLG) framework, which learns personalized representations from various datasets on distributed devices, and thus implements the personalized dialogue system efficiently and safely. FedNLG first pre-trains the parameters of a standard neural conversational model over a large dialogue corpus, and then fine-tunes the model parameters and persona embeddings on specific datasets, in a federated manner. Thus, the model could simultaneously learn the persona embeddings in local clients and learn shared model parameters by federated aggregation, which achieves an accuracy-privacy balance. By conducting extensive experiments, we demonstrate the effectiveness of our model by pre-training the model over the Cornell Movie-Dialogs Corpus and fine-tuning the model over two TV series datasets.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
260,611
2105.13619
CRT-Net: A Generalized and Scalable Framework for the Computer-Aided Diagnosis of Electrocardiogram Signals
Electrocardiogram (ECG) signals play critical roles in the clinical screening and diagnosis of many types of cardiovascular diseases. Although deep neural networks have greatly facilitated computer-aided diagnosis (CAD) in many clinical tasks, the variability and complexity of ECG in the clinic still pose significant challenges in both diagnostic performance and clinical applications. In this paper, we develop a robust and scalable framework for the clinical recognition of ECG. Considering the fact that hospitals generally record ECG signals in the form of graphic waves of 2-D images, we first extract the graphic waves of 12-lead images into numerical 1-D ECG signals by a proposed bi-directional connectivity method. Subsequently, a novel deep neural network, namely CRT-Net, is designed for the fine-grained and comprehensive representation and recognition of 1-D ECG signals. The CRT-Net can well explore waveform features, morphological characteristics and time domain features of ECG by embedding a convolutional neural network (CNN), a recurrent neural network (RNN), and a transformer module in a scalable deep model, which is especially suitable in clinical scenarios with different lengths of ECG signals captured from different devices. The proposed framework is first evaluated on two widely investigated public repositories, demonstrating the superior performance of ECG recognition in comparison with the state of the art. Moreover, we validate the effectiveness of our proposed bi-directional connectivity and CRT-Net on clinical ECG images collected from the local hospital, including 258 patients with chronic kidney disease (CKD), 351 patients with Type-2 Diabetes (T2DM), and around 300 patients in the control group. In the experiments, our methods can achieve excellent performance in the recognition of these two types of disease.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
237,357
2201.05918
Recursive Least Squares Advantage Actor-Critic Algorithms
As an important algorithm in deep reinforcement learning, advantage actor critic (A2C) has been widely successful in both discrete and continuous control tasks with raw pixel inputs, but its sample efficiency still needs improvement. In traditional reinforcement learning, actor-critic algorithms generally use the recursive least squares (RLS) technology to update the parameters of linear function approximators for accelerating their convergence speed. However, A2C algorithms seldom use this technology to train deep neural networks (DNNs) for improving their sample efficiency. In this paper, we propose two novel RLS-based A2C algorithms and investigate their performance. Both proposed algorithms, called RLSSA2C and RLSNA2C, use the RLS method to train the critic network and the hidden layers of the actor network. The main difference between them is at the policy learning step. RLSSA2C uses an ordinary first-order gradient descent algorithm and the standard policy gradient to learn the policy parameter. RLSNA2C uses the Kronecker-factored approximation, the RLS method and the natural policy gradient to learn the compatible parameter and the policy parameter. In addition, we analyze the complexity and convergence of both algorithms, and present three tricks for further improving their convergence speed. Finally, we demonstrate the effectiveness of both algorithms on 40 games in the Atari 2600 environment and 11 tasks in the MuJoCo environment. From the experimental results, it is shown that both of our algorithms have better sample efficiency than the vanilla A2C on most games or tasks, and have higher computational efficiency than the other two state-of-the-art algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
275,547
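The recursive least squares (RLS) update mentioned in the abstract above is a classical recursion for linear function approximators; the sketch below shows the textbook form with a forgetting factor, not the paper's RLSSA2C/RLSNA2C training procedure.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS for y ~ theta @ x with forgetting factor lam."""
    def __init__(self, dim: int, lam: float = 0.99, delta: float = 1.0):
        self.theta = np.zeros(dim)       # parameter estimate
        self.P = np.eye(dim) / delta     # inverse correlation matrix
        self.lam = lam

    def update(self, x: np.ndarray, y: float) -> float:
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)  # Kalman-style gain vector
        error = y - self.theta @ x       # prediction error before the update
        self.theta += gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return error

# Example: recover a fixed linear target from noisy samples.
rls = RecursiveLeastSquares(dim=3)
rng = np.random.default_rng(0)
true_theta = np.array([0.5, -1.0, 2.0])
for _ in range(200):
    x = rng.normal(size=3)
    rls.update(x, true_theta @ x + 0.01 * rng.normal())
print(np.round(rls.theta, 2))
```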
1809.04163
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Semantic specialization is the process of fine-tuning pre-trained distributional word vectors using external lexical knowledge (e.g., WordNet) to accentuate a particular semantic relation in the specialized vector space. While post-processing specialization methods are applicable to arbitrary distributional vectors, they are limited to updating only the vectors of words occurring in external lexicons (i.e., seen words), leaving the vectors of all other words unchanged. We propose a novel approach to specializing the full distributional vocabulary. Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space. We exploit words seen in the resources as training examples for learning a global specialization function. This function is learned by combining a standard L2-distance loss with an adversarial loss: the adversarial component produces more realistic output vectors. We show the effectiveness and robustness of the proposed method across three languages and on three tasks: word similarity, dialog state tracking, and lexical simplification. We report consistent improvements over distributional word vectors and vectors specialized by other state-of-the-art specialization frameworks. Finally, we also propose a cross-lingual transfer method for zero-shot specialization which successfully specializes a full target distributional space without any lexical knowledge in the target language and without any bilingual data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
107,485
2203.15588
Deep Multi-modal Fusion of Image and Non-image Data in Disease Diagnosis and Prognosis: A Review
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on the various images (e.g., radiological, pathological, and camera images) and non-image data (e.g., clinical data and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multi-modal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multi-modal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include the (1) overview of current multi-modal learning workflows, (2) summarization of multi-modal fusion methods, (3) discussion of the performance, (4) applications in disease diagnosis and prognosis, and (5) challenges and future directions.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
288,464
2402.08298
Time to Stop and Think: What kind of research do we want to do?
Experimentation is an intrinsic part of research in artificial intelligence since it allows for collecting quantitative observations, validating hypotheses, and providing evidence for their reformulation. For that reason, experimentation must be coherent with the purposes of the research, properly addressing the relevant questions in each case. Unfortunately, the literature is full of works whose experimentation is neither rigorous nor convincing, oftentimes designed to support prior beliefs rather than answering the relevant research questions. In this paper, we focus on the field of metaheuristic optimization, since it is our main field of work, and it is where we have observed the misconduct that has motivated this letter. Even if we limit the focus of this manuscript to the experimental part of the research, our main goal is to sow the seed of sincere critical assessment of our work, sparking a reflection process both at the individual and the community level. Such a reflection process is too complex and extensive to be tackled as a whole. Therefore, to bring our feet to the ground, we will include in this document our reflections about the role of experimentation in our work, discussing topics such as the use of benchmark instances vs instance generators, or the statistical assessment of empirical results. That is, all the statements included in this document are personal views and opinions, which can be shared by others or not. Certainly, having different points of view is the basis to establish a good discussion process.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
429,051
2005.01234
One-Shot Image Classification by Learning to Restore Prototypes
One-shot image classification aims to train image classifiers over the dataset with only one image per category. It is challenging for modern deep neural networks that typically require hundreds or thousands of images per class. In this paper, we adopt metric learning for this problem, which has been applied for few- and many-shot image classification by comparing the distance between the test image and the center of each class in the feature space. However, for one-shot learning, the existing metric learning approaches would suffer poor performance because the single training image may not be representative of the class. For example, if the image is far away from the class center in the feature space, the metric-learning based algorithms are unlikely to make correct predictions for the test images because the decision boundary is shifted by this noisy image. To address this issue, we propose a simple yet effective regression model, denoted by RestoreNet, which learns a class agnostic transformation on the image feature to move the image closer to the class center in the feature space. Experiments demonstrate that RestoreNet obtains superior performance over the state-of-the-art methods on a broad range of datasets. Moreover, RestoreNet can be easily combined with other methods to achieve further improvement.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
175,528
2209.00783
TypoSwype: An Imaging Approach to Detect Typo-Squatting
Typo-squatting domains are a common cyber-attack technique. It involves utilising domain names that exploit possible typographical errors of commonly visited domains, to carry out malicious activities such as phishing, malware installation, etc. Current approaches typically revolve around string comparison algorithms like the Damerau-Levenshtein Distance (DLD) algorithm. Such techniques do not take into account keyboard distance, which researchers have found to correlate strongly with typical typographical errors and are trying to account for. In this paper, we present the TypoSwype framework which converts strings to images that take keyboard location into account innately. We also show how modern state of the art image recognition techniques involving Convolutional Neural Networks, trained via either Triplet Loss or NT-Xent Loss, can be applied to learn a mapping to a lower dimensional space where distances correspond to image, and equivalently, textual similarity. Finally, we also demonstrate our method's ability to improve typo-squatting detection over the widely used DLD algorithm, while maintaining the classification accuracy as to which domain the input domain was attempting to typo-squat.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
315,682
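For reference, the Damerau-Levenshtein-style string distance that the abstract above contrasts against can be sketched as follows; this is the common restricted (optimal string alignment) variant, which is an assumption about the exact variant meant.

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment (restricted Damerau-Levenshtein) distance:
    insertions, deletions, substitutions, and adjacent transpositions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# Example: a classic typo-squat is one adjacent transposition away.
print(osa_distance("google.com", "goolge.com"))  # -> 1
```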
2305.06707
A data-driven rutting depth short-time prediction model with metaheuristic optimization for asphalt pavements based on RIOHTrack
Rutting of asphalt pavements is a crucial design criterion in various pavement design guides. A good road transportation base can provide security for the transportation of oil and gas in road transportation. This study attempts to develop a robust artificial intelligence model to estimate different asphalt pavements' rutting depth clips, temperature, and load axes as primary characteristics. The experiment data were obtained from 19 asphalt pavements with different crude oil sources on a 2.038 km long full-scale field accelerated pavement test track (RIOHTrack, Road Track Institute) in Tongzhou, Beijing. In addition, this paper also proposes to build complex networks with different pavement rutting depths through complex network methods and the Louvain algorithm for community detection. The most critical structural elements can be selected from different asphalt pavement rutting data, and similar structural elements can be found. An extreme learning machine algorithm with residual correction (RELM) is designed and optimized using an independent adaptive particle swarm algorithm. The experimental results of the proposed method are compared with several classical machine learning algorithms, with predictions of Average Root Mean Squared Error, Average Mean Absolute Error, and Average Mean Absolute Percentage Error for 19 asphalt pavements reaching 1.742, 1.363, and 1.94\% respectively. The experiments demonstrate that the RELM algorithm has an advantage over classical machine learning methods in dealing with non-linear problems in road engineering. Notably, the method ensures the adaptation of the simulated environment to different levels of abstraction through the cognitive analysis of the production environment parameters.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
363,639
2411.03360
Pedestrian Volume Prediction Using a Diffusion Convolutional Gated Recurrent Unit Model
Effective models for analysing and predicting pedestrian flow are important to ensure the safety of both pedestrians and other road users. These tools also play a key role in optimising infrastructure design and geometry and supporting the economic utility of interconnected communities. The implementation of city-wide automatic pedestrian counting systems provides researchers with invaluable data, enabling the development and training of deep learning applications that offer better insights into traffic and crowd flows. Benefiting from real-world data provided by the City of Melbourne pedestrian counting system, this study presents a pedestrian flow prediction model, as an extension of the Diffusion Convolutional Gated Recurrent Unit (DCGRU) with dynamic time warping, named DCGRU-DTW. This model captures the spatial dependencies of pedestrian flow through the diffusion process and the temporal dependency captured by the Gated Recurrent Unit (GRU). Through extensive numerical experiments, we demonstrate that the proposed model outperforms the classic vector autoregressive model and the original DCGRU across multiple model accuracy metrics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
505,876
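The dynamic time warping component named in the abstract above is a classical algorithm; below is a minimal sketch of DTW between two 1-D series with absolute difference as the local cost. How DTW is wired into DCGRU-DTW is not reproduced here.

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic O(len(x)*len(y)) dynamic time warping distance between
    two 1-D sequences, using absolute difference as the local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Example: two similar pedestrian-count profiles shifted slightly in time.
a = np.array([10, 12, 30, 55, 40, 20], dtype=float)
b = np.array([10, 11, 13, 31, 54, 41, 19], dtype=float)
print(dtw_distance(a, b))
```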
1804.05526
LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics
Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction it was first visited, even in the presence of extreme variations in appearance. This capability is especially apparent during driving: a human driver can recognize where they are when travelling in the reverse direction along a route for the first time, without having to turn back and look. The difficulty of this problem exceeds any addressed in past appearance- and viewpoint-invariant visual place recognition (VPR) research, in part because large parts of the scene are not commonly observable from opposite directions. Consequently, as shown in this paper, the precision-recall performance of current state-of-the-art viewpoint- and appearance-invariant VPR techniques is orders of magnitude below what would be usable in a closed-loop system. Current engineered solutions predominantly rely on panoramic camera or LIDAR sensing setups; an eminently suitable engineering solution but one that is clearly very different to how humans navigate, which also has implications for how naturally humans could interact and communicate with the navigation system. In this paper we develop a suite of novel semantic- and appearance-based techniques to enable for the first time high performance place recognition in this challenging scenario. We first propose a novel Local Semantic Tensor (LoST) descriptor of images using the convolutional feature maps from a state-of-the-art dense semantic segmentation network. Then, to verify the spatial semantic arrangement of the top matching candidates, we develop a novel approach for mining semantically-salient keypoint correspondences.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
95,099
1102.3242
Weak randomness and Kamae's theorem on normal numbers
A function from sequences to their subsequences is called a selection function. A selection function is called admissible (with respect to normal numbers) if, for all normal numbers, the subsequences obtained by the selection function are normal numbers. Kamae (1973) studied selection functions that do not depend on the sequences (they depend only on the coordinates) and gave a necessary and sufficient condition for their admissibility. In this paper we introduce a notion of weak randomness and study an algorithmic analogue of Kamae's theorem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,228
2102.02127
The Pitfall of More Powerful Autoencoders in Lidar-Based Navigation
The benefit of pretrained autoencoders for reinforcement learning, in comparison to training on raw observations, is already known [1]. In this paper, we address the generation of a compact and information-rich state representation. In particular, we train a variational autoencoder for 2D-lidar scans to use its latent state for reinforcement learning of navigation tasks. To achieve high reconstruction power of our autoencoding pipeline, we propose a preprocessing step, novel in the context of autoencoding 2D-lidar scans, that converts each scan into a local binary occupancy image. This has no additional requirements, neither self-localization nor robust mapping, and can therefore be applied in any setting and easily transferred from simulation to the real world. In a second stage, we show the usage of the compact state representation generated by our autoencoding pipeline in a simplistic navigation task and expose the pitfall of assuming that increased reconstruction power will always lead to improved performance. We implemented our approach in Python using TensorFlow. Our datasets are simulated with PyBullet as well as recorded using a Slamtec RPLIDAR A3. The experiments show the significantly improved reconstruction capabilities of our approach for 2D-lidar scans w.r.t. the state of the art. However, as we demonstrate in the experiments, the impact on reinforcement learning in lidar-based navigation tasks is not predictable when improving the latent state representation generated by an autoencoding pipeline. This is surprising and needs to be taken into account when optimizing a pretrained autoencoder for reinforcement learning tasks.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
218,333
2409.09601
A Survey of Foundation Models for Music Understanding
Music is essential in daily life, fulfilling emotional and entertainment needs and connecting us personally, socially, and culturally. A better understanding of music can enhance our emotions, cognitive skills, and cultural connections. The rapid advancement of artificial intelligence (AI) has introduced new ways to analyze music, aiming to replicate human understanding of music and provide related services. While traditional models focused on audio features and simple tasks, the recently developed large language models (LLMs) and foundation models (FMs), which excel in various fields by integrating semantic information and demonstrating strong reasoning abilities, can capture complex musical features and patterns, integrate music with language, and incorporate rich musical, emotional and psychological knowledge. They therefore have the potential to handle complex music understanding tasks from a semantic perspective, producing outputs closer to human perception. This work, to the best of our knowledge, is one of the early reviews of the intersection of AI techniques and music understanding. We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities. We also discussed their limitations and proposed possible future directions, offering insights for researchers in this field.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
488,389
2502.00711
VIKSER: Visual Knowledge-Driven Self-Reinforcing Reasoning Framework
Visual reasoning refers to the task of solving questions about visual information. Current visual reasoning methods typically employ pre-trained vision-language model (VLM) strategies or deep neural network approaches. However, existing efforts are constrained by limited reasoning interpretability and hindered by the phenomenon of underspecification in the question text. Additionally, the absence of fine-grained visual knowledge limits the precise understanding of subject behavior in visual reasoning tasks. To address these issues, we propose VIKSER (Visual Knowledge-Driven Self-Reinforcing Reasoning Framework). Specifically, VIKSER, trained using knowledge distilled from large language models, extracts fine-grained visual knowledge with the assistance of visual relationship detection techniques. Subsequently, VIKSER utilizes the fine-grained visual knowledge to paraphrase questions that suffer from underspecification. Additionally, we design a novel prompting method called Chain-of-Evidence (CoE), which leverages the power of ``evidence for reasoning'' to endow VIKSER with interpretable reasoning capabilities. Meanwhile, the integration of self-reflection technology empowers VIKSER with the ability to learn and improve from its mistakes. Experiments conducted on widely used datasets demonstrate that VIKSER achieves new state-of-the-art (SOTA) results on the relevant tasks.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
529,512
2401.17585
Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks
Current approaches to knowledge editing struggle to effectively propagate updates to interconnected facts. In this work, we delve into the barriers that hinder the appropriate propagation of updated knowledge within these models for accurate reasoning. To support our analysis, we introduce a novel reasoning-based benchmark -- ReCoE (Reasoning-based Counterfactual Editing dataset) -- which covers six common real-world reasoning schemes. We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit. We found that all model editing methods show notably low performance on this dataset, especially in certain reasoning schemes. Our analysis of the chain-of-thought generation of edited models further uncovers key reasons behind the inadequacy of existing knowledge editing methods from a reasoning standpoint, involving aspects of fact-wise editing, fact recall ability, and coherence in generation. We will make our benchmark publicly available.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
425,247
2410.17780
Fiber Activation by Bipolar Stimulation in Deep Brain Stimulation: A Patient Case Study
Deep Brain Stimulation (DBS) is a therapy widely used for treating the symptoms of neurological disorders. Electrical pulses are chronically delivered in DBS to a disease-specific brain target via a surgically implanted electrode. The stimulating contact configuration, stimulation polarity, as well as amplitude, frequency, and pulse width of the DBS pulse sequence are utilized to optimize the therapeutic effect. In this paper, the utility of therapy individualization by means of patient-specific mathematical modeling is investigated with respect to a specific case of a patient diagnosed with Essential Tremor (ET). Two computational models are compared in their ability to elucidate the impact of DBS stimulation on the dentato-rubrothalamic tract: (i) a conventional model of Volume of Tissue Activated (VTA) and (ii) a well-established neural fiber activation modeling framework known as OSS-DBS. The simulation results are compared with tremor measured in the patient under different DBS settings using a smartphone application. The findings of the study highlight that temporally static VTA models do not adequately describe the differences in the outcomes of bipolar stimulation settings with switched polarity, whereas neural fiber activation models hold potential in this regard. However, it is noted that neither of the investigated models fully accounts for the measured symptom pattern, particularly regarding a bilateral effect produced by unilateral stimulation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
501,612
1709.10142
Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack
In this paper, we show synchronization for a group of output passive agents that communicate with each other according to an underlying communication graph to achieve a common goal. We propose a distributed event-triggered control framework that will guarantee synchronization and considerably decrease the required communication load on the band-limited network. We define a general Byzantine attack on the event-triggered multi-agent network system and characterize its negative effects on synchronization. The Byzantine agents are capable of intelligently falsifying their data and manipulating the underlying communication graph by altering their respective control feedback weights. We introduce a decentralized detection framework and analyze its steady-state and transient performances. We propose a way of identifying individual Byzantine neighbors and a learning-based method of estimating the attack parameters. Lastly, we propose learning-based control approaches to mitigate the negative effects of the adversarial attack.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
81,730
2006.04103
An Autonomous Path Planning Method for Unmanned Aerial Vehicle based on A Tangent Intersection and Target Guidance Strategy
Unmanned aerial vehicle (UAV) path planning enables UAVs to avoid obstacles and reach the target efficiently. To generate high-quality paths without obstacle collision for UAVs, this paper proposes a novel autonomous path planning algorithm based on a tangent intersection and target guidance strategy (APPATT). Guided by a target, the elliptic tangent graph method is used to generate two sub-paths, one of which is selected based on heuristic rules when confronting an obstacle. The UAV flies along the selected sub-path and repeatedly adjusts its flight path in this way to avoid obstacles until the collision-free path extends to the target. Considering the UAV kinematic constraints, the cubic B-spline curve is employed to smooth the waypoints and obtain a feasible path. The experimental results show that, compared with A*, PRM, RRT and VFH, APPATT can generate the shortest collision-free path within 0.05 seconds for each instance under static environments. Moreover, compared with VFH and RRTRW, APPATT can generate satisfactory collision-free paths under uncertain environments in a nearly real-time manner. It is worth noting that APPATT has the capability of escaping from simple traps within a reasonable time.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
180,563
1602.04915
Gradient Descent Converges to Minimizers
We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
52,194
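A small numerical illustration of the claim in the gradient-descent record above: with random initialization, the iterates avoid the saddle point and settle at a minimizer. The objective f(x, y) = (x^2 - 1)^2 + y^2, the step size, and the initialization range are my own choices for the demonstration, not taken from the paper.

```python
import numpy as np

def grad_f(p):
    """Gradient of f(x, y) = (x^2 - 1)^2 + y^2: saddle at (0, 0), minimizers at (-1, 0) and (1, 0)."""
    x, y = p
    return np.array([4.0 * x * (x ** 2 - 1.0), 2.0 * y])

def gradient_descent(p0, step=0.05, iters=2000):
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p = p - step * grad_f(p)
    return p

rng = np.random.default_rng(0)
endpoints = [gradient_descent(rng.uniform(-2.0, 2.0, size=2)) for _ in range(1000)]
# With probability one over the random initialization, the iterates end up near one of the
# minimizers (+/-1, 0) rather than at the saddle point at the origin.
stuck_at_saddle = sum(np.linalg.norm(p) < 1e-3 for p in endpoints)
print(f"runs ending at the saddle: {stuck_at_saddle} / 1000")
```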
2409.15188
PALLM: Evaluating and Enhancing PALLiative Care Conversations with Large Language Models
Effective patient-provider communication is crucial in clinical care, directly impacting patient outcomes and quality of life. Traditional evaluation methods, such as human ratings, patient feedback, and provider self-assessments, are often limited by high costs and scalability issues. Although existing natural language processing (NLP) techniques show promise, they struggle with the nuances of clinical communication and require sensitive clinical data for training, reducing their effectiveness in real-world applications. Emerging large language models (LLMs) offer a new approach to assessing complex communication metrics, with the potential to advance the field through integration into passive sensing and just-in-time intervention systems. This study explores LLMs as evaluators of palliative care communication quality, leveraging their linguistic, in-context learning, and reasoning capabilities. Specifically, using simulated scripts crafted and labeled by healthcare professionals, we test proprietary models (e.g., GPT-4) and fine-tune open-source LLMs (e.g., LLaMA2) with a synthetic dataset generated by GPT-4 to evaluate clinical conversations, to identify key metrics such as `understanding' and `empathy'. Our findings demonstrated LLMs' superior performance in evaluating clinical communication, providing actionable feedback with reasoning, and demonstrating the feasibility and practical viability of developing in-house LLMs. This research highlights LLMs' potential to enhance patient-provider interactions and lays the groundwork for downstream steps in developing LLM-empowered clinical health systems.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
490,797
2212.09497
Target selection for Near-Earth Asteroids in-orbit sample collection missions
This work presents a mission concept for in-orbit particle collection for sampling and exploration missions towards Near-Earth asteroids. Ejecta is generated via a small kinetic impactor, and two possible collection strategies are investigated: collecting the particles along the anti-solar direction, exploiting the dynamical features of the L$_2$ Lagrangian point, or collecting them while the spacecraft orbits the asteroid and before they re-impact onto the asteroid surface. Combining the dynamics of the particles in the Circular Restricted Three-Body Problem perturbed by Solar Radiation Pressure with models for the ejecta generation, we identify possible target asteroids as a function of their physical properties, by evaluating the potential for particle collection.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
337,126
1907.07253
Fairness and Diversity in the Recommendation and Ranking of Participatory Media Content
Online participatory media platforms that enable one-to-many communication among users see a significant amount of user-generated content and consequently face the problem of recommending a subset of this content to their users. We address the problem of recommending and ranking this content such that different viewpoints about a topic get exposure in a fair and diverse manner. We build our model in the context of a voice-based participatory media platform running in rural central India, for low-income and less-literate communities, that plays audio messages in a ranked list to users over a phone call and allows them to contribute their own messages. In this paper, we describe our model and evaluate it using call-logs from the platform, to compare the fairness and diversity performance of our model with the manual editorial processes currently being followed. Our models are generic and can be adapted and applied to other participatory media platforms as well.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
138,822
2412.09765
L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection And Enhancement
The currently leading artificial neural network (ANN) models of the visual ventral stream -- which are derived from a combination of performance optimization and robustification methods -- have demonstrated a remarkable degree of behavioral alignment with humans on visual categorization tasks. Extending upon previous work, we show that not only can these models guide image perturbations that change the induced human category percepts, but they also can enhance human ability to accurately report the original ground truth. Furthermore, we find that the same models can also be used out-of-the-box to predict the proportion of correct human responses to individual images, providing a simple, human-aligned estimator of the relative difficulty of each image. Motivated by these observations, we propose to augment visual learning in humans in a way that improves human categorization accuracy at test time. Our learning augmentation approach consists of (i) selecting images based on their model-estimated recognition difficulty, and (ii) using image perturbations that aid recognition for novice learners. We find that combining these model-based strategies gives rise to test-time categorization accuracy gains of 33-72% relative to control subjects without these interventions, despite using the same number of training feedback trials. Surprisingly, beyond the accuracy gain, the training time for the augmented learning group was also shorter by 20-23%. We demonstrate the efficacy of our approach in a fine-grained categorization task with natural images, as well as tasks in two clinically relevant image domains -- histology and dermoscopy -- where visual learning is notoriously challenging. To the best of our knowledge, this is the first application of ANNs to increase visual learning performance in humans by enhancing category-specific features.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
516,632
2206.02338
OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression
This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are easy to overfit and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. While prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can only save the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
300,850
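One way to read the OrdinalCLIP abstract above is that ordered, compact rank prototypes are obtained by interpolating a small set of learnable anchor embeddings, so that numerical continuity is built in. The sketch below implements that reading in plain numpy, without CLIP or a text encoder; the interpolation scheme, the way the context vector is combined with the rank embeddings, and all dimensions are assumptions for illustration only.

```python
import numpy as np

def build_rank_embeddings(anchors, num_ranks):
    """Interpolate a few learnable anchor embeddings into num_ranks ordered rank embeddings,
    so that neighbouring ranks receive similar prototypes (numerical continuity)."""
    num_anchors, _ = anchors.shape
    positions = np.linspace(0.0, num_anchors - 1.0, num_ranks)   # rank index -> anchor coordinate
    lo = np.floor(positions).astype(int)
    hi = np.minimum(lo + 1, num_anchors - 1)
    frac = (positions - lo)[:, None]
    return (1.0 - frac) * anchors[lo] + frac * anchors[hi]       # piecewise-linear interpolation

def rank_logits(image_feature, context, rank_embeddings, scale=100.0):
    """CLIP-style scoring of one image feature against every rank prototype via cosine similarity.
    Here the prototype is just the mean of the context vector and the rank embedding; the real
    model would run context tokens plus rank embedding through CLIP's text encoder."""
    prototypes = (context[None, :] + rank_embeddings) / 2.0
    prototypes = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    image_feature = image_feature / np.linalg.norm(image_feature)
    return scale * prototypes @ image_feature

# Toy usage with made-up sizes: 512-d features, 5 anchors, 100 ranks (e.g. ages 0..99).
rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 512))   # learnable in the real model, random here
context = rng.normal(size=512)        # stand-in for learnable context tokens
ranks = build_rank_embeddings(anchors, num_ranks=100)
image = rng.normal(size=512)          # stand-in for a CLIP image feature
print("predicted rank:", int(np.argmax(rank_logits(image, context, ranks))))
```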
2002.05578
Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis
Efficient and interpretable spatial analysis is crucial in many fields such as geology, sports, and climate science. Tensor latent factor models can describe higher-order correlations for spatial data. However, they are computationally expensive to train and are sensitive to initialization, leading to spatially incoherent, uninterpretable results. We develop a novel Multiresolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns. MRTL initializes the latent factors from an approximate full-rank tensor model for improved interpretability and progressively learns from a coarse resolution to the fine resolution to reduce computation. We also prove the theoretical convergence and computational complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates 4~5x speedup compared to a fixed resolution approach while yielding accurate and interpretable latent factors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,940
2310.17425
Detecting Abrupt Change of Channel Covariance Matrix in IRS-Assisted Communication
The knowledge of channel covariance matrices is crucial to the design of intelligent reflecting surface (IRS) assisted communication. However, channel covariance matrices may change suddenly in practice. This letter focuses on the detection of the above change in IRS-assisted communication. Specifically, we consider the uplink communication system consisting of a single-antenna user (UE), an IRS, and a multi-antenna base station (BS). We first categorize two types of channel covariance matrix changes based on their impact on system design: Type I change, which denotes the change in the BS receive covariance matrix, and Type II change, which denotes the change in the IRS transmit/receive covariance matrix. Secondly, a powerful method is proposed to detect whether a Type I change occurs, a Type II change occurs, or no change occurs. The effectiveness of our proposed scheme is verified by numerical results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
403,133
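The IRS abstract above does not spell out its detector, so the following is only a naive baseline for the same problem, not the letter's proposed scheme: estimate the BS receive covariance from two windows of snapshots and flag a change when a normalised Frobenius distance between the estimates exceeds a threshold. The window sizes, antenna count, and covariance models are placeholders.

```python
import numpy as np

def covariance_change_score(Y_old, Y_new):
    """Normalised Frobenius distance between sample covariances of two snapshot windows
    (rows = BS antennas, columns = snapshots). A change would be declared when the score
    exceeds a threshold calibrated on change-free data."""
    C_old = (Y_old @ Y_old.conj().T) / Y_old.shape[1]
    C_new = (Y_new @ Y_new.conj().T) / Y_new.shape[1]
    return np.linalg.norm(C_new - C_old, "fro") / np.linalg.norm(C_old, "fro")

# Toy usage: 8 BS antennas, 2000 snapshots per window.
rng = np.random.default_rng(0)
mix = rng.normal(size=(8, 8)) / np.sqrt(8)                          # square root of the "old" covariance
Y_old = mix @ rng.normal(size=(8, 2000))
Y_same = mix @ rng.normal(size=(8, 2000))                           # same statistics -> small score
Y_changed = (mix + 0.5 * np.eye(8)) @ rng.normal(size=(8, 2000))    # changed statistics -> larger score
print("score without change:", covariance_change_score(Y_old, Y_same))
print("score with change:   ", covariance_change_score(Y_old, Y_changed))
```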
2307.15180
EnSolver: Uncertainty-Aware Ensemble CAPTCHA Solvers with Theoretical Guarantees
The popularity of text-based CAPTCHAs as a security mechanism to protect websites from automated bots has prompted research into CAPTCHA solvers, with the aim of understanding their failure cases and subsequently making CAPTCHAs more secure. Recently proposed solvers, built on advances in deep learning, are able to crack even very challenging CAPTCHAs with high accuracy. However, these solvers often perform poorly on out-of-distribution samples that contain visual features different from those in the training set. Furthermore, they lack the ability to detect and avoid such samples, making them susceptible to being locked out by defense systems after a certain number of failed attempts. In this paper, we propose EnSolver, a family of CAPTCHA solvers that use deep ensemble uncertainty to detect and skip out-of-distribution CAPTCHAs, making the solvers harder to detect. We prove novel theoretical bounds on the effectiveness of our solvers and demonstrate their use with state-of-the-art CAPTCHA solvers. Our experiments show that the proposed approaches perform well when cracking CAPTCHA datasets that contain both in-distribution and out-of-distribution samples.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
382,178
2410.13580
EFX Exists for Three Types of Agents
In this paper, we study the problem of finding an envy-free allocation of indivisible goods among multiple agents. EFX, which stands for envy-freeness up to any good, is a well-studied relaxation of the envy-free allocation problem and has been shown to exist for specific scenarios. For instance, EFX is known to exist when there are only three agents [Chaudhury et al, EC 2020], and for any number of agents when there are only two types of valuations [Mahara, Discret. Appl. Math 2023]. We show that EFX allocations exist for any number of agents when there are at most three types of additive valuations.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
499,593
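The EFX record above is about existence rather than computation, but the defining condition is easy to state in code: an allocation is EFX if no agent envies another agent's bundle even after the good that agent values least is removed from that bundle. Below is a small checker for additive valuations; the example allocation and values are made up for illustration.

```python
def is_efx(allocation, valuations):
    """Check envy-freeness up to any good (EFX) under additive valuations.
    allocation: list of sets of good indices, one set per agent.
    valuations: valuations[i][g] = agent i's (non-negative) value for good g."""
    n = len(allocation)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            bundle_value = sum(valuations[i][g] for g in allocation[j])
            cheapest = min(valuations[i][g] for g in allocation[j])
            # EFX: i must not envy j's bundle even after removing only the good
            # in it that i values least (the hardest single removal to satisfy).
            if own < bundle_value - cheapest:
                return False
    return True

# Made-up instance with three valuation "types" over four goods (agents 0 and 1 share a type).
valuations = [
    [8, 1, 4, 3],
    [8, 1, 4, 3],
    [2, 7, 5, 5],
]
allocation = [{0}, {2, 3}, {1}]
print(is_efx(allocation, valuations))  # True for this instance
```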
1206.4656
Machine Learning that Matters
Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field's energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
16,707
2009.05389
ZooBuilder: 2D and 3D Pose Estimation for Quadrupeds Using Synthetic Data
This work introduces a novel strategy for generating synthetic training data for 2D and 3D pose estimation of animals using keyframe animations. With the objective to automate the process of creating animations for wildlife, we train several 2D and 3D pose estimation models with synthetic data, and put in place an end-to-end pipeline called ZooBuilder. The pipeline takes as input a video of an animal in the wild, and generates the corresponding 2D and 3D coordinates for each joint of the animal's skeleton. With this approach, we produce motion capture data that can be used to create animations for wildlife.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
195,310
1802.03528
Coverless information hiding based on Generative Model
A new coverless image information hiding method based on a generative model is proposed. We feed the secret image to the generative model database and generate a meaning-normal, independent image that is different from the secret image; the generated image is then transmitted to the receiver and fed to the generative model database to generate another image that is visually the same as the secret image. Thus, we only need to transmit the meaning-normal image, which is not related to the secret image, and we can achieve the same effect as transmitting the secret image. This is the first coverless image information hiding method based on a generative model. Compared with traditional image steganography, the transmitted image does not embed any information of the secret image, and the method can therefore effectively resist steganalysis tools. Experimental results show that our method has high capacity, safety and reliability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,996