Dataset schema (one row per arXiv paper):
  id                  string (9–16 chars), arXiv identifier
  title               string (4–278 chars)
  abstract            string (3–4.08k chars)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool (2 classes each), multi-label category flags;
                      a paper may carry several
  __index_level_0__   int64 (0–541k), row index carried over from the source frame
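Each row carries 18 boolean one-vs-rest flags in the column order above. A minimal plain-Python sketch for collapsing those flags back into a category list (the column list is taken from the schema; the example row dict is illustrative):

```python
# The 18 label columns, in the order they appear in the schema.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def categories(row: dict) -> list[str]:
    """Collapse the boolean flag columns of one row into a label list."""
    return [c for c in LABEL_COLS if row.get(c)]

# Illustrative row matching the first record below (missing flags read as False).
row = {"id": "2402.14925", "cs.LG": True, "cs.IT": True}
print(categories(row))  # ['cs.LG', 'cs.IT']
```

The same function can be applied row-by-row to a pandas frame or a `datasets` split.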
2402.14925
Efficient Unbiased Sparsification
An unbiased $m$-sparsification of a vector $p\in \mathbb{R}^n$ is a random vector $Q\in \mathbb{R}^n$ with mean $p$ that has at most $m<n$ nonzero coordinates. Unbiased sparsification compresses the original vector without introducing bias; it arises in various contexts, such as in federated learning and sampling sparse probability distributions. Ideally, unbiased sparsification should also minimize the expected value of a divergence function $\mathsf{Div}(Q,p)$ that measures how far away $Q$ is from the original $p$. If $Q$ is optimal in this sense, then we call it efficient. Our main results describe efficient unbiased sparsifications for divergences that are either permutation-invariant or additively separable. Surprisingly, the characterization for permutation-invariant divergences is robust to the choice of divergence function, in the sense that our class of optimal $Q$ for squared Euclidean distance coincides with our class of optimal $Q$ for Kullback-Leibler divergence, or indeed any of a wide variety of divergences.
Categories: cs.LG, cs.IT
__index_level_0__: 431,915

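Unbiasedness as defined in this abstract is easy to satisfy, if not efficiently: keep m coordinates chosen uniformly at random and rescale each survivor by n/m, so E[Q_i] = (m/n)(n/m)p_i = p_i. A hedged sketch of that naive construction (not the paper's efficient sparsifier, and the function name is illustrative):

```python
import random

def uniform_unbiased_sparsify(p, m, rng=None):
    """Naive unbiased m-sparsification: keep m uniformly chosen
    coordinates, scaled by n/m so that E[Q_i] = (m/n) * (n/m) * p_i = p_i."""
    rng = rng or random.Random()
    n = len(p)
    q = [0.0] * n
    for i in rng.sample(range(n), m):  # m distinct coordinates, uniform
        q[i] = p[i] * n / m
    return q
```

Averaging many draws recovers p, confirming unbiasedness; minimizing the expected divergence Div(Q, p) is the part the paper's efficient constructions address.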
2412.08647
SegFace: Face Segmentation of Long-Tail Classes
Face parsing refers to the semantic segmentation of human faces into key facial regions such as eyes, nose, hair, etc. It serves as a prerequisite for various advanced applications, including face editing, face swapping, and facial makeup, which often require segmentation masks for classes like eyeglasses, hats, earrings, and necklaces. These infrequently occurring classes are called long-tail classes, which are overshadowed by more frequently occurring classes known as head classes. Existing methods, primarily CNN-based, tend to be dominated by head classes during training, resulting in suboptimal representation for long-tail classes. Previous works have largely overlooked the problem of poor segmentation performance of long-tail classes. To address this issue, we propose SegFace, a simple and efficient approach that uses a lightweight transformer-based model which utilizes learnable class-specific tokens. The transformer decoder leverages class-specific tokens, allowing each token to focus on its corresponding class, thereby enabling independent modeling of each class. The proposed approach improves the performance of long-tail classes, thereby boosting overall performance. To the best of our knowledge, SegFace is the first work to employ transformer models for face parsing. Moreover, our approach can be adapted for low-compute edge devices, achieving 95.96 FPS. We conduct extensive experiments demonstrating that SegFace significantly outperforms previous state-of-the-art models, achieving a mean F1 score of 88.96 (+2.82) on the CelebAMask-HQ dataset and 93.03 (+0.65) on the LaPa dataset. Code: https://github.com/Kartik-3004/SegFace
Categories: cs.CV
__index_level_0__: 516,187

2304.08660
(LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place Recognition
Localization has been a challenging task for autonomous navigation. A loop detection algorithm must overcome environmental changes for the place recognition and re-localization of robots. Therefore, deep learning has been extensively studied for the consistent transformation of measurements into localization descriptors. Street view images are easily accessible; however, images are vulnerable to appearance changes. LiDAR can robustly provide precise structural information. However, constructing a point cloud database is expensive, and point clouds exist only in limited places. Different from previous works that train networks to produce shared embedding directly between the 2D image and 3D point cloud, we transform both data into 2.5D depth images for matching. In this work, we propose a novel cross-matching method, called (LC)$^2$, for achieving LiDAR localization without a prior point cloud map. To this end, LiDAR measurements are expressed in the form of range images before matching them to reduce the modality discrepancy. Subsequently, the network is trained to extract localization descriptors from disparity and range images. Next, the best matches are employed as a loop factor in a pose graph. Using public datasets that include multiple sessions in significantly different lighting conditions, we demonstrated that LiDAR-based navigation systems could be optimized from image databases and vice versa.
Categories: cs.AI, cs.RO
__index_level_0__: 358,780

2106.08686
Do Acoustic Word Embeddings Capture Phonological Similarity? An Empirical Study
Several variants of deep neural networks have been successfully employed for building parametric models that project variable-duration spoken word segments onto fixed-size vector representations, or acoustic word embeddings (AWEs). However, it remains unclear to what degree we can rely on the distance in the emerging AWE space as an estimate of word-form similarity. In this paper, we ask: does the distance in the acoustic embedding space correlate with phonological dissimilarity? To answer this question, we empirically investigate the performance of supervised approaches for AWEs with different neural architectures and learning objectives. We train AWE models in controlled settings for two languages (German and Czech) and evaluate the embeddings on two tasks: word discrimination and phonological similarity. Our experiments show that (1) the distance in the embedding space in the best cases only moderately correlates with phonological distance, and (2) improving the performance on the word discrimination task does not necessarily yield models that better reflect word phonological similarity. Our findings highlight the necessity to rethink the current intrinsic evaluations for AWEs.
Categories: cs.SD, cs.CL
__index_level_0__: 241,382

1005.2263
Context models on sequences of covers
We present a class of models that, via a simple construction, enables exact, incremental, non-parametric, polynomial-time, Bayesian inference of conditional measures. The approach relies upon creating a sequence of covers on the conditioning variable and maintaining a different model for each set within a cover. Inference remains tractable by specifying the probabilistic model in terms of a random walk within the sequence of covers. We demonstrate the approach on problems of conditional density estimation, which, to our knowledge, is the first closed-form, non-parametric Bayesian approach to this problem.
Categories: cs.LG
__index_level_0__: 6,475

2301.04452
Uncertainty Estimation based on Geometric Separation
In machine learning, accurately predicting the probability that a specific input is correct is crucial for risk management. This process, known as uncertainty (or confidence) estimation, is particularly important in mission-critical applications such as autonomous driving. In this work, we put forward a novel geometric-based approach for improving uncertainty estimations in machine learning models. Our approach involves using the geometric distance of the current input from existing training inputs as a signal for estimating uncertainty, and then calibrating this signal using standard post-hoc techniques. We demonstrate that our method leads to more accurate uncertainty estimations than recently proposed approaches through extensive evaluation on a variety of datasets and models. Additionally, we optimize our approach so that it can be implemented on large datasets in near real-time applications, making it suitable for time-sensitive scenarios.
Categories: cs.AI, cs.LG
__index_level_0__: 340,068

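The raw geometric signal described in this abstract can be sketched very simply: score a query point by its distance to the nearest training inputs. The function name and choice of k below are illustrative, and the paper's calibration step via standard post-hoc techniques is omitted:

```python
import math

def knn_distance_score(x, train_points, k=3):
    """Raw uncertainty signal: mean distance from x to its k nearest
    training inputs. A larger score means x lies farther from the
    training data, i.e. the model should be less confident there.
    (Uncalibrated sketch of the geometric-separation idea.)"""
    dists = sorted(math.dist(x, t) for t in train_points)
    return sum(dists[:k]) / min(k, len(dists))

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# A point near the training set scores lower than a far-away one.
print(knn_distance_score((0.1, 0.1), train) < knn_distance_score((5.0, 5.0), train))  # True
```

For near-real-time use on large datasets, the sorted scan would be replaced by an approximate nearest-neighbour index, which is the kind of optimization the abstract alludes to.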
1902.01328
Optimization-based Feedback Manipulation Through an Array of Ultrasonic Transducers
In this paper we document a novel laboratory experimental platform for non-contact planar manipulation (positioning) of millimeter-scale objects using acoustic pressure. The manipulated objects are either floating on a water surface or rolling on a solid surface. The pressure field is shaped in real time through an 8-by-8 array (matrix) of ultrasonic transducers. The transducers are driven with square voltages whose phase-shifts are updated periodically every few milliseconds based on the difference between the desired and true (estimated from video) position. Numerical optimization is used within every period of a discrete-time feedback loop to determine the phase shifts for the voltages. The platform can be used as an affordable testbed for algorithms for non-contact manipulation through arrays of actuators, as all the design and implementation details for the presented platform are shared with the public through a dedicated git repository. The platform can certainly be extended towards higher numbers of simultaneously yet independently manipulated objects and larger manipulation areas by expanding the transducer array.
Categories: cs.SY
__index_level_0__: 120,627

1302.4954
Probabilistic Temporal Reasoning with Endogenous Change
This paper presents a probabilistic model for reasoning about the state of a system as it changes over time, due to both exogenous and endogenous influences. Our target domain is a class of medical prediction problems that are neither so urgent as to preclude careful diagnosis nor progress so slowly as to allow arbitrary testing and treatment options. In these domains there is typically enough time to gather information about the patient's state and consider alternative diagnoses and treatments, but the temporal interaction between the timing of tests, treatments, and the course of the disease must also be considered. Our approach is to elicit a qualitative structural model of the patient from a human expert---the model identifies important attributes, the way in which exogenous changes affect attribute values, and the way in which the patient's condition changes endogenously. We then elicit probabilistic information to capture the expert's uncertainty about the effects of tests and treatments and the nature and timing of endogenous state changes. This paper describes the model in the context of a problem in treating vehicle accident trauma, and suggests a method for solving the model based on the technique of sequential imputation. A complementary goal of this work is to understand and synthesize a disparate collection of research efforts all using the name "probabilistic temporal reasoning." This paper analyzes related work and points out essential differences between our proposed model and other approaches in the literature.
Categories: cs.AI
__index_level_0__: 22,228

2410.03943
Oscillatory State-Space Models
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. Inspired by cortical dynamics of biological neural networks, we base our proposed LinOSS model on a system of forced harmonic oscillators. A stable discretization, integrated over time using fast associative parallel scans, yields the proposed state-space model. We prove that LinOSS produces stable dynamics while requiring only a nonnegative diagonal state matrix. This is in stark contrast to many previous state-space models relying heavily on restrictive parameterizations. Moreover, we rigorously show that LinOSS is universal, i.e., it can approximate any continuous and causal operator mapping between time-varying functions, to desired accuracy. In addition, we show that an implicit-explicit discretization of LinOSS perfectly conserves the symmetry of time reversibility of the underlying dynamics. Together, these properties enable efficient modeling of long-range interactions, while ensuring stable and accurate long-horizon forecasting. Finally, our empirical results, spanning a wide range of time-series tasks from mid-range to very long-range classification and regression, as well as long-horizon forecasting, demonstrate that our proposed LinOSS model consistently outperforms state-of-the-art sequence models. Notably, LinOSS outperforms Mamba by nearly 2x and LRU by 2.5x on a sequence modeling task with sequences of length 50k.
Categories: cs.LG, cs.NE
__index_level_0__: 495,054

2303.04025
DeepSeeColor: Realtime Adaptive Color Correction for Autonomous Underwater Vehicles via Deep Learning Methods
Successful applications of complex vision-based behaviours underwater have lagged behind progress in terrestrial and aerial domains. This is largely due to the degraded image quality resulting from the physical phenomena involved in underwater image formation. Spectrally-selective light attenuation drains some colors from underwater images while backscattering adds others, making it challenging to perform vision-based tasks underwater. State-of-the-art methods for underwater color correction optimize the parameters of image formation models to restore the full spectrum of color to underwater imagery. However, these methods have high computational complexity that is unfavourable for realtime use by autonomous underwater vehicles (AUVs), as a result of having been primarily designed for offline color correction. Here, we present DeepSeeColor, a novel algorithm that combines a state-of-the-art underwater image formation model with the computational efficiency of deep learning frameworks. In our experiments, we show that DeepSeeColor offers comparable performance to the popular "Sea-Thru" algorithm (Akkaynak & Treibitz, 2019) while being able to rapidly process images at up to 60Hz, thus making it suitable for use onboard AUVs as a preprocessing step to enable more robust vision-based behaviours.
Categories: cs.RO, cs.CV
__index_level_0__: 349,937

2410.20532
Search Wide, Focus Deep: Automated Fetal Brain Extraction with Sparse Training Data
Automated fetal brain extraction from full-uterus MRI is a challenging task due to variable head sizes, orientations, complex anatomy, and prevalent artifacts. While deep-learning (DL) models trained on synthetic images have been successful in adult brain extraction, adapting these networks for fetal MRI is difficult due to the sparsity of labeled data, leading to increased false-positive predictions. To address this challenge, we propose a test-time strategy that reduces false positives in networks trained on sparse, synthetic labels. The approach uses a breadth-fine search (BFS) to identify a subvolume likely to contain the fetal brain, followed by a deep-focused sliding window (DFS) search to refine the extraction, pooling predictions to minimize false positives. We train models at different window sizes using synthetic images derived from a small number of fetal brain label maps, augmented with random geometric shapes. Each model is trained on diverse head positions and scales, including cases with partial or no brain tissue. Our framework matches state-of-the-art brain extraction methods on clinical HASTE scans of third-trimester fetuses and exceeds them by up to 5\% in terms of Dice in the second trimester as well as EPI scans across both trimesters. Our results demonstrate the utility of a sliding-window approach and combining predictions from several models trained on synthetic images, for improving brain-extraction accuracy by progressively refining regions of interest and minimizing the risk of missing brain mask slices or misidentifying other tissues as brain.
Categories: cs.LG, cs.CV
__index_level_0__: 502,850

2207.12061
Balancing Stability and Plasticity through Advanced Null Space in Continual Learning
Continual learning is a learning paradigm that learns tasks sequentially under resource constraints, in which the key challenge is the stability-plasticity dilemma, i.e., it is difficult to simultaneously have the stability to prevent catastrophic forgetting of old tasks and the plasticity to learn new tasks well. In this paper, we propose a new continual learning approach, Advanced Null Space (AdNS), to balance stability and plasticity without storing any old data of previous tasks. Specifically, to obtain better stability, AdNS makes use of low-rank approximation to obtain a novel null space and projects the gradient onto the null space to prevent interference with past tasks. To control the generation of the null space, we introduce a non-uniform constraint strength to further reduce forgetting. Furthermore, we present a simple but effective method, intra-task distillation, to improve the performance of the current task. Finally, we theoretically show that the null space plays a key role in both plasticity and stability. Experimental results show that the proposed method can achieve better performance compared to state-of-the-art continual learning approaches.
Categories: cs.LG, cs.CV
__index_level_0__: 309,878

2411.11515
Cascaded Diffusion Models for 2D and 3D Microscopy Image Synthesis to Enhance Cell Segmentation
Automated cell segmentation in microscopy images is essential for biomedical research, yet conventional methods are labor-intensive and prone to error. While deep learning-based approaches have proven effective, they often require large annotated datasets, which are scarce due to the challenges of manual annotation. To overcome this, we propose a novel framework for synthesizing densely annotated 2D and 3D cell microscopy images using cascaded diffusion models. Our method synthesizes 2D and 3D cell masks from sparse 2D annotations using multi-level diffusion models and NeuS, a 3D surface reconstruction approach. Following that, a pretrained 2D Stable Diffusion model is finetuned to generate realistic cell textures and the final outputs are combined to form cell populations. We show that training a segmentation model with a combination of our synthetic data and real data improves cell segmentation performance by up to 9\% across multiple datasets. Additionally, the FID scores indicate that the synthetic data closely resembles real data. The code for our proposed approach will be available at https://github.com/ruveydayilmaz0/cascaded_diffusion.
Categories: cs.LG, cs.CV
__index_level_0__: 509,085

2408.10624
WRIM-Net: Wide-Ranging Information Mining Network for Visible-Infrared Person Re-Identification
For the visible-infrared person re-identification (VI-ReID) task, one of the primary challenges lies in significant cross-modality discrepancy. Existing methods struggle to conduct modality-invariant information mining. They often focus solely on mining singular dimensions like spatial or channel, and overlook the extraction of specific-modality multi-dimension information. To fully mine modality-invariant information across a wide range, we introduce the Wide-Ranging Information Mining Network (WRIM-Net), which mainly comprises a Multi-dimension Interactive Information Mining (MIIM) module and an Auxiliary-Information-based Contrastive Learning (AICL) approach. Empowered by the proposed Global Region Interaction (GRI), MIIM comprehensively mines non-local spatial and channel information through intra-dimension interaction. Moreover, thanks to its low computational complexity, separate MIIM modules can be positioned in shallow layers, enabling the network to better mine specific-modality multi-dimension information. AICL, by introducing the novel Cross-Modality Key-Instance Contrastive (CMKIC) loss, effectively guides the network in extracting modality-invariant information. We conduct extensive experiments not only on the well-known SYSU-MM01 and RegDB datasets but also on the latest large-scale cross-modality LLCM dataset. The results demonstrate WRIM-Net's superiority over state-of-the-art methods.
Categories: cs.AI, cs.CV
__index_level_0__: 481,947

2410.12289
AI-Aided Kalman Filters
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing. These methods are used for state estimation of dynamic systems by relying on mathematical representations in the form of simple state-space (SS) models, which may be crude and inaccurate descriptions of the underlying dynamics. Emerging data-centric artificial intelligence (AI) techniques tackle these tasks using deep neural networks (DNNs), which are model-agnostic. Recent developments illustrate the possibility of fusing DNNs with classic Kalman-type filtering, obtaining systems that learn to track in partially known dynamics. This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms. We review both generic and dedicated DNN architectures suitable for state estimation, and provide a systematic presentation of techniques for fusing AI tools with KFs and for leveraging partial SS modeling and data, categorizing design approaches into task-oriented and SS model-oriented. The usefulness of each approach in preserving the individual strengths of model-based KFs and data-driven DNNs is investigated in a qualitative and quantitative study, whose code is publicly available, illustrating the gains of hybrid model-based/data-driven designs. We also discuss existing challenges and future research directions that arise from fusing AI and Kalman-type algorithms.
Categories: cs.LG, cs.SY
__index_level_0__: 498,953

2002.05603
Performance Analysis of Intelligent Reflective Surfaces for Wireless Communication
A statistical characterization of the fundamental performance bounds of an intelligent reflective surface (IRS) intended for aiding wireless communications is presented. To this end, the outage probability, average symbol error probability, and achievable rate bounds are derived in closed-form. By virtue of asymptotic analysis in the high signal-to-noise ratio (SNR) regime, the achievable diversity order is derived. Thereby, we show that a diversity gain in the order of the number of passive reflective elements embedded within the IRS can be achieved with only controllable phase adjustments. Thus, IRS has a great potential of boosting the wireless performance by intelligently controlling the propagation channels without employing additional active radio frequency chains.
Categories: cs.IT
__index_level_0__: 163,944

2410.03151
Media Framing through the Lens of Event-Centric Narratives
From a communications perspective, a frame defines the packaging of the language used in such a way as to encourage certain interpretations and to discourage others. For example, a news article can frame immigration as either a boost or a drain on the economy, and thus communicate very different interpretations of the same phenomenon. In this work, we argue that to explain framing devices we have to look at the way narratives are constructed. As a first step in this direction, we propose a framework that extracts events and their relations to other events, and groups them into high-level narratives that help explain frames in news articles. We show that our framework can be used to analyze framing in U.S. news for two different domains: immigration and gun control.
Categories: cs.SI, cs.CL
__index_level_0__: 494,649

2312.06979
On the notion of Hallucinations from the lens of Bias and Validity in Synthetic CXR Images
Medical imaging has revolutionized disease diagnosis, yet its potential is hampered by limited access to diverse and privacy-conscious datasets. Open-source medical datasets, while valuable, suffer from data quality and clinical information disparities. Generative models, such as diffusion models, aim to mitigate these challenges. At Stanford, researchers explored the utility of a fine-tuned Stable Diffusion model (RoentGen) for medical imaging data augmentation. Our work examines specific considerations to expand the Stanford research question, "Could Stable Diffusion Solve a Gap in Medical Imaging Data?", from the lens of bias and validity of the generated outcomes. We leveraged RoentGen to produce synthetic Chest-XRay (CXR) images and conducted assessments on bias, validity, and hallucinations. Diagnostic accuracy was evaluated by a disease classifier, while a COVID classifier uncovered latent hallucinations. The bias analysis unveiled disparities in classification performance among various subgroups, with a pronounced impact on the Female Hispanic subgroup. Furthermore, incorporating race and gender into input prompts exacerbated fairness issues in the generated images. The quality of synthetic images exhibited variability, particularly in certain disease classes, where there was more significant uncertainty compared to the original images. Additionally, we observed latent hallucinations, with approximately 42% of the images incorrectly indicating COVID, hinting at the presence of hallucinatory elements. These identifications provide new research directions towards interpretability of synthetic CXR images, for further understanding of associated risks and patient safety in medical applications.
Categories: cs.LG, cs.CV
__index_level_0__: 414,750

2011.11751
Multimodal dynamics modeling for off-road autonomous vehicles
Dynamics modeling in outdoor and unstructured environments is difficult because different elements in the environment interact with the robot in ways that can be hard to predict. Leveraging multiple sensors to perceive maximal information about the robot's environment is thus crucial when building a model to perform predictions about the robot's dynamics with the goal of doing motion planning. We design a model capable of long-horizon motion predictions, leveraging vision, lidar and proprioception, which is robust to arbitrarily missing modalities at test time. We demonstrate in simulation that our model is able to leverage vision to predict traction changes. We then test our model using a real-world challenging dataset of a robot navigating through a forest, performing predictions in trajectories unseen during training. We try different modality combinations at test time and show that, while our model performs best when all modalities are present, it is still able to perform better than the baseline even when receiving only raw vision input and no proprioception, as well as when only receiving proprioception. Overall, our study demonstrates the importance of leveraging multiple sensors when doing dynamics modeling in outdoor conditions.
Categories: cs.RO
__index_level_0__: 207,925

2412.12641
Lagrangian Index Policy for Restless Bandits with Average Reward
We study the Lagrangian Index Policy (LIP) for restless multi-armed bandits with long-run average reward. In particular, we compare the performance of LIP with the performance of the Whittle Index Policy (WIP), both heuristic policies known to be asymptotically optimal under certain natural conditions. Even though in most cases their performances are very similar, in the cases where WIP shows poor performance, LIP continues to perform very well. We then propose reinforcement learning algorithms, both tabular and NN-based, to obtain online learning schemes for LIP in the model-free setting. The proposed reinforcement learning schemes for LIP require significantly less memory than the analogous scheme for WIP. We calculate analytically the Lagrangian index for the restart model, which describes optimal web crawling and the minimization of the weighted age of information. We also give a new proof of asymptotic optimality in the case of homogeneous bandits as the number of arms goes to infinity, based on exchangeability and de Finetti's theorem.
Categories: cs.AI, cs.LG
__index_level_0__: 517,961

2206.01272
Data-Driven Linear Koopman Embedding for Networked Systems: Model-Predictive Grid Control
This paper presents a data-learned linear Koopman embedding of nonlinear networked dynamics and uses it to enable real-time model predictive emergency voltage control in a power network. The approach involves a novel data-driven "basis-dictionary free" lifting of the system dynamics into a higher-dimensional linear space over which an MPC (model predictive control) is exercised, making it both scalable and rapid for practical real-time implementation. A Koopman-inspired deep neural network (KDNN) encoder-decoder architecture for the linear embedding of the underlying dynamics under distributed controls is presented, in which the end-to-end components of the KDNN, comprising a triple of transforms, are learned from the system trajectory data in one go: a neural network (NN)-based lifting to a higher dimension, a linear dynamics within that higher dimension, and an NN-based projection to the original space. This data-learned approach relieves the burden of the ad-hoc selection of the nonlinear basis functions (e.g., polynomial or radial) used in conventional approaches for lifting to a higher-dimensional linear space. We validate the efficacy and robustness of the approach via application to the standard IEEE 39-bus system.
Categories: cs.AI, cs.SY
__index_level_0__: 300,406

2004.01483
Cascade Extended State Observer for Active Disturbance Rejection Control Applications under Measurement Noise
The extended state observer (ESO) plays an important role in the design of feedback control for nonlinear systems. However, its high-gain nature creates a challenge in engineering practice in cases where the output measurement is corrupted by non-negligible, high-frequency noise. The presence of such noise puts a constraint on how high the observer gains can be, which forces a trade-off between fast convergence of state estimates and quality of control task realization. In this work, a new observer design is proposed to improve the estimation performance in the presence of noise. In particular, a unique cascade combination of ESOs is developed, which is capable of fast and accurate signals reconstruction, while avoiding over-amplification of the measurement noise. The effectiveness of the introduced observer structure is verified here while working as a part of an active disturbance rejection control (ADRC) scheme. The conducted numerical validation and theoretical analysis of the new observer structure show improvement over standard solution in terms of noise attenuation.
Categories: cs.SY
__index_level_0__: 170,923

1410.7357
Statistical models for cores decomposition of an undirected random graph
The $k$-core decomposition is a widely studied summary statistic that describes a graph's global connectivity structure. In this paper, we move beyond using $k$-core decomposition as a tool to summarize a graph and propose using $k$-core decomposition as a tool to model random graphs. We propose using the shell distribution vector, a way of summarizing the decomposition, as a sufficient statistic for a family of exponential random graph models. We study the properties and behavior of the model family, implement a Markov chain Monte Carlo algorithm for simulating graphs from the model, implement a direct sampler from the set of graphs with a given shell distribution, and explore the sampling distributions of some of the commonly used complementary statistics as good candidates for heuristic model fitting. These algorithms provide first fundamental steps necessary for solving the following problems: parameter estimation in this ERGM, extending the model to its Bayesian relative, and developing a rigorous methodology for testing goodness of fit of the model and model selection. The methods are applied to a synthetic network as well as the well-known Sampson monks dataset.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
37,067
2402.14832
Integrating Simulation Budget Management into Drum-Buffer-Rope: A Study on Parametrization and Reducing Computational Effort
In manufacturing, a bottleneck workstation frequently emerges, complicating production planning and escalating costs. To address this, Drum-Buffer-Rope (DBR) is a widely recognized production planning and control method that focuses on centralizing the bottleneck workstation, thereby improving production system performance. Although DBR is primarily focused on creating a bottleneck schedule, the selection of planning parameters is crucial, as they significantly influence the scheduling process. Conducting a comprehensive full factorial enumeration to identify the ideal planning parameters requires substantial computational effort. Simulation Budget Management (SBM) offers an effective concept to reduce this effort by skipping less promising parameter combinations. This publication introduces a method for integrating SBM into a multi-stage, multi-item DBR planned and controlled production system with limited capacity, aimed at determining the optimal planning parameters. Furthermore, we conduct a simulation study to analyze the effects of different production system environments, i.e., varying levels of shop load and process uncertainty, on both the performance and parameterization of DBR and the efficacy of SBM. Our results show a significant reduction in simulation budget for identifying optimal planning parameters compared to traditional full factorial enumeration.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
431,856
2003.08272
Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training
West African Pidgin English is a language that is significantly spoken in West Africa, consisting of at least 75 million speakers. Nevertheless, proper machine translation systems and relevant NLP datasets for Pidgin English are virtually absent. In this work, we develop techniques targeted at bridging the gap between Pidgin English and English in the context of natural language generation. As a proof of concept, we explore the proposed techniques in the area of data-to-text generation. By building upon the previously released monolingual Pidgin English text and parallel English data-to-text corpus, we hope to build a system that can automatically generate Pidgin English descriptions from structured data. We first train a data-to-English text generation system, before employing techniques in unsupervised neural machine translation and self-training to establish the Pidgin-to-English cross-lingual alignment. The human evaluation performed on the generated Pidgin texts shows that, though still far from being practically usable, the pivoting + self-training technique improves both Pidgin text fluency and relevance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
168,678
2110.08465
Heterogeneous Graph-Based Multimodal Brain Network Learning
Graph neural networks (GNNs) provide powerful insights for brain neuroimaging technology from the view of graphical networks. However, most existing GNN-based models assume that the neuroimaging-produced brain connectome network is a homogeneous graph with single types of nodes and edges. In fact, emerging studies have reported and emphasized the significance of heterogeneity among human brain activities, especially between the two cerebral hemispheres. Thus, homogeneous-structured brain network-based graph methods are insufficient for modelling complicated cerebral activity states. To overcome this problem, in this paper, we present a heterogeneous graph neural network (HebrainGNN) for multimodal brain neuroimaging fusion learning. We first model the brain network as a heterogeneous graph with multitype nodes (i.e., left and right hemispheric nodes) and multitype edges (i.e., intra- and interhemispheric edges). Then, we propose a self-supervised pretraining strategy based on a heterogeneous brain network to address the potential overfitting problem caused by the conflict between a large parameter size and a small medical data sample size. Our results show the superiority of the proposed model over other existing methods in brain-related disease prediction tasks. Ablation experiments show that our heterogeneous graph-based model attaches more importance to hemispheric connections that may be neglected due to their low strength by previous homogeneous graph models. Other experiments also indicate that our proposed model with a pretraining strategy alleviates the problem of limited labelled data and yields a significant improvement in accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
261,413
2501.11883
An Improved Lower Bound on Oblivious Transfer Capacity Using Polarization and Interaction
We consider the oblivious transfer (OT) capacities of noisy channels against the passive adversary; this problem has not been solved even for the binary symmetric channel (BSC). In the literature, the general construction of OT has been known only for generalized erasure channels (GECs); for the BSC, we convert the channel to the binary symmetric erasure channel (BSEC), which is a special instance of the GEC, via alphabet extension and erasure emulation. In a previous paper by the authors, we derived an improved lower bound on the OT capacity of BSC by proposing a method to recursively emulate BSEC via interactive communication. In this paper, we introduce two new ideas of OT construction: (i) via ``polarization" and interactive communication, we recursively emulate GECs that are not necessarily a BSEC; (ii) in addition to the GEC emulation part, we also utilize interactive communication in the key agreement part of OT protocol. By these methods, we derive lower bounds on the OT capacity of BSC that are superior to the previous one for a certain range of crossover probabilities of the BSC. Via our new lower bound, we show that, at the crossover probability being zero, the slope of tangent of the OT capacity is unbounded.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
526,077
2006.00290
Spatial Distribution of the Mean Peak Age of Information in Wireless Networks
This paper considers a large-scale wireless network consisting of source-destination (SD) pairs, where the sources send time-sensitive information, termed status updates, to their corresponding destinations in a time-slotted fashion. We employ Age of information (AoI) for quantifying the freshness of the status updates measured at the destination nodes for two different queuing disciplines, namely Type I and II queues. Type I queue is assumed to transmit the status updates in a first-come-first-served (FCFS) fashion with no storage facility. However, Type I queue may not necessarily minimize AoI because a new update will not be allowed to enter a server until the current update has been successfully transmitted. To overcome this shortcoming, we consider Type II queue in which the most recent status update available at a given transmission slot is transmitted in order to minimize the AoI. As the update delivery rate for a given link is a function of the interference field seen from the receiver, the temporal mean AoI can be treated as a random variable over space. Our goal in this paper is to characterize the spatial distribution of the mean AoI observed by the SD pairs by modeling them as a Poisson bipolar process. Towards this objective, we first derive accurate bounds on the moments of success probability while efficiently capturing the interference-induced coupling in the activities of the SD pairs. Using this result, we then derive tight bounds on the moments as well as the spatial distribution of peak AoI. Our numerical results verify our analytical findings and demonstrate the impact of various system design parameters on the mean peak AoI.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
179,428
1905.05350
Understanding Pedestrian-Vehicle Interactions with Vehicle Mounted Vision: An LSTM Model and Empirical Analysis
Pedestrians and vehicles often share the road in complex inner city traffic. This leads to interactions between the vehicle and pedestrians, with each affecting the other's motion. In order to create robust methods to reason about pedestrian behavior and to design interfaces of communication between self-driving cars and pedestrians we need to better understand such interactions. In this paper, we present a data-driven approach to implicitly model pedestrians' interactions with vehicles, to better predict pedestrian behavior. We propose an LSTM model that takes as input the past trajectories of the pedestrian and ego-vehicle, and pedestrian head orientation, and predicts the future positions of the pedestrian. Our experiments, based on a real-world, inner city dataset captured with vehicle-mounted cameras, show that the usage of such cues improves pedestrian prediction when compared to a baseline that purely uses the past trajectory of the pedestrian.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
130,697
1909.04455
Learning review representations from user and product level information for spam detection
Opinion spam has become a widespread problem in social media, where hired spammers write deceptive reviews to promote or demote products to mislead the consumers for profit or fame. Existing works mainly focus on manually designing discrete textual or behavior features, which cannot capture complex semantics of reviews. Although recent works apply deep learning methods to learn review-level semantic features, their models ignore the impact of the user-level and product-level information on learning review semantics and the inherent user-review-product relationship information. In this paper, we propose a Hierarchical Fusion Attention Network (HFAN) to automatically learn the semantics of reviews from the user and product level. Specifically, we design a multi-attention unit to extract user(product)-related review information. Then, we use orthogonal decomposition and fusion attention to learn a user, review, and product representation from the review information. Finally, we take the review as a relation between user and product entity and apply TransH to jointly encode this relationship into review representation. Experimental results show more than 10\% absolute precision improvement over the state-of-the-art performances on four real-world datasets, which demonstrates the effectiveness and versatility of the model.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
144,811
2206.05712
Graph-based Spatial Transformer with Memory Replay for Multi-future Pedestrian Trajectory Prediction
Pedestrian trajectory prediction is an essential and challenging task for a variety of real-life applications such as autonomous driving and robotic motion planning. Besides generating a single future path, predicting multiple plausible future paths is becoming popular in some recent work on trajectory prediction. However, existing methods typically emphasize spatial interactions between pedestrians and surrounding areas but ignore the smoothness and temporal consistency of predictions. Our model aims to forecast multiple paths based on a historical trajectory by modeling multi-scale graph-based spatial transformers combined with a trajectory smoothing algorithm named ``Memory Replay'' utilizing a memory graph. Our method can comprehensively exploit the spatial information as well as correct the temporally inconsistent trajectories (e.g., sharp turns). We also propose a new evaluation metric named ``Percentage of Trajectory Usage'' to evaluate the comprehensiveness of diverse multi-future predictions. Our extensive experiments show that the proposed model achieves state-of-the-art performance on multi-future prediction and competitive results for single-future prediction. Code released at https://github.com/Jacobieee/ST-MR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
302,107
2103.11594
Deep Neural Networks Learn Meta-Structures from Noisy Labels in Semantic Segmentation
How deep neural networks (DNNs) learn from noisy labels has been studied extensively in image classification but much less in image segmentation. So far, our understanding of the learning behavior of DNNs trained by noisy segmentation labels remains limited. In this study, we address this deficiency in both binary segmentation of biological microscopy images and multi-class segmentation of natural images. We generate extremely noisy labels by randomly sampling a small fraction (e.g., 10%) or flipping a large fraction (e.g., 90%) of the ground truth labels. When trained with these noisy labels, DNNs provide largely the same segmentation performance as trained by the original ground truth. This indicates that DNNs learn structures hidden in labels rather than pixel-level labels per se in their supervised training for semantic segmentation. We refer to these hidden structures in labels as meta-structures. When DNNs are trained by labels with different perturbations to the meta-structure, we find consistent degradation in their segmentation performance. In contrast, incorporation of meta-structure information substantially improves performance of an unsupervised segmentation model developed for binary semantic segmentation. We define meta-structures mathematically as spatial density distributions and show both theoretically and experimentally how this formulation explains key observed learning behavior of DNNs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
225,864
2410.02917
Deep image-based Adaptive BRDF Measure
Efficient and accurate measurement of the bi-directional reflectance distribution function (BRDF) plays a key role in high quality image rendering and physically accurate sensor simulation. However, obtaining the reflectance properties of a material is both time-consuming and challenging. This paper presents a novel method for minimizing the number of samples required for high quality BRDF capture using a gonio-reflectometer setup. Taking an image of the physical material sample as input, a lightweight neural network first estimates the parameters of an analytic BRDF model, and the distribution of the sample locations. In a second step, we use an image-based loss to find the number of samples needed to meet the required accuracy. This approach significantly accelerates the measurement process while maintaining a high level of accuracy and fidelity in the BRDF representation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
494,523
2402.18490
TAMM: TriAdapter Multi-Modal Learning for 3D Shape Understanding
The limited scale of current 3D shape datasets hinders the advancements in 3D shape understanding, and motivates multi-modal learning approaches which transfer learned knowledge from data-abundant 2D image and language modalities to 3D shapes. However, even though the image and language representations have been aligned by cross-modal models like CLIP, we find that the image modality fails to contribute as much as the language in existing multi-modal 3D representation learning methods. This is attributed to the domain shift in the 2D images and the distinct focus of each modality. To more effectively leverage both modalities in the pre-training, we introduce TriAdapter Multi-Modal Learning (TAMM) -- a novel two-stage learning approach based on three synergistic adapters. First, our CLIP Image Adapter mitigates the domain gap between 3D-rendered images and natural images, by adapting the visual representations of CLIP for synthetic image-text pairs. Subsequently, our Dual Adapters decouple the 3D shape representation space into two complementary sub-spaces: one focusing on visual attributes and the other for semantic understanding, which ensure a more comprehensive and effective multi-modal pre-training. Extensive experiments demonstrate that TAMM consistently enhances 3D representations for a wide range of 3D encoder architectures, pre-training datasets, and downstream tasks. Notably, we boost the zero-shot classification accuracy on Objaverse-LVIS from 46.8\% to 50.7\%, and improve the 5-way 10-shot linear probing classification accuracy on ModelNet40 from 96.1\% to 99.0\%. Project page: https://alanzhangcs.github.io/tamm-page.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
433,448
1907.13120
An Experiment on Measurement of Pavement Roughness via Android-Based Smartphones
The study focuses on an experiment using three different smartphones to collect acceleration data from vibration for road roughness detection. The Android operating system is used in the application. The study takes place on asphaltic pavement of the expressway system of Thailand, over a distance of 9 km. The test vehicle carries an inertial profiler with laser line sensors to record road roughness according to the IRI. The RMS and Machine Learning (ML) methods are used in this study. Each smartphone has a different ability to detect the vibration, thus different detected values are obtained. The results are compared to the IRI observation. The ML method gives a better result compared to RMS. This study finds little relationship between the IRI and the vibration acceleration data collected from smartphones.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
140,302
2005.13101
State Estimation-Based Robust Optimal Control of Influenza Epidemics in an Interactive Human Society
This paper presents a state estimation-based robust optimal control strategy for influenza epidemics in an interactive human society in the presence of modeling uncertainties. Interactive society is influenced by the random entrance of individuals from other human societies whose effects can be modeled as a non-Gaussian noise. Since only the number of exposed and infected humans can be measured, states of the influenza epidemics are first estimated by an extended maximum correntropy Kalman filter (EMCKF) to provide a robust state estimation in the presence of the non-Gaussian noise. An online quadratic program (QP) optimization is then synthesized subject to a robust control Lyapunov function (RCLF) to minimize susceptible and infected humans, while minimizing and bounding the rates of vaccination and antiviral treatment. The joint QP-RCLF-EMCKF meets multiple design specifications such as state estimation, tracking, pointwise control optimality, and robustness to parameter uncertainty and state estimation errors that have not been achieved simultaneously in previous studies. The uniform ultimate boundedness (UUB)/convergence of error trajectories is guaranteed using a Lyapunov stability argument. The soundness of the proposed approach is validated on the influenza epidemics of an interactive human society with a population of 16000. Simulation results show that the QP-RCLF-EMCKF achieves appropriate tracking and state estimation performance. The robustness of the proposed controller is finally illustrated in the presence of modeling error and non-Gaussian noise.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
178,901
2003.11723
Learning transferable and discriminative features for unsupervised domain adaptation
Although achieving remarkable progress, it is very difficult to induce a supervised classifier without any labeled data. Unsupervised domain adaptation is able to overcome this challenge by transferring knowledge from a labeled source domain to an unlabeled target domain. Transferability and discriminability are two key criteria for characterizing the superiority of feature representations to enable successful domain adaptation. In this paper, a novel method called \textit{learning TransFerable and Discriminative Features for unsupervised domain adaptation} (TFDF) is proposed to optimize these two objectives simultaneously. On the one hand, distribution alignment is performed to reduce domain discrepancy and learn more transferable representations. Instead of adopting \textit{Maximum Mean Discrepancy} (MMD) which only captures the first-order statistical information to measure distribution discrepancy, we adopt a recently proposed statistic called \textit{Maximum Mean and Covariance Discrepancy} (MMCD), which can not only capture the first-order statistical information but also capture the second-order statistical information in the reproducing kernel Hilbert space (RKHS). On the other hand, we propose to explore both local discriminative information via manifold regularization and global discriminative information via minimizing the proposed \textit{class confusion} objective to learn more discriminative features, respectively. We integrate these two objectives into the \textit{Structural Risk Minimization} (SRM) framework and learn a domain-invariant classifier. Comprehensive experiments are conducted on five real-world datasets and the results verify the effectiveness of the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
169,708
2011.04106
Ensemble Knowledge Distillation for CTR Prediction
Recently, deep learning-based models have been widely studied for click-through rate (CTR) prediction and lead to improved prediction accuracy in many industrial applications. However, current research focuses primarily on building complex network architectures to better capture sophisticated feature interactions and dynamic user behaviors. The increased model complexity may slow down online inference and hinder its adoption in real-time applications. Instead, our work targets at a new model training strategy based on knowledge distillation (KD). KD is a teacher-student learning framework to transfer knowledge learned from a teacher model to a student model. The KD strategy not only allows us to simplify the student model as a vanilla DNN model but also achieves significant accuracy improvements over the state-of-the-art teacher models. The benefits thus motivate us to further explore the use of a powerful ensemble of teachers for more accurate student model training. We also propose some novel techniques to facilitate ensembled CTR prediction, including teacher gating and early stopping by distillation loss. We conduct comprehensive experiments against 12 existing models and across three industrial datasets. Both offline and online A/B testing results show the effectiveness of our KD-based training strategy.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
205,463
2207.02000
Disentangling private classes through regularization
Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. However, little attention has been devoted to connected legal aspects. In 2016, the European Union approved the General Data Protection Regulation which entered into force in 2018. Its main rationale was to protect the privacy and data protection of its citizens by regulating the operation of the so-called "Data Economy". As data is the fuel of modern Artificial Intelligence, it is argued that the GDPR can be partly applicable to a series of algorithmic decision making tasks before a more structured AI Regulation enters into force. In the meantime, AI should not allow undesired information leakage deviating from the purpose for which it is created. In this work we propose DisP, an approach for deep learning models disentangling the information related to some classes we desire to keep private, from the data processed by AI. In particular, DisP is a regularization strategy de-correlating the features belonging to the same private class at training time, hiding the information of private classes membership. Our experiments on state-of-the-art deep learning models show the effectiveness of DisP, minimizing the risk of extraction for the classes we desire to keep private.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
306,369
2104.07381
On the Assessment of Benchmark Suites for Algorithm Comparison
Benchmark suites, i.e. collections of benchmark functions, are widely used in the comparison of black-box optimization algorithms. Over the years, research has identified many desired qualities for benchmark suites, such as diverse topology, different difficulties, scalability, and representativeness of real-world problems, among others. However, while the topology characteristics have been subjected to previous studies, there is no study that has statistically evaluated the difficulty level of benchmark functions, how well they discriminate optimization algorithms, and how suitable a benchmark suite is for algorithm comparison. In this paper, we propose the use of an item response theory (IRT) model, the Bayesian two-parameter logistic model for multiple attempts, to statistically evaluate these aspects with respect to the empirical success rate of algorithms. With this model, we can assess the difficulty level of each benchmark, how well they discriminate different algorithms, the ability score of an algorithm, and how much information the benchmark suite adds in the estimation of the ability scores. We demonstrate the use of this model in two well-known benchmark suites, the Black-Box Optimization Benchmark (BBOB) for continuous optimization and the Pseudo Boolean Optimization (PBO) for discrete optimization. We found that most benchmark functions of the BBOB suite have high difficulty levels (compared to the optimization algorithms) and low discrimination. For the PBO, most functions have good discrimination parameters but are often considered too easy. We discuss potential uses of IRT in benchmarking, including its use to improve the design of benchmark suites, to measure multiple aspects of the algorithms, and to design adaptive suites.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
230,395
2404.00885
Modeling Output-Level Task Relatedness in Multi-Task Learning with Feedback Mechanism
Multi-task learning (MTL) is a paradigm that simultaneously learns multiple tasks by sharing information at different levels, enhancing the performance of each individual task. While previous research has primarily focused on feature-level or parameter-level task relatedness, and proposed various model architectures and learning algorithms to improve learning performance, we aim to explore output-level task relatedness. This approach introduces a posteriori information into the model, considering that different tasks may produce correlated outputs with mutual influences. We achieve this by incorporating a feedback mechanism into MTL models, where the output of one task serves as a hidden feature for another task, thereby transforming a static MTL model into a dynamic one. To ensure the training process converges, we introduce a convergence loss that measures the trend of a task's outputs during each iteration. Additionally, we propose a Gumbel gating mechanism to determine the optimal projection of feedback signals. We validate the effectiveness of our method and evaluate its performance through experiments conducted on several baseline models in spoken language understanding.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
443,148
2305.03327
FlowText: Synthesizing Realistic Scene Text Video with Optical Flow Estimation
Current video text spotting methods can achieve preferable performance, powered with sufficient labeled training data. However, labeling data manually is time-consuming and labor-intensive. To overcome this, using low-cost synthetic data is a promising alternative. This paper introduces a novel video text synthesis technique called FlowText, which utilizes optical flow estimation to synthesize a large amount of text video data at a low cost for training robust video text spotters. Unlike existing methods that focus on image-level synthesis, FlowText concentrates on synthesizing temporal information of text instances across consecutive frames using optical flow. This temporal information is crucial for accurately tracking and spotting text in video sequences, including text movement, distortion, appearance, disappearance, shelter, and blur. Experiments show that combining general detectors like TransDETR with the proposed FlowText produces remarkable results on various datasets, such as ICDAR2015video and ICDAR2013video. Code is available at https://github.com/callsys/FlowText.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
362,355
1609.06277
Design of Admissible Heuristics for Kinodynamic Motion Planning via Sum-of-Squares Programming
How does one obtain an admissible heuristic for a kinodynamic motion planning problem? This paper develops the analytical tools and techniques to answer this question. A sufficient condition for the admissibility of a heuristic is presented which can be checked directly from the problem data. This condition is also used to formulate a concave program to optimize an admissible heuristic. This optimization is then approximated and solved in polynomial time using sum-of-squares programming techniques. A number of examples are provided to demonstrate these concepts.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
61,264
2011.09456
The Effect of Modern Traffic Information on Braess' Paradox
Braess' paradox has been shown to appear rather generically in many systems of transport on networks. It is especially relevant for vehicular traffic where it shows that in certain situations building a new road in an urban or highway network can lead to increased average travel times for all users. Here we address the question whether this changes if the drivers (agents) have access to traffic information as available for modern traffic networks, i.e. through navigation apps and/or personal experiences in the past. We study the effect of traffic information in the classical Braess network, but using a microscopic model for the traffic dynamics, to find out if the paradox can really be observed in such a scenario or if it only exists in some theoretically available user optima that are never realized by drivers that base their route choice decisions intelligently upon realistic traffic information. We address this question for different splits of the two information types.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
207,193
2205.01921
Second Order Path Variationals in Non-Stationary Online Learning
We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses. We show that appropriately designed Strongly Adaptive algorithms achieve a dynamic regret of $\tilde O(d^2 n^{1/5} C_n^{2/5} \vee d^2)$, where $n$ is the time horizon and $C_n$ a path variational based on second order differences of the comparator sequence. Such a path variational naturally encodes comparator sequences that are piecewise linear -- a powerful family that tracks a variety of non-stationarity patterns in practice (Kim et al, 2009). The aforementioned dynamic regret rate is shown to be optimal modulo dimension dependencies and poly-logarithmic factors of $n$. Our proof techniques rely on analysing the KKT conditions of the offline oracle and requires several non-trivial generalizations of the ideas in Baby and Wang, 2021, where the latter work only leads to a slower dynamic regret rate of $\tilde O(d^{2.5}n^{1/3}C_n^{2/3} \vee d^{2.5})$ for the current problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
294,768
1712.04332
Scaling Limit: Exact and Tractable Analysis of Online Learning Algorithms with Applications to Regularized Regression and PCA
We present a framework for analyzing the exact dynamics of a class of online learning algorithms in the high-dimensional scaling limit. Our results are applied to two concrete examples: online regularized linear regression and principal component analysis. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measures of the target feature vector and its estimates provided by the algorithms will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE can be efficiently obtained. These solutions lead to precise predictions of the performance of the algorithms, as many practical performance metrics are linear functionals of the joint empirical measures. In addition to characterizing the dynamic performance of online learning algorithms, our asymptotic analysis also provides useful insights. In particular, in the high-dimensional limit, and due to exchangeability, the original coupled dynamics associated with the algorithms will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight for nonconvex optimization problems may prove an interesting line of future research.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
86,590
0809.5191
A Coded Bit-Loading Linear Precoded Discrete Multitone Solution for Power Line Communication
The linear precoded discrete multitone modulation (LP-DMT) system has already been proved advantageous with an adaptive resource allocation algorithm in a power line communication (PLC) context. In this paper, we investigate the bit and energy allocation algorithm of an adaptive LP-DMT system taking into account the channel coding scheme. A coded adaptive LP-DMT system is presented in the PLC context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. The performance of a concatenated channel coding scheme, consisting of an inner Wei's 4-dimensional 16-state trellis code and an outer Reed-Solomon code, in combination with the proposed algorithm is analyzed. Simulation results are presented for a fixed target bit error rate in a multicarrier scenario under a power spectral density constraint. Using a multipath model of the PLC channel, it is shown that the proposed coded adaptive LP-DMT system performs better than classical coded discrete multitone.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,431
1406.6323
Dense Correspondences Across Scenes and Scales
We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only small subsets of image pixels; matching only pixels for which stable scales may be reliably estimated. More recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale invariant descriptors. Our work here is motivated by the observation that pixels in the image have contexts -- the pixels around them -- which may be exploited in order to estimate local scales reliably and repeatably. Specifically, we make the following contributions. (i) We show that scales estimated in sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale invariant descriptors to be extracted anywhere in the image, not just in detected interest points. (ii) We present three different means for propagating this information: using only the scales at detected interest points, using the underlying image information to guide the propagation of this information across each image, separately, and using both images simultaneously. Finally, (iii), we provide extensive results, both qualitative and quantitative, demonstrating that accurate dense correspondences can be obtained even between very different images, with little computational costs beyond those required by existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
34,112
1904.05519
Efficient and Robust Registration on the 3D Special Euclidean Group
We present an accurate, robust and fast method for registration of 3D scans. Our motion estimation optimizes a robust cost function on the intrinsic representation of rigid motions, i.e., the Special Euclidean group $\mathbb{SE}(3)$. We exploit the geometric properties of Lie groups as well as the robustness afforded by an iteratively reweighted least squares optimization. We also generalize our approach to a joint multiview method that simultaneously solves for the registration of a set of scans. We demonstrate the efficacy of our approach by thorough experimental validation. Our approach significantly outperforms the state-of-the-art robust 3D registration method based on a line process in terms of both speed and accuracy. We also show that this line process method is a special case of our principled geometric solution. Finally, we also present scenarios where global registration based on feature correspondences fails but multiview ICP based on our robust motion estimation is successful.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,337
1612.07157
New Convolutional Codes Derived from Algebraic Geometry Codes
In this paper, we construct new families of convolutional codes. Such codes are obtained by means of algebraic geometry codes. Additionally, more families of convolutional codes are constructed by means of puncturing, extending, expanding and by the direct product code construction applied to algebraic geometry codes. The parameters of the new convolutional codes are better than or comparable to the ones available in literature. In particular, a family of almost near MDS codes is presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
65,908
2110.13048
Nonuniform Negative Sampling and Log Odds Correction with Rare Events Data
We investigate the issue of parameter estimation with nonuniform negative sampling for imbalanced data. We first prove that, with imbalanced data, the available information about unknown parameters is only tied to the relatively small number of positive instances, which justifies the usage of negative sampling. However, if the negative instances are subsampled to the same level of the positive cases, there is information loss. To maintain more information, we derive the asymptotic distribution of a general inverse probability weighted (IPW) estimator and obtain the optimal sampling probability that minimizes its variance. To further improve the estimation efficiency over the IPW method, we propose a likelihood-based estimator by correcting log odds for the sampled data and prove that the improved estimator has the smallest asymptotic variance among a large class of estimators. It is also more robust to pilot misspecification. We validate our approach on simulated data as well as a real click-through rate dataset with more than 0.3 trillion instances, collected over a period of a month. Both theoretical and empirical results demonstrate the effectiveness of our method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
263,053
2201.05545
Multimodal registration of FISH and nanoSIMS images using convolutional neural network models
Nanoscale secondary ion mass spectrometry (nanoSIMS) and fluorescence in situ hybridization (FISH) microscopy provide high-resolution, multimodal image representations of the identity and cell activity, respectively, of targeted microbial communities in microbiological research. Despite its importance to microbiologists, multimodal registration of FISH and nanoSIMS images is challenging given the morphological distortion and background noise in both images. In this study, we use convolutional neural networks (CNNs) for multiscale feature extraction, shape context for computation of the minimum transformation cost feature matching, and the thin-plate spline (TPS) model for multimodal registration of the FISH and nanoSIMS images. Registration accuracy was quantitatively assessed against manually registered images, at both the pixel and structural levels, using standard metrics. Although all six tested CNN models performed well, ResNet18 was observed to outperform VGG16, VGG19, GoogLeNet, ShuffleNet and ResNet101 based on most metrics. This study demonstrates the utility of CNNs in the registration of multimodal images with significant background noise and morphology distortion. We also show aggregate shape, preserved by binarization, to be a robust feature for registering multimodal microbiology-related images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
275,419
2107.06970
Identifying Competition and Mutualism Between Online Groups
Platforms often host multiple online groups with overlapping topics and members. How can researchers and designers understand how related groups affect each other? Inspired by population ecology, prior research in social computing and human-computer interaction has studied related groups by correlating group size with degrees of overlap in content and membership, but has produced puzzling results: overlap is associated with competition in some contexts but with mutualism in others. We suggest that this inconsistency results from aggregating intergroup relationships into an overall environmental effect that obscures the diversity of competition and mutualism among related groups. Drawing on the framework of community ecology, we introduce a time-series method for inferring competition and mutualism. We then use this framework to inform a large-scale analysis of clusters of subreddits that all have high user overlap. We find that mutualism is more common than competition.
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
246,256
1611.00228
Application Specific Instrumentation (ASIN): A Bio-inspired Paradigm to Instrumentation using recognition before detection
In this paper we present a new scheme for instrumentation, which has been inspired by the way small mammals sense their environment. We call this scheme Application Specific Instrumentation (ASIN). A conventional instrumentation system focuses on gathering as much information about the scene as possible. This, usually, is a generic system whose data can be used by another system to take a specific action. ASIN fuses these two steps into one. The major merit of the proposed scheme is that it uses low-resolution sensors and much less computational overhead to give good performance for a highly specialised application.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
63,190
2111.01396
Boundary Distribution Estimation for Precise Object Detection
In the field of state-of-the-art object detection, the task of object localization is typically accomplished through a dedicated subnet that emphasizes bounding box regression. This subnet traditionally predicts the object's position by regressing the box's center position and scaling factors. Despite the widespread adoption of this approach, we have observed that the localization results often suffer from defects, leading to unsatisfactory detector performance. In this paper, we address the shortcomings of previous methods through theoretical analysis and experimental verification and present an innovative solution for precise object detection. Instead of solely focusing on the object's center and size, our approach enhances the accuracy of bounding box localization by refining the box edges based on the estimated distribution at the object's boundary. Experimental results demonstrate the potential and generalizability of our proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
264,546
2107.09577
How Does Cell-Free Massive MIMO Support Multiple Federated Learning Groups?
Federated learning (FL) has been considered as a promising learning framework for future machine learning systems due to its privacy preservation and communication efficiency. In beyond-5G/6G systems, it is likely to have multiple FL groups with different learning purposes. This scenario leads to a question: How does a wireless network support multiple FL groups? As an answer, we first propose to use a cell-free massive multiple-input multiple-output (MIMO) network to guarantee the stable operation of multiple FL processes by letting the iterations of these FL processes be executed together within a large-scale coherence time. We then develop a novel scheme that asynchronously executes the iterations of FL processes under multicasting downlink and conventional uplink transmission protocols. Finally, we propose a simple/low-complexity resource allocation algorithm which optimally chooses the power and computation resources to minimize the execution time of each iteration of each FL process.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
247,066
2405.09789
LeMeViT: Efficient Vision Transformer with Learnable Meta Tokens for Remote Sensing Image Interpretation
Due to spatial redundancy in remote sensing images, sparse tokens containing rich information are usually involved in self-attention (SA) to reduce the overall token numbers within the calculation, avoiding the high computational cost issue in Vision Transformers. However, such methods usually obtain sparse tokens by hand-crafted or parallel-unfriendly designs, posing a challenge to reach a better balance between efficiency and performance. Different from them, this paper proposes to use learnable meta tokens to formulate sparse tokens, which effectively learn key information meanwhile improving the inference speed. Technically, the meta tokens are first initialized from image tokens via cross-attention. Then, we propose Dual Cross-Attention (DCA) to promote information exchange between image tokens and meta tokens, where they serve as query and key (value) tokens alternatively in a dual-branch structure, significantly reducing the computational complexity compared to self-attention. By employing DCA in the early stages with dense visual tokens, we obtain the hierarchical architecture LeMeViT with various sizes. Experimental results in classification and dense prediction tasks show that LeMeViT has a significant $1.7 \times$ speedup, fewer parameters, and competitive performance compared to the baseline models, and achieves a better trade-off between efficiency and performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
454,529
1404.1978
An Abrupt Change Detection Heuristic with Applications to Cyber Data Attacks on Power Systems
We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
32,161
2411.07954
Learning Memory Mechanisms for Decision Making through Demonstrations
In Partially Observable Markov Decision Processes, integrating an agent's history into memory poses a significant challenge for decision-making. Traditional imitation learning, relying on observation-action pairs for expert demonstrations, fails to capture the expert's memory mechanisms used in decision-making. To capture memory processes as demonstrations, we introduce the concept of memory dependency pairs $(p, q)$ indicating that events at time $p$ are recalled for decision-making at time $q$. We introduce AttentionTuner to leverage memory dependency pairs in Transformers and find significant improvements across several tasks compared to standard Transformers when evaluated on Memory Gym and the Long-term Memory Benchmark. Code is available at https://github.com/WilliamYue37/AttentionTuner.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
507,730
2402.16888
Chaotic attractor reconstruction using small reservoirs -- the influence of topology
Forecasting timeseries based upon measured data is needed in a wide range of applications and has been the subject of extensive research. A particularly challenging task is the forecasting of timeseries generated by chaotic dynamics. In recent years reservoir computing has been shown to be an effective method of forecasting chaotic dynamics and reconstructing chaotic attractors from data. In this work strides are made toward smaller and lower-complexity reservoirs, with the goal of improved hardware implementability and more reliable production of adequate surrogate models. We show that a reservoir of uncoupled nodes more reliably produces long-term timeseries predictions than complex reservoir topologies. We then link the improved attractor reconstruction of the uncoupled reservoir with smaller spectral radii of the resulting surrogate systems. These results indicate that the node degree plays an important role in determining whether the desired dynamics will be stable in the autonomous surrogate system, which is attained via closed-loop operation of the trained reservoir. In terms of hardware implementability, uncoupled nodes would allow for greater freedom in the hardware architecture because no complex coupling setups are needed and because, for uncoupled nodes, the system response is equivalent for space and time multiplexing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
432,750
1406.5273
Identifiability of the Simplex Volume Minimization Criterion for Blind Hyperspectral Unmixing: The No Pure-Pixel Case
In blind hyperspectral unmixing (HU), the pure-pixel assumption is well-known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially for scenarios where pixels are heavily mixed. In the no pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it was not totally clear why this is true from a theoretical viewpoint. This paper aims to address the latter issue. We develop an analysis framework wherein the perfect endmember identifiability of MVES is studied under the noiseless case. We prove that MVES is indeed robust against lack of pure pixels, as long as the pixels do not get too heavily mixed and too asymmetrically spread. The theoretical results are verified by numerical simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
34,012
2108.05516
Text Anchor Based Metric Learning for Small-footprint Keyword Spotting
Keyword Spotting (KWS) remains challenging in achieving the trade-off between small footprint and high accuracy. Recently proposed metric learning approaches improved the generalizability of models for the KWS task, and 1D-CNN based KWS models have achieved the state-of-the-art (SOTA) in terms of model size. However, for metric learning, due to data limitations, the speech anchor is highly susceptible to the acoustic environment and speakers. Also, we note that the 1D-CNN models have limited capability to capture long-term temporal acoustic features. To address the above problems, we propose to utilize text anchors to improve the stability of anchors. Furthermore, a new type of model (LG-Net) is exquisitely designed to promote long-short term acoustic feature modeling based on 1D-CNN and self-attention. Experiments are conducted on Google Speech Commands Dataset version 1 (GSCDv1) and 2 (GSCDv2). The results demonstrate that the proposed text anchor based metric learning method shows consistent improvements over the speech anchor on representative CNN-based models. Moreover, our LG-Net model achieves SOTA accuracy of 97.67% and 96.79% on the two datasets, respectively. It is encouraging to see that our lighter LG-Net with only 74k parameters obtains 96.82% KWS accuracy on the GSCDv1 and 95.77% KWS accuracy on the GSCDv2.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
250,319
2309.13939
The Time Traveler's Guide to Semantic Web Research: Analyzing Fictitious Research Themes in the ESWC "Next 20 Years" Track
What will Semantic Web research focus on in 20 years from now? We asked this question to the community and collected their visions in the "Next 20 years" track of ESWC 2023. We challenged the participants to submit "future" research papers, as if they were submitting to the 2043 edition of the conference. The submissions - entirely fictitious - were expected to be full scientific papers, with research questions, state of the art references, experimental results and future work, with the goal to get an idea of the research agenda for the late 2040s and early 2050s. We received ten submissions, eight of which were accepted for presentation at the conference, that mixed serious ideas of potential future research themes and discussion topics with some fun and irony. In this paper, we intend to provide a survey of those "science fiction" papers, considering the emerging research themes and topics, analysing the research methods applied by the authors in these very special submissions, and investigating also the most fictitious parts (e.g., neologisms, fabricated references). Our goal is twofold: on the one hand, we investigate what this special track tells us about the Semantic Web community and, on the other hand, we aim at getting some insights on future research practices and directions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
394,411
2104.12953
Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals
Deep learning has achieved impressive performance on many tasks in recent years. However, it has been found that it is still not enough for deep neural networks to provide only point estimates. For high-risk tasks, we need to assess the reliability of the model predictions. This requires us to quantify the uncertainty of model predictions and construct prediction intervals. In this paper, we explore the uncertainty in deep learning to construct prediction intervals. In general, we comprehensively consider two categories of uncertainty: aleatory uncertainty and epistemic uncertainty. We design a special loss function, which enables us to learn uncertainty without uncertainty labels. We only need to supervise the learning of the regression task. We learn the aleatory uncertainty implicitly from the loss function, while the epistemic uncertainty is accounted for in an ensembled form. Our method correlates the construction of prediction intervals with the uncertainty estimation. Impressive results on some publicly available datasets show that the performance of our method is competitive with other state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
232,366
2001.07136
Sampling Graphlets of Multi-layer Networks: A Restricted Random Walk Approach
Graphlets are induced subgraph patterns that are crucial to the understanding of the structure and function of a large network. A lot of effort has been devoted to calculating graphlet statistics, where random walk based approaches are commonly used to access restricted graphs through the available application programming interfaces (APIs). However, most of them merely consider individual networks while overlooking the strong coupling between different networks. In this paper, we estimate the graphlet concentration in multi-layer networks with real-world applications. An inter-layer edge connects two nodes in different layers if they belong to the same person. The access to a multi-layer network is restrictive in the sense that the upper layer allows random walk sampling, whereas the nodes of lower layers can be accessed only through the inter-layer edges and only support random node or edge sampling. To cope with this new challenge, we define a suite of two-layer graphlets and propose a novel random walk sampling algorithm to estimate the proportion of all the 3-node graphlets. An analytical bound on the sampling steps is proved to guarantee the convergence of our unbiased estimator. We further generalize our algorithm to explore the tradeoff between the estimated accuracies of different graphlets when the sample size is split across different layers. Experimental evaluation on real-world and synthetic multi-layer networks demonstrates the accuracy and high efficiency of our unbiased estimators.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
160,974
1206.1402
A New Greedy Algorithm for Multiple Sparse Regression
This paper proposes a new algorithm for multiple sparse regression in high dimensions, where the task is to estimate the support and values of several (typically related) sparse vectors from a few noisy linear measurements. Our algorithm is a "forward-backward" greedy procedure that -- uniquely -- operates on two distinct classes of objects. In particular, we organize our target sparse vectors as a matrix; our algorithm involves iterative addition and removal of both (a) individual elements, and (b) entire rows (corresponding to shared features), of the matrix. Analytically, we establish that our algorithm manages to recover the supports (exactly) and values (approximately) of the sparse vectors, under assumptions similar to existing approaches based on convex optimization. However, our algorithm has a much smaller computational complexity. Perhaps most interestingly, it is seen empirically to require visibly fewer samples. Ours represents the first attempt to extend greedy algorithms to the class of models that can only/best be represented by a combination of component structural assumptions (sparse and group-sparse, in our case).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
16,365
2009.14404
Learning to Reflect and to Beamform for Intelligent Reflecting Surface with Implicit Channel Estimation
Intelligent reflecting surface (IRS), which consists of a large number of tunable reflective elements, is capable of enhancing the wireless propagation environment in a cellular network by intelligently reflecting the electromagnetic waves from the base-station (BS) toward the users. The optimal tuning of the phase shifters at the IRS is, however, a challenging problem, because due to the passive nature of reflective elements, it is difficult to directly measure the channels between the IRS, the BS, and the users. Instead of following the traditional paradigm of first estimating the channels then optimizing the system parameters, this paper advocates a machine learning approach capable of directly optimizing both the beamformers at the BS and the reflective coefficients at the IRS based on a system objective. This is achieved by using a deep neural network to parameterize the mapping from the received pilots (plus any additional information, such as the user locations) to an optimized system configuration, and by adopting a permutation invariant/equivariant graph neural network (GNN) architecture to capture the interactions among the different users in the cellular network. Simulation results show that the proposed implicit channel estimation based approach is generalizable, can be interpreted, and can efficiently learn to maximize a sum-rate or minimum-rate objective from a much fewer number of pilots than the traditional explicit channel estimation based approaches.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
198,017
2411.04480
CFPNet: Improving Lightweight ToF Depth Completion via Cross-zone Feature Propagation
Depth completion using lightweight time-of-flight (ToF) depth sensors is attractive due to their low cost. However, lightweight ToF sensors usually have a limited field of view (FOV) compared with cameras. Thus, only pixels in the zone area of the image can be associated with depth signals. Previous methods fail to propagate depth features from the zone area to the outside-zone area effectively, thus suffering from degraded depth completion performance outside the zone. To this end, this paper proposes the CFPNet to achieve cross-zone feature propagation from the zone area to the outside-zone area with two novel modules. The first is a direct-attention-based propagation module (DAPM), which enforces direct cross-zone feature acquisition. The second is a large-kernel-based propagation module (LKPM), which realizes cross-zone feature propagation by utilizing convolution layers with kernel sizes up to 31. CFPNet achieves state-of-the-art (SOTA) depth completion performance by combining these two modules properly, as verified by extensive experimental results on the ZJU-L5 dataset. The code is available at https://github.com/denyingmxd/CFPNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
506,289
2307.12981
3D-LLM: Injecting the 3D World into Large Language Models
Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 300k 3D-language data covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (e.g., the BLEU-1 score surpasses the state-of-the-art score by 9%). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model could perform more tasks beyond the scope of existing LLMs and VLMs. Project Page: https://vis-www.cs.umass.edu/3dllm/.
false
false
false
false
true
false
true
true
true
false
false
true
false
false
false
false
false
false
381,448
2304.04343
Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence
Black-box adversarial attacks have demonstrated strong potential to compromise machine learning models by iteratively querying the target model or leveraging transferability from a local surrogate model. Recently, such attacks can be effectively mitigated by state-of-the-art (SOTA) defenses, e.g., detection via the pattern of sequential queries, or injecting noise into the model. To the best of our knowledge, we take the first step to study a new paradigm of black-box attacks with provable guarantees -- certifiable black-box attacks that can guarantee the attack success probability (ASP) of adversarial examples before querying over the target model. This new black-box attack unveils significant vulnerabilities of machine learning models, compared to traditional empirical black-box attacks, e.g., breaking strong SOTA defenses with provable confidence, constructing a space of (infinite) adversarial examples with high ASP, and the ASP of the generated adversarial examples is theoretically guaranteed without verification/queries over the target model. Specifically, we establish a novel theoretical foundation for ensuring the ASP of the black-box attack with randomized adversarial examples (AEs). Then, we propose several novel techniques to craft the randomized AEs while reducing the perturbation size for better imperceptibility. Finally, we have comprehensively evaluated the certifiable black-box attacks on the CIFAR10/100, ImageNet, and LibriSpeech datasets, while benchmarking with 16 SOTA black-box attacks, against various SOTA defenses in the domains of computer vision and speech recognition. Both theoretical and experimental results have validated the significance of the proposed attack. The code and all the benchmarks are available at \url{https://github.com/datasec-lab/CertifiedAttack}.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
357,188
2212.11235
Machine Learning Assisted Inertia Estimation using Ambient Measurements
With the increasing penetration of converter-based renewable resources, different types of dynamics have been introduced to the power system. Due to the complexity and high order of the modern power system, mathematical model-based inertia estimation method becomes more difficult. This paper proposes two novel machine learning assisted inertia estimation methods based on long-recurrent convolutional neural (LRCN) network and graph convolutional neural (GCN) network respectively. Informative features are extracted from ambient measurements collected through phasor measurement units (PMU). Spatial structure with high dimensional features and graphical information are then incorporated to improve the accuracy of the inertia estimation. Case studies are conducted on the IEEE 24-bus system. The proposed LRCN and GCN based inertia estimation models achieve an accuracy of 97.34% and 98.15% respectively. Furthermore, the proposed zero generation injection bus based optimal PMU placement (ZGIB-OPP) has been proved to be able to maximize the system observability, which subsequently improves the performance of all proposed inertia estimation models.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
337,735
2110.00687
Investigating Robustness of Dialog Models to Popular Figurative Language Constructs
Humans often employ figurative language use in communication, including during interactions with dialog systems. Thus, it is important for real-world dialog systems to be able to handle popular figurative language constructs like metaphor and simile. In this work, we analyze the performance of existing dialog models in situations where the input dialog context exhibits use of figurative language. We observe large gaps in handling of figurative language when evaluating the models on two open domain dialog datasets. When faced with dialog contexts consisting of figurative language, some models show very large drops in performance compared to contexts without figurative language. We encourage future research in dialog modeling to separately analyze and report results on figurative language in order to better test model capabilities relevant to real-world use. Finally, we propose lightweight solutions to help existing models become more robust to figurative language by simply using an external resource to translate figurative language to literal (non-figurative) forms while preserving the meaning to the best extent possible.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
258,490
1804.02767
YOLOv3: An Incremental Improvement
We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at https://pjreddie.com/yolo/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,488
2406.18906
Sonnet or Not, Bot? Poetry Evaluation for Large Models and Datasets
Large language models (LLMs) can now generate and recognize poetry. But what do LLMs really know about poetry? We develop a task to evaluate how well LLMs recognize one aspect of English-language poetry--poetic form--which captures many different poetic features, including rhyme scheme, meter, and word or line repetition. By using a benchmark dataset of over 4.1k human expert-annotated poems, we show that state-of-the-art LLMs can successfully identify both common and uncommon fixed poetic forms--such as sonnets, sestinas, and pantoums--with surprisingly high accuracy. However, performance varies significantly by poetic form; the models struggle to identify unfixed poetic forms, especially those based on topic or visual features. We additionally measure how many poems from our benchmark dataset are present in popular pretraining datasets or memorized by GPT-4, finding that pretraining presence and memorization may improve performance on this task, but results are inconclusive. We release a benchmark evaluation dataset with 1.4k public domain poems and form annotations, results of memorization experiments and data audits, and code.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
468,224
2501.04597
FrontierNet: Learning Visual Cues to Explore
Exploration of unknown environments is crucial for autonomous robots; it allows them to actively reason and decide on what new data to acquire for tasks such as mapping, object discovery, and environmental assessment. Existing methods, such as frontier-based methods, rely heavily on 3D map operations, which are limited by map quality and often overlook valuable context from visual cues. This work aims at leveraging 2D visual cues for efficient autonomous exploration, addressing the limitations of extracting goal poses from a 3D map. We propose an image-only frontier-based exploration system, with FrontierNet as a core component developed in this work. FrontierNet is a learning-based model that (i) detects frontiers, and (ii) predicts their information gain, from posed RGB images enhanced by monocular depth priors. Our approach provides an alternative to existing 3D-dependent exploration systems, achieving a 16% improvement in early-stage exploration efficiency, as validated through extensive simulations and real-world experiments.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
523,277
2211.13416
Data Origin Inference in Machine Learning
It is a growing direction to utilize unintended memorization in ML models to benefit real-world applications, with recent efforts like user auditing, dataset ownership inference and forgotten data measurement. Standing on the point of ML model development, we introduce a process named data origin inference, to assist ML developers in locating missed or faulty data origin in training set without maintaining strenuous metadata. We formally define the data origin and the data origin inference task in the development of the ML model (mainly neural networks). Then we propose a novel inference strategy combining embedded-space multiple instance classification and shadow training. Diverse use cases cover language, visual and structured data, with various kinds of data origin (e.g. business, county, movie, mobile user, text author). A comprehensive performance analysis of our proposed strategy contains referenced target model layers, available testing data for each origin, and in shadow training, the implementations of feature extraction as well as shadow models. Our best inference accuracy achieves 98.96% in the language use case when the target model is a transformer-based deep neural network. Furthermore, we give a statistical analysis of different kinds of data origin to investigate what kind of origin is probably to be inferred correctly.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
332,464
2311.06279
A novel method of restoration path optimization for the AC-DC bulk power grid after a major blackout
The restoration control of the modern alternating current-direct current (AC-DC) hybrid power grid after a major blackout is difficult and complex. Taking into account the interaction between the line-commutated converter high-voltage direct current (LCC-HVDC) and the AC power grid, this paper proposes a novel optimization method of restoration path to reconfigure the skeleton network for the blackout power grid. Based on the system strength, the supporting capability of the AC power grid for the LCC-HVDC is first analysed from the aspects of start-up and operation of LCC-HVDCs. Subsequently, the quantitative relationship between the restoration path and the restoration characteristic of LCC-HVDC is derived in detail based on the system strength indices of the short-circuit capacity and the frequency regulation capability. Then, an optimization model of restoration path considering non-tree paths is formulated and a feasible optimization algorithm is proposed to achieve the optimal path restoration scheme. A modified IEEE 39-bus system and a partial power grid of Southwest China are simulated to show that the proposed method is suitable for the restoration of AC-DC power grids and can improve restoration efficiency. This research can be an important guidance for operators to rapidly restore the AC-DC power grid.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
406,879
1903.11680
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Modern neural networks are typically trained in an over-parameterized regime where the parameters of the model far exceed the size of the training data. Such neural networks in principle have the capacity to (over)fit any set of labels including pure noise. Despite this, somewhat paradoxically, neural network models trained via first-order methods continue to predict well on yet unseen test data. This paper takes a step towards demystifying this phenomenon. Under a rich dataset model, we show that gradient descent is provably robust to noise/corruption on a constant fraction of the labels despite overparameterization. In particular, we prove that: (i) In the first few iterations where the updates are still in the vicinity of the initialization, gradient descent only fits to the correct labels, essentially ignoring the noisy labels. (ii) To start to overfit to the noisy labels, the network must stray rather far from the initialization, which can only occur after many more iterations. Together, these results show that gradient descent with early stopping is provably robust to label noise and shed light on the empirical robustness of deep networks as well as commonly adopted heuristics to prevent overfitting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
125,555
1102.4272
Bounds on the Achievable Rate for the Fading Relay Channel with Finite Input Constellations
We consider the wireless Rayleigh fading relay channel with finite complex input constellations. Assuming global knowledge of the channel state information and perfect synchronization, upper and lower bounds on the achievable rate, for the full-duplex relay, as well as the more practical half-duplex relay (in which the relay cannot transmit and receive simultaneously), are studied. Assuming the power constraint at the source node and the relay node to be equal, the gain in rate offered by the use of relay over the direct transmission (without the relay) is investigated. It is shown that for the case of finite complex input constellations, the relay gain attains the maximum at a particular SNR and at higher SNRs the relay gain tends to become zero. Since practical schemes always use finite complex input constellation, the above result means that the relay offers maximum advantage over the direct transmission when we operate at a particular SNR and offers no advantage at very high SNRs. This is contrary to the results already known for the relay channel with Gaussian input alphabet.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,304
1905.12561
Anti-efficient encoding in emergent communication
Despite renewed interest in emergent language simulations with neural networks, little is known about the basic properties of the induced code, and how they compare to human language. One fundamental characteristic of the latter, known as Zipf's Law of Abbreviation (ZLA), is that more frequent words are efficiently associated to shorter strings. We study whether the same pattern emerges when two neural networks, a "speaker" and a "listener", are trained to play a signaling game. Surprisingly, we find that networks develop an \emph{anti-efficient} encoding scheme, in which the most frequent inputs are associated to the longest messages, and messages in general are skewed towards the maximum length threshold. This anti-efficient code appears easier to discriminate for the listener, and, unlike in human communication, the speaker does not impose a contrasting least-effort pressure towards brevity. Indeed, when the cost function includes a penalty for longer messages, the resulting message distribution starts respecting ZLA. Our analysis stresses the importance of studying the basic features of emergent communication in a highly controlled setup, to ensure the latter will not strand too far from human language. Moreover, we present a concrete illustration of how different functional pressures can lead to successful communication codes that lack basic properties of human language, thus highlighting the role such pressures play in the latter.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
true
false
false
false
132,797
2102.02509
An Analysis of International Use of Robots for COVID-19
This article analyses data collected on 338 instances of robots used explicitly in response to COVID-19 from 24 Jan, 2020, to 23 Jan, 2021, in 48 countries. The analysis was guided by four overarching questions: 1) What were robots used for in the COVID-19 response? 2) When were they used? 3) How did different countries innovate? and 4) Did having a national policy on robotics influence a country's innovation and insertion of robotics for COVID-19? The findings indicate that robots were used for six different sociotechnical work domains and 29 discrete use cases. When robots were used varied greatly on the country; although many countries did report an increase at the beginning of their first surge. To understand the findings of how innovation occurred, the data was examined through the lens of the technology's maturity according to NASA's Technical Readiness Assessment metrics. Through this lens, findings note that existing robots were used for more than 78% of the instances; slightly modified robots made up 10%; and truly novel robots or novel use cases constituted 12% of the instances. The findings clearly indicate that countries with a national robotics initiative were more likely to use robotics more often and for broader purposes. Finally, the dataset and analysis produces a broad set of implications that warrant further study and investigation. The results from this analysis are expected to be of value to the robotics and robotics policy community in preparing robots for rapid insertion into future disasters.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
218,431
1907.00263
A Power Efficient Artificial Neuron Using Superconducting Nanowires
With the rising societal demand for more information-processing capacity with lower power consumption, alternative architectures inspired by the parallelism and robustness of the human brain have recently emerged as possible solutions. In particular, spiking neural networks (SNNs) offer a bio-realistic approach, relying on pulses analogous to action potentials as units of information. While software encoded networks provide flexibility and precision, they are often computationally expensive. As a result, hardware SNNs based on the spiking dynamics of a device or circuit represent an increasingly appealing direction. Here, we propose to use superconducting nanowires as a platform for the development of an artificial neuron. Building on an architecture first proposed for Josephson junctions, we rely on the intrinsic nonlinearity of two coupled nanowires to generate spiking behavior, and use electrothermal circuit simulations to demonstrate that the nanowire neuron reproduces multiple characteristics of biological neurons. Furthermore, by harnessing the nonlinearity of the superconducting nanowire's inductance, we develop a design for a variable inductive synapse capable of both excitatory and inhibitory control. We demonstrate that this synapse design supports direct fanout, a feature that has been difficult to achieve in other superconducting architectures, and that the nanowire neuron's nominal energy performance is competitive with that of current technologies.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
136,984
1912.00862
ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network
Automated ICD coding, which assigns the International Classification of Disease codes to patient visits, has attracted much research attention since it can save time and labor for billing. The previous state-of-the-art model utilized one convolutional layer to build document representations for predicting ICD codes. However, the lengths and grammar of text fragments, which are closely related to ICD coding, vary a lot in different documents. Therefore, a flat and fixed-length convolutional architecture may not be capable of learning good document representations. In this paper, we proposed a Multi-Filter Residual Convolutional Neural Network (MultiResCNN) for ICD coding. The innovations of our model are two-fold: it utilizes a multi-filter convolutional layer to capture various text patterns with different lengths and a residual convolutional layer to enlarge the receptive field. We evaluated the effectiveness of our model on the widely-used MIMIC dataset. On the full code set of MIMIC-III, our model outperformed the state-of-the-art model in 4 out of 6 evaluation metrics. On the top-50 code set of MIMIC-III and the full code set of MIMIC-II, our model outperformed all the existing and state-of-the-art models in all evaluation metrics. The code is available at https://github.com/foxlf823/Multi-Filter-Residual-Convolutional-Neural-Network.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
155,915
2101.10399
Anchor Distance for 3D Multi-Object Distance Estimation from 2D Single Shot
Visual perception of the objects in a 3D environment is a key to successful performance in autonomous driving and simultaneous localization and mapping (SLAM). In this paper, we present a real time approach for estimating the distances to multiple objects in a scene using only a single-shot image. Given a 2D Bounding Box (BBox) and object parameters, a 3D distance to the object can be calculated directly using 3D reprojection; however, such methods are prone to significant errors because an error from the 2D detection can be amplified in 3D. In addition, it is also challenging to apply such methods to a real-time system due to the computational burden. In the case of traditional multi-object detection methods, existing works have been developed for specific tasks such as object segmentation or 2D BBox regression. These methods introduce the concept of anchor BBox for elaborate 2D BBox estimation, and predictors are specialized and trained for specific 2D BBoxes. In order to estimate the distances to the 3D objects from a single 2D image, we introduce the notion of \textit{anchor distance} based on an object's location and propose a method that applies the anchor distance to the multi-object detector structure. We let the predictors catch the distance prior using anchor distance and train the network based on the distance. The predictors can be characterized to the objects located in a specific distance range. By propagating the distance prior using a distance anchor to the predictors, it is feasible to perform the precise distance estimation and real-time execution simultaneously. The proposed method achieves about 30 FPS speed, and shows the lowest RMSE compared to the existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
216,915
2311.03146
Enabling In-Situ Resources Utilisation by leveraging collaborative robotics and astronaut-robot interaction
Space exploration and establishing human presence on other planets demand advanced technology and effective collaboration between robots and astronauts. Efficient space resource utilization is also vital for extraterrestrial settlements. The Collaborative In-Situ Resources Utilisation (CISRU) project has developed a software suite comprising five key modules. The first module manages multi-agent autonomy, facilitating communication between agents and mission control. The second focuses on environment perception, employing AI algorithms for tasks like environment segmentation and object pose estimation. The third module ensures safe navigation, covering obstacle avoidance, social navigation with astronauts, and cooperation among robots. The fourth module addresses manipulation functions, including multi-tool capabilities and tool-changer design for diverse tasks in In-Situ Resources Utilization (ISRU) scenarios. Finally, the fifth module controls cooperative behaviour, incorporating astronaut commands, Mixed Reality interfaces, map fusion, task supervision, and error control. The suite was tested using an astronaut-rover interaction dataset in a planetary environment and GMV SPoT analogue environments. Results demonstrate the advantages of E4 autonomy and AI in space systems, benefiting astronaut-robot collaboration. This paper details CISRU's development, field test preparation, and analysis, highlighting its potential to revolutionize planetary exploration through AI-powered technology.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
405,732
2412.03600
Social Media Informatics for Sustainable Cities and Societies: An Overview of the Applications, associated Challenges, and Potential Solutions
In the modern world, our cities and societies face several technological and societal challenges, such as rapid urbanization, global warming & climate change, the digital divide, and social inequalities, increasing the need for more sustainable cities and societies. Addressing these challenges requires a multifaceted approach involving all the stakeholders, sustainable planning, efficient resource management, innovative solutions, and modern technologies. Like other modern technologies, social media informatics also plays its part in developing more sustainable and resilient cities and societies. Despite its limitations, social media informatics has proven very effective in various sustainable cities and society applications. In this paper, we review and analyze the role of social media informatics in sustainable cities and society by providing a detailed overview of its applications, associated challenges, and potential solutions. This work is expected to provide a baseline for future research in the domain.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
514,031
2209.13428
LitCovid in 2022: an information resource for the COVID-19 literature
LitCovid (https://www.ncbi.nlm.nih.gov/research/coronavirus/), first launched in February 2020, is a first-of-its-kind literature hub for tracking up-to-date published research on COVID-19. The number of articles in LitCovid has increased from 55,000 to ~300,000 over the past two and half years, with a consistent growth rate of ~10,000 articles per month. In addition to the rapid literature growth, the COVID-19 pandemic has evolved dramatically. For instance, the Omicron variant has now accounted for over 98% of new infections in the U.S. In response to the continuing evolution of the COVID-19 pandemic, this article describes significant updates to LitCovid over the last two years. First, we introduced the Long Covid collection consisting of the articles on COVID-19 survivors experiencing ongoing multisystemic symptoms, including respiratory issues, cardiovascular disease, cognitive impairment, and profound fatigue. Second, we provided new annotations on the latest COVID-19 strains and vaccines mentioned in the literature. Third, we improved several existing features with more accurate machine learning algorithms for annotating topics and classifying articles relevant to COVID-19. LitCovid has been widely used with millions of accesses by users worldwide on various information needs and continues to play a critical role in collecting, curating, and standardizing the latest knowledge on the COVID-19 literature.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
319,889
2409.00014
DivDiff: A Conditional Diffusion Model for Diverse Human Motion Prediction
Diverse human motion prediction (HMP) aims to predict multiple plausible future motions given an observed human motion sequence. It is a challenging task due to the diversity of potential human motions while ensuring an accurate description of future human motions. Current solutions are either low-diversity or limited in expressiveness. Recent denoising diffusion models (DDPM) hold potential generative capabilities in generative tasks. However, introducing DDPM directly into diverse HMP incurs some issues. Although DDPM can increase the diversity of the potential patterns of human motions, the predicted human motions become implausible over time because of the significant noise disturbances in the forward process of DDPM. This phenomenon leads to the predicted human motions being hard to control, seriously impacting the quality of predicted motions and restricting their practical applicability in real-world scenarios. To alleviate this, we propose a novel conditional diffusion-based generative model, called DivDiff, to predict more diverse and realistic human motions. Specifically, the DivDiff employs DDPM as our backbone and incorporates Discrete Cosine Transform (DCT) and transformer mechanisms to encode the observed human motion sequence as a condition to instruct the reverse process of DDPM. More importantly, we design a diversified reinforcement sampling function (DRSF) to enforce human skeletal constraints on the predicted human motions. DRSF utilizes the acquired information from human skeletal as prior knowledge, thereby reducing significant disturbances introduced during the forward process. Extensive experimental results on two widely-used datasets (Human3.6M and HumanEva-I) demonstrate that our model obtains competitive performance on both diversity and accuracy.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
484,716
2401.05583
Diffusion Priors for Dynamic View Synthesis from Monocular Videos
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos. Existing methods struggle to distinguish between motion and structure, particularly in scenarios where camera poses are either unknown or constrained compared to object motion. Furthermore, with information solely from reference images, it is extremely challenging to hallucinate unseen regions that are occluded or partially observed in the given videos. To address these issues, we first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique. Subsequently, we distill the knowledge from the finetuned model to a 4D representation encompassing both dynamic and static Neural Radiance Fields (NeRF) components. The proposed pipeline achieves geometric consistency while preserving the scene identity. We perform thorough experiments to evaluate the efficacy of the proposed method qualitatively and quantitatively. Our results demonstrate the robustness and utility of our approach in challenging cases, further advancing dynamic novel view synthesis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,840
2305.09579
Private Everlasting Prediction
A private learner is trained on a sample of labeled points and generates a hypothesis that can be used for predicting the labels of newly sampled points while protecting the privacy of the training set [Kasiviswannathan et al., FOCS 2008]. Research uncovered that private learners may need to exhibit significantly higher sample complexity than non-private learners as is the case with, e.g., learning of one-dimensional threshold functions [Bun et al., FOCS 2015, Alon et al., STOC 2019]. We explore prediction as an alternative to learning. Instead of putting forward a hypothesis, a predictor answers a stream of classification queries. Earlier work has considered a private prediction model with just a single classification query [Dwork and Feldman, COLT 2018]. We observe that when answering a stream of queries, a predictor must modify the hypothesis it uses over time, and, furthermore, that it must use the queries for this modification, hence introducing potential privacy risks with respect to the queries themselves. We introduce private everlasting prediction taking into account the privacy of both the training set and the (adaptively chosen) queries made to the predictor. We then present a generic construction of private everlasting predictors in the PAC model. The sample complexity of the initial training sample in our construction is quadratic (up to polylog factors) in the VC dimension of the concept class. Our construction allows prediction for all concept classes with finite VC dimension, and in particular threshold functions with constant size initial training sample, even when considered over infinite domains, whereas it is known that the sample complexity of privately learning threshold functions must grow as a function of the domain size and hence is impossible for infinite domains.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
364,691
2501.10808
Optimizing MACD Trading Strategies: A Dance of Finance, Wavelets, and Genetics
In today's financial markets, quantitative trading has become an essential trading method, with the MACD indicator widely employed in quantitative trading strategies. This paper begins by screening and cleaning the dataset, establishing a model that adheres to the basic buy and sell rules of the MACD, and calculating key metrics such as the win rate, return, Sharpe ratio, and maximum drawdown for each stock. However, the MACD often generates erroneous signals in highly volatile markets. To address this, wavelet transform is applied to reduce noise, smoothing the DIF image, and a model is developed based on this to optimize the identification of buy and sell points. The results show that the annualized return has increased by 5%, verifying the feasibility of the method. Subsequently, the divergence principle is used to further optimize the trading strategy, enhancing the model's performance. Additionally, a genetic algorithm is employed to optimize the MACD parameters, tailoring the strategy to the characteristics of different stocks. To improve computational efficiency, the MindSpore framework is used for resource management and parallel computing. The optimized strategy demonstrates improved win rates, returns, Sharpe ratios, and a reduction in maximum drawdown in backtesting.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
525,667
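The basic MACD buy/sell rule the abstract above builds on can be sketched in a few lines. This is an illustrative reconstruction, not the paper's model: the function names are mine, and the smoothing spans (12, 26, 9) are the conventional MACD defaults, whereas the paper tunes these parameters per stock with a genetic algorithm and denoises the DIF series with a wavelet transform.

```python
# Illustrative MACD sketch: DIF = fast EMA - slow EMA, DEA = EMA of DIF,
# with buy/sell signals at DIF/DEA crossovers. Parameters are the
# conventional defaults, not the genetically optimized ones from the paper.

def ema(values, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd_signals(prices, fast=12, slow=26, signal=9):
    """Return (dif, dea, trades), where trades marks DIF/DEA crossovers."""
    dif = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    dea = ema(dif, signal)
    trades = []
    for t in range(1, len(prices)):
        if dif[t - 1] <= dea[t - 1] and dif[t] > dea[t]:
            trades.append((t, "buy"))   # DIF crosses above DEA
        elif dif[t - 1] >= dea[t - 1] and dif[t] < dea[t]:
            trades.append((t, "sell"))  # DIF crosses below DEA
    return dif, dea, trades
```

On a flat price series followed by a rally, the first crossover is a buy signal at the first rising bar, since the faster EMA reacts before the slower one.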
2406.14744
Training Next Generation AI Users and Developers at NCSA
This article focuses on training work carried out in artificial intelligence (AI) at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign via a research experience for undergraduates (REU) program named FoDOMMaT. It also describes why we are interested in AI, and concludes by discussing what we've learned from running this program and its predecessor over six years.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
466,441
1909.11193
Scaling-Translation-Equivariant Networks with Decomposed Convolutional Filters
Encoding the scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many computer vision tasks, especially when dealing with multiscale inputs. We study, in this paper, a scaling-translation-equivariant (ST-equivariant) CNN with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve equivariance for the regular representation of the scaling-translation group ST. To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components. A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation, a property which is theoretically analyzed and empirically verified. Numerical experiments demonstrate that the proposed scaling-translation-equivariant network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
146,731
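The filter-decomposition idea above can be illustrated in one dimension: expand a filter in a fixed cosine basis and keep only the lowest-frequency coefficients. This is a minimal sketch under my own naming (`dct_coeffs`, `reconstruct`); the paper works with separable two-dimensional bases inside an ST-equivariant CNN, which this does not reproduce.

```python
import math

# Sketch of truncated filter expansion: a 1-D filter is expanded in a
# DCT-II cosine basis and reconstructed from only its first `keep`
# (low-frequency) coefficients.

def dct_coeffs(filt):
    """DCT-II coefficients of a 1-D filter, scaled so reconstruction is exact."""
    n = len(filt)
    return [
        (2.0 / n) * sum(filt[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                        for i in range(n))
        for k in range(n)
    ]

def reconstruct(coeffs, keep):
    """Rebuild the filter from the first `keep` coefficients (DCT-III sum)."""
    n = len(coeffs)
    out = []
    for i in range(n):
        v = coeffs[0] / 2.0
        for k in range(1, min(keep, n)):
            v += coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
        out.append(v)
    return out
```

Because the cosine basis is orthogonal, keeping more coefficients never increases the squared reconstruction error, which is the sense in which low-frequency truncation trades model size for accuracy.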
1904.11742
Capacity per Unit-Energy of Gaussian Many-Access Channels
We consider a Gaussian multiple-access channel where the number of transmitters grows with the blocklength $n$. For this setup, the maximum number of bits that can be transmitted reliably per unit-energy is analyzed. We show that if the number of users is of an order strictly above $n/\log n$, then the users cannot achieve any positive rate per unit-energy. In contrast, if the number of users is of order strictly below $n/\log n$, then each user can achieve the single-user capacity per unit-energy $(\log e)/N_0$ (where $N_0/ 2$ is the noise power) by using an orthogonal access scheme such as time division multiple access. We further demonstrate that orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
128,938
2007.03311
An Accelerated DFO Algorithm for Finite-sum Convex Functions
Derivative-free optimization (DFO) has recently gained a lot of momentum in machine learning, spawning interest in the community to design faster methods for problems where gradients are not accessible. While some attention has been given to the concept of acceleration in the DFO literature, existing stochastic algorithms for objective functions with a finite-sum structure have not been shown theoretically to achieve an accelerated rate of convergence. Algorithms that use acceleration in such a setting are prone to instabilities, making it difficult to reach convergence. In this work, we exploit the finite-sum structure of the objective in order to design a variance-reduced DFO algorithm that provably yields acceleration. We prove rates of convergence for both smooth convex and strongly-convex finite-sum objective functions. Finally, we validate our theoretical results empirically on several tasks and datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,023
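The derivative-free building block behind such methods can be sketched as a two-point finite-difference gradient estimate driving plain gradient descent. This is only an illustration of the DFO oracle, not the paper's variance-reduced accelerated algorithm; the function names and step size are my own choices.

```python
# Minimal DFO sketch: estimate gradients from function evaluations only
# (central differences), then run ordinary gradient descent with them.

def fd_gradient(f, x, h=1e-5):
    """Central-difference estimate of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def dfo_gd(f, x0, lr=0.1, steps=200):
    """Gradient descent driven purely by function evaluations of f."""
    x = list(x0)
    for _ in range(steps):
        g = fd_gradient(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

On a smooth strongly convex objective such as a shifted quadratic, this converges linearly; the point of the paper is that a finite-sum structure permits variance reduction and provably accelerated rates beyond this naive scheme.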
1612.09434
Data driven estimation of Laplace-Beltrami operator
Approximations of Laplace-Beltrami operators on manifolds through graph Laplacians have become popular tools in data analysis and machine learning. These discretized operators usually depend on bandwidth parameters whose tuning remains a theoretical and practical problem. In this paper, we address this problem for the unnormalized graph Laplacian by establishing an oracle inequality that opens the door to a well-founded data-driven procedure for the bandwidth selection. Our approach relies on recent results by Lacour and Massart [LM15] on the so-called Lepski's method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
66,188
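The object whose bandwidth is being tuned above can be sketched directly: the unnormalized graph Laplacian L = D - W built from a Gaussian kernel. In this sketch (my own naming) the bandwidth `h` is simply a user-supplied parameter; the paper's contribution is a data-driven, Lepski-type rule for choosing it.

```python
import math

# Unnormalized graph Laplacian L = D - W of a point cloud, with a
# Gaussian kernel weight matrix W depending on the bandwidth h.

def graph_laplacian(points, h):
    """Return L = D - W for Gaussian weights w_ij = exp(-||x_i - x_j||^2 / h^2)."""
    n = len(points)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                w[i][j] = math.exp(-d2 / (h * h))
    # D is the diagonal degree matrix, so L has row sums equal to zero.
    return [
        [(sum(w[i]) if i == j else 0.0) - w[i][j] for j in range(n)]
        for i in range(n)
    ]
```

By construction L is symmetric with zero row sums, so constant functions lie in its kernel, mirroring the Laplace-Beltrami operator it discretizes.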
2407.03194
Prediction Instability in Machine Learning Ensembles
In machine learning ensembles, predictions from multiple models are aggregated. Despite the widespread use and strong performance of ensembles in applied problems, little is known about the mathematical properties of aggregating models and the associated consequences for safe, explainable use of such models. In this paper we prove a theorem showing that any ensemble will exhibit at least one of the following forms of prediction instability. It will either ignore agreement among all underlying models, change its mind when none of the underlying models have done so, or be manipulable through inclusion or exclusion of options it would never actually predict. As a consequence, ensemble aggregation procedures will always need to balance the benefits of information use against the risk of these prediction instabilities. This analysis also sheds light on what specific forms of prediction instability to expect from particular ensemble algorithms; for example, popular tree ensembles like random forest or xgboost will violate basic, intuitive fairness properties. Finally, we show that this can be ameliorated by using consistent models in asymptotic conditions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
470,066
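A toy illustration (not the paper's theorem) of why the aggregation rule itself matters: two standard rules applied to the very same three binary classifiers can disagree, so the ensemble's prediction is a property of the aggregation procedure, not only of the member models. The example probabilities below are fabricated for illustration.

```python
from collections import Counter

# Two common aggregation rules for an ensemble of probabilistic classifiers.

def majority_vote(prob_rows):
    """Each row is one model's class-probability vector; vote per-model argmax."""
    votes = [max(range(len(r)), key=r.__getitem__) for r in prob_rows]
    return Counter(votes).most_common(1)[0][0]

def average_probs(prob_rows):
    """Argmax of the mean class-probability vector across models."""
    k = len(prob_rows[0])
    means = [sum(r[c] for r in prob_rows) / len(prob_rows) for c in range(k)]
    return max(range(k), key=means.__getitem__)

# One confident model for class 0, two mildly confident models for class 1:
# majority vote picks class 1, probability averaging picks class 0.
models = [[0.9, 0.1], [0.4, 0.6], [0.4, 0.6]]
```

Here the averaged probability for class 0 is (0.9 + 0.4 + 0.4)/3 ≈ 0.57, beating class 1, even though two of the three models individually predict class 1.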
2412.01657
PassionNet: An Innovative Framework for Duplicate and Conflicting Requirements Identification
Early detection and resolution of duplicate and conflicting requirements can significantly enhance project efficiency and overall software quality. Researchers have developed various computational predictors by leveraging the potential of Artificial Intelligence (AI) to detect duplicate and conflicting requirements. However, these predictors fall short in performance, and more effective approaches are needed to empower software development processes. Motivated by the need for a single predictor that can accurately identify duplicate and conflicting requirements, this research offers a comprehensive framework that facilitates the development of 3 different types of predictive pipelines: language model based, multi-model similarity knowledge-driven, and large language model (LLM) context + multi-model similarity knowledge-driven. Within the first type, the framework facilitates conflicting/duplicate requirements identification by leveraging 8 distinct types of LLMs. In the second type, the framework supports the development of predictive pipelines that leverage multi-scale and multi-model similarity knowledge, ranging from traditional similarity computation methods to advanced similarity vectors generated by LLMs. In the third type, the framework synthesizes predictive pipelines by integrating contextual insights from LLMs with multi-model similarity knowledge. Across 6 public benchmark datasets, extensive testing of 760 distinct predictive pipelines demonstrates that hybrid predictive pipelines consistently outperform the other two types in accurately identifying duplicate and conflicting requirements. This predictive pipeline outperformed existing state-of-the-art predictors with an overall performance margin of 13% in terms of F1-score.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
513,214
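One of the "traditional similarity computation methods" such a similarity-driven pipeline could feed on can be sketched as bag-of-words cosine similarity between two requirement texts. The function names and the 0.75 duplicate threshold below are illustrative assumptions of mine, not values or APIs from the paper.

```python
import math
from collections import Counter

# A classic lexical similarity signal for requirement pairs: cosine
# similarity between term-frequency vectors of the two texts.

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def looks_duplicate(req_a, req_b, threshold=0.75):
    """Flag a requirement pair as a duplicate candidate (threshold is illustrative)."""
    return cosine_similarity(req_a, req_b) >= threshold
```

In the paper's framework, a score like this would be just one feature among many, alongside LLM-generated similarity vectors and contextual signals.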
1901.03264
The Capacity Achieving Distribution for the Amplitude Constrained Additive Gaussian Channel: An Upper Bound on the Number of Mass Points
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two from the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
118,371
2310.10925
Path Following Control of Automated Vehicle Considering Uncertainties and Disturbances with Parametric Varying
Automated Vehicle Path Following Control (PFC) is an advanced control system that can regulate the vehicle into a collision-free region in the presence of other objects on the road. Common collision avoidance functions, such as forward collision warning and automatic emergency braking, have recently been developed and equipped on production vehicles. However, it is impossible to develop a perfectly precise vehicle model when the vehicle is driving. Most PFCs do not consider uncertainties in the vehicle model, external disturbances, and parameter variations at the same time. To address the issues associated with this important feature and function in autonomous driving, a new vehicle PFC is proposed using a robust model predictive control (MPC) design technique based on matrix inequalities and the theoretical approach of hybrid & switched systems. The proposed methodology requires a combination of continuous and discrete states, e.g., regulating the continuous states of the AV (e.g., velocity and yaw angle) and discrete switching of the control strategy that affects the dynamic behaviors of the AV under different driving speeds. Firstly, considering bounded model uncertainties and norm-bounded external disturbances, the system states and control matrices are modified.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
400,435