Dataset schema:
- id: string (lengths 9 to 16)
- title: string (lengths 4 to 278)
- abstract: string (lengths 3 to 4.08k)
- label columns, each bool with 2 classes: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (range 0 to 541k)

Each record below gives the id, title, and abstract, followed by the label columns that are true ("Labels:") and the row index ("Index:").
2006.15646
Expressive Power of Invariant and Equivariant Graph Neural Networks
Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph structured data. In this paper, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. Our theoretical results are proved for invariant GNNs computing a graph embedding (permutation of the nodes of the input graph does not affect the output) and equivariant GNNs computing an embedding of the nodes (permutation of the input permutes the output). We show that Folklore Graph Neural Networks (FGNN), which are tensor-based GNNs augmented with matrix multiplication, are the most expressive architectures proposed so far for a given tensor order. We illustrate our results on the Quadratic Assignment Problem (an NP-hard combinatorial problem) by showing that FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms (based on spectral methods, SDP, or other GNN architectures). On the practical side, we also implement masked tensors to handle batches of graphs of varying sizes.
Labels: cs.LG, Other
Index: 184,576
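The invariance and equivariance properties defined in this abstract are easy to check numerically. Below is a minimal sketch using a toy one-layer message-passing network, not the paper's FGNN; the random weights, the tanh layer, and the sum-pooling readout are illustrative assumptions:

```python
# Minimal sketch (not the paper's FGNN): a toy message-passing layer,
# used only to verify the invariance/equivariance definitions above.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def node_embedding(A, X):
    """Equivariant layer: permuting the nodes permutes the output rows."""
    return np.tanh(X @ W1 + A @ X @ W2)

def graph_embedding(A, X):
    """Invariant readout: sum pooling removes the node ordering."""
    return node_embedding(A, X).sum(axis=0)

n = 5
A = rng.integers(0, 2, size=(n, n)); A = np.triu(A, 1); A = A + A.T
X = rng.normal(size=(n, 4))
P = np.eye(n)[rng.permutation(n)]          # random permutation matrix

# Equivariance: f(P A P^T, P X) == P f(A, X)
assert np.allclose(node_embedding(P @ A @ P.T, P @ X), P @ node_embedding(A, X))
# Invariance: g(P A P^T, P X) == g(A, X)
assert np.allclose(graph_embedding(P @ A @ P.T, P @ X), graph_embedding(A, X))
```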
2310.10665
Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey
The metaverse is a nascent concept that envisions a virtual universe, a collaborative space where individuals can interact, create, and participate in a wide range of activities. Privacy in the metaverse is a critical concern as the concept evolves and immersive virtual experiences become more prevalent. The metaverse privacy problem refers to the challenges and concerns surrounding the privacy of personal information and data within Virtual Reality (VR) environments as the concept of a shared VR space becomes more accessible. The metaverse will harness advancements from various technologies such as Artificial Intelligence (AI), Extended Reality (XR), Mixed Reality (MR), and 5G/6G-based communication to provide personalized and immersive services to its users. Moreover, to enable more personalized experiences, the metaverse relies on the collection of fine-grained user data that leads to various privacy issues. Therefore, before the potential of the metaverse can be fully realized, privacy concerns related to personal information and data within VR environments must be addressed. This includes safeguarding users' control over their data, ensuring the security of their personal information, and protecting in-world actions and interactions from unauthorized sharing. In this paper, we explore various privacy challenges that future metaverses are expected to face, given their reliance on AI for tracking users, creating XR and MR experiences, and facilitating interactions. Moreover, we thoroughly analyze technical solutions such as differential privacy, Homomorphic Encryption (HE), and Federated Learning (FL) and discuss related sociotechnical issues regarding privacy.
Labels: cs.AI, cs.LG, cs.CR
Index: 400,335
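Among the technical solutions the survey analyzes, differential privacy is the simplest to illustrate concretely. A minimal sketch of the Laplace mechanism for a count query follows (illustrative only; the session data and the predicate are made up):

```python
# Minimal sketch of the Laplace mechanism for differential privacy,
# one of the technical solutions the survey discusses.
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a count query with epsilon-DP.

    A count has L1 sensitivity 1 (adding/removing one user changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(42)
sessions = [{"hours": h} for h in rng.uniform(0, 8, size=1000)]
noisy = laplace_count(sessions, lambda s: s["hours"] > 4, epsilon=0.5, rng=rng)
print(f"noisy count of heavy users: {noisy:.1f}")
```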
1811.04713
Gauges, Loops, and Polynomials for Partition Functions of Graphical Models
Graphical models represent multivariate, generally unnormalized, probability distributions. Computing the normalization factor, called the partition function (PF), is the main inference challenge relevant to multiple statistical and optimization applications. The problem is of exponential complexity with respect to the number of variables. In this manuscript, aimed at approximating the PF, we consider Multi-Graph Models in which binary variables and multivariable factors are associated with edges and nodes, respectively, of an undirected multi-graph. We suggest a new methodology for analysis and computations that combines the Gauge Function technique with techniques from the field of real stable polynomials. We show that the Gauge Function has a natural polynomial representation in terms of gauges/variables associated with edges of the multi-graph. Moreover, it can be used to recover the Partition Function through a sequence of transformations allowing appealing algebraic and graphical interpretations. Algebraically, one step in the sequence consists in the application of a differential operator over gauges associated with an edge. Graphically, the sequence is interpreted as a repetitive elimination of edges resulting in a sequence of models on graphs of decreasing size with the same Partition Function. Even though the complexity of computing factors in the sequence of models grows exponentially with the number of eliminated edges, the polynomials associated with the new factors remain bi-stable if the original factors have this property. Moreover, we show that Belief Propagation estimations in the sequence do not decrease, each lower-bounding the Partition Function.
Labels: cs.LG
Index: 113,155
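For reference, the object being approximated can be written in generic notation (a hedged sketch; the manuscript's own notation for Multi-Graph Models may differ):

```latex
% A graphical model assigns each configuration an unnormalized weight,
% and the partition function is the normalizing sum:
\[
  p(\sigma) \;=\; \frac{1}{Z} \prod_{a} f_a(\sigma_a),
  \qquad
  Z \;=\; \sum_{\sigma} \prod_{a} f_a(\sigma_a).
\]
% In the Multi-Graph Models above, the binary variables live on the edges
% of an undirected multi-graph and the factors f_a sit on its nodes.
```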
1504.05308
Automatic Face Recognition from Video
The objective of this work is to automatically recognize faces from video sequences in a realistic, unconstrained setup in which illumination conditions are extreme and greatly changing, viewpoint and user motion pattern have a wide variability, and video input is of low quality. At the centre of focus are face appearance manifolds: this thesis presents a significant advance of their understanding and application in the sphere of face recognition. The two main contributions are the Generic Shape-Illumination Manifold recognition algorithm and the Anisotropic Manifold Space clustering. The Generic Shape-Illumination Manifold is evaluated on a large data corpus acquired in real-world conditions and its performance is shown to greatly exceed that of state-of-the-art methods in the literature and the best performing commercial software. Empirical evaluation of the Anisotropic Manifold Space clustering on a popular situation comedy is also described with excellent preliminary results.
Labels: cs.CV
Index: 42,252
2303.10974
Translate your gibberish: black-box adversarial attack on machine translation systems
Neural networks are deployed widely in natural language processing tasks on the industrial scale, perhaps most often as components of automatic machine translation systems. In this work, we present a simple approach to fool state-of-the-art machine translation tools in the task of translation from Russian to English and vice versa. Using a novel black-box gradient-free tensor-based optimizer, we show that many online translation tools, such as Google, DeepL, and Yandex, may both produce wrong or offensive translations for nonsensical adversarial input queries and refuse to translate seemingly benign input phrases. This vulnerability may interfere with understanding a new language and simply worsen the user's experience while using machine translation systems; hence, additional improvements of these tools are required to establish better translation.
Labels: cs.AI, cs.CL
Index: 352,657
2104.01864
FedPandemic: A Cross-Device Federated Learning Approach Towards Elementary Prognosis of Diseases During a Pandemic
The amount of data, manpower and capital required to understand, evaluate and agree on a group of symptoms for the elementary prognosis of pandemic diseases is enormous. In this paper, we present FedPandemic, a novel noise implementation algorithm integrated with cross-device federated learning for elementary symptom prognosis during a pandemic, taking COVID-19 as a case study. Our results display consistency and enhanced robustness in recovering the common symptoms displayed by the disease, paving a faster and cheaper path towards symptom retrieval while also preserving the privacy of patients' symptoms via federated learning.
Labels: cs.LG, Other
Index: 228,507
2406.00431
SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead
The large communication and computation overhead of federated learning (FL) is one of the main challenges facing its practical deployment over resource-constrained clients and systems. In this work, SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead. In SpaFL, a trainable threshold is defined for each filter/neuron to prune all of its connected parameters, thereby leading to structured sparsity. To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters, thereby learning how to prune. Further, global thresholds are used to update model parameters by extracting aggregated parameter importance. The generalization bound of SpaFL is also derived, yielding key insights on the relation between sparsity and performance. Experimental results show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines. The code is available at https://github.com/news-vt/SpaFL_NeruIPS_2024
Labels: cs.AI, cs.LG, Other
Index: 459,850
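The per-filter/per-neuron trainable threshold can be sketched compactly. The snippet below is a hedged illustration of the idea, not the authors' implementation (see the linked repository); the weight-row importance score and the straight-through mask are assumptions:

```python
# Minimal sketch of per-neuron trainable-threshold pruning in the spirit
# of SpaFL (illustrative; the released code is linked in the abstract).
import torch
import torch.nn as nn

class ThresholdLinear(nn.Module):
    """Linear layer where each output neuron has a trainable threshold;
    a neuron whose weight-row importance falls below its threshold is
    pruned, giving structured sparsity."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.threshold = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        importance = self.weight.abs().mean(dim=1)      # one score per neuron
        # Straight-through estimator: hard 0/1 mask forward, soft gradient back.
        soft = torch.sigmoid(importance - self.threshold)
        mask = (soft > 0.5).float() + soft - soft.detach()
        return x @ (self.weight * mask.unsqueeze(1)).t()

layer = ThresholdLinear(16, 8)
out = layer(torch.randn(4, 16))
print(out.shape, "active neurons:",
      int((layer.weight.abs().mean(dim=1) > layer.threshold).sum()))
```

In SpaFL only the thresholds would be exchanged between server and clients; that communication step is omitted here.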
2210.10664
Deep Multi-Representation Model for Click-Through Rate Prediction
Click-Through Rate (CTR) prediction is a crucial task in recommender systems, and it has gained considerable attention in the past few years. Recent research primarily emphasizes obtaining meaningful and powerful representations through mining low- and high-order feature interactions using various components such as Deep Neural Networks (DNN), CrossNets, or transformer blocks. In this work, we propose the Deep Multi-Representation model (DeepMR) that jointly trains a mixture of two powerful feature representation learning components, namely DNNs and multi-head self-attentions. Furthermore, DeepMR integrates the novel residual with zero initialization (ReZero) connections to the DNN and the multi-head self-attention components for learning superior input representations. Experiments on three real-world datasets show that the proposed model significantly outperforms all state-of-the-art models in the task of click-through rate prediction.
Labels: cs.AI, cs.IR, cs.LG
Index: 325,016
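The ReZero connection the abstract mentions is a one-line idea: gate each residual branch by a scalar initialized to zero, so every block starts as the identity. A minimal sketch follows (the layer sizes and the attention sub-module are made-up stand-ins for DeepMR's components):

```python
# Minimal sketch of a ReZero residual connection (illustrative).
import torch
import torch.nn as nn

class ReZeroBlock(nn.Module):
    """y = x + alpha * f(x), with alpha initialized to zero so the block
    starts as the identity and learns how much of f(x) to mix in."""

    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.alpha = nn.Parameter(torch.zeros(1))   # the "zero init" in ReZero

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        return x + self.alpha * attn_out

block = ReZeroBlock(dim=32)
x = torch.randn(8, 10, 32)                  # (batch, fields, embedding dim)
assert torch.allclose(block(x), x)          # identity at initialization
```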
cmp-lg/9409012
Towards an Automatic Dictation System for Translators: the TransTalk Project
Professional translators often dictate their translations orally and have them typed afterwards. The TransTalk project aims at automating the second part of this process. Its originality as a dictation system lies in the fact that both the acoustic signal produced by the translator and the source text under translation are made available to the system. Probable translations of the source text can be predicted and these predictions used to help the speech recognition system in its lexical choices. We present the results of the first prototype, which show a marked improvement in the performance of the speech recognition task when translation predictions are taken into account.
Labels: cs.CL
Index: 536,182
2312.17601
The Tyranny of Possibilities in the Design of Task-Oriented LLM Systems: A Scoping Survey
This scoping survey focuses on our current understanding of the design space for task-oriented LLM systems and elaborates on definitions and relationships among the available design parameters. The paper begins by defining a minimal task-oriented LLM system and exploring the design space of such systems through a thought experiment contemplating the performance of diverse LLM system configurations (involving single LLMs, single LLM-based agents, and multiple LLM-based agent systems) on a complex software development task and hypothesizes the results. We discuss a pattern in our results and formulate them into three conjectures. While these conjectures may be partly based on faulty assumptions, they provide a starting point for future research. The paper then surveys a select few design parameters: covering and organizing research in LLM augmentation, prompting techniques, and uncertainty estimation, and discussing their significance. The paper notes the lack of focus on computational and energy efficiency in evaluating research in these areas. Our survey findings provide a basis for developing the concept of linear and non-linear contexts, which we define and use to enable an agent-centric projection of prompting techniques providing a lens through which prompting techniques can be viewed as multi-agent systems. The paper discusses the implications of this lens, for the cross-pollination of research between LLM prompting and LLM-based multi-agent systems; and also, for the generation of synthetic training data based on existing prompting techniques in research. In all, the scoping survey presents seven conjectures that can help guide future research efforts.
Labels: cs.AI, Other
Index: 418,804
1902.01790
Mobile Information Retrieval
Mobile Information Retrieval (Mobile IR) is a relatively recent branch of Information Retrieval (IR) that is concerned with enabling users to carry out, using a mobile device, all the classical IR operations that they were used to carrying out on a desktop. This includes finding content available in local repositories or on the web in response to a user query, interacting with the system in an explicit or implicit way, reformulating the query and/or visualising the content of the retrieved documents, as well as providing relevance judgments to improve the retrieval process. This book is structured as follows. Chapter 2 provides a very brief overview of IR and of Mobile IR, briefly outlining what in Mobile IR is different from IR. Chapter 3 provides the foundations of Mobile IR, looking at the characteristics of mobile devices and what they bring to IR, but also at how the concept of relevance changed from standard IR to Mobile IR. Chapter 4 presents an overview of the document collections that are searchable by a Mobile IR system and that are somehow different from classical IR ones, available for experimentation, including collections of data that have become complementary to Mobile IR. Similarly, Chapter 5 reviews mobile information needs studies and user log analysis. Chapter 6 reviews studies aimed at adapting and improving the user interface to the needs of Mobile IR. Chapter 7, instead, reviews work on context awareness, which studies the many aspects of the user context that Mobile IR employs. Chapter 8 reviews some of the evaluation work done in Mobile IR, highlighting the distinctions with classical IR from the perspectives of the two main IR evaluation methodologies: user studies and test collections. Finally, Chapter 9 reports the conclusions of this review, briefly highlighting some trends in Mobile IR that we believe will drive research in the next few years.
Labels: cs.HC, cs.IR, Other
Index: 120,733
2412.13081
Prompt Augmentation for Self-supervised Text-guided Image Manipulation
Text-guided image editing finds applications in various creative and practical fields. While recent studies in image generation have advanced the field, they often struggle with the dual challenges of coherent image transformation and context preservation. In response, our work introduces prompt augmentation, a method amplifying a single input prompt into several target prompts, strengthening textual context and enabling localised image editing. Specifically, we use the augmented prompts to delineate the intended manipulation area. We propose a Contrastive Loss tailored to driving effective image editing by displacing edited areas and drawing preserved regions closer. Acknowledging the continuous nature of image manipulations, we further refine our approach by incorporating the similarity concept, creating a Soft Contrastive Loss. The new losses are incorporated to the diffusion model, demonstrating improved or competitive image editing results on public datasets and generated images over state-of-the-art approaches.
Labels: cs.CV
Index: 518,153
2310.00158
Feedback-guided Data Synthesis for Imbalanced Classification
The current status quo in machine learning is to use static datasets of real images for training, which often come from long-tailed distributions. With the recent advances in generative models, researchers have started augmenting these static datasets with synthetic data, reporting moderate performance improvements on classification tasks. We hypothesize that these performance gains are limited by the lack of feedback from the classifier to the generative model, which would promote the usefulness of the generated samples to improve the classifier's performance. In this work, we introduce a framework for augmenting static datasets with useful synthetic samples, which leverages one-shot feedback from the classifier to drive the sampling of the generative model. In order for the framework to be effective, we find that the samples must be close to the support of the real data of the task at hand, and be sufficiently diverse. We validate three feedback criteria on a long-tailed dataset (ImageNet-LT) as well as a group-imbalanced dataset (NICO++). On ImageNet-LT, we achieve state-of-the-art results, with over 4 percent improvement on underrepresented classes while being twice as efficient in terms of the number of generated synthetic samples. NICO++ also enjoys marked boosts of over 5 percent in worst-group accuracy. With these results, our framework paves the way towards effectively leveraging state-of-the-art text-to-image models as data sources that can be queried to improve downstream applications.
Labels: cs.AI, cs.LG, cs.CV
Index: 395,846
2306.11556
NeRF synthesis with shading guidance
The emerging Neural Radiance Field (NeRF) shows great potential in representing 3D scenes: it can render photo-realistic images from novel views given only sparse input views. However, utilizing NeRF to reconstruct real-world scenes requires images from different viewpoints, which limits its practical application. This problem can be even more pronounced for large scenes. In this paper, we introduce a new task called NeRF synthesis that utilizes the structural content of a NeRF patch exemplar to construct a new radiance field of large size. We propose a two-phase method for synthesizing new scenes that are continuous in geometry and appearance. We also propose a boundary constraint method to synthesize scenes of arbitrary size without artifacts. Specifically, we control the lighting effects of synthesized scenes using shading guidance instead of decoupling the scene. We have demonstrated that our method can generate high-quality results with consistent geometry and appearance, even for scenes with complex lighting. We can also synthesize new scenes on curved surfaces with arbitrary lighting effects, which enhances the practicality of our proposed NeRF synthesis approach.
Labels: cs.CV, Other
Index: 374,640
2405.03701
QxEAI: Quantum-like evolutionary algorithm for automated probabilistic forecasting
Forecasting, to estimate future events, is crucial for business and decision-making. This paper proposes QxEAI, a methodology that produces a probabilistic forecast that utilizes a quantum-like evolutionary algorithm based on training a quantum-like logic decision tree and a classical value tree on a small number of related time series. We demonstrate how the application of our quantum-like evolutionary algorithm to forecasting can overcome the challenges faced by classical and other machine learning approaches. By using three real-world datasets (Dow Jones Index, retail sales, gas consumption), we show how our methodology produces accurate forecasts while requiring little to no manual work.
Labels: cs.AI, cs.LG, cs.NE
Index: 452,283
2406.18536
Reliable Interval Prediction of Minimum Operating Voltage Based on On-chip Monitors via Conformalized Quantile Regression
Predicting the minimum operating voltage ($V_{min}$) of chips is one of the important techniques for improving the manufacturing testing flow, as well as ensuring the long-term reliability and safety of in-field systems. Current $V_{min}$ prediction methods often provide only point estimates, necessitating additional techniques for constructing prediction confidence intervals to cover uncertainties caused by different sources of variations. While some existing techniques offer region predictions, they rely on certain distributional assumptions and/or provide no coverage guarantees. In response to these limitations, we propose a novel distribution-free $V_{min}$ interval estimation methodology possessing a theoretical guarantee of coverage. Our approach leverages conformalized quantile regression and on-chip monitors to generate reliable prediction intervals. We demonstrate the effectiveness of the proposed method on an industrial 5nm automotive chip dataset. Moreover, we show that the use of on-chip monitors can reduce the interval length significantly for $V_{min}$ prediction.
Labels: cs.AI, cs.SY, Other
Index: 468,045
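Conformalized quantile regression, the core recipe here, fits lower/upper quantile models and then widens the band by a calibration quantile of the conformity scores. A minimal sketch on synthetic data follows (the on-chip monitor features and $V_{min}$ data are not available, so the data, models, and coverage level below are illustrative):

```python
# Minimal sketch of split conformalized quantile regression (CQR).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.3 + 0.3 * np.abs(X[:, 0]), size=2000)

# Split: train the quantile models, then calibrate on held-out data.
X_tr, y_tr, X_cal, y_cal = X[:1000], y[:1000], X[1000:], y[1000:]
alpha = 0.1
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# Conformity scores: how far calibration points fall outside the band.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
k = int(np.ceil((1 - alpha) * (len(scores) + 1))) - 1
q = np.sort(scores)[k]            # finite-sample-corrected quantile

# Distribution-free interval with >= 90% marginal coverage guarantee.
X_new = np.array([[0.5]])
print(lo.predict(X_new) - q, hi.predict(X_new) + q)
```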
1907.08051
Self-supervised Training of Proposal-based Segmentation via Background Prediction
While supervised object detection methods achieve impressive accuracy, they generalize poorly to images whose appearance significantly differs from the data they have been trained on. To address this in scenarios where annotating data is prohibitively expensive, we introduce a self-supervised approach to object detection and segmentation, able to work with monocular images captured with a moving camera. At the heart of our approach lies the observation that segmentation and background reconstruction are linked tasks, and the idea that, because we observe a structured scene, background regions can be re-synthesized from their surroundings, whereas regions depicting the object cannot. We therefore encode this intuition as a self-supervised loss function that we exploit to train a proposal-based segmentation network. To account for the discrete nature of object proposals, we develop a Monte Carlo-based training strategy that allows us to explore the large space of object proposals. Our experiments demonstrate that our approach yields accurate detections and segmentations in images that visually depart from those of standard benchmarks, outperforming existing self-supervised methods and approaching weakly supervised ones that exploit large annotated datasets.
Labels: cs.CV
Index: 139,019
1606.01341
Neural Architectures for Fine-grained Entity Type Classification
In this work, we investigate several neural network architectures for fine-grained entity type classification. Particularly, we consider extensions to a recently proposed attentive neural architecture and make three key contributions. Previous work on attentive neural architectures does not consider hand-crafted features; we combine learnt and hand-crafted features and observe that they complement each other. Additionally, through quantitative analysis we establish that the attention mechanism is capable of learning to attend over syntactic heads and the phrase containing the mention, where both are known strong hand-crafted features for our task. We enable parameter sharing through a hierarchical label encoding method, which in low-dimensional projections shows clear clusters for each type hierarchy. Lastly, despite using the same evaluation dataset, the literature frequently compares models trained using different data. We establish that the choice of training data has a drastic impact on performance, with decreases by as much as 9.85% loose micro F1 score for a previously proposed method. Despite this, our best model achieves state-of-the-art results with 75.36% loose micro F1 score on the well-established FIGER (GOLD) dataset.
Labels: cs.CL
Index: 56,794
2309.07749
OmnimatteRF: Robust Omnimatte with 3D Background Modeling
Video matting has broad applications, from adding interesting effects to casually captured movies to assisting video production professionals. Matting with associated effects such as shadows and reflections has also attracted increasing research activity, and methods like Omnimatte have been proposed to separate dynamic foreground objects of interest into their own layers. However, prior works represent video backgrounds as 2D image layers, limiting their capacity to express more complicated scenes, thus hindering application to real-world videos. In this paper, we propose a novel video matting method, OmnimatteRF, that combines dynamic 2D foreground layers and a 3D background model. The 2D layers preserve the details of the subjects, while the 3D background robustly reconstructs scenes in real-world videos. Extensive experiments demonstrate that our method reconstructs scenes with better quality on various videos.
Labels: cs.CV
Index: 391,895
2001.07926
Optimized Generic Feature Learning for Few-shot Classification across Domains
To learn models or features that generalize across tasks and domains is one of the grand goals of machine learning. In this paper, we propose to use cross-domain, cross-task data as the validation objective for hyper-parameter optimization (HPO) to improve on this goal. Given a rich enough search space, optimization of hyper-parameters learns features that maximize validation performance and, due to the objective, generalize across tasks and domains. We demonstrate the effectiveness of this strategy on few-shot image classification within and across domains. The learned features outperform all previous few-shot and meta-learning approaches.
Labels: cs.CV
Index: 161,162
2004.00831
Improving 3D Object Detection through Progressive Population Based Augmentation
Data augmentation has been widely adopted for object detection in 3D point clouds. However, all previous related efforts have focused on manually designing specific data augmentation methods for individual architectures. In this work, we present the first attempt to automate the design of data augmentation policies for 3D object detection. We introduce the Progressive Population Based Augmentation (PPBA) algorithm, which learns to optimize augmentation strategies by narrowing down the search space and adopting the best parameters discovered in previous iterations. On the KITTI 3D detection test set, PPBA improves the StarNet detector by substantial margins on the moderate difficulty category of cars, pedestrians, and cyclists, outperforming all current state-of-the-art single-stage detection models. Additional experiments on the Waymo Open Dataset indicate that PPBA continues to effectively improve the StarNet and PointPillars detectors on a 20x larger dataset compared to KITTI. The magnitude of the improvements may be comparable to advances in 3D perception architectures and the gains come without an incurred cost at inference time. In subsequent experiments, we find that PPBA may be up to 10x more data efficient than baseline 3D detection models without augmentation, highlighting that 3D detection models may achieve competitive accuracy with far fewer labeled examples.
Labels: cs.CV
Index: 170,753
2012.15637
Exploiting Transitivity for Top-k Selection with Score-Based Dueling Bandits
We consider the problem of top-k subset selection in Dueling Bandit problems with score information. Real-world pairwise ranking problems often exhibit a high degree of transitivity and prior work has suggested sampling methods that exploit such transitivity through the use of parametric preference models like the Bradley-Terry-Luce (BTL) and Thurstone models. To date, this work has focused on cases where sample outcomes are win/loss binary responses. We extend this to selection problems where sampling results contain quantitative information by proposing a Thurstonian style model and adapting the Pairwise Optimal Computing Budget Allocation for subset selection (POCBAm) sampling method to exploit this model for efficient sample selection. We compare the empirical performance against standard POCBAm and other competing algorithms.
Labels: cs.LG
Index: 213,852
1806.05713
SIMD Vectorization for the Lennard-Jones Potential with AVX2 and AVX-512 instructions
This work describes the SIMD vectorization of the force calculation of the Lennard-Jones potential with Intel AVX2 and AVX-512 instruction sets. Since the force-calculation kernel of the molecular dynamics method involves indirect access to memory, the data layout is one of the most important factors in vectorization. We find that the Array of Structures (AoS) with padding exhibits better performance than Structure of Arrays (SoA) with appropriate vectorization and optimizations. In particular, AoS with 512-bit width exhibits the best performance among the architectures. While the difference in performance between AoS and SoA is significant for the vectorization with AVX2, that with AVX-512 is minor. The effect of other optimization techniques, such as software pipelining together with vectorization, is also discussed. We present results for benchmarks on three CPU architectures: Intel Haswell (HSW), Knights Landing (KNL), and Skylake (SKL). The performance gains by vectorization are about 42% on HSW compared with the code optimized without vectorization. On KNL, the hand-vectorized codes exhibit 34% better performance than the codes vectorized automatically by the Intel compiler. On SKL, the code vectorized with AVX2 exhibits slightly better performance than that vectorized with AVX-512.
Labels: cs.CE, Other
Index: 100,536
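The AoS-versus-SoA distinction is purely about memory layout, which can be illustrated even outside C intrinsics. The sketch below contrasts the two layouts with numpy (illustrative only; it says nothing about AVX2/AVX-512 performance, which requires the C kernels the paper benchmarks):

```python
# AoS vs SoA data layouts, illustrated with numpy (layout only; not a
# stand-in for the paper's SIMD intrinsics experiments).
import numpy as np

n = 1_000_000
# AoS: one record per particle, fields interleaved in memory; a 4th
# "pad" field mimics the padded AoS layout discussed in the paper.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"), ("pad", "f8")])
# SoA: one contiguous array per field.
x, y, z = (np.zeros(n) for _ in range(3))

# Same computation either way: squared distance from the origin.
r2_aos = aos["x"] ** 2 + aos["y"] ** 2 + aos["z"] ** 2   # strided field access
r2_soa = x ** 2 + y ** 2 + z ** 2                        # unit-stride access

assert np.allclose(r2_aos, r2_soa)
```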
2002.11089
Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
Multi-task reinforcement learning (RL) aims to simultaneously learn policies for solving many tasks. Several prior works have found that relabeling past experience with different reward functions can improve sample efficiency. Relabeling methods typically ask: if, in hindsight, we assume that our experience was optimal for some task, for what task was it optimal? In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks. We use this idea to generalize goal-relabeling techniques from prior work to arbitrary classes of tasks. Our experiments confirm that relabeling data using inverse RL accelerates learning in general multi-task settings, including goal-reaching, domains with discrete sets of rewards, and those with linear reward functions.
Labels: cs.AI, cs.LG, cs.RO
Index: 165,606
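The hindsight relabeling the paper builds on can be sketched in a few lines for the goal-reaching special case. This is a hedged illustration: the trajectory format, the goal-equality reward, and the function names are assumptions, and the paper's actual contribution is showing that such relabeling is inverse RL:

```python
# Minimal sketch of HER-style hindsight goal relabeling (illustrative).
import numpy as np

def relabel_with_hindsight(trajectory, rng):
    """Pretend a state actually reached later in the trajectory was the
    goal all along, and recompute rewards for the relabeled task."""
    relabeled = []
    for t, (state, action, _goal, next_state) in enumerate(trajectory):
        # Sample a "future" state from the same trajectory as the new goal.
        future = trajectory[rng.integers(t, len(trajectory))][3]
        reward = float(np.allclose(next_state, future, atol=1e-3))
        relabeled.append((state, action, future, next_state, reward))
    return relabeled

rng = np.random.default_rng(0)
traj = [(np.array([t]), 0, np.array([9]), np.array([t + 1])) for t in range(5)]
for transition in relabel_with_hindsight(traj, rng):
    print(transition)
```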
2304.09376
Physical Knowledge Enhanced Deep Neural Network for Sea Surface Temperature Prediction
Traditionally, numerical models have been deployed in oceanography studies to simulate ocean dynamics by representing physical equations. However, many factors pertaining to ocean dynamics seem to be ill-defined. We argue that transferring physical knowledge from observed data could further improve the accuracy of numerical models when predicting Sea Surface Temperature (SST). Recently, the advances in earth observation technologies have yielded a monumental growth of data. Consequently, it is imperative to explore ways in which to improve and supplement numerical models utilizing the ever-increasing amounts of historical observational data. To this end, we introduce a method for SST prediction that transfers physical knowledge from historical observations to numerical models. Specifically, we use a combination of an encoder and a generative adversarial network (GAN) to capture physical knowledge from the observed data. The numerical model data is then fed into the pre-trained model to generate physics-enhanced data, which can then be used for SST prediction. Experimental results demonstrate that the proposed method considerably enhances SST prediction performance when compared to several state-of-the-art baselines.
Labels: cs.LG, cs.CV
Index: 359,032
2406.16968
Multimodal Physiological Signals Representation Learning via Multiscale Contrasting for Depression Recognition
Depression recognition based on physiological signals such as functional near-infrared spectroscopy (fNIRS) and electroencephalogram (EEG) has made considerable progress. However, most existing studies ignore the complementarity and semantic consistency of multimodal physiological signals under the same stimulation task in complex spatio-temporal patterns. In this paper, we introduce a multimodal physiological signals representation learning framework using Siamese architecture via multiscale contrasting for depression recognition (MRLMC). First, fNIRS and EEG are transformed into different but correlated data based on a time-domain data augmentation strategy. Then, we design a spatio-temporal contrasting module to learn the representation of fNIRS and EEG through weight-sharing multiscale spatio-temporal convolution. Furthermore, to enhance the learning of semantic representation associated with stimulation tasks, a semantic consistency contrast module is proposed, aiming to maximize the semantic similarity of fNIRS and EEG. Extensive experiments on publicly available and self-collected multimodal physiological signals datasets indicate that MRLMC outperforms the state-of-the-art models. Moreover, our proposed framework is capable of transferring to multimodal time series downstream tasks.
Labels: cs.AI, cs.LG
Index: 467,370
1910.02043
Fair-by-design explainable models for prediction of recidivism
Recidivism prediction provides decision makers with an assessment of the likelihood that a criminal defendant will reoffend that can be used in pre-trial decision-making. It can also be used to predict the locations where crimes most often occur and the profiles that are more likely to commit violent crimes. While such instruments are gaining increasing popularity, their use is controversial as they may present potential discriminatory bias in the risk assessment. In this paper we propose a new fair-by-design approach to predict recidivism. It is prototype-based, learns locally and extracts empirically the data distribution. The results show that the proposed method is able to reduce the bias and provide human-interpretable rules to assist specialists in the explanation of the given results.
Labels: cs.LG
Index: 148,113
2304.02012
EGC: Image Generation and Classification via a Diffusion Energy-Based Model
Learning image classification and image generation using the same set of network parameters is a challenging problem. Recent advanced approaches that perform well in one task often exhibit poor performance in the other. This work introduces an energy-based classifier and generator, namely EGC, which can achieve superior performance in both tasks using a single neural network. Unlike a conventional classifier that outputs a label given an image (i.e., a conditional distribution $p(y|\mathbf{x})$), the forward pass in EGC is a classifier that outputs a joint distribution $p(\mathbf{x},y)$, enabling an image generator in its backward pass by marginalizing out the label $y$. This is done by estimating the energy and classification probability given a noisy image in the forward pass, while denoising it using the score function estimated in the backward pass. EGC achieves competitive generation results compared with state-of-the-art approaches on ImageNet-1k, CelebA-HQ and LSUN Church, while achieving superior classification accuracy and robustness against adversarial attacks on CIFAR-10. This work represents the first successful attempt to simultaneously excel in both tasks using a single set of network parameters. We believe that EGC bridges the gap between discriminative and generative learning.
Labels: cs.AI, cs.LG, cs.CV
Index: 356,292
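The forward/backward split described in the abstract mirrors the joint energy-based-model view of a classifier, which is compact to sketch. Below is a toy illustration, not EGC itself (which couples this view with a diffusion model); the network architecture is made up:

```python
# Minimal sketch of "classifier logits as a joint energy model":
# forward pass classifies, backward pass yields the score of p(x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.SiLU(),
                    nn.Linear(128, 10))

def log_px(x):
    # Logits f(x)[y] define log p(x, y) up to a constant, so marginalizing
    # out y gives log p(x) = logsumexp_y f(x)[y] - log Z.
    return torch.logsumexp(net(x), dim=-1)

x = torch.randn(4, 1, 28, 28, requires_grad=True)
probs = torch.softmax(net(x), dim=-1)                  # forward: classifier
score = torch.autograd.grad(log_px(x).sum(), x)[0]     # backward: score of p(x)
print(probs.shape, score.shape)
```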
2211.14931
UAV-Assisted Space-Air-Ground Integrated Networks: A Technical Review of Recent Learning Algorithms
Recent technological advancements in space, air, and ground components have made possible a new network paradigm called the space-air-ground integrated network (SAGIN). Unmanned aerial vehicles (UAVs) play a key role in SAGINs. However, the high dynamics and complexity of UAVs make real-world deployment a significant barrier to realizing such SAGINs. UAVs are expected to meet key performance requirements with limited maneuverability and resources while coordinating with space and terrestrial components. Therefore, employing UAVs in various usage scenarios requires well-designed planning in algorithmic approaches. This paper provides an essential review and analysis of recent learning algorithms in a UAV-assisted SAGIN. We consider possible reward functions and discuss the state-of-the-art algorithms for optimizing the reward functions, including Q-learning, deep Q-learning, multi-armed bandit, particle swarm optimization, and satisfaction-based learning algorithms. Unlike other survey papers, we focus on the methodological perspective of the optimization problem, applicable to various missions on a SAGIN. We consider real-world configurations and the 2-dimensional (2D) and 3-dimensional (3D) UAV trajectories to reflect deployment cases. Our simulations suggest the 3D satisfaction-based learning algorithm outperforms other approaches in most cases. With open challenges discussed at the end, we aim to provide design and deployment guidelines for UAV-assisted SAGINs.
Labels: cs.LG, cs.SY, Other
Index: 333,050
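Of the algorithms reviewed, tabular Q-learning is the easiest to show concretely. A minimal sketch on a toy chain environment follows (illustrative; a UAV/SAGIN setting would replace these toy states, actions, and rewards):

```python
# Minimal sketch of tabular Q-learning on a 5-state chain (illustrative).
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:        # rightmost state is terminal (reward 1)
        if rng.random() < eps:      # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:                       # greedy with random tie-breaking
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))               # "go right" should dominate in every state
```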
2401.06161
Trustworthy human-centric based Automated Decision-Making Systems
Automated Decision-Making Systems (ADS) have become pervasive across various fields, activities, and occupations, to enhance performance. However, this widespread adoption introduces potential risks, including the misuse of ADS. Such misuse may manifest when ADS is employed in situations where it is unnecessary or when essential requirements, conditions, and terms are overlooked, leading to unintended consequences. This research paper presents a thorough examination of the implications, distinctions, and ethical considerations associated with digitalization, digital transformation, and the utilization of ADS in contemporary society and future contexts. Emphasis is placed on the imperative need for regulation, transparency, and ethical conduct in the deployment of ADS.
Labels: cs.AI, cs.CY
Index: 421,046
2105.07876
The challenges and realities of retailing in a COVID-19 world: Identifying trending and Vital During Crisis keywords during Covid-19 using Machine Learning (Austria as a case study)
From global pandemics to geopolitical turmoil, leaders in logistics, product allocation, procurement and operations are facing increasing difficulty with safeguarding their organizations against supply chain vulnerabilities. It is recommended to forecast against a trending-based benchmark, because auditing a future forecast puts more focus on seasonality. The forecasting models provide end-to-end, real-time oversight of the entire supply chain, while utilizing predictive analytics and artificial intelligence to identify potential disruptions before they occur. By combining internal and external data points, an AI-enabled modelling engine can greatly reduce risk by helping retail companies proactively respond to supply and demand variability. This research paper focuses on creating an ingenious way to tackle the impact of COVID-19 on the supply chain, product allocation, trending and seasonality. Key words: supply chain, COVID-19, forecasting, coronavirus, manufacturing, seasonality, trending, retail.
Labels: cs.AI, cs.CY
Index: 235,582
2406.13870
Splatter a Video: Video Gaussian Representation for Versatile Processing
Video representation is a long-standing problem that is crucial for various downstream tasks, such as tracking, depth prediction, segmentation, view synthesis, and editing. However, current methods either struggle to model complex motions due to the absence of 3D structure or rely on implicit 3D representations that are ill-suited for manipulation tasks. To address these challenges, we introduce a novel explicit 3D representation, the video Gaussian representation, which embeds a video into 3D Gaussians. Our proposed representation models video appearance in a 3D canonical space using explicit Gaussians as proxies and associates each Gaussian with 3D motions for video motion. This approach offers a more intrinsic and explicit representation than layered atlas or volumetric pixel matrices. To obtain such a representation, we distill 2D priors, such as optical flow and depth, from foundation models to regularize learning in this ill-posed setting. Extensive applications demonstrate the versatility of our new video representation. It has been proven effective in numerous video processing tasks, including tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation. Project page: https://sunyangtian.github.io/spatter_a_video_web/
Labels: cs.CV
Index: 466,033
2212.00802
An Introduction to Kernel and Operator Learning Methods for Homogenization by Self-consistent Clustering Analysis
Recent advances in operator learning theory have improved our knowledge about learning maps between infinite-dimensional spaces. However, for large-scale engineering problems such as concurrent multiscale simulation for mechanical properties, the training cost of current operator learning methods is very high. The article presents a thorough analysis of the mathematical underpinnings of the operator learning paradigm and proposes a kernel learning method that maps between function spaces. We first provide a survey of modern kernel and operator learning theory, and discuss recent results and open problems. From there, the article presents an algorithm showing how we can analytically approximate piecewise constant functions on R for operator learning. This suggests the potential feasibility of neural operators on clustered functions. Finally, a k-means clustered domain on the basis of a mechanistic response is considered and the Lippmann-Schwinger equation for micro-mechanical homogenization is solved. The article briefly discusses the mathematics of previous kernel learning methods and some preliminary results with those methods. The proposed kernel operator learning method uses graph kernel networks to arrive at a mechanistic reduced-order method for multiscale homogenization.
Labels: cs.LG
Index: 334,203
2412.20662
Enhancing Table Recognition with Vision LLMs: A Benchmark and Neighbor-Guided Toolchain Reasoner
Pre-trained foundation models have recently made significant progress in structured table understanding and reasoning. However, despite advancements in areas such as table semantic understanding and table question answering, recognizing the structure and content of unstructured tables using Vision Large Language Models (VLLMs) remains under-explored. In this work, we address this research gap by employing VLLMs in a training-free reasoning paradigm. First, we design a benchmark with various hierarchical dimensions relevant to table recognition. Subsequently, we conduct in-depth evaluations using pre-trained VLLMs, finding that low-quality image input is a significant bottleneck in the recognition process. Drawing inspiration from these findings, we propose the Neighbor-Guided Toolchain Reasoner (NGTR) framework, which is characterized by integrating multiple lightweight models for low-level visual processing operations aimed at mitigating issues with low-quality input images. Specifically, we utilize a neighbor retrieval mechanism to guide the generation of multiple tool invocation plans, transferring tool selection experiences from similar neighbors to the given input, thereby facilitating suitable tool selection. Additionally, we introduce a reflection module to supervise the tool invocation process. Extensive experiments on public table recognition datasets demonstrate that our approach significantly enhances the recognition capabilities of the vanilla VLLMs. We believe that the designed benchmark and the proposed NGTR framework could provide an alternative solution in table recognition.
Labels: cs.AI, cs.CV
Index: 521,305
2406.17282
SetBERT: Enhancing Retrieval Performance for Boolean Logic and Set Operation Queries
We introduce SetBERT, a fine-tuned BERT-based model designed to enhance query embeddings for set operations and Boolean logic queries, such as Intersection (AND), Difference (NOT), and Union (OR). SetBERT significantly improves retrieval performance for logic-structured queries, an area where both traditional and neural retrieval methods typically underperform. We propose an innovative use of inversed-contrastive loss, focusing on identifying the negative sentence, and fine-tune BERT with a dataset generated via prompting GPT. Furthermore, we demonstrate that, unlike other BERT-based models, fine-tuning with triplet loss actually degrades performance for this specific task. Our experiments reveal that SetBERT-base not only significantly outperforms BERT-base (up to a 63% improvement in Recall) but also achieves performance comparable to the much larger BERT-large model, despite being only one-third the size.
Labels: cs.CL
Index: 467,503
2110.08418
Nuances in Margin Conditions Determine Gains in Active Learning
We consider nonparametric classification with smooth regression functions, where it is well known that notions of margin in $E[Y|X]$ determine fast or slow rates in both active and passive learning. Here we elucidate a striking distinction between the two settings. Namely, we show that some seemingly benign nuances in notions of margin -- involving the uniqueness of the Bayes classifier, and which have no apparent effect on rates in passive learning -- determine whether or not any active learner can outperform passive learning rates. In particular, for Audibert-Tsybakov's margin condition (allowing general situations with non-unique Bayes classifiers), no active learner can gain over passive learning in commonly studied settings where the marginal on $X$ is near uniform. Our results thus negate the usual intuition from past literature that active rates should improve over passive rates in nonparametric settings.
Labels: cs.LG
Index: 261,386
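For reference, the margin condition at issue is commonly stated as follows (one standard form, with $\eta(x) = E[Y \mid X = x]$; the paper's exact variant may differ):

```latex
% Tsybakov-style margin (noise) condition, in one common formulation:
\[
  P\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr)
  \;\le\; C\, t^{\beta}
  \qquad \text{for all } t > 0 .
\]
% Larger beta pushes probability mass away from the decision boundary
% eta(x) = 1/2 and yields faster classification rates.
```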
2502.07971
ReTreever: Tree-based Coarse-to-Fine Representations for Retrieval
Document retrieval is a core component of question-answering systems, as it enables conditioning answer generation on new and large-scale corpora. While effective, the standard practice of encoding documents into high-dimensional embeddings for similarity search entails large memory and compute footprints, and also makes it hard to inspect the inner workings of the system. In this paper, we propose a tree-based method for organizing and representing reference documents at various granular levels, which offers the flexibility to balance cost and utility, and eases the inspection of the corpus content and retrieval operations. Our method, called ReTreever, jointly learns a routing function per internal node of a binary tree such that query and reference documents are assigned to similar tree branches, hence directly optimizing for retrieval performance. Our evaluations show that ReTreever generally preserves full representation accuracy. Its hierarchical structure further provides strong coarse representations and enhances transparency by indirectly learning meaningful semantic groupings. Among hierarchical retrieval methods, ReTreever achieves the best retrieval accuracy at the lowest latency, proving that this family of techniques can be viable in practical applications.
Labels: cs.AI, cs.IR, cs.LG
Index: 532,839
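The per-node routing function can be sketched with a plain binary tree. This is a hedged illustration: the linear sign-test routers and the heap-style indexing below are assumptions, and in ReTreever the routers are learned end-to-end for retrieval rather than drawn at random:

```python
# Minimal sketch of routing an embedding through a binary tree with one
# router per internal node (illustrative, untrained).
import numpy as np

rng = np.random.default_rng(0)
depth, dim = 3, 16
# One linear router (w, b) per internal node of a complete binary tree.
routers = {node: (rng.normal(size=dim), rng.normal())
           for node in range(2 ** depth - 1)}

def route(embedding):
    """Walk from the root, going left/right by the sign of w.x + b;
    the leaf index is the coarse representation of the document."""
    node = 0
    for _ in range(depth):
        w, b = routers[node]
        go_right = embedding @ w + b > 0
        node = 2 * node + (2 if go_right else 1)   # heap-style children
    return node - (2 ** depth - 1)                 # leaf id in [0, 2**depth)

doc = rng.normal(size=dim)
query = doc + 0.01 * rng.normal(size=dim)          # near-duplicate query
print(route(doc), route(query))                    # usually the same leaf
```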
2410.10833
Online Client Scheduling and Resource Allocation for Efficient Federated Edge Learning
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without sharing their raw data. Due to its privacy-protecting benefits, FL has been deployed in many real-world applications. However, deploying FL over mobile edge networks with constrained resources such as power, bandwidth, and computation suffers from high training latency and low model accuracy, particularly under data and system heterogeneity. In this paper, we investigate the optimal client scheduling and resource allocation for FL over mobile edge networks under resource constraints and uncertainty to minimize the training latency while maintaining the model accuracy. Specifically, we first analyze the impact of client sampling on model convergence in FL and formulate a stochastic optimization problem that captures the trade-off between the running time and model performance under heterogeneous and uncertain system resources. To solve the formulated problem, we further develop an online control scheme based on Lyapunov-based optimization for client sampling and resource allocation without requiring the knowledge of future dynamics in the FL system. Extensive experimental results demonstrate that the proposed scheme can improve both the training latency and resource efficiency compared with the existing schemes.
Labels: cs.AI, cs.LG, Other
Index: 498,259
1703.08362
A new class of three-weight linear codes from weakly regular plateaued functions
Linear codes with few weights have many applications in secret sharing schemes, authentication codes, communication and strongly regular graphs. In this paper, we consider linear codes with three weights in arbitrary characteristic. To do this, we generalize the recent contribution of Mesnager given in [Cryptography and Communications 9(1), 71-84, 2017]. We first present a new class of binary linear codes with three weights from plateaued Boolean functions and their weight distributions. We next introduce the notion of (weakly) regular plateaued functions in odd characteristic $p$ and give concrete examples of these functions. Moreover, we construct a new class of three-weight linear $p$-ary codes from weakly regular plateaued functions and determine their weight distributions. We finally analyse the constructed linear codes for secret sharing schemes.
Labels: cs.IT
Index: 70,568
2411.18440
Learning the Evolution of Physical Structure of Galaxies via Diffusion Models
In astrophysics, understanding the evolution of galaxies, primarily through imaging data, is fundamental to comprehending the formation of the Universe. This paper introduces a novel approach to conditioning Denoising Diffusion Probabilistic Models (DDPM) on redshifts for generating galaxy images. We explore whether this advanced generative model can accurately capture the physical characteristics of galaxies based solely on their images and redshift measurements. Our findings demonstrate that this model not only produces visually realistic galaxy images but also encodes the underlying changes in physical properties with redshift that are the result of galaxy evolution. This approach marks a significant advancement in using generative models to enhance our scientific insight into cosmic phenomena.
Labels: cs.CV
Index: 511,868
2204.00806
HLDC: Hindi Legal Documents Corpus
Many populous countries including India are burdened with a considerable backlog of legal cases. Development of automated systems that could process legal documents and augment legal practitioners can mitigate this. However, there is a dearth of the high-quality corpora needed to develop such data-driven systems. The problem gets even more pronounced in the case of low-resource languages such as Hindi. In this resource paper, we introduce the Hindi Legal Documents Corpus (HLDC), a corpus of more than 900K legal documents in Hindi. Documents are cleaned and structured to enable the development of downstream applications. Further, as a use-case for the corpus, we introduce the task of bail prediction. We experiment with a battery of models and propose a Multi-Task Learning (MTL) based model for the same. MTL models use summarization as an auxiliary task along with bail prediction as the main task. Experiments with different models are indicative of the need for further research in this area. We release the corpus and model implementation code with this paper: https://github.com/Exploration-Lab/HLDC
Labels: cs.AI, cs.LG, cs.CL
Index: 289,394
1307.5697
Dimension Reduction via Colour Refinement
Colour refinement is a basic algorithmic routine for graph isomorphism testing, appearing as a subroutine in almost all practical isomorphism solvers. It partitions the vertices of a graph into "colour classes" in such a way that all vertices in the same colour class have the same number of neighbours in every colour class. Tinhofer (Disc. App. Math., 1991), Ramana, Scheinerman, and Ullman (Disc. Math., 1994) and Godsil (Lin. Alg. and its App., 1997) established a tight correspondence between colour refinement and fractional isomorphisms of graphs, which are solutions to the LP relaxation of a natural ILP formulation of graph isomorphism. We introduce a version of colour refinement for matrices and extend existing quasilinear algorithms for computing the colour classes. Then we generalise the correspondence between colour refinement and fractional automorphisms and develop a theory of fractional automorphisms and isomorphisms of matrices. We apply our results to reduce the dimensions of systems of linear equations and linear programs. Specifically, we show that any given LP L can efficiently be transformed into a (potentially) smaller LP L' whose number of variables and constraints is the number of colour classes of the colour refinement algorithm, applied to a matrix associated with the LP. The transformation is such that we can easily (by a linear mapping) map both feasible and optimal solutions back and forth between the two LPs. We demonstrate empirically that colour refinement can indeed greatly reduce the cost of solving linear programs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
25,971
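As a concrete companion to the abstract above, the following is a minimal Python sketch of classic colour refinement on a graph. It is an illustrative toy (adjacency-list input, integer relabelling), not the paper's matrix generalisation or its quasilinear implementation.

```python
# Minimal colour refinement (1-dimensional Weisfeiler-Leman) on an undirected
# graph. The example graph below is hypothetical, not data from the paper.
def colour_refinement(adj):
    """Iteratively refine vertex colours until the partition is stable."""
    colours = {v: 0 for v in adj}                      # start with one colour class
    while True:
        # New colour: old colour plus the multiset of neighbours' colours.
        signatures = {
            v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
            for v in adj
        }
        # Relabel signatures with small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colours = {v: palette[signatures[v]] for v in adj}
        if new_colours == colours:                     # partition is stable: done
            return colours
        colours = new_colours

# Example: a path 0-1-2-3; endpoints form one class, inner vertices another.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colour_refinement(adj))  # {0: 0, 1: 1, 2: 1, 3: 0}
```

Every vertex in a final colour class has the same number of neighbours in every class, which is exactly the partition the LP-reduction in the paper exploits.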
1503.02313
Achieving Secrecy Capacity of the Gaussian Wiretap Channel with Polar Lattices
In this work, an explicit wiretap coding scheme based on polar lattices is proposed to achieve the secrecy capacity of the additive white Gaussian noise (AWGN) wiretap channel. Firstly, polar lattices are used to construct secrecy-good lattices for the mod-$\Lambda_s$ Gaussian wiretap channel. Then we propose an explicit shaping scheme to remove this mod-$\Lambda_s$ front end and extend polar lattices to the genuine Gaussian wiretap channel. The shaping technique is based on the lattice Gaussian distribution, which leads to a binary asymmetric channel at each level for the multilevel lattice codes. By employing the asymmetric polar coding technique, we construct an AWGN-good lattice and a secrecy-good lattice with optimal shaping simultaneously. As a result, the encoding complexity for the sender and the decoding complexity for the legitimate receiver are both $O(N \log N \log(\log N))$. The proposed scheme is proven to be semantically secure.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,921
1707.05909
Recovering Latent Signals from a Mixture of Measurements using a Gaussian Process Prior
In sensing applications, sensors cannot always measure the latent quantity of interest at the required resolution; sometimes they can only acquire a blurred version of it due to the sensor's transfer function. To recover latent signals when only noisy mixed measurements of the signal are available, we propose the Gaussian process mixture of measurements (GPMM), which models the latent signal as a Gaussian process (GP) and allows us to perform Bayesian inference on such a signal conditional to a set of noisy mixtures of measurements. We describe how to train GPMM, that is, to find the hyperparameters of the GP and the mixing weights, and how to perform inference on the latent signal under GPMM; additionally, we identify the solution to the underdetermined linear system resulting from a sensing application as a particular case of GPMM. The proposed model is validated in the recovery of three signals: a smooth synthetic signal, a real-world heart-rate time series and a step function, where GPMM outperformed the standard GP in terms of estimation error, uncertainty representation and recovery of the spectral content of the latent signal.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
77,312
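The core inferential step the abstract describes, conditioning a GP prior on linear "mixture" measurements, can be sketched in a few lines. Everything below (RBF kernel, averaging windows, noise level) is an illustrative assumption rather than the paper's GPMM training procedure.

```python
# Recover a latent signal from blurred measurements y = A f + noise using
# standard Gaussian conditioning under a GP prior on f.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.5, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0, 1, 100)            # locations of the latent signal f(x)
K = rbf_kernel(x, x)                  # GP prior covariance of f

# Each measurement is a local average (a blur) of the latent signal.
A = np.zeros((20, 100))
for i in range(20):
    A[i, 5 * i:5 * i + 5] = 1.0 / 5   # average over a window of 5 samples
noise_var = 0.01

rng = np.random.default_rng(0)
f_true = np.sin(2 * np.pi * x)
y = A @ f_true + rng.normal(0, noise_var ** 0.5, 20)

# Posterior mean of f given the mixed measurements (Gaussian conditioning).
S = A @ K @ A.T + noise_var * np.eye(20)
f_post = K @ A.T @ np.linalg.solve(S, y)
print(np.max(np.abs(f_post - f_true)))  # small reconstruction error
```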
1402.3392
Interleaved entropy coders
The ANS family of arithmetic coders developed by Jarek Duda has the unique property that encoder and decoder are completely symmetric in the sense that a decoder reading bits will be in the exact same state that the encoder was in when writing those bits---all "buffering" of information is explicitly part of the coder state and identical between encoder and decoder. As a consequence, the output from multiple ABS/ANS coders can be interleaved into the same bitstream without any additional metadata. This allows for very efficient encoding and decoding on CPUs supporting superscalar execution or SIMD instructions, as well as GPU implementations. We also show how interleaving without additional metadata can be implemented for any entropy coder, at some increase in encoder complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
30,871
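To make the ANS family concrete, here is a toy big-integer rANS coder illustrating the encoder/decoder symmetry the abstract relies on: the decoder retraces exactly the states the encoder visited. Streaming renormalization and the actual bit-level interleaving of multiple coder states are deliberately omitted; this is a sketch, not Duda's full construction.

```python
# Toy range-ANS: encode a message into one integer state, then decode it back.
freqs = {"a": 3, "b": 1}                       # symbol frequencies
total = sum(freqs.values())
cum, acc = {}, 0
for s in sorted(freqs):                        # cumulative frequency table
    cum[s], acc = acc, acc + freqs[s]

def encode(message, state=1):
    for s in reversed(message):                # rANS encodes in reverse order
        state = (state // freqs[s]) * total + cum[s] + state % freqs[s]
    return state

def decode(state, n):
    out = []
    for _ in range(n):
        slot = state % total                   # which symbol owns this slot?
        s = next(k for k in freqs if cum[k] <= slot < cum[k] + freqs[k])
        state = freqs[s] * (state // total) + slot - cum[s]
        out.append(s)
    return "".join(out)

msg = "aababaaab"
assert decode(encode(msg), len(msg)) == msg
```

Because the decoder's state transitions exactly invert the encoder's, several such coders can in principle share one output stream without per-stream metadata, which is the property the paper exploits.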
2305.02499
AutoML-GPT: Automatic Machine Learning with GPT
AI tasks encompass a wide range of domains and fields. While numerous AI models have been designed for specific tasks and applications, they often require considerable human efforts in finding the right model architecture, optimization algorithm, and hyperparameters. Recent advances in large language models (LLMs) like ChatGPT show remarkable capabilities in various aspects of reasoning, comprehension, and interaction. Consequently, we propose developing task-oriented prompts and automatically utilizing LLMs to automate the training pipeline. To implement this concept, we present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters. AutoML-GPT dynamically takes user requests from the model and data cards and composes the corresponding prompt paragraph. Ultimately, with this prompt paragraph, AutoML-GPT will automatically conduct the experiments from data processing to model architecture, hyperparameter tuning, and predicted training log. By leveraging AutoML-GPT's robust language capabilities and the available AI models, AutoML-GPT can tackle numerous intricate AI tasks across various domains and datasets. This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many AI tasks.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
362,059
2109.02693
Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers
Deep learning (DL) has been the primary approach used in various computer vision tasks due to the strong results it has achieved. However, in real-world scenarios with partially labeled or unlabeled data, DL methods are also prone to the well-known domain shift problem. Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by assigning weak knowledge from a bag of source models. However, most works conduct domain adaptation leveraging only the extracted features and reducing their domain shift from the perspective of loss function designs. In this paper, we argue that it is not sufficient to handle domain shift only based on domain-level features, but it is also essential to align such information on the feature space. Unlike previous works, we focus on the network design and propose to embed a Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor. These layers are designed to match the feature distributions between different domains and can be easily applied to various MSDA methods. To show the robustness of our approach, we conducted an extensive experimental evaluation considering two challenging scenarios: digit recognition and object classification. The experimental results indicated that our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,818
2303.01640
Hierarchical Graph Neural Networks for Particle Track Reconstruction
We introduce a novel variant of GNN for particle tracking called Hierarchical Graph Neural Network (HGNN). The architecture creates a set of higher-level representations which correspond to tracks and assigns spacepoints to these tracks, allowing disconnected spacepoints to be assigned to the same track, as well as multiple tracks to share the same spacepoint. We propose a novel learnable pooling algorithm called GMPool to generate these higher-level representations called "super-nodes", as well as a new loss function designed for tracking problems and HGNN specifically. On a standard tracking problem, we show that, compared with previous ML-based tracking algorithms, the HGNN has better tracking efficiency performance, better robustness against inefficient input graphs, and better convergence compared with traditional GNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
349,045
2403.07924
AI and Identity
AI-empowered technologies' impact on the world is undeniable, reshaping industries, revolutionizing how humans interact with technology, transforming educational paradigms, and redefining social codes. However, this rapid growth is accompanied by two notable challenges: a lack of diversity within the AI field and a widening AI divide. In this context, this paper examines the intersection of AI and identity as a pathway to understand biases, inequalities, and ethical considerations in AI development and deployment. We present a multifaceted definition of AI identity, which encompasses its creators, applications, and their broader impacts. Understanding AI's identity involves understanding the associations between the individuals involved in AI's development, the technologies produced, and the social, ethical, and psychological implications. After exploring the AI identity ecosystem and its societal dynamics, we propose a framework that highlights the need for diversity in AI across three dimensions: Creators, Creations, and Consequences. The paper argues for a comprehensive approach to fostering a more inclusive and responsible AI ecosystem through the lens of identity.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
437,096
2412.03766
End to End Collaborative Synthetic Data Generation
The success of AI is based on the availability of data to train models. While in some cases a single data custodian may have sufficient data to enable AI, often multiple custodians need to collaborate to reach a cumulative size required for meaningful AI research. The latter is, for example, often the case for rare diseases, with each clinical site having data for only a small number of patients. Recent algorithms for federated synthetic data generation are an important step towards collaborative, privacy-preserving data sharing. Existing techniques, however, focus exclusively on synthesizer training, assuming that the training data is already preprocessed and that the desired synthetic data can be delivered in one shot, without any hyperparameter tuning. In this paper, we propose an end-to-end collaborative framework for publishing of synthetic data that accounts for privacy-preserving preprocessing as well as evaluation. We instantiate this framework with Secure Multiparty Computation (MPC) protocols and evaluate it in a use case for privacy-preserving publishing of synthetic genomic data for leukemia.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
514,093
2007.13503
Receptive-Field Regularized CNNs for Music Classification and Tagging
Convolutional Neural Networks (CNNs) have been successfully used in various Music Information Retrieval (MIR) tasks, both as end-to-end models and as feature extractors for more complex systems. However, the MIR field is still dominated by the classical VGG-based CNN architecture variants, often in combination with more complex modules such as attention, and/or techniques such as pre-training on large datasets. Deeper models such as ResNet -- which surpassed VGG by a large margin in other domains -- are rarely used in MIR. One of the main reasons for this, as we will show, is the lack of generalization of deeper CNNs in the music domain. In this paper, we present a principled way to make deep architectures like ResNet competitive for music-related tasks, based on well-designed regularization strategies. In particular, we analyze the recently introduced Receptive-Field Regularization and Shake-Shake, and show that they significantly improve the generalization of deep CNNs on music-related tasks, and that the resulting deep CNNs can outperform current more complex models such as CNNs augmented with pre-training and attention. We demonstrate this on two different MIR tasks and two corresponding datasets, thus offering our deep regularized CNNs as a new baseline for these datasets, which can also be used as a feature-extracting module in future, more complex approaches.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
189,151
1807.00571
The Interplay between Lexical Resources and Natural Language Processing
Incorporating linguistic, world and common sense knowledge into AI/NLP systems is currently an important research area, with several open problems and challenges. At the same time, processing and storing this knowledge in lexical resources is not a straightforward task. This tutorial proposes to address these complementary goals from two methodological perspectives: the use of NLP methods to help the process of constructing and enriching lexical resources and the use of lexical resources for improving NLP applications. Two main types of audience can benefit from this tutorial: those working on language resources who are interested in becoming acquainted with automatic NLP techniques, with the end goal of speeding and/or easing up the process of resource curation; and on the other hand, researchers in NLP who would like to benefit from the knowledge of lexical resources to improve their systems and models. The slides of the tutorial are available at https://bitbucket.org/luisespinosa/lr-nlp/
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
101,871
2407.19077
Flexible graph convolutional network for 3D human pose estimation
Although graph convolutional networks exhibit promising performance in 3D human pose estimation, their reliance on one-hop neighbors limits their ability to capture high-order dependencies among body joints, crucial for mitigating uncertainty arising from occlusion or depth ambiguity. To tackle this limitation, we introduce Flex-GCN, a flexible graph convolutional network designed to learn graph representations that capture broader global information and dependencies. At its core is the flexible graph convolution, which aggregates features from both immediate and second-order neighbors of each node, while maintaining the same time and memory complexity as the standard convolution. Our network architecture comprises residual blocks of flexible graph convolutional layers, as well as a global response normalization layer for global feature aggregation, normalization and calibration. Quantitative and qualitative results demonstrate the effectiveness of our model, achieving competitive performance on benchmark datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
476,632
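The central operation the abstract describes, aggregating features from both immediate and second-order neighbours at the cost of a standard convolution, can be sketched as follows. The normalization, random features, and single-layer setup are illustrative assumptions, not the exact Flex-GCN architecture.

```python
# One graph-convolution step that mixes one-hop and two-hop information.
import numpy as np

def normalized_adj(A):
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],                    # a small example skeleton graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))                    # node features (e.g. joint coords)
W = rng.normal(size=(8, 16))                   # learnable weights

A_hat = normalized_adj(A)
agg = A_hat @ X + A_hat @ (A_hat @ X)          # one-hop plus second-order neighbours
H = np.maximum(agg @ W, 0)                     # ReLU; next-layer node features
print(H.shape)                                 # (4, 16)
```

Note that the two-hop term reuses the one-hop product, so the extra reach costs only one more sparse multiply, in line with the complexity claim in the abstract.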
1311.2526
User recommendation in reciprocal and bipartite social networks -- a case study of online dating
Many social networks in our daily life are bipartite networks built on reciprocity. How can we recommend users/friends to a user, so that the user is interested in and attractive to recommended users? In this research, we propose a new collaborative filtering model to improve user recommendations in reciprocal and bipartite social networks. The model considers a user's "taste" in picking others and "attractiveness" in being picked by others. A case study of an online dating network shows that the new model has good performance in recommending both initial and reciprocal contacts.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
28,326
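A minimal sketch of the reciprocal idea in the abstract: combine u's predicted "taste" for v with v's predicted taste for u (u's "attractiveness") before ranking candidates. The toy preference matrix and the harmonic-mean combination are illustrative assumptions, not the paper's fitted collaborative filtering model.

```python
# Reciprocal recommendation: a match is only good if interest is mutual.
import numpy as np

rng = np.random.default_rng(0)
pref = rng.random((5, 5))        # pref[u, v]: predicted interest of u in v
np.fill_diagonal(pref, 0)

def reciprocal_score(u, v):
    p, q = pref[u, v], pref[v, u]
    # Harmonic mean punishes one-sided interest more than an arithmetic mean.
    return 0.0 if p + q == 0 else 2 * p * q / (p + q)

u = 0
ranking = sorted(range(1, 5), key=lambda v: -reciprocal_score(u, v))
print(ranking)                   # candidates for user 0, best mutual match first
```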
2501.02342
Optimizing Small Language Models for In-Vehicle Function-Calling
We propose a holistic approach for deploying Small Language Models (SLMs) as function-calling agents within vehicles as edge devices, offering a more flexible and robust alternative to traditional rule-based systems. By leveraging SLMs, we simplify vehicle control mechanisms and enhance the user experience. Given the in-vehicle hardware constraints, we apply state-of-the-art model compression techniques, including structured pruning, healing, and quantization, ensuring that the model fits within the resource limitations while maintaining acceptable performance. Our work focuses on optimizing a representative SLM, Microsoft's Phi-3 mini, and outlines best practices for enabling embedded models, including compression, task-specific fine-tuning, and vehicle integration. We demonstrate that, despite significant reduction in model size which removes up to 2 billion parameters from the original model, our approach preserves the model's ability to handle complex in-vehicle tasks accurately and efficiently. Furthermore, by executing the model in a lightweight runtime environment, we achieve a generation speed of 11 tokens per second, making real-time, on-device inference feasible without hardware acceleration. Our results demonstrate the potential of SLMs to transform vehicle control systems, enabling more intuitive interactions between users and their vehicles for an enhanced driving experience.
true
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
522,442
2310.19558
Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over massively distributed clients under the orchestration of a parameter server (PS) without sharing clients' data. This paper delves into a class of federated problems characterized by non-convex and non-smooth loss functions, which are prevalent in FL applications but challenging to handle due to their intricate non-convex and non-smooth nature and the conflicting requirements on communication efficiency and privacy protection. In this paper, we propose a novel federated primal-dual algorithm with bidirectional model sparsification tailored for non-convex and non-smooth FL problems, and differential privacy is applied for privacy guarantees. Its unique insightful properties and some privacy and convergence analyses are also presented as FL algorithm design guidelines. Extensive experiments on real-world data are conducted to demonstrate the effectiveness of the proposed algorithm and its much superior performance compared with state-of-the-art FL algorithms, together with the validation of all the analytical results and properties.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
404,034
2407.02474
Free Energy in a Circumplex Model of Emotion
Previous active inference accounts of emotion translate fluctuations in free energy to a sense of emotion, mainly focusing on valence. However, in affective science, emotions are often represented as multi-dimensional. In this paper, we propose to adopt a Circumplex Model of emotion by mapping emotions into a two-dimensional spectrum of valence and arousal. We show how one can derive a valence and arousal signal from an agent's expected free energy, relating arousal to the entropy of posterior beliefs and valence to utility less expected utility. Under this formulation, we simulate artificial agents engaged in a search task. We show that the manipulation of priors and object presence results in commonsense variability in emotional states.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
469,746
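The valence/arousal mapping the abstract states can be written down directly: arousal as the entropy of posterior beliefs, valence as realized utility less expected utility. The belief vector and utilities below are toy assumptions, not the paper's active inference agent.

```python
# Map beliefs and utilities onto a two-dimensional circumplex of emotion.
import numpy as np

def arousal(posterior):
    p = np.asarray(posterior)
    p = p[p > 0]
    return -np.sum(p * np.log(p))            # entropy of posterior beliefs

def valence(realized_utility, posterior, utilities):
    expected = float(np.dot(posterior, utilities))
    return realized_utility - expected       # utility less expected utility

posterior = np.array([0.7, 0.2, 0.1])        # beliefs over three outcomes
utilities = np.array([1.0, 0.0, -1.0])       # utility of each outcome
print(arousal(posterior))                    # low entropy -> low arousal
print(valence(utilities[0], posterior, utilities))  # outcome beat expectations
```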
2209.05550
Mathematical Framework for Online Social Media Auditing
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards. Selectively choosing the contents to be shown on the user's feed may yield a certain extent of influence, either minor or major, on the user's decision-making, compared to what it would have been under a natural/fair content selection. As we have witnessed over the past decade, algorithmic filtering can cause detrimental side effects, ranging from biasing individual decisions to shaping those of society as a whole, for example, diverting users' attention from whether to get the COVID-19 vaccine or inducing the public to choose a presidential candidate. The government's constant attempts to regulate the adverse effects of AF are often complicated, due to bureaucracy, legal affairs, and financial considerations. On the other hand, SMPs seek to monitor their own algorithmic activities to avoid being fined for exceeding the allowable threshold. In this paper, we mathematically formalize this framework and utilize it to construct a data-driven statistical auditing procedure that keeps AF from deflecting users' beliefs over time, along with sample complexity guarantees. This state-of-the-art algorithm can be used either by authorities acting as external regulators or by SMPs for self-auditing.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
317,121
2006.01687
SeqXFilter: A Memory-efficient Denoising Filter for Dynamic Vision Sensors
Neuromorphic event-based dynamic vision sensors (DVS) have much faster sampling rates and a higher dynamic range than frame-based imaging sensors. However, they are sensitive to unwanted background activity (BA) events. There are some filters for tackling this problem based on spatio-temporal correlation. However, they are either memory-intensive or computing-intensive. We propose \emph{SeqXFilter}, a spatio-temporal correlation filter with only a past event window that has O(1) space complexity and simple computations. We explore the spatial correlation of an event with its past few events by analyzing the distribution of the events when applying different functions on the spatial distances. We identify the function that best checks the spatio-temporal correlation for an event in \emph{SeqXFilter}, i.e., the one that best separates real events from noise events. We not only show the visual denoising effect of the filter but also use two metrics for quantitatively analyzing the filter's performance. Four neuromorphic event-based datasets, recorded from four DVS with different output sizes, are used for validation of our method. The experimental results show that \emph{SeqXFilter} achieves similar performance to baseline NNb filters, but with extremely small memory cost and simple computation logic.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
179,843
2303.11390
Dual-Weight Particle Filter for Radar-Based Dynamic Bayesian Grid Maps
Through constant improvements in recent years, radar sensors have become a viable alternative to lidar as the main distancing sensor of an autonomous vehicle. Although robust and able to directly measure radial velocity, radar brings its own set of challenges, for which existing algorithms need to be adapted. One core algorithm of a perception system is dynamic occupancy grid mapping, which has traditionally relied on lidar. In this paper we present a dual-weight particle filter as an extension of a Bayesian occupancy grid mapping framework that allows it to operate with radar as its main sensor. It uses two separate particle weights that are computed differently to compensate for the fact that a radial velocity measurement in many situations cannot capture the actual velocity of an object. We evaluate the method extensively with simulated data and show its advantages over existing single-weight solutions.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
352,828
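For context, the following sketches the standard bootstrap particle filter that grid-mapping frameworks of this kind build on; the paper's dual-weight extension for radial-velocity measurements is not reproduced here, and all model parameters below are illustrative assumptions.

```python
# Bootstrap particle filter for a 1-D constant-velocity target with noisy
# position measurements: predict, weight by likelihood, resample.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 500, 30
particles = rng.normal(0.0, 1.0, size=(n, 2))     # columns: position, velocity
true_state = np.array([0.0, 1.0])

for _ in range(steps):
    true_state = true_state + np.array([true_state[1], 0.0]) * 0.1
    z = true_state[0] + rng.normal(0, 0.5)        # noisy position measurement
    # Predict: propagate particles through the motion model plus process noise.
    particles[:, 0] += particles[:, 1] * 0.1
    particles += rng.normal(0, 0.05, size=(n, 2))
    # Update: weight particles by the measurement likelihood.
    weights = np.exp(-0.5 * ((z - particles[:, 0]) / 0.5) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]

print(particles[:, 0].mean(), true_state[0])      # estimate vs. true position
```

The dual-weight idea in the abstract replaces the single `weights` array with two differently computed weights, which this generic sketch does not attempt.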
0910.4901
Distortion Exponent in MIMO Channels with Feedback
The transmission of a Gaussian source over a block-fading multiple antenna channel in the presence of a feedback link is considered. The feedback link is assumed to be an error and delay free link of capacity 1 bit per channel use. Under the short-term power constraint, the optimal exponential behavior of the end-to-end average distortion is characterized for all source-channel bandwidth ratios. It is shown that the optimal transmission strategy is successive refinement source coding followed by progressive transmission over the channel, in which the channel block is allocated dynamically among the layers based on the channel state using the feedback link as an instantaneous automatic repeat request (ARQ) signal.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
4,800
2502.10801
FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation
DeepFakes pose a significant threat to our society. One representative DeepFake application is face-swapping, which replaces the identity in a facial image with that of a victim. Although existing methods partially mitigate these risks by degrading the quality of swapped images, they often fail to disrupt the identity transformation effectively. To fill this gap, we propose FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake face-swapping threats. Specifically, FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders. When shared online, these perturbed images mislead face-swapping techniques, causing them to generate facial images with identities significantly different from the original user. Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques, reducing the face match rate from 90\% (without defense) to below 10\%. Both qualitative and quantitative studies further confirm its ability to confuse human perception, highlighting its practical utility. Additionally, we investigate key factors that may influence FSG and evaluate its robustness against various adaptive adversaries.
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
false
534,050
2207.11490
Towards Smart Fake News Detection Through Explainable AI
People now see social media sites as their sole source of information due to their popularity. The majority of people get their news through social media. At the same time, fake news has grown exponentially on social media platforms in recent years. Several artificial intelligence-based solutions for detecting fake news have shown promising results. On the other hand, these detection systems lack explanation capabilities, i.e., the ability to explain why they made a prediction. This paper highlights the current state of the art in explainable fake news detection. We discuss the pitfalls in current explainable AI-based fake news detection models and present our ongoing research on a multi-modal explainable fake news detection model.
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
false
309,660
1610.02454
Learning What and Where to Draw
Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 x 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations. We also show preliminary results on the more challenging domain of text- and location-controllable synthesis of images of human actions on the MPII Human Pose dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
62,098
2303.10728
Training Deep Boltzmann Networks with Sparse Ising Machines
The slowing down of Moore's law has driven the development of unconventional computing paradigms, such as specialized Ising machines tailored to solve combinatorial optimization problems. In this paper, we show a new application domain for probabilistic bit (p-bit) based Ising machines by training deep generative AI models with them. Using sparse, asynchronous, and massively parallel Ising machines we train deep Boltzmann networks in a hybrid probabilistic-classical computing setup. We use the full MNIST and Fashion MNIST (FMNIST) dataset without any downsampling and a reduced version of CIFAR-10 dataset in hardware-aware network topologies implemented in moderately sized Field Programmable Gate Arrays (FPGA). For MNIST, our machine using only 4,264 nodes (p-bits) and about 30,000 parameters achieves the same classification accuracy (90%) as an optimized software-based restricted Boltzmann Machine (RBM) with approximately 3.25 million parameters. Similar results follow for FMNIST and CIFAR-10. Additionally, the sparse deep Boltzmann network can generate new handwritten digits and fashion products, a task the 3.25 million parameter RBM fails at despite achieving the same accuracy. Our hybrid computer takes a measured 50 to 64 billion probabilistic flips per second, which is at least an order of magnitude faster than superficially similar Graphics and Tensor Processing Unit (GPU/TPU) based implementations. The massively parallel architecture can comfortably perform the contrastive divergence algorithm (CD-n) with up to n = 10 million sweeps per update, beyond the capabilities of existing software implementations. These results demonstrate the potential of using Ising machines for traditionally hard-to-train deep generative Boltzmann networks, with further possible improvement in nanodevice-based realizations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
352,561
1506.00179
A Deterministic Analysis of Decimation for Sigma-Delta Quantization of Bandlimited Functions
We study Sigma-Delta ($\Sigma\Delta$) quantization of oversampled bandlimited functions. We prove that digitally integrating blocks of bits and then down-sampling, a process known as decimation, can efficiently encode the associated $\Sigma\Delta$ bit-stream. It allows a large reduction in the bit-rate while still permitting good approximation of the underlying bandlimited function via an appropriate reconstruction kernel. Specifically, in the case of stable $r$th order $\Sigma\Delta$ schemes we show that the reconstruction error decays exponentially in the bit-rate. For example, this result applies to the 1-bit, greedy, first-order $\Sigma\Delta$ scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
43,630
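The setting the abstract analyzes, a stable first-order Sigma-Delta quantizer followed by block-sum decimation, is easy to sketch; the test signal and oversampling factor below are illustrative assumptions.

```python
# First-order 1-bit Sigma-Delta quantization, then block averaging (decimation).
import numpy as np

def sigma_delta_first_order(x):
    """Greedy first-order scheme: q_n = sign(u_{n-1} + x_n), u_n = u_{n-1} + x_n - q_n."""
    u, q = 0.0, np.empty_like(x)
    for n, xn in enumerate(x):
        q[n] = 1.0 if u + xn >= 0 else -1.0   # 1-bit quantizer
        u = u + xn - q[n]                      # state stays bounded for |x| <= 1
    return q

oversampling = 64
t = np.linspace(0, 1, 1024 * oversampling, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 3 * t)            # bandlimited test signal
bits = sigma_delta_first_order(x)

# Decimation: integrate (average) blocks of bits, then downsample.
decimated = bits.reshape(-1, oversampling).mean(axis=1)
x_down = x.reshape(-1, oversampling).mean(axis=1)
print(np.max(np.abs(decimated - x_down)))      # error shrinks as oversampling grows
```

In this first-order scheme the block-average error is bounded by the state change divided by the block length, which is the mechanism behind the rate reductions the paper quantifies.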
2012.11774
Differentially Private Synthetic Medical Data Generation using Convolutional GANs
Deep learning models have demonstrated superior performance in several application problems, such as image classification and speech processing. However, creating a deep learning model using health record data requires addressing certain privacy challenges that bring unique concerns to researchers working in this domain. One effective way to handle such private data issues is to generate realistic synthetic data that can provide practically acceptable data quality and correspondingly the model performance. To tackle this challenge, we develop a differentially private framework for synthetic data generation using R\'enyi differential privacy. Our approach builds on convolutional autoencoders and convolutional generative adversarial networks to preserve some of the critical characteristics of the generated synthetic data. In addition, our model can also capture the temporal information and feature correlations that might be present in the original data. We demonstrate that our model outperforms existing state-of-the-art models under the same privacy budget using several publicly available benchmark medical datasets in both supervised and unsupervised settings.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
212,722
1903.01777
A New Approach to Adaptive Data Analysis and Learning via Maximal Leakage
There is an increasing concern that most current published research findings are false. The main cause seems to lie in the fundamental disconnection between theory and practice in data analysis. While the former typically relies on statistical independence, the latter is an inherently adaptive process: new hypotheses are formulated based on the outcomes of previous analyses. A recent line of work tries to mitigate these issues by enforcing constraints, such as differential privacy, that compose adaptively while degrading gracefully and thus provide statistical guarantees even in adaptive contexts. Our contribution consists in the introduction of a new approach, based on the concept of Maximal Leakage, an information-theoretic measure of leakage of information. The main result allows us to compare the probability of an event happening when adaptivity is considered with respect to the non-adaptive scenario. The bound we derive represents a generalization of the bounds used in non-adaptive scenarios (e.g., McDiarmid's inequality for $c$-sensitive functions, false discovery error control via significance level, etc.), and allows us to replicate or even improve, in certain regimes, the results obtained using Max-Information or Differential Privacy. In contrast with the line of work started by Dwork et al., our results do not rely on Differential Privacy but are, in principle, applicable to every algorithm that has a bounded leakage, including the differentially private algorithms and the ones with a short description length.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
123,339
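The quantity the paper builds on is simple to compute for a discrete channel: maximal leakage is $L(X \to Y) = \log \sum_y \max_x P(y|x)$. The channel matrix below is a toy example, not data from the paper.

```python
# Maximal leakage of a discrete channel P(Y|X).
import numpy as np

def maximal_leakage(channel):
    """channel[i, j] = P(Y = j | X = i); rows sum to one."""
    return np.log(channel.max(axis=0).sum())   # log sum_y max_x P(y|x)

# A binary symmetric channel with crossover probability 0.1.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(maximal_leakage(bsc))        # log(1.8) ~ 0.588 nats

identity = np.eye(2)               # perfectly revealing channel
print(maximal_leakage(identity))   # log(2): everything leaks
```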
1704.00993
A New Transmitted Reference Pulse Cluster Based Ultra-Wideband Transmitter Design
An energy efficient ultra-wideband (UWB) transmitter based on the novel transmitted reference pulse cluster (TRPC) modulation scheme is presented. The TRPC-UWB transmitter integrates wideband active baluns, a wideband I-Q modulator based up-conversion mixer, and a differential to single-ended converter. The integrated circuit of the TRPC-UWB front end is designed and implemented in a 130-nm CMOS process technology. The measured worst-case carrier leakage suppression is 22.4 dBc, while the single sideband suppression is higher than 31.6 dBc, operating at frequencies from 3.1 GHz to 8.2 GHz. With an adjustable data rate of 10 to 300 Mbps, the transmitter achieves a high energy efficiency of 38.4 pJ/pulse.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
71,175
2403.15189
Forecasting the load of Parcel Pickup Points using a Markov Jump Process
The growth of e-commerce has resulted in a surge in parcel deliveries, increasing transportation costs and pollution issues. Alternatives to home delivery have emerged, such as the delivery to so-called parcel pick-up points (PUPs), which eliminates delivery failure due to customers not being at home. Nevertheless, parcels reaching overloaded PUPs may need to be redirected to alternative PUPs, sometimes far from the chosen ones, which may generate customer dissatisfaction. Consequently, predicting the PUP load is critical for a PUP management company to infer the availability of PUPs for future orders and better balance parcel flows between PUPs. This paper proposes a new approach to forecasting the PUP load evolution using a Markov jump process that models the parcel life cycle. The latest known status of each parcel is considered to estimate its contribution to the future load of its target PUP. This approach can account for the variability of activity, the various parcel preparation delays by sellers, and the diversity of parcel carriers that may result in different delivery delays. Here, results are provided for predicting the load associated with parcels ordered from online retailers by customers (Business-to-Customer, B2C). The proposed approach is generic and can also be applied to other parcel flows to PUPs, such as second-hand products (Customer-to-Customer, C2C) sent via a PUP network.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
440,438
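The modelling idea in the abstract, a parcel life cycle as a Markov jump process whose occupancy of the "at PUP" state is the load, can be sketched by simulation. The states, rates, and parcel counts below are illustrative assumptions, not the paper's fitted model.

```python
# Simulate parcels through ordered -> in transit -> at PUP -> picked up,
# with exponential sojourn times, and count how many sit at the PUP.
import numpy as np

rng = np.random.default_rng(0)
# States: 0 = ordered, 1 = in transit, 2 = at PUP (loads the PUP), 3 = picked up.
rates = {0: 1.0, 1: 0.5, 2: 0.25}     # rate of leaving each transient state

def load_at(t_query, n_parcels=200):
    """Simulated number of parcels sitting at the PUP at time t_query."""
    load = 0
    for _ in range(n_parcels):
        t, state = 0.0, 0
        while state < 3 and t < t_query:
            t += rng.exponential(1.0 / rates[state])   # sojourn time in state
            if t < t_query:
                state += 1                             # jump to the next stage
        load += state == 2
    return load

for t in (1.0, 3.0, 7.0):
    print(t, load_at(t))               # load builds up, then parcels get picked up
```

Conditioning each simulated parcel on its latest known status, as the abstract describes, would amount to starting the walk from that state instead of state 0.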
2205.10861
Contrastive Learning of Coarse-Grained Force Fields
Coarse-grained models have proven helpful for simulating complex systems over long timescales to provide molecular insights into various processes. Methodologies for systematic parameterization of the underlying energy function, or force field that describes the interactions among different components of the system are of great interest for ensuring simulation accuracy. We present a new method, potential contrasting, to enable efficient learning of force fields that can accurately reproduce the conformational distribution produced with all-atom simulations. Potential contrasting generalizes the noise contrastive estimation method with umbrella sampling to better learn the complex energy landscape of molecular systems. When applied to the Trp-cage protein, we found that the technique produces force fields that thoroughly capture the thermodynamics of the folding process despite the use of only $\alpha$-Carbons in the coarse-grained model. We further showed that potential contrasting could be applied over large datasets that combine the conformational ensembles of many proteins to ensure the transferability of coarse-grained force fields. We anticipate potential contrasting to be a powerful tool for building general-purpose coarse-grained force fields.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
297,904
2110.01485
JuriBERT: A Masked-Language Model Adaptation for French Legal Text
Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on the adaptation of domain-specific BERT models in the French language. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures in domain-specific sub-languages and their benefits for French legal text. We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
258,789
2010.01477
Generalized Two-Dimensional Quaternion Principal Component Analysis with Weighting for Color Image Recognition
One of the most powerful methods of color image recognition is the two-dimensional quaternion principal component analysis (2DQPCA) approach, which is based on quaternion representation and preserves color information very well. However, the current versions of 2DQPCA are still unable to extract different geometric properties of color images according to practical data analysis requirements, and they are vulnerable to strong noise. In this paper, a generalized 2DQPCA approach with weighting is presented, imposing $L_{p}$ norms on both constraint and objective functions. As a unified 2DQPCA framework, this new version makes it possible to choose adaptive regularizations and constraints according to actual applications and can extract both geometric properties and color information of color images. The projection vectors generated by the deflating scheme are required to be orthogonal to each other. A weighting matrix is defined to magnify the effect of main features. This overcomes the shortcoming of traditional 2DQPCA that the recognition rate decreases as the number of principal components increases. The numerical results based on real face databases validate that the newly proposed method is robust to noise and performs better than the state-of-the-art 2DQPCA-based algorithms and four prominent deep learning methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
198,667
1902.01342
Self-Tuning Spectral Clustering for Adaptive Tracking Areas Design in 5G Ultra-Dense Networks
In this paper, we address the issue of automatic tracking areas (TAs) planning in fifth generation (5G) ultra-dense networks (UDNs). By invoking handover (HO) attempts and measurement reports (MRs) statistics of a 4G live network, we first introduce a new kernel function mapping HO attempts, MRs and inter-site distances (ISDs) into the so-called similarity weight. The corresponding matrix is then fed to a self-tuning spectral clustering (STSC) algorithm to automatically define the TAs number and borders. After evaluating its performance in terms of the $Q$-metric as well as the silhouette score for various kernel parameters, we show that the clustering scheme yields a significant reduction of tracking area updates and average paging requests per TA, thereby optimizing network resources.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
120,629
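The core routine behind STSC, spectral clustering of a similarity matrix, can be sketched as follows; the self-tuning of kernel widths and of the cluster count is omitted, and the toy similarity matrix stands in for the HO/MR/ISD-based kernel of the paper.

```python
# Spectral clustering: embed nodes via the normalized Laplacian's bottom
# eigenvectors, then run a plain k-means on the embedding.
import numpy as np

def spectral_clusters(W, k, iters=50, seed=0):
    d = W.sum(axis=1)
    L = np.eye(len(W)) - np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5)
    _, vecs = np.linalg.eigh(L)                  # ascending eigenvalues
    U = vecs[:, :k]                              # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)            # plain k-means on rows of U
    centers = U[rng.choice(len(U), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([
            U[labels == j].mean(axis=0) if np.any(labels == j)
            else U[rng.integers(len(U))]         # re-seed an emptied cluster
            for j in range(k)
        ])
    return labels

# Two obvious blocks of mutually similar sites.
W = np.ones((6, 6)) * 0.05
W[:3, :3] = W[3:, 3:] = 1.0
np.fill_diagonal(W, 0)
print(spectral_clusters(W, 2))                   # e.g. [0 0 0 1 1 1]
```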
2410.02156
The why, what, and how of AI-based coding in scientific research
Computer programming (coding) is indispensable for researchers across disciplines, yet it remains challenging to learn and time-consuming to carry out. Generative AI, particularly large language models (LLMs), has the potential to transform coding into intuitive conversations, but best practices and effective workflows are only emerging. We dissect AI-based coding through three key lenses: the nature and role of LLMs in coding (why), six types of coding assistance they provide (what), and a five-step workflow in action with practical implementation strategies (how). Additionally, we address the limitations and future outlook of AI in coding. By offering actionable insights, this framework helps to guide researchers in effectively leveraging AI to enhance coding practices and education, accelerating scientific progress.
false
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
true
494,138
2303.15187
Avatarm: an Avatar With Manipulation Capabilities for the Physical Metaverse
Metaverse is an immersive shared space that remote users can access through virtual and augmented reality interfaces, enabling their avatars to interact with each other and the surroundings. Although digital objects can be manipulated, physical objects cannot be touched, grasped, or moved within the metaverse due to the lack of a suitable interface. This work proposes a solution to overcome this limitation by introducing the concept of a Physical Metaverse enabled by a new interface named "Avatarm". The Avatarm consists of an avatar enhanced with a robotic arm that performs physical manipulation tasks while remaining entirely hidden in the metaverse. The users have the illusion that the avatar is directly manipulating objects without mediation by a robot. The Avatarm is the first step towards a new metaverse, the "Physical Metaverse", where users can physically interact with each other and with the environment.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
354,396
2201.06978
ASOCEM: Automatic Segmentation Of Contaminations in cryo-EM
Particle picking is currently a critical step in the cryo-electron microscopy single particle reconstruction pipeline. Contaminations in the acquired micrographs severely degrade the performance of particle pickers, resulting in many ``non-particles'' in the collected stack of particles. In this paper, we present ASOCEM (Automatic Segmentation Of Contaminations in cryo-EM), an automatic method to detect and segment contaminations, which requires as an input only the approximated particle size. In particular, it does not require any parameter tuning nor manual intervention. Our method is based on the observation that the statistical distribution of contaminated regions is different from that of the rest of the micrograph. This nonrestrictive assumption allows us to automatically detect various types of contaminations, from the carbon edges of the supporting grid to high contrast blobs of different sizes. We demonstrate the efficiency of our algorithm using various experimental data sets containing various types of contaminations. ASOCEM is integrated as part of the KLT picker \cite{ELDAR2020107473} and is available at \url{https://github.com/ShkolniskyLab/kltpicker2}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
275,896
2501.00142
Minimalist Vision with Freeform Pixels
A minimalist vision system uses the smallest number of pixels needed to solve a vision task. While traditional cameras use a large grid of square pixels, a minimalist camera uses freeform pixels that can take on arbitrary shapes to increase their information content. We show that the hardware of a minimalist camera can be modeled as the first layer of a neural network, where the subsequent layers are used for inference. Training the network for any given task yields the shapes of the camera's freeform pixels, each of which is implemented using a photodetector and an optical mask. We have designed minimalist cameras for monitoring indoor spaces (with 8 pixels), measuring room lighting (with 8 pixels), and estimating traffic flow (with 8 pixels). The performance demonstrated by these systems is on par with a traditional camera with orders of magnitude more pixels. Minimalist vision has two major advantages. First, it naturally tends to preserve the privacy of individuals in the scene since the captured information is inadequate for extracting visual details. Second, since the number of measurements made by a minimalist camera is very small, we show that it can be fully self-powered, i.e., function without an external power supply or a battery.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
521,550
1608.05374
DNN-based Speech Synthesis for Indian Languages from ASCII text
Text-to-Speech synthesis in Indian languages has seen a lot of progress over the decade, partly due to the annual Blizzard challenges. These systems assume the text to be written in Devanagari or Dravidian scripts which are nearly phonemic orthography scripts. However, the most common form of computer interaction among Indians is ASCII written transliterated text. Such text is generally noisy with many variations in spelling for the same word. In this paper we evaluate three approaches to synthesize speech from such noisy ASCII text: a naive Uni-Grapheme approach, a Multi-Grapheme approach, and a supervised Grapheme-to-Phoneme (G2P) approach. These methods first convert the ASCII text to a phonetic script, and then learn a Deep Neural Network to synthesize speech from that. We train and test our models on Blizzard Challenge datasets that were transliterated to ASCII using crowdsourcing. Our experiments on Hindi, Tamil and Telugu demonstrate that our models generate speech of competitive quality from ASCII text compared to the speech synthesized from the native scripts. All the accompanying transliterated datasets are released for public access.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
59,967
2405.03352
Salient Object Detection From Arbitrary Modalities
Toward desirable saliency prediction, the types and numbers of inputs for a salient object detection (SOD) algorithm may dynamically change in many real-life applications. However, existing SOD algorithms are mainly designed or trained for one particular type of inputs, failing to be generalized to other types of inputs. Consequently, more types of SOD algorithms need to be prepared in advance for handling different types of inputs, raising huge hardware and research costs. Differently, in this paper, we propose a new type of SOD task, termed Arbitrary Modality SOD (AM SOD). The most prominent characteristics of AM SOD are that the modality types and modality numbers will be arbitrary or dynamically changed. The former means that the inputs to the AM SOD algorithm may be arbitrary modalities such as RGB, depths, or even any combination of them. The latter indicates that the inputs may have arbitrary modality numbers as the input type is changed, e.g. a single-modality RGB image, dual-modality RGB-Depth (RGB-D) images or triple-modality RGB-Depth-Thermal (RGB-D-T) images. Accordingly, a preliminary solution to the above challenges, i.e., a modality switch network (MSN), is proposed in this paper. In particular, a modality switch feature extractor (MSFE) is first designed to extract discriminative features from each modality effectively by introducing some modality indicators, which will generate some weights for modality switching. Subsequently, a dynamic fusion module (DFM) is proposed to adaptively fuse features from a variable number of modalities based on a novel Transformer structure. Finally, a new dataset, named AM-XD, is constructed to facilitate research on AM SOD. Extensive experiments demonstrate that our AM SOD method can effectively cope with changes in the type and number of input modalities for robust salient object detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
452,152
1511.07218
Convex Optimization Based State Estimation against Sparse Integrity Attacks
We consider the problem of robust estimation in the presence of integrity attacks. There are m sensors monitoring the state and p of them are under attack. The malicious measurements collected by the compromised sensors can be manipulated arbitrarily by the attacker. Classical estimators such as the least squares estimator may not provide a reliable estimate under the so-called (p,m)-sparse attack. In this work, rather than studying whether any specific estimator is resilient to the attack, we aim to present some generic sufficient and necessary conditions for robustness by considering a general class of convex optimization based estimators. The sufficient and necessary conditions are shown to be tight, with a trivial gap.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
49,396
2012.02099
Performance Indicators Contributing To Success At The Group And Play-Off Stages Of The 2019 Rugby World Cup
Performance indicators that contributed to success at the group stage and play-off stages of the 2019 Rugby World Cup were analysed using publicly available data obtained from the official tournament website, using both a non-parametric statistical technique, Wilcoxon's signed rank test, and a decision rules technique from machine learning called RIPPER. Our statistical results found that ball carry effectiveness (percentage of ball carries that penetrated the opposition gain-line) and total metres gained (kick metres plus carry metres) contributed to success at both stages of the tournament, and that indicators that contributed to success during the group stages (dominating possession, making more ball carries, making more passes, winning more rucks, and making fewer tackles) did not contribute to success at the play-off stage. Our results using RIPPER found that low ball carries and a low lineout success percentage jointly contributed to losing at the group stage, while winning a low number of rucks and carrying over the gain-line a sufficient number of times contributed to winning at the play-off stage of the tournament. The results emphasise the need for teams to adapt their playing strategies from the group stage to the play-off stage at a tournament in order to be successful.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,645
2211.16756
Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning
Positive-Unlabeled (PU) learning aims to learn a model with rare positive samples and abundant unlabeled samples. Compared with classical binary classification, the task of PU learning is much more challenging due to the existence of many incompletely-annotated data instances. Since only part of the most confident positive samples are available and evidence is not enough to categorize the remaining samples, many of these unlabeled data may also be positive samples. Research on this topic is particularly useful and essential to many real-world tasks which demand very expensive labelling costs. For example, the recognition tasks in disease diagnosis, recommendation systems and satellite image recognition may only have few positive samples that can be annotated by the experts. Existing methods mainly ignore the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data and not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly-used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness of samples in the dataset and the proper learning strategies for easy and hard data. By considering this fact, we propose first splitting the unlabeled dataset with an early-stop strategy. The samples that have inconsistent predictions between the temporary and base model are considered hard samples. Then the model utilizes a noise-tolerant Jensen-Shannon divergence loss for easy data, and a dual-source consistency regularization for hard data which includes a cross-consistency between student and base model for low-level features and self-consistency for high-level features and predictions, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
333,738
1411.3550
Investigating Rumor Propagation with TwitterTrails
Social media have become part of modern news reporting, used by journalists to spread information and find sources, or as a news source by individuals. The quest for prominence and recognition on social media sites like Twitter can sometimes eclipse accuracy and lead to the spread of false information. As a way to study and react to this trend, we introduce {\sc TwitterTrails}, an interactive, web-based tool ({\tt twittertrails.com}) that allows users to investigate the origin and propagation characteristics of a rumor and its refutation, if any, on Twitter. Visualizations of burst activity, propagation timeline, retweet and co-retweeted networks help its users trace the spread of a story. Within minutes {\sc TwitterTrails} will collect relevant tweets and automatically answer several important questions regarding a rumor: its originator, burst characteristics, propagators and main actors according to the audience. In addition, it will compute and report the rumor's level of visibility and, as an example of the power of crowdsourcing, the audience's skepticism towards it, which correlates with the rumor's credibility. We envision {\sc TwitterTrails} as a valuable tool for individual use, but especially for amateur and professional journalists investigating recent and breaking stories. Further, its expanding collection of investigated rumors can be used to answer questions regarding the amount and success of misinformation on Twitter.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
37,510
1904.03259
Is 'Unsupervised Learning' a Misconceived Term?
Is all of machine learning supervised to some degree? The field of machine learning has traditionally been categorized pedagogically into $supervised~vs~unsupervised~learning$; where supervised learning has typically referred to learning from labeled data, while unsupervised learning has typically referred to learning from unlabeled data. In this paper, we assert that all machine learning is in fact supervised to some degree, and that the scope of supervision is necessarily commensurate to the scope of learning potential. In particular, we argue that clustering algorithms such as k-means, and dimensionality reduction algorithms such as principal component analysis, variational autoencoders, and deep belief networks are each internally supervised by the data itself to learn their respective representations of its features. Furthermore, these algorithms are not capable of external inference until their respective outputs (clusters, principal components, or representation codes) have been identified and externally labeled in effect. As such, they do not suffice as examples of unsupervised learning. We propose that the categorization `supervised vs unsupervised learning' be dispensed with, and instead, learning algorithms be categorized as either $internally~or~externally~supervised$ (or both). We believe this change in perspective will yield new fundamental insights into the structure and character of data and of learning algorithms.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
126,649
2402.09240
Switch EMA: A Free Lunch for Better Flatness and Sharpness
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization that learns flat optima for better generalization, at no extra cost, in deep neural network (DNN) optimization. Despite achieving better flatness, existing WA methods may yield worse final performance or require extra test-time computation. This work unveils the full potential of EMA with a single line of modification, i.e., switching the EMA parameters to the original model after each epoch, dubbed Switch EMA (SEMA). From both theoretical and empirical perspectives, we demonstrate that SEMA helps DNNs reach generalization optima that strike a better trade-off between flatness and sharpness. To verify the effectiveness of SEMA, we conduct comparison experiments on discriminative, generative, and regression tasks over vision and language datasets, including image classification, self-supervised learning, object detection and segmentation, image generation, video prediction, attribute regression, and language modeling. Comprehensive results with popular optimizers and networks show that SEMA is a free lunch for DNN training, improving performance and accelerating convergence.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
429,429
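A minimal sketch of the SEMA modification from the abstract above, assuming a generic PyTorch-style training loop; the decay value and helper names are illustrative, not the paper's code:

import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # Standard EMA step: ema <- decay * ema + (1 - decay) * current weights.
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

def train_with_sema(model, loader, optimizer, loss_fn, epochs):
    ema_model = copy.deepcopy(model)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            ema_update(ema_model, model)
        # The one-line SEMA modification: switch the averaged weights back
        # into the training model at the end of each epoch.
        model.load_state_dict(ema_model.state_dict())
    return model

The only change relative to standard EMA training is the load_state_dict call at the end of each epoch.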
2205.15695
On Preemption and Learning in Stochastic Scheduling
We study single-machine scheduling of jobs, each belonging to a job type that determines its duration distribution. We start by analyzing the scenario where the type characteristics are known and then move to two learning scenarios where the types are unknown: non-preemptive problems, where each started job must be completed before moving to another job; and preemptive problems, where job execution can be paused in favor of moving to a different job. In both cases, we design algorithms that achieve sublinear excess cost relative to the performance achievable with known types, and we prove lower bounds for the non-preemptive case. Notably, we demonstrate, both theoretically and through simulations, how preemptive algorithms can greatly outperform non-preemptive ones when the durations of different job types are far from one another, a phenomenon that does not occur when the type durations are known.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
299,832
1206.5259
A Tractable Approach to Finding Closest Truncated-commute-time Neighbors in Large Graphs
Recently there has been much interest in graph-based learning, with applications in collaborative filtering for recommender networks, link prediction for social networks, and fraud detection. These networks can consist of millions of entities, so it is very important to develop highly efficient techniques. We are especially interested in accelerating random-walk approaches for computing proximity measures on such graphs; these measures have been shown to perform well empirically (Liben-Nowell & Kleinberg, 2003; Brand, 2005). We introduce a truncated variation on a well-known measure, namely commute times arising from random walks on graphs. We present a novel algorithm to compute all interesting pairs of approximate nearest neighbors in truncated commute time, without computing the measure between all pairs. We show results on both simulated and real graphs of up to 100,000 entities, which indicate near-linear scaling in computation time.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
16,797
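For context on the measure introduced above: the commute time between nodes $i$ and $j$ is the standard quantity $c(i,j) = h(i,j) + h(j,i)$, where $h(i,j)$ is the expected number of steps for a random walk started at $i$ to first reach $j$. The truncated variant caps walks at $T$ steps; one common formulation (an assumption here, the paper's exact definition may differ) is
\[
h^{T}(i,j) \;=\; \sum_{t=1}^{T} t\,\Pr(\text{first arrival at } j \text{ occurs at step } t) \;+\; T\,\Pr(\text{no arrival within } T \text{ steps}),
\]
with the truncated commute time $c^{T}(i,j) = h^{T}(i,j) + h^{T}(j,i)$.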
cs/0407029
Static versus Dynamic Arbitrage Bounds on Multivariate Option Prices
We compare static arbitrage price bounds on basket calls, i.e. bounds that only involve buy-and-hold trading strategies, with the price range obtained within a multivariate generalization of the Black-Scholes model. While there is no gap between these two sets of prices in the univariate case, we observe here that, contrary to our intuition about model risk for at-the-money calls, there is a somewhat large gap between model prices and static arbitrage prices, hence a similarly large set of prices at which a multivariate Black-Scholes model cannot be calibrated but where no conclusion can be drawn on the presence or absence of a static arbitrage opportunity.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
538,267
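As a concrete example of a buy-and-hold bound of the kind compared above (a standard textbook super-replication, not necessarily the exact bound used in the paper): for weights $w_i \ge 0$ and any strikes $K_i$ with $\sum_i w_i K_i = K$,
\[
\Big(\sum_i w_i S_T^i - K\Big)^{+} \;\le\; \sum_i w_i \big(S_T^i - K_i\big)^{+},
\]
so holding $w_i$ vanilla calls on each asset statically super-replicates the basket call, and the basket call price is bounded above by $\inf_{\sum_i w_i K_i = K} \sum_i w_i C_i(K_i)$, where $C_i(K_i)$ is the market price of the call on asset $i$ with strike $K_i$.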
1806.03378
Cultural Investment and Urban Socio-Economic Development: A Geo-Social Network Approach
Being able to assess the impact of government-led investment on socio-economic indicators in cities has long been an important target of urban planning. However, due to the lack of large-scale data with a fine spatio-temporal resolution, planners have been limited in how they can track the impact and measure the effectiveness of cultural investment in small urban areas. Taking advantage of nearly 4 million transition records collected over three years in London from a popular location-based social network service, Foursquare, we study how the socio-economic impact of government cultural expenditure can be detected and predicted. Our analysis shows that network indicators such as the average clustering coefficient or centrality can be exploited to estimate the likelihood of local growth in response to cultural investment. We subsequently integrate these features into supervised learning models to infer socio-economic deprivation changes for London's neighbourhoods. This research demonstrates how geo-social and mobile services can be used as a proxy to track and predict socio-economic deprivation changes as governments invest in developing urban areas, and thus provides evidence and suggestions for further policy-making and investment optimisation.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
99,981
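For reference, the network indicators named in the abstract above have standard definitions. For an undirected graph, the local clustering coefficient of a node $i$ with degree $k_i$ and $e_i$ edges among its neighbours is
\[
C_i = \frac{2\,e_i}{k_i\,(k_i - 1)},
\]
and the average clustering coefficient is the mean of $C_i$ over all nodes; the paper's networks, built from Foursquare transitions, may use weighted or directed variants of these quantities.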
2310.10659
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications. A backdoor attack targets a vulnerability of deep learning models: hidden backdoors are activated by triggers embedded by the attacker, causing the model to output malicious predictions that may not align with the intended output for a given input. In this work, we propose a novel black-box backdoor attack based on machine unlearning. The attacker first augments the training set with carefully designed samples, including poison and mitigation data, to train a `benign' model. Then, the attacker posts unlearning requests for the mitigation samples to remove their influence on the model, gradually activating the hidden backdoor. Because the backdoor is implanted during the iterative unlearning process, the attack significantly increases the computational overhead of existing defense methods for backdoor detection or mitigation. To address this new security threat, we propose two methods for detecting or mitigating such malicious unlearning requests. We conduct experiments in both exact unlearning and approximate unlearning (i.e., SISA) settings. Experimental results indicate that: 1) our attack approach can successfully implant a backdoor into the model, and sharding increases the difficulty of the attack; 2) our detection algorithms are effective in identifying the mitigation samples, while sharding reduces their effectiveness.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
400,331
2410.03981
A Survey on LLM-based Code Generation for Low-Resource and Domain-Specific Programming Languages
Large Language Models (LLMs) have shown impressive capabilities in code generation for popular programming languages. However, their performance on Low-Resource Programming Languages (LRPLs) and Domain-Specific Languages (DSLs) remains a significant challenge, affecting millions of developers (3.5 million users in Rust alone) who cannot fully utilize LLM capabilities. LRPLs and DSLs encounter unique obstacles, including data scarcity and, for DSLs, specialized syntax that is poorly represented in general-purpose datasets. Addressing these challenges is crucial, as LRPLs and DSLs enhance development efficiency in specialized domains, such as finance and science. While several surveys discuss LLMs in software engineering, none focus specifically on the challenges and opportunities associated with LRPLs and DSLs. Our survey fills this gap by systematically reviewing the current state, methodologies, and challenges in leveraging LLMs for code generation in these languages. We filtered 111 papers from over 27,000 published studies between 2020 and 2024 to evaluate the capabilities and limitations of LLMs in LRPLs and DSLs. We report the LLMs used, benchmarks, and metrics for evaluation, strategies for enhancing performance, and methods for dataset collection and curation. We identified four main evaluation techniques and several metrics for assessing code generation in LRPLs and DSLs. Our analysis categorizes improvement methods into six groups and summarizes novel architectures proposed by researchers. Despite various techniques and metrics, a standard approach and benchmark dataset for evaluating code generation in LRPLs and DSLs are lacking. This survey serves as a resource for researchers and practitioners at the intersection of LLMs, software engineering, and specialized programming languages, laying the groundwork for future advancements in code generation for LRPLs and DSLs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
495,074
2303.04614
Densely Connected $G$-invariant Deep Neural Networks with Signed Permutation Representations
We introduce and investigate, for finite groups $G$, $G$-invariant deep neural network ($G$-DNN) architectures with ReLU activation that are densely connected, i.e., include all possible skip connections. In contrast to other $G$-invariant architectures in the literature, the preactivations of the $G$-DNNs presented here are able to transform by \emph{signed} permutation representations (signed perm-reps) of $G$. Moreover, the individual layers of the $G$-DNNs are not required to be $G$-equivariant; instead, the preactivations are constrained to be $G$-equivariant functions of the network input in a way that couples weights across all layers. The result is a richer family of $G$-invariant architectures never seen previously. We derive an efficient implementation of $G$-DNNs after a reparameterization of weights, as well as necessary and sufficient conditions for an architecture to be ``admissible'', i.e., nondegenerate and inequivalent to smaller architectures. We include code that allows a user to build a $G$-DNN interactively layer-by-layer, with the final architecture guaranteed to be admissible. We show that there are far more admissible $G$-DNN architectures than those accessible with the ``concatenated ReLU'' activation function from the literature. Finally, we apply $G$-DNNs to two example problems: (1) multiplication in $\{-1, 1\}$ (with theoretical guarantees) and (2) 3D object classification, finding that the inclusion of signed perm-reps significantly boosts predictive performance compared to baselines with only ordinary (i.e., unsigned) perm-reps.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
350,156
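For readers unfamiliar with the key term above: a signed permutation matrix is a permutation matrix whose nonzero entries may be $+1$ or $-1$, for example
\[
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
\]
which swaps the two coordinates and negates one of them. A signed perm-rep assigns such a matrix to each element of $G$; the paper's formal definition may impose additional structure.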
1801.04929
Generalizing, Decoding, and Optimizing Support Vector Machine Classification
The classification of complex data usually requires the composition of processing steps. Here, a major challenge is the selection of optimal algorithms for preprocessing and classification (including parameterizations). Nowadays, parts of the optimization process are automated, but expert knowledge and manual work are still required. We present three steps to address this process and ease the optimization. Namely, we take a theoretical view on classical classifiers, provide an approach to interpret the classifier together with the preprocessing, and integrate both into one framework which enables a semi-automatic optimization of the processing chain and which interfaces numerous algorithms.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
88,363
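For reference, the classical kernel SVM that the abstract's theoretical view starts from classifies via the standard dual decision function (textbook notation, not specific to this paper):
\[
f(x) = \operatorname{sign}\Big(\sum_{i} \alpha_i\, y_i\, k(x_i, x) + b\Big),
\]
where the $\alpha_i \ge 0$ are learned dual coefficients, $y_i \in \{-1,+1\}$ are training labels, and $k$ is the kernel; interpreting $f$ jointly with the preprocessing is the decoding step the abstract describes.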
2110.10018
Dynamic pricing and assortment under a contextual MNL demand
We consider dynamic multi-product pricing and assortment problems under an unknown demand over $T$ periods, where in each period the seller decides on the price of each product, or on the assortment of products to offer, for a customer who chooses according to an unknown Multinomial Logit Model (MNL). Such problems arise in many applications, including online retail and advertising. We propose a randomized dynamic pricing policy based on a variant of the Online Newton Step (ONS) algorithm that achieves an $O(d\sqrt{T}\log(T))$ regret guarantee under an adversarial arrival model. We also present a new optimistic algorithm for the adversarial MNL contextual bandits problem, which achieves a better dependence than state-of-the-art algorithms on a problem-dependent constant $\kappa_2$ (which can be exponentially small). Our regret upper bound scales as $\tilde{O}(d\sqrt{\kappa_2 T}+ \log(T)/\kappa_2)$, which is stronger than the existing $\tilde{O}(d\sqrt{T}/\kappa_2)$ guarantees.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
262,008
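The demand model assumed above has a standard form: under an MNL with product utilities $u_i$ (for instance $u_i = x_i^{\top}\theta - \beta p_i$ in a contextual pricing setting, an illustrative parameterization rather than the paper's exact one) and an outside option of utility zero, a customer offered assortment $S$ purchases product $i \in S$ with probability
\[
\Pr(i \mid S) = \frac{e^{u_i}}{1 + \sum_{j \in S} e^{u_j}}.
\]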
1704.01250
Relative Learning from Web Images for Content-adaptive Enhancement
Personalized and content-adaptive image enhancement can find many applications in the age of social media and mobile computing. This paper presents a relative-learning-based approach, which, unlike previous methods, does not require matching original and enhanced images for training. This allows the use of massive online photo collections to train a ranking model for improved enhancement. We first propose a multi-level ranking model, which is learned from only relatively-labeled inputs that are automatically crawled. Then we design a novel parameter sampling scheme under this model to generate the desired enhancement parameters for a new image. For evaluation, we first verify the effectiveness and the generalization abilities of our approach, using images that have been enhanced/labeled by experts. Then we carry out subjective tests, which show that users prefer images enhanced by our approach over other existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,223
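A multi-level ranking model of the kind described above is typically trained with a pairwise objective; one generic choice (a standard formulation, not necessarily the paper's exact loss) is the hinge ranking loss over a relatively-labeled pair $(x^{+}, x^{-})$, where $x^{+}$ is the preferred enhancement:
\[
\ell(x^{+}, x^{-}) = \max\big(0,\; 1 - f(x^{+}) + f(x^{-})\big),
\]
with $f$ the learned scoring function that ranks enhancement parameters.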
1812.06397
Connecting Spectral Clustering to Maximum Margins and Level Sets
We study the connections between spectral clustering and the problems of maximum margin clustering, and estimation of the components of level sets of a density function. Specifically, we obtain bounds on the eigenvectors of graph Laplacian matrices in terms of the between-cluster separation and within-cluster connectivity. These bounds ensure that the spectral clustering solution converges to the maximum margin clustering solution as the scaling parameter is reduced towards zero. The sensitivity of maximum margin clustering solutions to outlying points is well known, but can be mitigated by first removing such outliers, and applying maximum margin clustering to the remaining points. If outliers are identified using an estimate of the underlying probability density, then the remaining points may be seen as an estimate of a level set of this density function. We show that such an approach can be used to consistently estimate the components of the level sets of a density function under very mild assumptions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
116,606
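The objects analyzed above are standard: given a symmetric similarity matrix $W$ with degree matrix $D = \operatorname{diag}(W\mathbf{1})$, the unnormalized and symmetric normalized graph Laplacians are
\[
L = D - W, \qquad L_{\mathrm{sym}} = I - D^{-1/2} W D^{-1/2},
\]
and spectral clustering embeds the points using the eigenvectors associated with the smallest eigenvalues of $L$ (or $L_{\mathrm{sym}}$) before running k-means on the embedded points.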
2412.01410
CellSeg1: Robust Cell Segmentation with One Training Image
Recent trends in cell segmentation have shifted towards universal models to handle diverse cell morphologies and imaging modalities. However, for continuously emerging cell types and imaging techniques, these models still require hundreds or thousands of annotated cells for fine-tuning. We introduce CellSeg1, a practical solution for segmenting cells of arbitrary morphology and modality with only a few dozen cell annotations in a single image. By adopting Low-Rank Adaptation of the Segment Anything Model (SAM), we achieve robust cell segmentation. Tested on 19 diverse cell datasets, CellSeg1 trained on a single image achieved 0.81 average mAP at 0.5 IoU, performing comparably to existing models trained on over 500 images. It also demonstrated superior generalization in cross-dataset tests on TissueNet. We found that high-quality annotation of a few dozen densely packed cells of varied sizes is key to effective segmentation. CellSeg1 provides an efficient solution for cell segmentation with minimal annotation effort.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,103
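The adaptation scheme named above, Low-Rank Adaptation (LoRA), freezes a pretrained weight matrix $W_0 \in \mathbb{R}^{d \times k}$ and learns only a low-rank update; in the standard formulation
\[
W = W_0 + \frac{\alpha}{r}\, B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),
\]
with only $A$ and $B$ trained. How CellSeg1 attaches these low-rank updates inside SAM is detailed in the paper itself.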
2408.10672
Neural Exploratory Landscape Analysis
Recent research in Meta-Black-Box Optimization (MetaBBO) has shown that meta-trained neural networks can effectively guide the design of black-box optimizers, significantly reducing the need for expert tuning and delivering robust performance across complex problem distributions. Despite this success, a paradox remains: MetaBBO methods still rely on human-crafted Exploratory Landscape Analysis features to inform the meta-level agent about low-level optimization progress. To address this gap, this paper proposes Neural Exploratory Landscape Analysis (NeurELA), a novel framework that dynamically profiles landscape features through a two-stage, attention-based neural network, executed in an entirely end-to-end fashion. NeurELA is pre-trained over a variety of MetaBBO algorithms using a multi-task neuroevolution strategy. Extensive experiments show that NeurELA achieves consistently superior performance when integrated into different and even unseen MetaBBO tasks and can be efficiently fine-tuned for a further performance boost. This advancement marks a pivotal step toward making MetaBBO algorithms more autonomous and broadly applicable. The source code of NeurELA can be accessed at https://anonymous.4open.science/r/Neur-ELA-303C.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
481,972
2202.05714
Modeling Reservoir Release Using Pseudo-Prospective Learning and Physical Simulations to Predict Water Temperature
This paper proposes a new data-driven method for predicting water temperature in stream networks with reservoirs. The water flows released from reservoirs greatly affect the water temperature of downstream river segments. However, information about released water flows is often unavailable for many reservoirs, which makes it difficult for data-driven models to capture their impact on downstream river segments. In this paper, we first build a state-aware graph model to represent the interactions amongst streams and reservoirs, and then propose a parallel learning structure to extract the reservoir release information and use it to improve the prediction. In particular, for reservoirs with no available release information, we mimic the water managers' release decision process through a pseudo-prospective learning method, which infers the release information from anticipated water temperature dynamics. For reservoirs with release information, we leverage a physics-based model to simulate the water release temperature and transfer this information to guide the learning process for other reservoirs. The evaluation on the Delaware River Basin shows that the proposed method brings over a 10\% accuracy improvement relative to existing data-driven models for stream temperature prediction when release data are not available for any of the reservoirs. The performance is further improved after we incorporate the release data and physical simulations for a subset of reservoirs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
279,961