Dataset schema (one record per row):

  id                 string, 9–16 characters
  title              string, 4–278 characters
  abstract           string, 3–4.08k characters
  cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR  cs.LG  cs.RO  cs.CL
  cs.IT  cs.SY  cs.CV  cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                     bool, 2 classes each (multi-label category flags)
  __index_level_0__  int64, 0–541k
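The 18 boolean columns encode a multi-label target: a record's categories are simply the columns that are True. A minimal sketch of recovering the label list from a row, using the first record below as the sample; representing rows as plain dicts is an assumption for illustration (the dataset could equally be loaded with pandas or the `datasets` library):

```python
# One-hot category flags, in the column order given by the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def labels_of(row: dict) -> list[str]:
    """Return the active category labels for one dataset row."""
    return [col for col in CATEGORY_COLUMNS if row.get(col)]

# Sample row mirroring the first record below; omitted flags default to False.
row = {
    "id": "1110.1209",
    "title": "Audio Watermarking with Error Correction",
    "cs.IT": True, "cs.CR": True, "Other": True,
}
print(labels_of(row))  # ['cs.IT', 'cs.CR', 'Other']
```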
1110.1209
Audio Watermarking with Error Correction
In recent times, communication through the internet has tremendously facilitated the distribution of multimedia data. Although this is indubitably a boon, one of its repercussions is that it has also given impetus to the notorious issue of online music piracy. Unethical attempts can also be made to deliberately alter such copyrighted data and thus misuse it. Copyright violation by means of unauthorized distribution, as well as unauthorized tampering of copyrighted audio data, is an important technological and research issue. Audio watermarking has been proposed as a solution to tackle this issue. The main purpose of audio watermarking is to protect against possible threats to the audio data; in case of copyright violation or unauthorized tampering, the authenticity of such data can be disputed by virtue of audio watermarking.
Labels: cs.IT, cs.CR, Other | __index_level_0__: 12,509
2104.08480
Learning to Share by Masking the Non-shared for Multi-domain Sentiment Classification
Multi-domain sentiment classification deals with the scenario where labeled data exists for multiple domains but is insufficient for training effective sentiment classifiers that work across domains. Thus, fully exploiting sentiment knowledge shared across domains is crucial for real-world applications. While many existing works try to extract domain-invariant features in high-dimensional space, such models fail to explicitly distinguish between shared and private features at the text level, which to some extent lacks interpretability. Based on the assumption that removing domain-related tokens from texts would help improve their domain-invariance, we instead first transform original sentences to be domain-agnostic. To this end, we propose the BertMasker network, which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations. Empirical experiments on a well-adopted multi-domain sentiment classification dataset demonstrate the effectiveness of our proposed model in both multi-domain and cross-domain settings, increasing accuracy by 0.94% and 1.8% respectively. Further analysis of masking proves that removing those domain-related and sentiment-irrelevant tokens decreases texts' domain distinction, degrading the performance of a BERT-based domain classifier by over 12%.
Labels: cs.CL | __index_level_0__: 230,820
1301.6150
Polar Codes For Broadcast Channels
Polar codes are introduced for discrete memoryless broadcast channels. For $m$-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from $m$ independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: i) Cover's superposition codes; ii) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multi-user setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is $O(n \log n)$ where $n$ is the block length. In addition, polar code sequences obtain a stretched-exponential decay $O(2^{-n^{\beta}})$ of the average block error probability, where $0 < \beta < 0.5$.
Labels: cs.IT | __index_level_0__: 21,388
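The encoder in the abstract above is built on Arıkan's polarization transform $x = u F^{\otimes n}$ with kernel $F = \begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$ over GF(2). A toy sketch of that transform follows; it illustrates only the single-user $O(n \log n)$ building block, not the broadcast-channel alignment the paper develops:

```python
def polar_transform(u: list[int]) -> list[int]:
    """Apply the Arikan polar transform x = u F^{(x)n} over GF(2).

    len(u) must be a power of two; the recursion mirrors the O(n log n)
    butterfly structure mentioned in the abstract.
    """
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    a = polar_transform(u[:half])
    b = polar_transform(u[half:])
    # Top half: elementwise XOR of the two sub-transforms; bottom half: copy.
    return [x ^ y for x, y in zip(a, b)] + b


# F^{(x)n} is an involution over GF(2): applying it twice is the identity.
u = [1, 0, 1, 1, 0, 0, 1, 0]
assert polar_transform(polar_transform(u)) == u
```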
1702.07662
A Network Epidemic Model for Online Community Commissioning Data
A statistical model assuming a preferential attachment network, which is generated by adding nodes sequentially according to a few simple rules, usually describes real-life networks better than a model assuming, for example, a Bernoulli random graph, in which any two nodes have the same probability of being connected. Therefore, to study the propagation of "infection" across a social network, we propose a network epidemic model by combining a stochastic epidemic model and a preferential attachment model. A simulation study based on the subsequent Markov Chain Monte Carlo algorithm reveals an identifiability issue with the model parameters. Finally, the network epidemic model is applied to a set of online commissioning data.
Labels: cs.SI | __index_level_0__: 68,820
2002.01625
Illumination adaptive person reid based on teacher-student model and adversarial training
Most existing works in Person Re-identification (ReID) focus on settings where illumination either is kept the same or has very little fluctuation. However, changes in the illumination degree may affect the robustness of a ReID algorithm significantly. To address this problem, we propose a Two-Stream Network that can separate ReID features from lighting features to enhance ReID performance. Its innovations are threefold: (1) A discriminative entropy loss to ensure the ReID features contain no lighting information. (2) A ReID Teacher model trained on images under "neutral" lighting conditions to guide ReID classification. (3) An illumination Teacher model trained on the differences between the illumination-adjusted and original images to guide illumination classification. We construct two augmented datasets by synthetically changing a set of predefined lighting conditions in two of the most popular ReID benchmarks: Market1501 and DukeMTMC-ReID. Experiments demonstrate that our algorithm outperforms other state-of-the-art works and is particularly potent in handling images under extremely low light.
Labels: cs.CV | __index_level_0__: 162,702
2207.11518
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Teacher-free online Knowledge Distillation (KD) aims to train an ensemble of multiple student models collaboratively and distill knowledge from each other. Although existing online KD methods achieve desirable performance, they often focus on class probabilities as the core knowledge type, ignoring the valuable feature representational information. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner. Our MCL can aggregate cross-network embedding information and maximize the lower bound to the mutual information between two networks. This enables each network to learn extra contrastive knowledge from others, leading to better feature representations and thus improving the performance of visual recognition tasks. Beyond the final layer, we extend MCL to intermediate layers and perform an adaptive layer-matching mechanism trained by meta-optimization. Experiments on image classification and transfer learning to visual recognition tasks show that layer-wise MCL can lead to consistent performance gains against state-of-the-art online KD approaches. This superiority demonstrates that layer-wise MCL can guide the network to generate better feature representations. Our code is publicly available at https://github.com/winycg/L-MCL.
Labels: cs.AI, cs.CV | __index_level_0__: 309,671
2008.01604
Adaptive Physics-Informed Neural Networks for Markov-Chain Monte Carlo
In this paper, we propose Adaptive Physics-Informed Neural Networks (APINNs) for accurate and efficient simulation-free Bayesian parameter estimation via Markov-Chain Monte Carlo (MCMC). We specifically focus on a class of parameter estimation problems for which computing the likelihood function requires solving a PDE. The proposed method consists of: (1) constructing an offline PINN-UQ model as an approximation to the forward model; and (2) refining this approximate model on the fly using samples generated by the MCMC sampler, guaranteeing that the approximation error always remains below a user-defined residual error threshold. We numerically demonstrate the performance of the proposed APINN method on a parameter estimation problem for a system governed by the Poisson equation.
Labels: cs.LG, Other | __index_level_0__: 190,387
2211.04610
PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping
Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground-truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis. This conventional training causes overfitting for both the discriminators and the generator, leading to periodicity artifacts in the generated audio signal. In this work, we present PhaseAug, the first differentiable augmentation for speech synthesis, which rotates the phase of each frequency bin to simulate one-to-many mapping. With our proposed method, we outperform baselines without any architecture modification. Code and audio samples will be available at https://github.com/mindslab-ai/phaseaug.
Labels: cs.SD, cs.AI | __index_level_0__: 329,296
2004.11207
Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
The great success of Transformer-based models benefits from the powerful multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input. Prior work strives to attribute model decisions to individual input features with different saliency measures, but fails to explain how these input features interact with each other to reach predictions. In this paper, we propose a self-attention attribution method to interpret the information interactions inside Transformer. We take BERT as an example to conduct extensive studies. First, we apply self-attention attribution to identify the important attention heads, while the others can be pruned with marginal performance degradation. Furthermore, we extract the most salient dependencies in each layer to construct an attribution tree, which reveals the hierarchical interactions inside Transformer. Finally, we show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
Labels: cs.CL | __index_level_0__: 173,850
2412.01522
InfinityDrive: Breaking Time Limits in Driving World Models
Autonomous driving systems struggle with complex scenarios due to limited access to diverse, extensive, and out-of-distribution driving data, which is critical for safe navigation. World models offer a promising solution to this challenge; however, current driving world models are constrained by short time windows and limited scenario diversity. To bridge this gap, we introduce InfinityDrive, the first driving world model with exceptional generalization capabilities, delivering state-of-the-art performance in high fidelity, consistency, and diversity with minute-scale video generation. InfinityDrive introduces an efficient spatio-temporal co-modeling module paired with an extended temporal training strategy, enabling high-resolution (576$\times$1024) video generation with consistent spatial and temporal coherence. By incorporating memory injection and retention mechanisms alongside an adaptive memory curve loss to minimize cumulative errors, InfinityDrive achieves consistent video generation lasting over 1500 frames (more than 2 minutes). Comprehensive experiments on multiple datasets validate InfinityDrive's ability to generate complex and varied scenarios, highlighting its potential as a next-generation driving world model built for the evolving demands of autonomous driving. Our project homepage: https://metadrivescape.github.io/papers_project/InfinityDrive/page.html
Labels: cs.CV | __index_level_0__: 513,150
2307.02740
Dense Retrieval Adaptation using Target Domain Description
In information retrieval (IR), domain adaptation is the process of adapting a retrieval model to a new domain whose data distribution is different from the source domain. Existing methods in this area focus on unsupervised domain adaptation where they have access to the target document collection or supervised (often few-shot) domain adaptation where they additionally have access to (limited) labeled data in the target domain. There also exists research on improving zero-shot performance of retrieval models with no adaptation. This paper introduces a new category of domain adaptation in IR that is as-yet unexplored. Here, similar to the zero-shot setting, we assume the retrieval model does not have access to the target document collection. In contrast, it does have access to a brief textual description that explains the target domain. We define a taxonomy of domain attributes in retrieval tasks to understand different properties of a source domain that can be adapted to a target domain. We introduce a novel automatic data construction pipeline that produces a synthetic document collection, query set, and pseudo relevance labels, given a textual domain description. Extensive experiments on five diverse target domains show that adapting dense retrieval models using the constructed synthetic data leads to effective retrieval performance on the target domain.
Labels: cs.IR, cs.CL | __index_level_0__: 377,795
1504.04133
On the Error Performance of Systematic Polar Codes
Systematic polar codes are shown to outperform non-systematic polar codes in terms of bit-error-rate (BER) performance. However, the theoretical mechanism behind the better performance of systematic polar codes is not yet clear. In this paper, we set out the theoretical framework to analyze the performance of systematic polar codes. The exact evaluation of the BER of systematic polar codes conditioned on the BER of non-systematic polar codes involves $2^{NR}$ terms, where $N$ is the code block length and $R$ is the code rate, resulting in a prohibitive number of computations for large block lengths. By analyzing the polar code construction and the successive-cancellation (SC) decoding process, we use a statistical model to quantify the advantage of systematic polar codes over non-systematic polar codes, referred to in this paper as the systematic gain. A composite model is proposed to approximate the dominant error cases in the SC decoding process. This composite model divides the errors into independent regions and coupled regions, controlled by a coupling coefficient. Based on this model, the systematic gain can be conveniently calculated. Numerical simulations show that the proposed model closely approximates the systematic gain.
Labels: cs.IT | __index_level_0__: 42,110
2409.00866
Vehicle-to-Everything (V2X) Communication: A Roadside Unit for Adaptive Intersection Control of Autonomous Electric Vehicles
Recent advances in autonomous vehicle technologies and cellular network speeds motivate developments in vehicle-to-everything (V2X) communications. Enhanced road safety features and improved fuel efficiency are some of the motivations behind V2X for future transportation systems. Adaptive intersection control systems have considerable potential to achieve these goals by minimizing idle times and predicting short-term future traffic conditions. Integrating V2X into traffic management systems introduces the infrastructure necessary to make roads safer for all users and initiates the shift towards more intelligent and connected cities. To demonstrate our solution, we implement both a simulated and real-world representation of a 4-way intersection and crosswalk scenario with 2 self-driving electric vehicles, a roadside unit (RSU), and traffic light. Our architecture minimizes fuel consumption through intersections by reducing acceleration and braking by up to 75.35%. We implement a cost-effective solution to intelligent and connected intersection control to serve as a proof-of-concept model suitable as the basis for continued research and development. Code for this project is available at https://github.com/MMachado05/REU-2024.
Labels: cs.RO | __index_level_0__: 485,107
1911.11120
KerGM: Kernelized Graph Matching
Graph matching plays a central role in such fields as computer vision, pattern recognition, and bioinformatics. Graph matching problems can be cast as two types of quadratic assignment problems (QAPs): Koopmans-Beckmann's QAP or Lawler's QAP. In our paper, we provide a unifying view for these two problems by introducing new rules for array operations in Hilbert spaces. Consequently, Lawler's QAP can be considered as the Koopmans-Beckmann's alignment between two arrays in reproducing kernel Hilbert spaces (RKHS), making it possible to efficiently solve the problem without computing a huge affinity matrix. Furthermore, we develop the entropy-regularized Frank-Wolfe (EnFW) algorithm for optimizing QAPs, which has the same convergence rate as the original FW algorithm while dramatically reducing the computational burden for each outer iteration. We conduct extensive experiments to evaluate our approach, and show that our algorithm significantly outperforms the state-of-the-art in both matching accuracy and scalability.
Labels: cs.LG | __index_level_0__: 155,028
2001.07578
Adequate and fair explanations
Explaining sophisticated machine-learning based systems is an important issue at the foundations of AI. Recent efforts have shown various methods for providing explanations. These approaches can be broadly divided into two schools: those that provide a local and human-interpretable approximation of a machine learning algorithm, and logical approaches that exactly characterise one aspect of the decision. In this paper we focus upon the second school of exact explanations with a rigorous logical foundation. There is an epistemological problem with these exact methods. While they can furnish complete explanations, such explanations may be too complex for humans to understand or even to write down in human-readable form. Interpretability requires epistemically accessible explanations, explanations humans can grasp. Yet what counts as a sufficiently complete epistemically accessible explanation still needs clarification. We do this here in terms of counterfactuals, following [Wachter et al., 2017]. With counterfactual explanations, many of the assumptions needed to provide a complete explanation are left implicit. To do so, counterfactual explanations exploit the properties of a particular data point or sample, and as such are local as well as partial explanations. We explore how to move from local partial explanations to what we call complete local explanations and then to global ones. But to preserve accessibility we argue for the need for partiality. This partiality makes it possible to hide explicit biases present in the algorithm that may be injurious or unfair. We investigate how easy it is to uncover these biases in providing complete and fair explanations by exploiting the structure of the set of counterfactuals providing a complete local explanation.
Labels: cs.AI | __index_level_0__: 161,074
2104.08667
SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations
Next-generation task-oriented dialog systems need to understand conversational contexts with their perceived surroundings to effectively help users in the real-world multimodal environment. Existing task-oriented dialog datasets aimed towards virtual assistance fall short and do not situate the dialog in the user's multimodal context. To overcome this, we present a new dataset for Situated and Interactive Multimodal Conversations, SIMMC 2.0, which includes 11K task-oriented user<->assistant dialogs (117K utterances) in the shopping domain, grounded in immersive and photo-realistic scenes. The dialogs are collected using a two-phase pipeline: (1) a novel multimodal dialog simulator generates simulated dialog flows, with an emphasis on diversity and richness of interactions; (2) manual paraphrasing of the generated utterances collects diverse referring expressions. We provide an in-depth analysis of the collected dataset, and describe in detail the four main benchmark tasks we propose. Our baseline model, powered by a state-of-the-art language model, shows promising results, and highlights new challenges and directions for the community to study.
Labels: cs.AI, cs.CL | __index_level_0__: 230,905
2406.12042
Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models
Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a prompt router model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as target datasets. APTP outperforms the single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they are semantically meaningful. We also show that APTP can automatically discover previously empirically found challenging prompts for SD, e.g. prompts for generating text images, assigning them to higher capacity codes.
Labels: cs.LG, cs.CV | __index_level_0__: 465,193
1908.08843
Fairness in Deep Learning: A Computational Perspective
Deep learning is increasingly being used in high-stakes decision making applications that affect individual lives. However, deep learning models might exhibit algorithmic discrimination behaviors with respect to protected groups, potentially posing negative impacts on individuals and society. Therefore, fairness in deep learning has attracted tremendous attention recently. We provide a review covering recent progress in tackling algorithmic fairness problems of deep learning from the computational perspective. Specifically, we show that interpretability can serve as a useful ingredient to diagnose the reasons that lead to algorithmic discrimination. We also discuss fairness mitigation approaches categorized according to three stages of the deep learning life-cycle, aiming to push forward the area of fairness in deep learning and build genuinely fair and reliable deep learning systems.
Labels: cs.AI, cs.LG, cs.CY | __index_level_0__: 142,666
2409.20064
Knowledge Discovery using Unsupervised Cognition
Knowledge discovery is key to understanding and interpreting a dataset, as well as to finding the underlying relationships between its components. Unsupervised Cognition is a novel unsupervised learning algorithm that focuses on modelling the learned data. This paper presents three techniques to perform knowledge discovery over an already trained Unsupervised Cognition model. Specifically, we present a technique for pattern mining, a technique for feature selection based on the pattern mining technique, and a technique for dimensionality reduction based on the feature selection technique. The final goal is to distinguish between relevant and irrelevant features and use them to build a model from which to extract meaningful patterns. We evaluated our proposals with empirical experiments and found that they outperform the state of the art in knowledge discovery.
Labels: cs.AI, cs.LG | __index_level_0__: 492,981
1802.06624
Osteoarthritis Disease Detection System using Self Organizing Maps Method based on Ossa Manus X-Ray
Osteoarthritis is a disease found throughout the world, including in Indonesia. The purpose of this study was to detect osteoarthritis using Self-Organizing Maps (SOM) and to describe the artificial-intelligence procedure behind the SOM method. The system detects osteoarthritis from X-ray images of normal and diseased ossa manus at a resolution of 150 x 200 pixels, processed through several stages: contrast enhancement, grayscale conversion, thresholding, and histogram computation, with the resulting features stored as text (.text) data for the final training and testing stages. In total, 42 images were used: 12 normal and 30 diseased. In training, 8 normal X-ray images and 19 diseased images were classified correctly, for a training accuracy of 96.42%; in testing, 4 normal images and 9 diseased images were classified correctly and 1 diseased image incorrectly, for a testing accuracy of 92.8%.
Labels: cs.CV | __index_level_0__: 90,717
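The classifier above is a self-organizing map. As an illustration of the method itself (not the paper's X-ray pipeline), here is a minimal 1-D SOM in plain Python, with hypothetical 2-D toy data standing in for the preprocessed image features:

```python
import math
import random

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to sample x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_som(data, n_units=4, epochs=100, lr=0.3, seed=0):
    """Train a 1-D self-organizing map on a list of feature vectors."""
    rng = random.Random(seed)
    # Initialize unit weights from random data samples.
    weights = [list(rng.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        # Neighborhood radius shrinks over time, as in standard SOM training.
        sigma = max(n_units / 2.0 * (1.0 - epoch / epochs), 0.5)
        for x in data:
            b = best_matching_unit(weights, x)
            for i, w in enumerate(weights):
                # Units near the winner on the 1-D grid are pulled toward x.
                h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                for d in range(len(w)):
                    w[d] += lr * h * (x[d] - w[d])
    return weights

# Two well-separated toy clusters standing in for "normal" vs "sick" features.
data = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5),
        (10.0, 10.0), (10.5, 10.0), (10.0, 10.5)]
som = train_som(data)
```

Classification then assigns a new sample to the class associated with its best-matching unit.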
2305.02866
Hierarchical Transformer for Scalable Graph Learning
Graph Transformer is gaining increasing attention in the field of machine learning and has demonstrated state-of-the-art performance on benchmarks for graph representation learning. However, as current implementations of Graph Transformer primarily focus on learning representations of small-scale graphs, the quadratic complexity of the global self-attention mechanism presents a challenge for full-batch training when applied to larger graphs. Additionally, conventional sampling-based methods fail to capture necessary high-level contextual information, resulting in a significant loss of performance. In this paper, we introduce the Hierarchical Scalable Graph Transformer (HSGT) as a solution to these challenges. HSGT successfully scales the Transformer architecture to node representation learning tasks on large-scale graphs, while maintaining high performance. By utilizing graph hierarchies constructed through coarsening techniques, HSGT efficiently updates and stores multi-scale information in node embeddings at different levels. Together with sampling-based training methods, HSGT effectively captures and aggregates multi-level information on the hierarchical graph using only Transformer blocks. Empirical evaluations demonstrate that HSGT achieves state-of-the-art performance on large-scale benchmarks with graphs containing millions of nodes with high efficiency.
Labels: cs.SI, cs.AI, cs.LG | __index_level_0__: 362,198
2404.05884
GBEC: Geometry-Based Hand-Eye Calibration
Hand-eye calibration is the problem of solving for the transformation from the end-effector of a robot to the sensor attached to it. Commonly employed techniques, such as the AX=XB or AX=ZB formulations, rely on regression methods that require collecting pose data from different robot configurations, which can produce low accuracy and repeatability. However, the derived transformation should solely depend on the geometry of the end-effector and the sensor attachment. We propose Geometry-Based End-Effector Calibration (GBEC), which enhances the repeatability and accuracy of the derived transformation compared to traditional hand-eye calibrations. To demonstrate improvements, we apply the approach to two different robot-assisted procedures: Transcranial Magnetic Stimulation (TMS) and femoroplasty. We also discuss the generalizability of GBEC for camera-in-hand and marker-in-hand sensor mounting methods. In the experiments, we perform GBEC between the robot end-effector and an optical tracker's rigid-body marker attached to the TMS coil or femoroplasty drill guide. Previous research documents low repeatability and accuracy of the conventional methods for robot-assisted TMS hand-eye calibration. Compared to some existing methods, the proposed method relies solely on the geometry of the flange and the pose of the rigid-body marker, making it independent of workspace constraints or robot accuracy, without sacrificing the orthogonality of the rotation matrix. Our results validate the accuracy and applicability of the approach, providing a new and generalizable methodology for obtaining the transformation from the end-effector to a sensor.
Labels: cs.RO | __index_level_0__: 445,243
2209.07303
Differentially Private Estimation of Hawkes Process
Point process models are of great importance in real world applications. In certain critical applications, estimation of point process models involves large amounts of sensitive personal data from users. Privacy concerns naturally arise which have not been addressed in the existing literature. To bridge this glaring gap, we propose the first general differentially private estimation procedure for point process models. Specifically, we take the Hawkes process as an example, and introduce a rigorous definition of differential privacy for event stream data based on a discretized representation of the Hawkes process. We then propose two differentially private optimization algorithms, which can efficiently estimate Hawkes process models with the desired privacy and utility guarantees under two different settings. Experiments are provided to back up our theoretical analysis.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
317,696
2408.17394
Continual learning with the neural tangent ensemble
A natural strategy for continual learning is to weigh a Bayesian ensemble of fixed functions. This suggests that if a (single) neural network could be interpreted as an ensemble, one could design effective algorithms that learn without forgetting. To realize this possibility, we observe that a neural network classifier with N parameters can be interpreted as a weighted ensemble of N classifiers, and that in the lazy regime limit these classifiers are fixed throughout learning. We term these classifiers the neural tangent experts and show they output valid probability distributions over the labels. We then derive the likelihood and posterior probability of each expert given past data. Surprisingly, we learn that the posterior updates for these experts are equivalent to a scaled and projected form of stochastic gradient descent (SGD) over the network weights. Away from the lazy regime, networks can be seen as ensembles of adaptive experts which improve over time. These results offer a new interpretation of neural networks as Bayesian ensembles of experts, providing a principled framework for understanding and mitigating catastrophic forgetting in continual learning settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
484,690
2405.18869
Towards Data-Driven Electricity Management: Multi-Region Harmonized Data and Knowledge Graph
Due to growing population and technological advances, global electricity consumption, and consequently CO2 emissions, are increasing. The residential sector makes up 25% of global electricity consumption and has great potential to increase efficiency and reduce CO2 footprint without sacrificing comfort. However, a lack of uniform consumption data at the household level spanning multiple regions hinders large-scale studies and robust multi-region model development. This paper introduces a multi-region dataset compiled from publicly available sources and presented in a uniform format. This data enables machine learning tasks such as disaggregation, demand forecasting, appliance ON/OFF classification, etc. Furthermore, we develop an RDF knowledge graph that characterizes the electricity consumption of the households and contextualizes it with household related properties enabling semantic queries and interoperability with other open knowledge bases like Wikidata and DBpedia. This structured data can be utilized to inform various stakeholders towards data-driven policy and business development.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
458,634
2103.00780
Towards Unbiased COVID-19 Lesion Localisation and Segmentation via Weakly Supervised Learning
Despite tremendous efforts, it is very challenging to generate a robust model to assist in the accurate quantification assessment of COVID-19 on chest CT images. Due to the nature of blurred boundaries, the supervised segmentation methods usually suffer from annotation biases. To support unbiased lesion localisation and to minimise the labeling costs, we propose a data-driven framework supervised by only image-level labels. The framework can explicitly separate potential lesions from original images, with the help of a generative adversarial network and a lesion-specific decoder. Experiments on two COVID-19 datasets demonstrate the effectiveness of the proposed framework and its superior performance to several existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
222,396
2210.03529
Mesh-Tension Driven Expression-Based Wrinkles for Synthetic Faces
Recent advances in synthesizing realistic faces have shown that synthetic training data can replace real data for various face-related computer vision tasks. A question arises: how important is realism? Is the pursuit of photorealism excessive? In this work, we show otherwise. We boost the realism of our synthetic faces by introducing dynamic skin wrinkles in response to facial expressions and observe significant performance improvements in downstream computer vision tasks. Previous approaches for producing such wrinkles either required prohibitive artist effort to scale across identities and expressions or were not capable of reconstructing high-frequency skin details with sufficient fidelity. Our key contribution is an approach that produces realistic wrinkles across a large and diverse population of digital humans. Concretely, we formalize the concept of mesh-tension and use it to aggregate possible wrinkles from high-quality expression scans into albedo and displacement texture maps. At synthesis, we use these maps to produce wrinkles even for expressions not represented in the source scans. Additionally, to provide a more nuanced indicator of model performance under deformations resulting from compressed expressions, we introduce the 300W-winks evaluation subset and the Pexels dataset of closed eyes and winks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
322,076
2502.10205
Looking around you: external information enhances representations for event sequences
Representation learning produces models in different domains, such as store purchases, client transactions, and general people's behaviour. However, such models for sequential data usually process a single sequence, ignoring context from other relevant sequences, even in domains with rapidly changing external environments like finance, or misguiding the prediction for a user with no recent events. We are the first to propose a method that aggregates information from multiple user representations to augment a specific user's representation, for a scenario of multiple co-occurring event sequences. Our study considers diverse aggregation approaches, ranging from simple pooling techniques to trainable attention-based approaches, especially Kernel attention aggregation, that can highlight more complex information flow from other users. The proposed method operates atop an existing encoder and supports its efficient fine-tuning. Across considered datasets of financial transactions and downstream tasks, Kernel attention improves ROC AUC scores, both with and without fine-tuning, while mean pooling yields a smaller but still significant gain.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
533,776
2404.15225
PHLP: Sole Persistent Homology for Link Prediction - Interpretable Feature Extraction
Link prediction (LP), inferring the connectivity between nodes, is a significant research area in graph data, where a link represents essential information on relationships between nodes. Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging because most comprise complex neural networks. We employ persistent homology (PH), a topological data analysis method that helps analyze the topological information of graphs, to interpret the features used for prediction. We propose a novel method that employs PH for LP (PHLP) focusing on how the presence or absence of target links influences the overall topology. The PHLP utilizes the angle hop subgraph and new node labeling called degree double radius node labeling (Degree DRNL), distinguishing the information of graphs better than DRNL. Using only a classifier, PHLP performs similarly to state-of-the-art (SOTA) models on most benchmark datasets. Incorporating the outputs calculated using PHLP into the existing GNN-based SOTA models improves performance across all benchmark datasets. To the best of our knowledge, PHLP is the first method of applying PH to LP without GNNs. The proposed approach, employing PH while not relying on neural networks, enables the identification of crucial factors for improving performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
448,995
2408.01162
PreMix: Addressing Label Scarcity in Whole Slide Image Classification with Pre-trained Multiple Instance Learning Aggregators
Multiple instance learning (MIL) has emerged as a powerful framework for weakly supervised whole slide image (WSI) classification, enabling slide-level predictions without requiring detailed patch-level annotations. However, a key limitation of MIL lies in the underexplored potential of pre-training the MIL aggregator. Most existing approaches train it from scratch, resulting in performance heavily dependent on the number of labeled WSIs, while overlooking the abundance of unlabeled WSIs available in real-world scenarios. To address this, we propose PreMix, a novel framework that leverages a non-contrastive pre-training method, Barlow Twins, augmented with the Slide Mixing approach to generate additional positive pairs and enhance feature learning, particularly under limited labeled WSI conditions. Fine-tuning with Mixup and Manifold Mixup further enhances robustness by effectively handling the diverse sizes of gigapixel WSIs. Experimental results demonstrate that integrating HIPT into PreMix achieves an average F1 improvement of 4.7% over the baseline HIPT across various WSI training datasets and label sizes. These findings underscore its potential to advance WSI classification with limited labeled data and its applicability to real-world histopathology practices. The code is available at https://anonymous.4open.science/r/PreMix
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
478,129
1507.01098
Perfect Secrecy in Physical Layer Network Coding Systems from Structured Interference
Physical layer network coding (PNC) has been proposed for next generation networks. In this contribution, we investigate PNC schemes with embedded perfect secrecy by exploiting structured interference in relay networks with two users and a single relay. In a practical scenario where both users employ finite and uniform signal input distributions we propose upper bounds (UBs) on the achievable perfect secrecy rates and make these explicit when PAM modems are used. We then describe two simple, explicit encoders that can achieve perfect secrecy rates close to these UBs with respect to an untrustworthy relay in the single antenna and single relay setting. Lastly, we generalize our system to a MIMO relay channel where the relay has more antennas than the users and optimal precoding matrices which maintain a required secrecy constraint are studied. Our results establish that the design of PNC transmission schemes with enhanced throughput and guaranteed data confidentiality is feasible in next generation systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
44,820
1807.00456
Evenly Cascaded Convolutional Networks
We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and a high-level stream. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable feature maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN's design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN's simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
101,840
2207.13701
Branch Ranking for Efficient Mixed-Integer Programming via Offline Ranking-based Policy Learning
Deriving a good variable selection strategy in branch-and-bound is essential for the efficiency of modern mixed-integer programming (MIP) solvers. With MIP branching data collected during the previous solution process, learning to branch methods have recently become superior to heuristics. As branch-and-bound is naturally a sequential decision making task, one should learn to optimize the utility of the whole MIP solving process instead of being myopic on each step. In this work, we formulate learning to branch as an offline reinforcement learning (RL) problem, and propose a long-sighted hybrid search scheme to construct the offline MIP dataset, which values the long-term utilities of branching decisions. During the policy training phase, we deploy a ranking-based reward assignment scheme to distinguish the promising samples from the long-term or short-term view, and train the branching model named Branch Ranking via offline policy learning. Experiments on synthetic MIP benchmarks and real-world tasks demonstrate that Branch Ranking is more efficient and robust, and can better generalize to large scales of MIP instances compared to the widely used heuristics and state-of-the-art learning-based branching models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
310,381
1806.10674
Author-Based Analysis of Conference versus Journal Publication in Computer Science
Conference publications in computer science (CS) have attracted scholarly attention due to their unique status as a main research outlet unlike other science fields where journals are dominantly used for communicating research findings. One frequent research question has been how different conference and journal publications are, considering a paper as a unit of analysis. This study takes an author-based approach to analyze publishing patterns of 517,763 scholars who have ever published both in CS conferences and journals for the last 57 years, as recorded in DBLP. The analysis shows that the majority of CS scholars tend to make their scholarly debut, publish more papers, and collaborate with more coauthors in conferences than in journals. Importantly, conference papers seem to serve as a distinct channel of scholarly communication, not a mere preceding step to journal publications: coauthors and title words of authors across conferences and journals tend not to overlap much. This study corroborates findings of previous studies on this topic from a distinctive perspective and suggests that conference authorship in CS calls for more special attention from scholars and administrators outside CS who have focused on journal publications to mine authorship data and evaluate scholarly performance.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
101,571
cs/0104011
Potholes on the Royal Road
It is still unclear how an evolutionary algorithm (EA) searches a fitness landscape, and on what fitness landscapes a particular EA will do well. The validity of the building-block hypothesis, a major tenet of traditional genetic algorithm theory, remains controversial despite its continued use to justify claims about EAs. This paper outlines a research program to begin to answer some of these open questions, by extending the work done in the royal road project. The short-term goal is to find a simple class of functions which the simple genetic algorithm optimizes better than other optimization methods, such as hillclimbers. A dialectical heuristic for searching for such a class is introduced. As an example of using the heuristic, the simple genetic algorithm is compared with a set of hillclimbers on a simple subset of the hyperplane-defined functions, the pothole functions.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
537,315
1610.03235
Model-Free Closed-Loop Stability Analysis: A Linear Functional Approach
Performing a stability analysis during the design of any electronic circuit is critical to guarantee its correct operation. A closed-loop stability analysis can be performed by analysing the impedance presented by the circuit at a well-chosen node without internal access to the simulator. If any of the poles of this impedance lie in the complex right half-plane, the circuit is unstable. The classic way to detect unstable poles is to fit a rational model on the impedance. In this paper, a projection-based method is proposed which splits the impedance into a stable and an unstable part by projecting on an orthogonal basis of stable and unstable functions. When the unstable part lies significantly above the interpolation error of the method, the circuit is considered unstable. Working with a projection provides one, at small cost, with a first appraisal of the unstable part of the system. Both small-signal and large-signal stability analysis can be performed with this projection-based method. In the small-signal case, a low-order rational approximation can be fitted on the unstable part to find the location of the unstable poles.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
62,220
2311.01477
FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models
We introduce FaithScore (Faithfulness to Atomic Image Facts Score), a reference-free and fine-grained evaluation metric that measures the faithfulness of the generated free-form answers from large vision-language models (LVLMs). The FaithScore evaluation first identifies sub-sentences containing descriptive statements that need to be verified, then extracts a comprehensive list of atomic facts from these sub-sentences, and finally conducts consistency verification between fine-grained atomic facts and the input image. Meta-evaluation demonstrates that our metric highly correlates with human judgments of faithfulness. We collect two benchmark datasets (i.e. LLaVA-1k and MSCOCO-Cap) for evaluating LVLMs instruction-following hallucinations. We measure hallucinations in state-of-the-art LVLMs with FaithScore on the datasets. Results reveal that current systems are prone to generate hallucinated content unfaithful to the image, which leaves room for future improvements. We hope our metric FaithScore can help evaluate future LVLMs in terms of faithfulness and provide insightful advice for enhancing LVLMs' faithfulness.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
405,064
1904.05992
An Introduction to Person Re-identification with Generative Adversarial Networks
Person re-identification is a basic subject in the field of computer vision. Traditional methods have several limitations in solving problems such as illumination change, occlusion, pose variation and feature variation under complex backgrounds. Fortunately, the deep learning paradigm opens new ways for person re-identification research and has become a hot spot in this field. Generative Adversarial Nets (GANs) have attracted much attention in the past few years for solving these problems. This paper reviews the GAN-based methods for person re-identification, focusing on the related papers about different GAN-based frameworks and discussing their advantages and disadvantages. Finally, it proposes directions for future research, especially the prospect of person re-identification methods based on GANs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,453
2108.06958
Aegis: A Trusted, Automatic and Accurate Verification Framework for Vertical Federated Learning
Vertical federated learning (VFL) leverages various privacy-preserving algorithms, e.g., homomorphic encryption or secret sharing based SecureBoost, to ensure data privacy. However, these algorithms all require a semi-honest security definition, which raises concerns in real-world applications. In this paper, we present Aegis, a trusted, automatic, and accurate verification framework to verify the security of VFL jobs. Aegis is separated from local parties to ensure the security of the framework. Furthermore, it automatically adapts to evolving VFL algorithms by defining the VFL job as a finite state machine to uniformly verify different algorithms and reproduce the entire job to provide more accurate verification. We implement and evaluate Aegis with different threat models on financial and medical datasets. Evaluation results show that: 1) Aegis can detect 95% of threat models, and 2) it provides fine-grained verification results within 84% of the total VFL job time.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
true
250,792
1607.08335
Reverse Data-Processing Theorems and Computational Second Laws
Drawing on an analogy with the second law of thermodynamics for adiabatically isolated systems, Cover argued that data-processing inequalities may be seen as second laws for "computationally isolated systems," namely, systems evolving without an external memory. Here we develop Cover's idea in two ways: on the one hand, we clarify its meaning and formulate it in a general framework able to describe both classical and quantum systems. On the other hand, we prove that also the reverse holds: the validity of data-processing inequalities is not only necessary, but also sufficient to conclude that a system is computationally isolated. This constitutes an information-theoretic analogue of Lieb's and Yngvason's entropy principle. We finally speculate about the possibility of employing Maxwell's demon to show that adiabaticity and memorylessness are in fact connected in a deeper way than what the formal analogy proposed here prima facie seems to suggest.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
59,147
2109.00778
Better Self-training for Image Classification through Self-supervision
Self-training is a simple semi-supervised learning approach: Unlabelled examples that attract high-confidence predictions are labelled with their predictions and added to the training set, with this process being repeated multiple times. Recently, self-supervision -- learning without manual supervision by solving an automatically-generated pretext task -- has gained prominence in deep learning. This paper investigates three different ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch, and Imagenet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,218
2402.02569
On the Complexity of Finite-Sum Smooth Optimization under the Polyak-{\L}ojasiewicz Condition
This paper considers the optimization problem of the form $\min_{{\bf x}\in{\mathbb R}^d} f({\bf x})\triangleq \frac{1}{n}\sum_{i=1}^n f_i({\bf x})$, where $f(\cdot)$ satisfies the Polyak--{\L}ojasiewicz (PL) condition with parameter $\mu$ and $\{f_i(\cdot)\}_{i=1}^n$ is $L$-mean-squared smooth. We show that any gradient method requires at least $\Omega(n+\kappa\sqrt{n}\log(1/\epsilon))$ incremental first-order oracle (IFO) calls to find an $\epsilon$-suboptimal solution, where $\kappa\triangleq L/\mu$ is the condition number of the problem. This result nearly matches upper bounds of IFO complexity for best-known first-order methods. We also study the problem of minimizing the PL function in the distributed setting such that the individuals $f_1(\cdot),\dots,f_n(\cdot)$ are located on a connected network of $n$ agents. We provide lower bounds of $\Omega(\kappa/\sqrt{\gamma}\,\log(1/\epsilon))$, $\Omega((\kappa+\tau\kappa/\sqrt{\gamma}\,)\log(1/\epsilon))$ and $\Omega\big(n+\kappa\sqrt{n}\log(1/\epsilon)\big)$ for communication rounds, time cost and local first-order oracle calls respectively, where $\gamma\in(0,1]$ is the spectral gap of the mixing matrix associated with the network and~$\tau>0$ is the time cost of per communication round. Furthermore, we propose a decentralized first-order method that nearly matches above lower bounds in expectation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
426,625
1905.07102
Mastering the Game of Sungka from Random Play
Recent work in reinforcement learning demonstrated that learning solely through self-play is not only possible, but could also result in novel strategies that humans never would have thought of. However, optimization methods cast as a game between two players require careful tuning to prevent suboptimal results. Hence, we look at random play as an alternative method. In this paper, we train a DQN agent to play Sungka, a two-player turn-based board game wherein the players compete to obtain more stones than the other. We show that even with purely random play, our training algorithm converges very fast and is stable. Moreover, we test our trained agent against several baselines and show its ability to consistently win against these.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
131,150
1307.0813
Multi-Task Policy Search
Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics. Training individual policies for every single potential task is often impractical, especially for continuous task variations, requiring more principled approaches to share and transfer knowledge among similar tasks. We present a novel approach for learning a nonlinear feedback policy that generalizes across multiple tasks. The key idea is to define a parametrized policy as a function of both the state and the task, which allows learning a single policy that generalizes across multiple known and unknown tasks. Applications of our novel approach to reinforcement and imitation learning in real-robot experiments are shown.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
25,578
2301.06882
Multi-Biometric Fuzzy Vault based on Face and Fingerprints
The fuzzy vault scheme has been established as a cryptographic primitive suitable for privacy-preserving biometric authentication. To improve accuracy and privacy protection, biometric information of multiple characteristics can be fused at feature level prior to locking it in a fuzzy vault. We construct a multi-biometric fuzzy vault based on face and multiple fingerprints. On a multi-biometric database constructed from the FRGCv2 face and the MCYT-100 fingerprint databases, a perfect recognition accuracy is achieved at a false accept security above 30 bits. Further, we provide a formalisation of feature-level fusion in multi-biometric fuzzy vaults, on the basis of which relevant security issues are elaborated. Said security issues, for which we define countermeasures, are commonly ignored and may impair the overall system's security.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
340,769
1801.00880
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
The health and function of tissue rely on its vasculature network to provide reliable blood perfusion. Volumetric imaging approaches, such as multiphoton microscopy, are able to generate detailed 3D images of blood vessels that could contribute to our understanding of the role of vascular structure in normal physiology and in disease mechanisms. The segmentation of vessels, a core image analysis problem, is a bottleneck that has prevented the systematic comparison of 3D vascular architecture across experimental populations. We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture, which we call DeepVess, yielded a segmentation accuracy that was better than both the current state-of-the-art and a trained human annotator, while also being orders of magnitude faster. To explore the effects of aging and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of cortical blood vessels in young and old mouse models of Alzheimer's disease and wild type littermates. We found little difference in the distribution of capillary diameter or tortuosity between these groups, but did note a decrease in the number of longer capillary segments ($>75\mu m$) in aged animals as compared to young, in both wild type and Alzheimer's disease mouse models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
87,641
2209.08648
Through a fair looking-glass: mitigating bias in image datasets
With the recent growth in computer vision applications, the question of how fair and unbiased they are has yet to be explored. There is abundant evidence that the bias present in training data is reflected in the models, or even amplified. Many previous methods for image dataset de-biasing, including models based on augmenting datasets, are computationally expensive to implement. In this study, we present a fast and effective model to de-bias an image dataset through reconstruction and minimizing the statistical dependence between intended variables. Our architecture includes a U-net to reconstruct images, combined with a pre-trained classifier which penalizes the statistical dependence between target attribute and the protected attribute. We evaluate our proposed model on CelebA dataset, compare the results with a state-of-the-art de-biasing method, and show that the model achieves a promising fairness-accuracy combination.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
318,207
2012.14143
From Quantum Source Compression to Quantum Thermodynamics
This thesis addresses problems in the field of quantum information theory. The first part of the thesis is opened with concrete definitions of general quantum source models and their compression, and each subsequent chapter addresses the compression of a specific source model as a special case of the initially defined general models. First, we find the optimal compression rate of a general mixed state source which includes as special cases all the previously studied models such as Schumacher's pure and ensemble sources and other mixed state ensemble models. For an interpolation between the visible and blind Schumacher's ensemble model, we find the optimal compression rate region for the entanglement and quantum rates. Later, we study the classical-quantum variation of the celebrated Slepian-Wolf problem and the ensemble model of quantum state redistribution for which we find the optimal compression rate considering per-copy fidelity and single-letter achievable and converse bounds matching up to continuity of functions which appear in the corresponding bounds. The second part of the thesis revolves around information theoretical perspective of quantum thermodynamics. We start with a resource theory point of view of a quantum system with multiple non-commuting charges. Subsequently, we apply this resource theory framework to study a traditional thermodynamics setup with multiple non-commuting conserved quantities consisting of a main system, a thermal bath and batteries to store various conserved quantities of the system. We state the laws of the thermodynamics for this system, and show that a purely quantum effect happens in some transformations of the system, that is, some transformations are feasible only if there are quantum correlations between the final state of the system and the thermal bath.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
213,424
2306.02705
Situational Adaptive Motion Prediction for Firefighting Squads in Indoor Search and Rescue
Firefighting is a complex, yet scarcely automated, task. To mitigate ergonomic and safety related risks on the human operators, robots could be deployed in a collaborative approach. To allow human-robot teams in firefighting, important basics are missing. Amongst other aspects, the robot must predict the human motion as occlusion is ever-present. In this work, we propose a novel motion prediction pipeline for firefighters' squads in indoor search and rescue. The squad paths are generated with an optimal graph-based planning approach representing firefighters' tactics. Paths are generated per room which allows to dynamically adapt the path locally without global re-planning. The motion of singular agents is simulated using a modification of the headed social force model. We evaluate the pipeline for feasibility with a novel data set generated from real footage and show the computational efficiency.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
371,020
2310.06854
Learning with Noisy Labels for Human Fall Events Classification: Joint Cooperative Training with Trinity Networks
With the increasing ageing population, fall events classification has drawn much research attention. In the development of deep learning, the quality of data labels is crucial. Most of the datasets are labelled automatically or semi-automatically, and the samples may be mislabeled, which constrains the performance of Deep Neural Networks (DNNs). Recent research on noisy label learning confirms that neural networks first focus on the clean and simple instances and then follow the noisy and hard instances in the training stage. To address the learning with noisy label problem and protect the human subjects' privacy, we propose a simple but effective approach named Joint Cooperative training with Trinity Networks (JoCoT). To mitigate the privacy issue, human skeleton data are used. The robustness and performance of the noisy label learning framework is improved by using the two teacher modules and one student module in the proposed JoCoT. To mitigate the incorrect selections, the predictions from the teacher modules are applied with the consensus-based method to guide the student module training. The performance evaluation on the widely used UP-Fall dataset and comparison with the state-of-the-art, confirms the effectiveness of the proposed JoCoT in high noise rates. Precisely, JoCoT outperforms the state-of-the-art by 5.17% and 3.35% with the averaged pairflip and symmetric noises, respectively.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
398,744
2408.00619
Harnessing Uncertainty-aware Bounding Boxes for Unsupervised 3D Object Detection
Unsupervised 3D object detection aims to identify objects of interest from unlabeled raw data, such as LiDAR points. Recent approaches usually adopt pseudo 3D bounding boxes (3D bboxes) from clustering algorithm to initialize the model training. However, pseudo bboxes inevitably contain noise, and such inaccuracies accumulate to the final model, compromising the performance. Therefore, in an attempt to mitigate the negative impact of inaccurate pseudo bboxes, we introduce a new uncertainty-aware framework for unsupervised 3D object detection, dubbed UA3D. In particular, our method consists of two phases: uncertainty estimation and uncertainty regularization. (1) In the uncertainty estimation phase, we incorporate an extra auxiliary detection branch alongside the original primary detector. The prediction disparity between the primary and auxiliary detectors could reflect fine-grained uncertainty at the box coordinate level. (2) Based on the assessed uncertainty, we adaptively adjust the weight of every 3D bbox coordinate via uncertainty regularization, refining the training process on pseudo bboxes. For pseudo bbox coordinate with high uncertainty, we assign a relatively low loss weight. Extensive experiments verify that the proposed method is robust against the noisy pseudo bboxes, yielding substantial improvements on nuScenes and Lyft compared to existing approaches, with increases of +6.9% AP$_{BEV}$ and +2.5% AP$_{3D}$ on nuScenes, and +4.1% AP$_{BEV}$ and +2.0% AP$_{3D}$ on Lyft.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
477,904
2209.12350
Unsupervised Reward Shaping for a Robotic Sequential Picking Task from Visual Observations in a Logistics Scenario
We focus on an unloading problem, typical of the logistics sector, modeled as a sequential pick-and-place task. In this type of task, modern machine learning techniques have shown to work better than classic systems since they are more adaptable to stochasticity and better able to cope with large uncertainties. More specifically, supervised and imitation learning have achieved outstanding results in this regard, with the shortcoming of requiring some form of supervision which is not always obtainable for all settings. On the other hand, reinforcement learning (RL) requires a much milder form of supervision but still remains impractical due to its inefficiency. In this paper, we propose and theoretically motivate a novel Unsupervised Reward Shaping algorithm from expert's observations which relaxes the level of supervision required by the agent and works on improving RL performance in our task.
false
false
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
319,501
2206.11871
Offline RL for Natural Language Generation with Implicit Language Q Learning
Large language models distill broad knowledge from text corpora. However, they can be inconsistent when it comes to completing user specified tasks. This issue can be addressed by finetuning such models via supervised learning on curated datasets, or via reinforcement learning. In this work, we propose a novel offline RL method, implicit language Q-learning (ILQL), designed for use on language models, that combines both the flexible utility maximization framework of RL algorithms with the ability of supervised learning to leverage previously collected data, as well as its simplicity and stability. Our method employs a combination of value conservatism alongside an implicit dataset support constraint in learning value functions, which are then used to guide language model generations towards maximizing user-specified utility functions. In addition to empirically validating ILQL, we present a detailed empirical analysis of situations where offline RL can be useful in natural language generation settings, demonstrating how it can be a more effective utility optimizer than prior approaches for end-to-end dialogue, and how it can effectively optimize high variance reward functions based on subjective judgement, such as whether to label a comment as toxic or not.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
304,402
1811.09998
Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation
Typically, the deployment of face recognition models in the wild needs to identify low-resolution faces with extremely low computational cost. To address this problem, a feasible solution is compressing a complex face model to achieve higher speed and lower memory at the cost of minimal performance drop. Inspired by that, this paper proposes a learning approach to recognize low-resolution faces via selective knowledge distillation. In this approach, a two-stream convolutional neural network (CNN) is first initialized to recognize high-resolution faces and resolution-degraded faces with a teacher stream and a student stream, respectively. The teacher stream is represented by a complex CNN for high-accuracy recognition, and the student stream is represented by a much simpler CNN for low-complexity recognition. To avoid significant performance drop at the student stream, we then selectively distil the most informative facial features from the teacher stream by solving a sparse graph optimization problem, which are then used to regularize the fine-tuning process of the student stream. In this way, the student stream is actually trained by simultaneously handling two tasks with limited computational resources: approximating the most informative facial cues via feature regression, and recovering the missing facial cues via low-resolution face classification. Experimental results show that the student stream performs impressively in recognizing low-resolution faces and costs only 0.15MB memory and runs at 418 faces per second on CPU and 9,433 faces per second on GPU.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
114,384
2310.14045
Training Image Derivatives: Increased Accuracy and Universal Robustness
Derivative training is an established method that can significantly increase the accuracy of neural networks in certain low-dimensional tasks. In this paper, we extend this improvement to an illustrative image analysis problem: reconstructing the vertices of a cube from its image. By training the derivatives with respect to the cube's six degrees of freedom, we achieve a 25-fold increase in accuracy for noiseless inputs. Additionally, derivative knowledge offers a novel approach to enhancing network robustness, which has traditionally been understood in terms of two types of vulnerabilities: excessive sensitivity to minor perturbations and failure to detect significant image changes. Conventional robust training relies on output invariance, which inherently creates a trade-off between these two vulnerabilities. By leveraging derivative information we compute non-trivial output changes in response to arbitrary input perturbations. This resolves the trade-off, yielding a network that is twice as robust and five times more accurate than the best case under the invariance assumption. Unlike conventional robust training, this outcome can be further improved by simply increasing the network capacity. This approach is applicable to phase retrieval problems and other scenarios where a sufficiently smooth manifold parametrization can be obtained.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
401,689
2405.02598
UDUC: An Uncertainty-driven Approach for Learning-based Robust Control
Learning-based techniques have become popular in both model predictive control (MPC) and reinforcement learning (RL). Probabilistic ensemble (PE) models offer a promising approach for modelling system dynamics, showcasing the ability to capture uncertainty and scalability in high-dimensional control scenarios. However, PE models are susceptible to mode collapse, resulting in non-robust control when faced with environments slightly different from the training set. In this paper, we introduce the $\textbf{u}$ncertainty-$\textbf{d}$riven rob$\textbf{u}$st $\textbf{c}$ontrol (UDUC) loss as an alternative objective for training PE models, drawing inspiration from contrastive learning. We analyze the robustness of UDUC loss through the lens of robust optimization and evaluate its performance on the challenging Real-world Reinforcement Learning (RWRL) benchmark, which involves significant environmental mismatches between the training and testing environments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
451,830
1301.6791
Guarantees of Total Variation Minimization for Signal Recovery
In this paper, we consider using total variation minimization to recover signals whose gradients have a sparse support, from a small number of measurements. We establish the proof for the performance guarantee of total variation (TV) minimization in recovering \emph{one-dimensional} signal with sparse gradient support. This partially answers the open problem of proving the fidelity of total variation minimization in such a setting \cite{TVMulti}. In particular, we have shown that the recoverable gradient sparsity can grow linearly with the signal dimension when TV minimization is used. Recoverable sparsity thresholds of TV minimization are explicitly computed for 1-dimensional signal by using the Grassmann angle framework. We also extend our results to TV minimization for multidimensional signals. Stability of recovering signal itself using 1-D TV minimization has also been established through a property called "almost Euclidean property for 1-dimensional TV norm". We further give a lower bound on the number of random Gaussian measurements for recovering 1-dimensional signal vectors with $N$ elements and $K$-sparse gradients. Interestingly, the number of needed measurements is lower bounded by $\Omega((NK)^{\frac{1}{2}})$, rather than the $O(K\log(N/K))$ bound frequently appearing in recovering $K$-sparse signal vectors.
false
false
false
false
false
false
true
false
false
true
false
true
false
false
false
false
false
false
21,550
2203.12420
A Novel Simulation-Based Quality Metric for Evaluating Grasps on 3D Deformable Objects
Evaluation of grasps on deformable 3D objects is a little-studied problem, even though the applicability of rigid object grasp quality measures for deformable ones is an open question. A central issue with most quality measures is their dependence on contact points which for deformable objects depend on the deformations. This paper proposes a grasp quality measure for deformable objects that uses information about object deformation to calculate the grasp quality. Grasps are evaluated by simulating the deformations during grasping and predicting the contacts between the gripper and the grasped object. The contact information is then used as input for a new grasp quality metric to quantify the grasp quality. The approach is benchmarked against two classical rigid-body quality metrics on over 600 grasps in the Isaac Gym simulation and over 50 real-world grasps. Experimental results show an average improvement of 18\% in the grasp success rate for deformable objects compared to the classical rigid-body quality metrics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
287,262
2002.00401
Provable Noisy Sparse Subspace Clustering using Greedy Neighbor Selection: A Coherence-Based Perspective
Sparse subspace clustering (SSC) using greedy-based neighbor selection, such as matching pursuit (MP) and orthogonal matching pursuit (OMP), has been known as a popular computationally-efficient alternative to the conventional L1-minimization based methods. Under deterministic bounded noise corruption, in this paper we derive coherence-based sufficient conditions guaranteeing correct neighbor identification using MP/OMP. Our analyses exploit the maximum/minimum inner product between two noisy data points subject to a known upper bound on the noise level. The obtained sufficient condition clearly reveals the impact of noise on greedy-based neighbor recovery. Specifically, it asserts that, as long as noise is sufficiently small so that the resultant perturbed residual vectors stay close to the desired subspace, both MP and OMP succeed in returning a correct neighbor subset. A striking finding is that, when the ground truth subspaces are well-separated from each other and noise is not large, MP-based iterations, while enjoying lower algorithmic complexity, yield smaller perturbation of residuals, thereby better able to identify correct neighbors and, in turn, achieving higher global data clustering accuracy. Extensive numerical experiments are used to corroborate our theoretical study.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
162,340
2405.10885
FA-Depth: Toward Fast and Accurate Self-supervised Monocular Depth Estimation
Most existing methods often rely on complex models to predict scene depth with high accuracy, resulting in slow inference that is not conducive to deployment. To better balance precision and speed, we first designed SmallDepth based on sparsity. Second, to enhance the feature representation ability of SmallDepth during training under the condition of equal complexity during inference, we propose an equivalent transformation module (ETM). Third, to improve the ability of each layer in the case of a fixed SmallDepth to perceive different context information and improve the robustness of SmallDepth to the left-right direction and illumination changes, we propose pyramid loss. Fourth, to further improve the accuracy of SmallDepth, we utilized the proposed function approximation loss (APX) to transfer knowledge in the pretrained HQDecv2, obtained by optimizing the previous HQDec to address grid artifacts in some regions, to SmallDepth. Extensive experiments demonstrate that each proposed component improves the precision of SmallDepth without changing the complexity of SmallDepth during inference, and the developed approach achieves state-of-the-art results on KITTI at an inference speed of more than 500 frames per second and with approximately 2 M parameters. The code and models will be publicly available at https://github.com/fwucas/FA-Depth.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
454,921
2206.07316
Online Contextual Decision-Making with a Smart Predict-then-Optimize Method
We study an online contextual decision-making problem with resource constraints. At each time period, the decision-maker first predicts a reward vector and resource consumption matrix based on a given context vector and then solves a downstream optimization problem to make a decision. The final goal of the decision-maker is to maximize the summation of the reward and the utility from resource consumption, while satisfying the resource constraints. We propose an algorithm that mixes a prediction step based on the "Smart Predict-then-Optimize (SPO)" method with a dual update step based on mirror descent. We prove regret bounds and demonstrate that the overall convergence rate of our method depends on the $\mathcal{O}(T^{-1/2})$ convergence of online mirror descent as well as risk bounds of the surrogate loss function used to learn the prediction model. Our algorithm and regret bounds apply to a general convex feasible region for the resource constraints, including both hard and soft resource constraint cases, and they apply to a wide class of prediction models in contrast to the traditional settings of linear contextual models or finite policy spaces. We also conduct numerical experiments to empirically demonstrate the strength of our proposed SPO-type methods, as compared to traditional prediction-error-only methods, on multi-dimensional knapsack and longest path instances.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
302,695
2311.10719
Analysis of frequent trading effects of various machine learning models
In recent years, high-frequency trading has emerged as a crucial strategy in stock trading. This study aims to develop an advanced high-frequency trading algorithm and compare the performance of three different mathematical models: the combination of the cross-entropy loss function and the quasi-Newton algorithm, the FCNN model, and the vector machine. The proposed algorithm employs neural network predictions to generate trading signals and execute buy and sell operations based on specific conditions. By harnessing the power of neural networks, the algorithm enhances the accuracy and reliability of the trading strategy. To assess the effectiveness of the algorithm, the study evaluates the performance of the three mathematical models. The combination of the cross-entropy loss function and the quasi-Newton algorithm is a widely utilized logistic regression approach. The FCNN model, on the other hand, is a deep learning algorithm that can extract and classify features from stock data. Meanwhile, the vector machine is a supervised learning algorithm recognized for achieving improved classification results by mapping data into high-dimensional spaces. By comparing the performance of these three models, the study aims to determine the most effective approach for high-frequency trading. This research makes a valuable contribution by introducing a novel methodology for high-frequency trading, thereby providing investors with a more accurate and reliable stock trading strategy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
408,617
2502.13794
LESA: Learnable LLM Layer Scaling-Up
Training Large Language Models (LLMs) from scratch requires immense computational resources, making it prohibitively expensive. Model scaling-up offers a promising solution by leveraging the parameters of smaller models to create larger ones. However, existing depth scaling-up methods rely on empirical heuristic rules for layer duplication, which result in poorer initialization and slower convergence during continual pre-training. We propose \textbf{LESA}, a novel learnable method for depth scaling-up. By concatenating parameters from each layer and applying Singular Value Decomposition, we uncover latent patterns between layers, suggesting that inter-layer parameters can be learned. LESA uses a neural network to predict the parameters inserted between adjacent layers, enabling better initialization and faster training. Experiments show that LESA outperforms existing baselines, achieving superior performance with less than half the computational cost during continual pre-training. Extensive analyses demonstrate its effectiveness across different model sizes and tasks.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
535,521
1902.09493
Dynamic Fault Tolerance Through Resource Pooling
Miniaturized satellites are currently not considered suitable for critical, high-priority, and complex multi-phased missions, due to their low reliability. As hardware-side fault tolerance (FT) solutions designed for larger spacecraft can not be adopted aboard very small satellites due to budget, energy, and size constraints, we developed a hybrid FT-approach based upon only COTS components, commodity processor cores, library IP, and standard software. This approach facilitates fault detection, isolation, and recovery in software, and utilizes fault-coverage techniques across the embedded stack within a multiprocessor system-on-chip (MPSoC). This allows our FPGA-based proof-of-concept implementation to deliver strong fault-coverage even for missions with a long duration, but also to adapt to varying performance requirements during the mission. The operator of a spacecraft utilizing this approach can define performance profiles, which allow an on-board computer (OBC) to trade between processing capacity, fault coverage, and energy consumption using simple heuristics. The software-side FT approach developed also offers advantages if deployed aboard larger spacecraft through spare resource pooling, enabling an OBC to more efficiently handle permanent faults. This FT approach in part mimics a critical biological system's way of tolerating and adjusting to failures, enabling graceful ageing of an MPSoC.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
122,422
2308.02723
Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction
In deep learning research, many melody extraction models rely on redesigning neural network architectures to improve performance. In this paper, we propose an input feature modification and a training objective modification based on two assumptions. First, harmonics in the spectrograms of audio data decay rapidly along the frequency axis. To enhance the model's sensitivity on the trailing harmonics, we modify the Combined Frequency and Periodicity (CFP) representation using discrete z-transform. Second, the vocal and non-vocal segments with extremely short duration are uncommon. To ensure a more stable melody contour, we design a differentiable loss function that prevents the model from predicting such segments. We apply these modifications to several models, including MSNet, FTANet, and a newly introduced model, PianoNet, modified from a piano transcription network. Our experimental results demonstrate that the proposed modifications are empirically effective for singing melody extraction.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
383,743
2307.11261
SimCol3D -- 3D Reconstruction during Colonoscopy Challenge
Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. Establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world and representatives from academia and industry participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
380,843
2204.06213
Defensive Patches for Robust Recognition in the Physical World
To operate in real-world high-stakes environments, deep learning systems have to endure noises that have been continuously thwarting their robustness. Data-end defense, which improves robustness by operations on input data instead of modifying models, has attracted intensive attention due to its feasibility in practice. However, previous data-end defenses show low generalization against diverse noises and weak transferability across multiple models. Motivated by the fact that robust recognition depends on both local and global features, we propose a defensive patch generation framework to address these problems by helping models better exploit these features. For the generalization against diverse noises, we inject class-specific identifiable patterns into a confined local patch prior, so that defensive patches could preserve more recognizable features towards specific classes, leading models for better recognition under noises. For the transferability across multiple models, we guide the defensive patches to capture more global feature correlations within a class, so that they could activate model-shared global perceptions and transfer better among models. Our defensive patches show great potentials to improve application robustness in practice by simply sticking them around target objects. Extensive experiments show that we outperform others by large margins (improve 20+\% accuracy for both adversarial and corruption robustness on average in the digital and physical world). Our codes are available at https://github.com/nlsde-safety-team/DefensivePatch
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
291,264
2011.05440
Emergency Incident Detection from Crowdsourced Waze Data using Bayesian Information Fusion
The number of emergencies has increased over the years with the growth in urbanization. This pattern has overwhelmed the emergency services with limited resources and demands the optimization of response processes. It is partly due to traditional `reactive' approach of emergency services to collect data about incidents, where a source initiates a call to the emergency number (e.g., 911 in U.S.), delaying and limiting the potentially optimal response. Crowdsourcing platforms such as Waze provide an opportunity to develop a rapid, `proactive' approach to collect data about incidents through crowd-generated observational reports. However, the reliability of reporting sources and spatio-temporal uncertainty of the reported incidents challenge the design of such a proactive approach. Thus, this paper presents a novel method for emergency incident detection using noisy crowdsourced Waze data. We propose a principled computational framework based on Bayesian theory to model the uncertainty in the reliability of crowd-generated reports and their integration across space and time to detect incidents. Extensive experiments using data collected from Waze and the official reported incidents in Nashville, Tennessee in the U.S. show our method can outperform strong baselines for both F1-score and AUC. The application of this work provides an extensible framework to incorporate different noisy data sources for proactive incident detection to improve and optimize emergency response operations in our communities.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
205,919
2008.07788
CinC-GAN for Effective F0 prediction for Whisper-to-Normal Speech Conversion
Recently, Generative Adversarial Networks (GAN)-based methods have shown remarkable performance for the Voice Conversion and WHiSPer-to-normal SPeeCH (WHSP2SPCH) conversion. One of the key challenges in WHSP2SPCH conversion is the prediction of fundamental frequency (F0). Recently, authors have proposed state-of-the-art method Cycle-Consistent Generative Adversarial Networks (CycleGAN) for WHSP2SPCH conversion. The CycleGAN-based method uses two different models, one for Mel Cepstral Coefficients (MCC) mapping, and another for F0 prediction, where F0 is highly dependent on the pre-trained model of MCC mapping. This leads to additional non-linear noise in predicted F0. To suppress this noise, we propose Cycle-in-Cycle GAN (i.e., CinC-GAN). It is specially designed to increase the effectiveness in F0 prediction without losing the accuracy of MCC mapping. We evaluated the proposed method on a non-parallel setting and analyzed on speaker-specific, and gender-specific tasks. The objective and subjective tests show that CinC-GAN significantly outperforms the CycleGAN. In addition, we analyze the CycleGAN and CinC-GAN for unseen speakers and the results show the clear superiority of CinC-GAN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
192,224
2309.05072
Uncertainty-Aware Probabilistic Graph Neural Networks for Road-Level Traffic Accident Prediction
Traffic accidents present substantial challenges to human safety and socio-economic development in urban areas. Developing a reliable and responsible traffic accident prediction model is crucial to addressing growing public safety concerns and enhancing the safety of urban mobility systems. Traditional methods face limitations at fine spatiotemporal scales due to the sporadic nature of high-risk accidents and the predominance of non-accident characteristics. Furthermore, while most current models show promising occurrence prediction, they overlook the uncertainties arising from the inherent nature of accidents, and then fail to adequately map the hierarchical ranking of accident risk values for more precise insights. To address these issues, we introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Network STZITDGNN -- the first uncertainty-aware probabilistic graph deep learning model for multi-step, road-level traffic accident prediction. This model integrates the interpretability of the statistical Tweedie family model and the expressive power of graph neural networks. Its decoder innovatively employs a compound Tweedie model: a Poisson distribution to model the frequency of accident occurrences and a Gamma distribution to assess injury severity, supplemented by a zero-inflated component to effectively identify excessive non-incident instances. Empirical tests using real-world traffic data from London, UK, demonstrate that the STZITDGNN surpasses other baseline models across multiple benchmarks and metrics, including accident risk value prediction, uncertainty minimisation, non-accident road identification and accident occurrence accuracy. Our study demonstrates that STZITDGNN can effectively inform targeted road monitoring, thereby improving urban road safety strategies.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
390,947
2502.08904
MIH-TCCT: Mitigating Inconsistent Hallucinations in LLMs via Event-Driven Text-Code Cyclic Training
Recent methodologies utilizing synthetic datasets have aimed to address inconsistent hallucinations in large language models (LLMs); however, these approaches are primarily tailored to specific tasks, limiting their generalizability. Inspired by the strong performance of code-trained models in logic-intensive domains, we propose a novel framework that leverages event-based text to generate corresponding code and employs cyclic training to transfer the logical consistency of code to natural language effectively. Our method significantly reduces inconsistent hallucinations across three leading LLMs and two categories of natural language tasks while maintaining overall performance. This framework effectively alleviates hallucinations without necessitating adaptation to downstream tasks, demonstrating generality and providing new perspectives to tackle the challenge of inconsistent hallucinations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
533,214
1807.08023
signProx: One-Bit Proximal Algorithm for Nonconvex Stochastic Optimization
Stochastic gradient descent (SGD) is one of the most widely used optimization methods for parallel and distributed processing of large datasets. One of the key limitations of distributed SGD is the need to regularly communicate the gradients between different computation nodes. To reduce this communication bottleneck, recent work has considered a one-bit variant of SGD, where only the sign of each gradient element is used in optimization. In this paper, we extend this idea by proposing a stochastic variant of the proximal-gradient method that also uses one-bit per update element. We prove the theoretical convergence of the method for non-convex optimization under a set of explicit assumptions. Our results indicate that the compressed method can match the convergence rate of the uncompressed one, making the proposed method potentially appealing for distributed processing of large datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
103,435
2110.14207
How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI
Many real-world problems require the combined application of multiple reasoning abilities employing suitable abstractions, commonsense knowledge, and creative synthesis of problem-solving strategies. To help advance AI systems towards such capabilities, we propose a new reasoning challenge, namely Fermi Problems (FPs), which are questions whose answers can only be approximately estimated because their precise computation is either impractical or impossible. For example, "How much would the sea level rise if all ice in the world melted?" FPs are commonly used in quizzes and interviews to bring out and evaluate the creative reasoning abilities of humans. To do the same for AI systems, we present two datasets: 1) A collection of 1k real-world FPs sourced from quizzes and olympiads; and 2) a bank of 10k synthetic FPs of intermediate complexity to serve as a sandbox for the harder real-world challenge. In addition to question answer pairs, the datasets contain detailed solutions in the form of an executable program and supporting facts, helping in supervision and evaluation of intermediate steps. We demonstrate that even extensively fine-tuned large scale language models perform poorly on these datasets, on average making estimates that are off by two orders of magnitude. Our contribution is thus the crystallization of several unsolved AI problems into a single, new challenge that we hope will spur further advances in building systems that can reason.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
263,463
1205.4971
Data Gathering in Networks of Bacteria Colonies: Collective Sensing and Relaying Using Molecular Communication
The prospect of new biological and industrial applications that require communication in micro-scale encourages research on the design of bio-compatible communication networks using networking primitives already available in nature. One of the most promising candidates for constructing such networks is to adapt and engineer specific types of bacteria that are capable of sensing, actuation, and above all, communication with each other. In this paper, we describe a new architecture for networks of bacteria to form a data collecting network, as in traditional sensor networks. The key to this architecture is the fact that the node in the network itself is a bacterial colony; as an individual bacterium (biological agent) is a tiny unreliable element with limited capabilities. We describe such a network under two different scenarios. We study the data gathering (sensing and multihop communication) scenario as in sensor networks followed by the consensus problem in a multi-node network. We will explain how the bacteria in the colony collectively orchestrate their actions as a node to perform sensing and relaying tasks that would not be possible (at least reliably) by an individual bacterium. Each single bacterium in the colony forms a belief by sensing an external parameter (e.g., a molecular signal from another node) from the medium and shares its belief with other bacteria in the colony. Then, after some interactions, all the bacteria in the colony form a common belief and act as a single node. We will model the reception process of each individual bacterium and will study its impact on the overall functionality of a node. We will present results on the reliability of the multihop communication for the data gathering scenario as well as the speed of convergence in the consensus scenario.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
16,131
1811.06934
Image Pre-processing Using OpenCV Library on MORPH-II Face Database
This paper outlines the steps taken toward pre-processing the 55,134 images of the MORPH-II non-commercial dataset. Following the introduction, section two begins with an overview of each step in the pre-processing pipeline. Section three expands upon each stage of the process and includes details on all calculations made, by providing the OpenCV functionality paired with each step. The last portion of this paper discusses the potential improvements to this pre-processing pipeline that became apparent in retrospect.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
113,623
1704.00331
Restoration of Images with Wavefront Aberrations
This contribution deals with image restoration in optical systems with coherent illumination, which is an important topic in astronomy, coherent microscopy and radar imaging. Such optical systems suffer from wavefront distortions, which are caused by imperfect imaging components and conditions. While known image restoration algorithms work well for incoherent imaging, they fail in the case of coherent images. In this paper a novel wavefront correction algorithm is presented, which allows image restoration under coherent conditions. In most coherent imaging systems, especially in astronomy, the wavefront deformation is known. Using this information, the proposed algorithm allows a high quality restoration even in the case of severe wavefront distortions. We present two versions of this algorithm, which are an evolution of the Gerchberg-Saxton and the Hybrid-Input-Output algorithm. The algorithm is verified on simulated and real microscopic images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,065
2012.09430
Maximum Entropy competes with Maximum Likelihood
The maximum entropy (MAXENT) method has a large number of applications in theoretical and applied machine learning, since it provides a convenient non-parametric tool for estimating unknown probabilities. The method is a major contribution of statistical physics to probabilistic inference. However, a systematic approach towards its validity limits is currently missing. Here we study MAXENT in a Bayesian decision theory set-up, i.e. assuming that there exists a well-defined prior Dirichlet density for unknown probabilities, and that the average Kullback-Leibler (KL) distance can be employed for deciding on the quality and applicability of various estimators. These allow us to evaluate the relevance of various MAXENT constraints, check its general applicability, and compare MAXENT with estimators having various degrees of dependence on the prior, viz. the regularized maximum likelihood (ML) and the Bayesian estimators. We show that MAXENT applies in sparse data regimes, but needs specific types of prior information. In particular, MAXENT can outperform the optimally regularized ML provided that there are prior rank correlations between the estimated random quantity and its probabilities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,078
2305.18426
Employing Explainable Artificial Intelligence (XAI) Methodologies to Analyze the Correlation between Input Variables and Tensile Strength in Additively Manufactured Samples
This research paper explores the impact of various input parameters, including Infill percentage, Layer Height, Extrusion Temperature, and Print Speed, on the resulting Tensile Strength in objects produced through additive manufacturing. The main objective of this study is to enhance our understanding of the correlation between the input parameters and Tensile Strength, as well as to identify the key factors influencing the performance of the additive manufacturing process. To achieve this objective, we introduced the utilization of Explainable Artificial Intelligence (XAI) techniques for the first time, which allowed us to analyze the data and gain valuable insights into the system's behavior. Specifically, we employed SHAP (SHapley Additive exPlanations), a widely adopted framework for interpreting machine learning model predictions, to provide explanations for the behavior of a machine learning model trained on the data. Our findings reveal that the Infill percentage and Extrusion Temperature have the most significant influence on Tensile Strength, while the impact of Layer Height and Print Speed is relatively minor. Furthermore, we discovered that the relationship between the input parameters and Tensile Strength is highly intricate and nonlinear, making it difficult to accurately describe using simple linear models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
369,043
2004.02931
Feedforward control for wave disturbance rejection on floating offshore wind turbines
Floating offshore wind turbines allow wind energy to be harvested in deep waters. However, additional dynamics and structural loads may result when the floating platform is being excited by wind and waves. In this work, the conventional wind turbine controller is complemented with a novel linear feedforward controller based on wave measurements. The objective of the feedforward controller is to attenuate rotor speed variations caused by wave forcing. To design this controller, a linear model is developed that describes the system response to incident waves. The performance of the feedback-feedforward controller is assessed by a high-fidelity numerical tool using the DTU 10MW turbine and the INNWIND.EU TripleSpar platform as references. Simulations in the presence of irregular waves and turbulent wind show that the feedforward controller effectively compensates the wave-induced rotor oscillations. The novel controller is able to reduce the rotor speed variance by 26%. As a result, the remaining rotor speed variance is only 4% higher compared to operation in still water.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
171,380
2405.17474
Federated Offline Policy Optimization with Dual Regularization
Federated Reinforcement Learning (FRL) has been deemed as a promising solution for intelligent decision-making in the era of Artificial Internet of Things. However, existing FRL approaches often entail repeated interactions with the environment during local updating, which can be prohibitively expensive or even infeasible in many real-world domains. To overcome this challenge, this paper proposes a novel offline federated policy optimization algorithm, named $\texttt{DRPO}$, which enables distributed agents to collaboratively learn a decision policy only from private and static data without further environmental interactions. $\texttt{DRPO}$ leverages dual regularization, incorporating both the local behavioral policy and the global aggregated policy, to judiciously cope with the intrinsic two-tier distributional shifts in offline FRL. Theoretical analysis characterizes the impact of the dual regularization on performance, demonstrating that by achieving the right balance thereof, $\texttt{DRPO}$ can effectively counteract distributional shifts and ensure strict policy improvement in each federative learning round. Extensive experiments validate the significant performance gains of $\texttt{DRPO}$ over baseline methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
457,951
2402.16462
Enabling Communication and Control Co-Design in 6G Networks
Networked control systems (NCSs), which are feedback control loops closed over a communication network, have been a popular research topic over the past decades. Numerous works in the literature propose novel algorithms and protocols with joint consideration of communication and control. However, the vast majority of the recent research results, which have shown remarkable performance improvements if a cross-layer methodology is followed, have not been widely adopted by the industry. In this work, we review the shortcomings of today's mobile networks that render cross-layer solutions, such as semantic and goal-oriented communications, very challenging in practice. To tackle this, we propose a new framework for 6G user plane design that simplifies the adoption of recent research results in networked control, thereby facilitating the joint communication and control design in next-generation mobile networks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
432,582
2304.12747
Deep learning based Auto Tuning for Database Management System
The management of database system configurations is a challenging task, as there are hundreds of configuration knobs that control every aspect of the system. This is complicated by the fact that these knobs are not standardized, independent, or universal, making it difficult to determine optimal settings. An automated approach to address this problem, using supervised and unsupervised machine learning methods to select impactful knobs, map unseen workloads, and recommend knob settings, was implemented in a new tool called OtterTune and evaluated on three DBMSs, with results demonstrating that it recommends configurations as good as or better than those generated by existing tools or a human expert. In this work, we extend an automated technique based on OtterTune [1] to reuse training data gathered from previous sessions to tune new DBMS deployments, with the help of supervised and unsupervised machine learning methods, to improve latency prediction. Our approach involves the expansion of the methods proposed in the original paper. We use GMM clustering to prune metrics and combine ensemble models, such as RandomForest, with non-linear models, like neural networks, for prediction modeling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
360,337
2307.06483
Misclassification in Automated Content Analysis Causes Bias in Regression. Can We Fix It? Yes We Can!
Automated classifiers (ACs), often built via supervised machine learning (SML), can categorize large, statistically powerful samples of data ranging from text to images and video, and have become widely popular measurement devices in communication science and related fields. Despite this popularity, even highly accurate classifiers make errors that cause misclassification bias and misleading results in downstream analyses-unless such analyses account for these errors. As we show in a systematic literature review of SML applications, communication scholars largely ignore misclassification bias. In principle, existing statistical methods can use "gold standard" validation data, such as that created by human annotators, to correct misclassification bias and produce consistent estimates. We introduce and test such methods, including a new method we design and implement in the R package misclassificationmodels, via Monte Carlo simulations designed to reveal each method's limitations, which we also release. Based on our results, we recommend our new error correction method as it is versatile and efficient. In sum, automated classifiers, even those below common accuracy standards or making systematic misclassifications, can be useful for measurement with careful study design and appropriate error correction methods.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
379,079
1911.08444
MANGA: Method Agnostic Neural-policy Generalization and Adaptation
In this paper we target the problem of transferring policies across multiple environments with different dynamics parameters and motor noise variations, by introducing a framework that decouples the processes of policy learning and system identification. Efficiently transferring learned policies to an unknown environment with changes in dynamics configurations in the presence of motor noise is very important for operating robots in the real world, and our work is a novel attempt in that direction. We introduce MANGA: Method Agnostic Neural-policy Generalization and Adaptation, that trains dynamics conditioned policies and efficiently learns to estimate the dynamics parameters of the environment given off-policy state-transition rollouts in the environment. Our scheme is agnostic to the type of training method used - both reinforcement learning (RL) and imitation learning (IL) strategies can be used. We demonstrate the effectiveness of our approach by experimenting with four different MuJoCo agents and comparing against previously proposed transfer baselines.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
154,194
2209.13727
Deep Learning Based Detection of Enlarged Perivascular Spaces on Brain MRI
BACKGROUND AND PURPOSE: Deep learning has been demonstrated effective in many neuroimaging applications. However, in many scenarios, the number of imaging sequences capturing information related to small vessel disease lesions is insufficient to support data-driven techniques. Additionally, cohort-based studies may not always have the optimal or essential imaging sequences for accurate lesion detection. Therefore, it is necessary to determine which imaging sequences are crucial for precise detection. This study introduces a novel deep learning framework to detect enlarged perivascular spaces (ePVS) and aims to find the optimal combination of MRI sequences for deep learning-based quantification. MATERIALS AND METHODS: We implemented an effective lightweight U-Net adapted for ePVS detection and comprehensively investigated different combinations of information from SWI, FLAIR, T1-weighted (T1w), and T2-weighted (T2w) MRI sequences. The training data included 21 participants, which were randomly selected from the MESA cohort. Participants had, on average, 683 ePVS lesions. For T1w, T2w, and FLAIR images, the MESA study collected 3D isotropic MRI scans at six different sites with Siemens scanners. Our training data included participants from all these sites and all the scanner models, and the proposed model was applied to the whole brain instead of selective regions. RESULTS: The experimental results showed that T2w MRI is the most important for accurate ePVS detection, and the incorporation of SWI, FLAIR and T1w MRI in the deep neural network had minor improvements in accuracy and resulted in the highest sensitivity and precision (sensitivity = 0.82, precision = 0.83). The proposed method achieved comparable accuracy at a minimal time cost compared to manual reading.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
320,001
2009.09893
Decontextualized learning for interpretable hierarchical representations of visual patterns
Apart from discriminative models for classification and object detection tasks, the application of deep convolutional neural networks to basic research utilizing natural imaging data has been somewhat limited; particularly in cases where a set of interpretable features for downstream analysis is needed, a key requirement for many scientific investigations. We present an algorithm and training paradigm designed specifically to address this: decontextualized hierarchical representation learning (DHRL). By combining a generative model chaining procedure with a ladder network architecture and latent space regularization for inference, DHRL addresses the limitations of small datasets and encourages a disentangled set of hierarchically organized features. In addition to providing a tractable path for analyzing complex hierarchical patterns using variational inference, this approach is generative and can be directly combined with empirical and theoretical approaches. To highlight the extensibility and usefulness of DHRL, we demonstrate this method in application to a question from evolutionary biology.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
196,723
2311.13061
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
Language models are often used as the backbone of modern dialogue systems. These models are pre-trained on large amounts of written fluent language. Repetition is typically penalised when evaluating language model generations. However, it is a key component of dialogue. Humans use local and partner specific repetitions; these are preferred by human users and lead to more successful communication in dialogue. In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) what are the processing mechanisms related to lexical re-use they use during comprehension. We believe that such joint analysis of model production and comprehension behaviour can inform the development of cognitively inspired dialogue generation systems.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
409,607
2105.07797
Deep regression for uncertainty-aware and interpretable analysis of large-scale body MRI
Large-scale medical studies such as the UK Biobank examine thousands of volunteer participants with medical imaging techniques. Combined with the vast amount of collected metadata, anatomical information from these images has the potential for medical analyses at unprecedented scale. However, their evaluation often requires manual input and long processing times, limiting the amount of reference values for biomarkers and other measurements available for research. Recent approaches with convolutional neural networks for regression can perform these evaluations automatically. On magnetic resonance imaging (MRI) data of more than 40,000 UK Biobank subjects, these systems can estimate human age, body composition and more. This style of analysis is almost entirely data-driven and no manual intervention or guidance with manually segmented ground truth images is required. The networks often closely emulate the reference method that provided their training data and can reach levels of agreement comparable to the expected variability between established medical gold standard techniques. The risk of silent failure can be individually quantified by predictive uncertainty obtained from a mean-variance criterion and ensembling. Saliency analysis furthermore enables an interpretation of the underlying relevant image features and showed that the networks learned to correctly target specific organs, limbs, and regions of interest.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
235,560
2104.11590
A Prioritized Trajectory Planning Algorithm for Connected and Automated Vehicle Mandatory Lane Changes
We introduce a prioritized system-optimal algorithm for mandatory lane change (MLC) behavior of connected and automated vehicles (CAV) from a dedicated lane. Our approach applies a cooperative lane change that prioritizes the decisions of lane changing vehicles which are closer to the end of the diverging zone (DZ), and optimizes the predicted total system travel time. Our experiments on synthetic data show that the proposed algorithm improves the traffic network efficiency by attaining higher speeds in the dedicated lane and earlier MLC positions while ensuring a low computational time. Our approach outperforms the traditional gap acceptance model.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
231,960
2310.20389
High-Resolution Reference Image Assisted Volumetric Super-Resolution of Cardiac Diffusion Weighted Imaging
Diffusion Tensor Cardiac Magnetic Resonance (DT-CMR) is the only in vivo method to non-invasively examine the microstructure of the human heart. Current research in DT-CMR aims to improve the understanding of how the cardiac microstructure relates to the macroscopic function of the healthy heart as well as how microstructural dysfunction contributes to disease. To get the final DT-CMR metrics, we need to acquire diffusion weighted images of at least 6 directions. However, due to DWI's low signal-to-noise ratio, the standard voxel size is quite large relative to the scale of microstructures. In this study, we explored the potential of deep-learning-based methods in improving the image quality volumetrically (x4 in all dimensions). This study proposed a novel framework to enable volumetric super-resolution, with an additional model input of high-resolution b0 DWI. We demonstrated that the additional input could offer higher super-resolved image quality. Going beyond, the model is also able to super-resolve DWIs of unseen b-values, proving the model framework's generalizability for cardiac DWI super-resolution. In conclusion, we would then recommend giving the model a high-resolution reference image as an additional input to the low-resolution image for training and inference to guide all super-resolution frameworks for parametric imaging where a reference image is available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
404,373
2401.05509
Optimized Ensemble Model Towards Secured Industrial IoT Devices
The continued growth in the deployment of Internet-of-Things (IoT) devices has been fueled by the increased connectivity demand, particularly in industrial environments. However, this has led to an increase in the number of network related attacks due to the increased number of potential attack surfaces. Industrial IoT (IIoT) devices are prone to various network related attacks that can have severe consequences on the manufacturing process as well as on the safety of the workers in the manufacturing plant. One promising solution that has emerged in recent years for attack detection is Machine learning (ML). More specifically, ensemble learning models have shown great promise in improving the performance of the underlying ML models. Accordingly, this paper proposes a framework based on the combined use of Bayesian Optimization-Gaussian Process (BO-GP) with an ensemble tree-based learning model to improve the performance of intrusion and attack detection in IIoT environments. The proposed framework's performance is evaluated using the Windows 10 dataset collected by the Cyber Range and IoT labs at University of New South Wales. Experimental results illustrate the improvement in detection accuracy, precision, and F-score when compared to standard tree and ensemble tree models.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
420,814
1306.4066
MYE: Missing Year Estimation in Academic Social Networks
In bibliometrics studies, a common challenge is how to deal with incorrect or incomplete data. However, given a large volume of data, there often exists certain relationships between the data items that can allow us to recover missing data items and correct erroneous data. In this paper, we study a particular problem of this sort - estimating the missing year information associated with publications (and hence authors' years of active publication). We first propose a simple algorithm that only makes use of the "direct" information, such as paper citation/reference relationships or paper-author relationships. The result of this simple algorithm is used as a benchmark for comparison. Our goal is to develop algorithms that increase both the coverage (the percentage of missing year papers recovered) and accuracy (mean absolute error of the estimated year to the real year). We propose some advanced algorithms that extend inference by information propagation. For each algorithm, we propose three versions according to the given academic social network type: a) Homogeneous (only contains paper citation links), b) Bipartite (only contains paper-author relations), and, c) Heterogeneous (both paper citation and paper-author relations). We carry out experiments on the three public data sets (MSR Libra, DBLP and APS), and evaluated by applying the K-fold cross validation method. We show that the advanced algorithms can improve both coverage and accuracy.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
25,281
2212.02042
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Recent works have brought attention to the vulnerability of Federated Learning (FL) systems to gradient leakage attacks. Such attacks exploit clients' uploaded gradients to reconstruct their sensitive data, thereby compromising the privacy protection capability of FL. In response, various defense mechanisms have been proposed to mitigate this threat by manipulating the uploaded gradients. Unfortunately, empirical evaluations have demonstrated limited resilience of these defenses against sophisticated attacks, indicating an urgent need for more effective defenses. In this paper, we explore a novel defensive paradigm that departs from conventional gradient perturbation approaches and instead focuses on the construction of robust data. Intuitively, if robust data exhibits low semantic similarity with clients' raw data, the gradients associated with robust data can effectively obfuscate attackers. To this end, we design Refiner that jointly optimizes two metrics for privacy protection and performance maintenance. The utility metric is designed to promote consistency between the gradients of key parameters associated with robust data and those derived from clients' data, thus maintaining model performance. Furthermore, the privacy metric guides the generation of robust data towards enlarging the semantic gap with clients' data. Theoretical analysis supports the effectiveness of Refiner, and empirical evaluations on multiple benchmark datasets demonstrate the superior defense effectiveness of Refiner at defending against state-of-the-art attacks.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
334,662
1404.5997
One weird trick for parallelizing convolutional neural networks
I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
32,547
1502.01733
Arrhythmia Detection using Mutual Information-Based Integration Method
The aim of this paper is to propose an application of mutual information-based ensemble methods to the analysis and classification of heart beats associated with different types of Arrhythmia. Models of multilayer perceptrons, support vector machines, and radial basis function neural networks were trained and tested using the MIT-BIH arrhythmia database. This research brings a focus to an ensemble method that, to our knowledge, is a novel application in the area of ECG Arrhythmia detection. The proposed classifier ensemble method showed improved performance, relative to either majority voting classifier integration or to individual classifier performance. The overall ensemble accuracy was 98.25%.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
39,961
2406.10471
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts
Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences. While parameter-efficient fine-tuning (PEFT) methods excel in performance and generalization, they are costly and limit communal benefits when used individually. To this end, we introduce Personalized Pieces (Per-Pcs), a framework that allows users to safely share and assemble personalized PEFT efficiently with collaborative efforts. Per-Pcs involves selecting sharers, breaking their PEFT into pieces, and training gates for each piece. These pieces are added to a pool, from which target users can select and assemble personalized PEFT using their history data. This approach preserves privacy and enables fine-grained user modeling without excessive storage and computation demands. Experimental results show Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use across six tasks. Further analysis highlights Per-Pcs's robustness concerning sharer count and selection strategy, pieces sharing ratio, and scalability in computation time and storage space. Per-Pcs's modularity promotes safe sharing, making LLM personalization more efficient, effective, and widely accessible through collaborative efforts.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
464,416
1404.4923
Unified Structured Learning for Simultaneous Human Pose Estimation and Garment Attribute Classification
In this paper, we utilize structured learning to simultaneously address two intertwined problems: human pose estimation (HPE) and garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce a jointly optimal estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of "candidates") that allows us to have a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted to a structured learning problem, where the inputs are the collections of candidate ensembles, the outputs are the joint labels of human parts and garment attributes, and the joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the "strong edge" evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidence can be seamlessly integrated into our structured learning model as a kind of energy function, and the learning process can be performed by the standard structured Support Vector Machine (SVM) algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we instead compute approximate optima by using an iterative procedure, where in each iteration the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark datasets show the state-of-the-art performance of our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
32,447
2412.05488
Enhancing Sample Generation of Diffusion Models using Noise Level Correction
The denoising process of diffusion models can be interpreted as an approximate projection of noisy samples onto the data manifold. Moreover, the noise level in these samples approximates their distance to the underlying manifold. Building on this insight, we propose a novel method to enhance sample generation by aligning the estimated noise level with the true distance of noisy samples to the manifold. Specifically, we introduce a noise level correction network, leveraging a pre-trained denoising network, to refine noise level estimates during the denoising process. Additionally, we extend this approach to various image restoration tasks by integrating task-specific constraints, including inpainting, deblurring, super-resolution, colorization, and compressed sensing. Experimental results demonstrate that our method significantly improves sample quality in both unconstrained and constrained generation scenarios. Notably, the proposed noise level correction framework is compatible with existing denoising schedulers (e.g., DDIM), offering additional performance improvements.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
514,840
1809.05817
T* : A Heuristic Search Based Algorithm for Motion Planning with Temporal Goals
Motion planning is the core problem to solve for developing any application involving an autonomous mobile robot. The fundamental motion planning problem involves generating a trajectory for a robot for point-to-point navigation while avoiding obstacles. Heuristic-based search algorithms like A* have been shown to be extremely efficient in solving such planning problems. Recently, there has been an increased interest in specifying complex motion plans using temporal logic. In the state-of-the-art algorithm, the temporal logic motion planning problem is reduced to a graph search problem and Dijkstra's shortest path algorithm is used to compute the optimal trajectory satisfying the specification. The A* algorithm, when used with a proper heuristic for the distance from the destination, can generate an optimal path in a graph efficiently. The primary challenge for using the A* algorithm in temporal logic path planning is that there is no notion of a single destination state for the robot. In this thesis, we present a novel motion planning algorithm T* that uses the A* search procedure in temporal logic path planning \emph{opportunistically} to generate an optimal trajectory satisfying a temporal logic query. Our experimental results demonstrate that T* achieves an order of magnitude improvement over the state-of-the-art algorithm in solving many temporal logic motion planning problems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
107,878
2304.00185
PrefGen: Preference Guided Image Generation with Relative Attributes
Deep generative models have the capacity to render high fidelity images of content like human faces. Recently, there has been substantial progress in conditionally generating images with specific quantitative attributes, like the emotion conveyed by one's face. These methods typically require a user to explicitly quantify the desired intensity of a visual attribute. A limitation of this method is that many attributes, like how "angry" a human face looks, are difficult for a user to precisely quantify. However, a user would be able to reliably say which of two faces seems "angrier". Following this premise, we develop the $\textit{PrefGen}$ system, which allows users to control the relative attributes of generated images by presenting them with simple paired comparison queries of the form "do you prefer image $a$ or image $b$?" Using information from a sequence of query responses, we can estimate user preferences over a set of image attributes and perform preference-guided image editing and generation. Furthermore, to make preference localization feasible and efficient, we apply an active query selection strategy. We demonstrate the success of this approach using a StyleGAN2 generator on the task of human face editing. Additionally, we demonstrate how our approach can be combined with CLIP, allowing a user to edit the relative intensity of attributes specified by text prompts. Code at https://github.com/helblazer811/PrefGen.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
355,584