Dataset schema (one record per entry below):

- id: string (9 to 16 characters)
- title: string (4 to 278 characters)
- abstract: string (3 to 4.08k characters)
- category flags, each bool (2 classes): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)
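The 18 boolean columns encode a multi-label category assignment: a record's labels are exactly the columns whose flag is true. A minimal Python sketch of recovering the label list from one record (the helper name `labels_for` and the dict-shaped record are illustrative, not tied to any particular loading library; the example values come from the first record below):

```python
# The 18 boolean category columns from the schema, in order.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def labels_for(record: dict) -> list[str]:
    """Return the category names whose boolean flag is set in a record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Example record shaped like the rows below (only the true flags shown).
record = {
    "id": "2311.17471",
    "cs.AI": True,
    "Other": True,
    # all remaining category flags absent or False
}
print(labels_for(record))  # -> ['cs.AI', 'Other']
```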
id: 2311.17471
title: Distributed AI in Zero-touch Provisioning for Edge Networks: Challenges and Research Directions
abstract: Zero-touch network is anticipated to inaugurate the generation of intelligent and highly flexible resource provisioning strategies where multiple service providers collaboratively offer computation and storage resources. This transformation presents substantial challenges to network administration and service providers regarding sustainability and scalability. This article combines Distributed Artificial Intelligence (DAI) with Zero-touch Provisioning (ZTP) for edge networks. This combination helps to manage network devices seamlessly and intelligently by minimizing human intervention. In addition, several advantages are also highlighted that come with incorporating Distributed AI into ZTP in the context of edge networks. Further, we draw potential research directions to foster novel studies in this field and overcome the current limitations.
labels: cs.AI, Other
__index_level_0__: 411,312

id: 1801.09360
title: Comparative Study of ECO and CFNet Trackers in Noisy Environment
abstract: Object tracking is one of the most challenging task and has secured significant attention of computer vision researchers in the past two decades. Recent deep learning based trackers have shown good performance on various tracking challenges. A tracking method should track objects in sequential frames accurately in challenges such as deformation, low resolution, occlusion, scale and light variations. Most trackers achieve good performance on specific challenges instead of all tracking problems, hence there is a lack of general purpose tracking algorithms that can perform well in all conditions. Moreover, performance of tracking techniques has not been evaluated in noisy environments. Visual object tracking has real world applications and there is good chance that noise may get added during image acquisition in surveillance cameras. We aim to study the robustness of two state of the art trackers in the presence of noise including Efficient Convolutional Operators (ECO) and Correlation Filter Network (CFNet). Our study demonstrates that the performance of these trackers degrades as the noise level increases, which demonstrate the need to design more robust tracking algorithms.
labels: cs.CV
__index_level_0__: 89,095

id: 2304.04915
title: AffectMachine-Classical: A novel system for generating affective classical music
abstract: This work introduces a new music generation system, called AffectMachine-Classical, that is capable of generating affective Classic music in real-time. AffectMachine was designed to be incorporated into biofeedback systems (such as brain-computer-interfaces) to help users become aware of, and ultimately mediate, their own dynamic affective states. That is, this system was developed for music-based MedTech to support real-time emotion self-regulation in users. We provide an overview of the rule-based, probabilistic system architecture, describing the main aspects of the system and how they are novel. We then present the results of a listener study that was conducted to validate the ability of the system to reliably convey target emotions to listeners. The findings indicate that AffectMachine-Classical is very effective in communicating various levels of Arousal ($R^2 = .96$) to listeners, and is also quite convincing in terms of Valence (R^2 = .90). Future work will embed AffectMachine-Classical into biofeedback systems, to leverage the efficacy of the affective music for emotional well-being in listeners.
labels: cs.HC, cs.SD, cs.AI, Other
__index_level_0__: 357,408

id: 2105.10422
title: LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond
abstract: Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version. Last few years have witnessed impressive progress propelled by deep learning methods. However, one critical challenge faced by existing methods is to strike a sweet spot of deep model complexity and resulting SISR quality. This paper addresses this pain point by proposing a linearly-assembled pixel-adaptive regression network (LAPAR), which casts the direct LR to HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases. Such a parametric representation renders our model highly lightweight and easy to optimize while achieving state-of-the-art results on SISR benchmarks. Moreover, based on the same idea, LAPAR is extended to tackle other restoration tasks, e.g., image denoising and JPEG image deblocking, and again, yields strong performance. The code is available at https://github.com/dvlab-research/Simple-SR.
labels: cs.CV
__index_level_0__: 236,393

id: 2003.01355
title: CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language Model
abstract: In this paper, we introduce the Chinese corpus from CLUE organization, CLUECorpus2020, a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl. To better understand this corpus, we conduct language understanding experiments on both small and large scale, and results show that the models trained on this corpus can achieve excellent performance on Chinese. We release a new Chinese vocabulary with a size of 8K, which is only one-third of the vocabulary size used in Chinese Bert released by Google. It saves computational cost and memory while works as good as original vocabulary. We also release both large and tiny versions of the pre-trained model on this corpus. The former achieves the state-of-the-art result, and the latter retains most precision while accelerating training and prediction speed for eight times compared to Bert-base. To facilitate future work on self-supervised learning on Chinese, we release our dataset, new vocabulary, codes, and pre-trained models on Github.
labels: cs.CL
__index_level_0__: 166,627

id: 1210.4008
title: Location-Based Events Detection on Micro-Blogs
abstract: The increasing use of social networks generates enormous amounts of data that can be used for many types of analysis. Some of these data have temporal and geographical information, which can be used for comprehensive examination. In this paper, we propose a new method to analyze the massive volume of messages available in Twitter to identify places in the world where topics such as TV shows, climate change, disasters, and sports are emerging. The proposed method is based on a neural network that is used to detect outliers from a time series, which is built upon statistical data from tweets located on different political divisions (i.e., countries, cities). The outliers are used to identify topics within an abnormal behavior in Twitter. The effectiveness of our method is evaluated in an online environment indicating new findings on modeling local people's behavior from different places.
labels: cs.SI, cs.IR
__index_level_0__: 19,116

id: 2501.07101
title: Dual Scale-aware Adaptive Masked Knowledge Distillation for Object Detection
abstract: Recent feature masking knowledge distillation methods make use of attention mechanisms to identify either important spatial regions or channel clues for discriminative feature reconstruction. However, most of existing strategies perform global attention-guided feature masking distillation without delving into fine-grained visual clues in feature maps. In particular, uncovering locality-aware clues across different scales are conducive to reconstructing region-aware features, thereby significantly benefiting distillation performance. In this study, we propose a fine-grained adaptive feature masking distillation framework for accurate object detection. Different from previous methods in which global masking is performed on single-scale feature maps, we explore the scale-aware feature masking by performing feature distillation across various scales, such that the object-aware locality is encoded for improved feature reconstruction. In addition, our fine-grained feature distillation strategy is combined with a masking logits distillation scheme in which logits difference between teacher and student networks is utilized to guide the distillation process. Thus, it can help the student model to better learn from the teacher counterpart with improved knowledge transfer. Extensive experiments for detection task demonstrate the superiority of our method. For example, when RetinaNet, RepPoints and Cascade Mask RCNN are used as teacher detectors, the student network achieves mAP scores of 41.5\%, 42.9\%, and 42.6\%, respectively, outperforming state-of-the-art methods such as DMKD and FreeKD.
labels: cs.CV
__index_level_0__: 524,264

id: 2501.08326
title: Omni-RGPT: Unifying Image and Video Region-level Understanding via Token Marks
abstract: We present Omni-RGPT, a multimodal large language model designed to facilitate region-level comprehension for both images and videos. To achieve consistent region representation across spatio-temporal dimensions, we introduce Token Mark, a set of tokens highlighting the target regions within the visual feature space. These tokens are directly embedded into spatial regions using region prompts (e.g., boxes or masks) and simultaneously incorporated into the text prompt to specify the target, establishing a direct connection between visual and text tokens. To further support robust video understanding without requiring tracklets, we introduce an auxiliary task that guides Token Mark by leveraging the consistency of the tokens, enabling stable region interpretation across the video. Additionally, we introduce a large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT achieves state-of-the-art results on image and video-based commonsense reasoning benchmarks while showing strong performance in captioning and referring expression comprehension tasks.
labels: cs.CV
__index_level_0__: 524,721

id: 2309.17143
title: Revisiting Cephalometric Landmark Detection from the view of Human Pose Estimation with Lightweight Super-Resolution Head
abstract: Accurate localization of cephalometric landmarks holds great importance in the fields of orthodontics and orthognathics due to its potential for automating key point labeling. In the context of landmark detection, particularly in cephalometrics, it has been observed that existing methods often lack standardized pipelines and well-designed bias reduction processes, which significantly impact their performance. In this paper, we revisit a related task, human pose estimation (HPE), which shares numerous similarities with cephalometric landmark detection (CLD), and emphasize the potential for transferring techniques from the former field to benefit the latter. Motivated by this insight, we have developed a robust and adaptable benchmark based on the well-established HPE codebase known as MMPose. This benchmark can serve as a dependable baseline for achieving exceptional CLD performance. Furthermore, we introduce an upscaling design within the framework to further enhance performance. This enhancement involves the incorporation of a lightweight and efficient super-resolution module, which generates heatmap predictions on high-resolution features and leads to further performance refinement, benefiting from its ability to reduce quantization bias. In the MICCAI CLDetection2023 challenge, our method achieves 1st place ranking on three metrics and 3rd place on the remaining one. The code for our method is available at https://github.com/5k5000/CLdetection2023.
labels: cs.AI, cs.CV
__index_level_0__: 395,646

id: 2012.00953
title: Ship Detection: Parameter Server Variant
abstract: Deep learning ship detection in satellite optical imagery suffers from false positive occurrences with clouds, landmasses, and man-made objects that interfere with correct classification of ships, typically limiting class accuracy scores to 88\%. This work explores the tensions between customization strategies, class accuracy rates, training times, and costs in cloud based solutions. We demonstrate how a custom U-Net can achieve 92\% class accuracy over a validation dataset and 68\% over a target dataset with 90\% confidence. We also compare a single node architecture with a parameter server variant whose workers act as a boosting mechanism. The parameter server variant outperforms class accuracy on the target dataset reaching 73\% class accuracy compared to the best single node approach. A comparative investigation on the systematic performance of the single node and parameter server variant architectures is discussed with support from empirical findings.
labels: cs.LG, cs.CV, Other
__index_level_0__: 209,286

id: 2402.00971
title: FuseFormer: A Transformer for Visual and Thermal Image Fusion
abstract: Due to the lack of a definitive ground truth for the image fusion problem, the loss functions are structured based on evaluation metrics, such as the structural similarity index measure (SSIM). However, in doing so, a bias is introduced toward the SSIM and, consequently, the input visual band image. The objective of this study is to propose a novel methodology for the image fusion problem that mitigates the limitations associated with using classical evaluation metrics as loss functions. Our approach integrates a transformer-based multi-scale fusion strategy that adeptly addresses local and global context information. This integration not only refines the individual components of the image fusion process but also significantly enhances the overall efficacy of the method. Our proposed method follows a two-stage training approach, where an auto-encoder is initially trained to extract deep features at multiple scales in the first stage. For the second stage, we integrate our fusion block and change the loss function as mentioned. The multi-scale features are fused using a combination of Convolutional Neural Networks (CNNs) and Transformers. The CNNs are utilized to capture local features, while the Transformer handles the integration of general context features. Through extensive experiments on various benchmark datasets, our proposed method, along with the novel loss function definition, demonstrates superior performance compared to other competitive fusion algorithms.
labels: cs.CV
__index_level_0__: 425,797

id: 2211.06550
title: TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data
abstract: Personal data collected at scale promises to improve decision-making and accelerate innovation. However, sharing and using such data raises serious privacy concerns. A promising solution is to produce synthetic data, artificial records to share instead of real data. Since synthetic records are not linked to real persons, this intuitively prevents classical re-identification attacks. However, this is insufficient to protect privacy. We here present TAPAS, a toolbox of attacks to evaluate synthetic data privacy under a wide range of scenarios. These attacks include generalizations of prior works and novel attacks. We also introduce a general framework for reasoning about privacy threats to synthetic data and showcase TAPAS on several examples.
labels: cs.AI, cs.LG, cs.CR
__index_level_0__: 329,928

id: 2204.10646
title: Measuring the Salad Bowl: Superdiversity on Twitter
abstract: Superdiversity refers to large cultural diversity in a population due to immigration. In this paper, we introduce a superdiversity index based on the changes in the emotional content of words used by a multi-cultural community, compared to the standard language. To compute our index we use Twitter data and we develop an algorithm to extend a dictionary for lexicon-based sentiment analysis. We validate our index by comparing it with official immigration statistics available from the European Commission's Joint Research Center, through the D4I data challenge. We show that, in general, our measure correlates with immigration rates, at various geographical resolutions. Our method produces very good results across languages, being tested here both on English and Italian tweets. We argue that our index has predictive power in regions where exact data on immigration is not available, paving the way for a nowcasting model of immigration rates.
labels: cs.SI
__index_level_0__: 292,865

id: 2207.06617
title: Perception-Oriented Stereo Image Super-Resolution
abstract: Recent studies of deep learning based stereo image super-resolution (StereoSR) have promoted the development of StereoSR. However, existing StereoSR models mainly concentrate on improving quantitative evaluation metrics and neglect the visual quality of super-resolved stereo images. To improve the perceptual performance, this paper proposes the first perception-oriented stereo image super-resolution approach by exploiting the feedback, provided by the evaluation on the perceptual quality of StereoSR results. To provide accurate guidance for the StereoSR model, we develop the first special stereo image super-resolution quality assessment (StereoSRQA) model, and further construct a StereoSRQA database. Extensive experiments demonstrate that our StereoSR approach significantly improves the perceptual quality and enhances the reliability of stereo images for disparity estimation.
labels: cs.CV
__index_level_0__: 307,935

id: 2003.07780
title: TraLFM: Latent Factor Modeling of Traffic Trajectory Data
abstract: The widespread use of positioning devices (e.g., GPS) has given rise to a vast body of human movement data, often in the form of trajectories. Understanding human mobility patterns could benefit many location-based applications. In this paper, we propose a novel generative model called TraLFM via latent factor modeling to mine human mobility patterns underlying traffic trajectories. TraLFM is based on three key observations: (1) human mobility patterns are reflected by the sequences of locations in the trajectories; (2) human mobility patterns vary with people; and (3) human mobility patterns tend to be cyclical and change over time. Thus, TraLFM models the joint action of sequential, personal and temporal factors in a unified way, and brings a new perspective to many applications such as latent factor analysis and next location prediction. We perform thorough empirical studies on two real datasets, and the experimental results confirm that TraLFM outperforms the state-of-the-art methods significantly in these applications.
labels: cs.SI, cs.LG
__index_level_0__: 168,539

id: 2001.09700
title: DP-CGAN: Differentially Private Synthetic Data and Label Generation
abstract: Generative Adversarial Networks (GANs) are one of the well-known models to generate synthetic data including images, especially for research communities that cannot use original sensitive datasets because they are not publicly accessible. One of the main challenges in this area is to preserve the privacy of individuals who participate in the training of the GAN models. To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model while preserving privacy of the training dataset. DP-CGAN generates both synthetic data and corresponding labels and leverages the recently introduced Renyi differential privacy accountant to track the spent privacy budget. The experimental results show that DP-CGAN can generate visually and empirically promising results on the MNIST dataset with a single-digit epsilon parameter in differential privacy.
labels: cs.LG
__index_level_0__: 161,648

id: 1003.1827
title: Investigation and Assessment of Disorder of Ultrasound B-mode Images
abstract: Digital image plays a vital role in the early detection of cancers, such as prostate cancer, breast cancer, lungs cancer, cervical cancer. Ultrasound imaging method is also suitable for early detection of the abnormality of fetus. The accurate detection of region of interest in ultrasound image is crucial. Since the result of reflection, refraction and deflection of ultrasound waves from different types of tissues with different acoustic impedance. Usually, the contrast in ultrasound image is very low and weak edges make the image difficult to identify the fetus region in the ultrasound image. So the analysis of ultrasound image is more challenging one. We try to develop a new algorithmic approach to solve the problem of non clarity and find disorder of it. Generally there is no common enhancement approach for noise reduction. This paper proposes different filtering techniques based on statistical methods for the removal of various noise. The quality of the enhanced images is measured by the statistical quantity measures: Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE).
labels: cs.CV
__index_level_0__: 5,887

id: 1805.07794
title: Object-Aware Guidance for Autonomous Scene Reconstruction
abstract: To carry out autonomous 3D scanning and online reconstruction of unknown indoor scenes, one has to find a balance between global exploration of the entire scene and local scanning of the objects within it. In this work, we propose a novel approach, which provides object-aware guidance for autoscanning, for exploring, reconstructing, and understanding an unknown scene within one navigation pass. Our approach interleaves between object analysis to identify the next best object (NBO) for global exploration, and object-aware information gain analysis to plan the next best view (NBV) for local scanning. First, an objectness-based segmentation method is introduced to extract semantic objects from the current scene surface via a multi-class graph cuts minimization. Then, an object of interest (OOI) is identified as the NBO which the robot aims to visit and scan. The robot then conducts fine scanning on the OOI with views determined by the NBV strategy. When the OOI is recognized as a full object, it can be replaced by its most similar 3D model in a shape database. The algorithm iterates until all of the objects are recognized and reconstructed in the scene. Various experiments and comparisons have shown the feasibility of our proposed approach.
labels: cs.RO, Other
__index_level_0__: 97,937

id: 1907.03030
title: Dependency-aware Attention Control for Unconstrained Face Recognition with Image Sets
abstract: This paper targets the problem of image set-based face verification and identification. Unlike traditional single media (an image or video) setting, we encounter a set of heterogeneous contents containing orderless images and videos. The importance of each image is usually considered either equal or based on their independent quality assessment. How to model the relationship of orderless images within a set remains a challenge. We address this problem by formulating it as a Markov Decision Process (MDP) in the latent space. Specifically, we first present a dependency-aware attention control (DAC) network, which resorts to actor-critic reinforcement learning for sequential attention decision of each image embedding to fully exploit the rich correlation cues among the unordered images. Moreover, we introduce its sample-efficient variant with off-policy experience replay to speed up the learning process. The pose-guided representation scheme can further boost the performance at the extremes of the pose variation.
labels: cs.AI, cs.LG, cs.CV, Other
__index_level_0__: 137,742

id: 1812.08094
title: Shallow Cue Guided Deep Visual Tracking via Mixed Models
abstract: In this paper, a robust visual tracking approach via mixed model based convolutional neural networks (SDT) is developed. In order to handle abrupt or fast motion, a prior map is generated to facilitate the localization of region of interest (ROI) before the deep tracker is performed. A top-down saliency model with nineteen shallow cues are employed to construct the prior map with online learnt combination weights. Moreover, apart from a holistic deep learner, four local networks are also trained to learn different components of the target. The generated four local heat maps will facilitate to rectify the holistic map by eliminating the distracters to avoid drifting. Furthermore, to guarantee the instance for online update of high quality, a prioritised update strategy is implemented by casting the problem into a label noise problem. The selection probability is designed by considering both confidence values and bio-inspired memory for temporal information integration. Experiments are conducted qualitatively and quantitatively on a set of challenging image sequences. Comparative study demonstrates that the proposed algorithm outperforms other state-of-the-art methods.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 116,937

id: 2403.03842
title: Political polarisation in turbulent times: Tracking polarisation trends and partisan news link sharing on Finnish Twitter, 2015-2023
abstract: The study analyses polarisation on Finnish social media with data from the platform X, which was known as Twitter during the time of data collection (during the Sipil\"a and Marin governments, 2015-2023). The users were clustered into three different ideological groups - the Conservative Right, the Moderate Right, and the Liberal Left - based on their retweeting of tweets referring to the different political parties in Finland. Trends in polarisation of several topics encompassing the most recent political crises - immigration, climate change, COVID-19, and security policy - between these ideological groups is analysed using network methods. To what extent the polarisation of each topic aligns with the polarisation of the other topics is also studied. In addition, the sharing of news links is examined in relation to the ideological groups of the users as well as to the sentiment and the virality of the tweets in which news links are shared.
labels: cs.SI
__index_level_0__: 435,355

id: 2107.14051
title: Improvement of image classification by multiple optical scattering
abstract: Multiple optical scattering occurs when light propagates in a non-uniform medium. During the multiple scattering, images were distorted and the spatial information they carried became scrambled. However, the image information is not lost but presents in the form of speckle patterns (SPs). In this study, we built up an optical random scattering system based on an LCD and an RGB laser source. We found that the image classification can be improved by the help of random scattering which is considered as a feedforward neural network to extracts features from image. Along with the ridge classification deployed on computer, we achieved excellent classification accuracy higher than 94%, for a variety of data sets covering medical, agricultural, environmental protection and other fields. In addition, the proposed optical scattering system has the advantages of high speed, low power consumption, and miniaturization, which is suitable for deploying in edge computing applications.
labels: cs.CV
__index_level_0__: 248,364

id: 1302.2472
title: Quantifying the effects of social influence
abstract: How do humans respond to indirect social influence when making decisions? We analysed an experiment where subjects had to repeatedly guess the correct answer to factual questions, while having only aggregated information about the answers of others. While the response of humans to aggregated information is a widely observed phenomenon, it has not been investigated quantitatively, in a controlled setting. We found that the adjustment of individual guesses depends linearly on the distance to the mean of all guesses. This is a remarkable, and yet surprisingly simple, statistical regularity. It holds across all questions analysed, even though the correct answers differ in several orders of magnitude. Our finding supports the assumption that individual diversity does not affect the response to indirect social influence. It also complements previous results on the nonlinear response in information-rich scenarios. We argue that the nature of the response to social influence crucially changes with the level of information aggregation. This insight contributes to the empirical foundation of models for collective decisions under social influence.
labels: cs.SI
__index_level_0__: 21,945

id: 2310.13024
title: Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt
abstract: Continual pre-training has been urgent for adapting a pre-trained model to a multitude of domains and tasks in the fast-evolving world. In practice, a continually pre-trained model is expected to demonstrate not only greater capacity when fine-tuned on pre-trained domains but also a non-decreasing performance on unseen ones. In this work, we first investigate such anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding with unanimously decreased performance on unseen domains. To this end, we propose a prompt-guided continual pre-training method, where we train a hypernetwork to generate domain-specific prompts by both agreement and disagreement losses. The agreement loss maximally preserves the generalization of a pre-trained model to new domains, and the disagreement one guards the exclusiveness of the generated hidden states for each domain. Remarkably, prompts by the hypernetwork alleviate the domain identity when fine-tuning and promote knowledge transfer across domains. Our method achieved improvements of 3.57% and 3.4% on two real-world datasets (including domain shift and temporal shift), respectively, demonstrating its efficacy.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 401,264

id: 2008.07983
title: Reinforcement Learning Evaluation and Solution for the Feedback Capacity of the Ising Channel with Large Alphabet
abstract: We propose a new method to compute the feedback capacity of unifilar finite state channels (FSCs) with memory using reinforcement learning (RL). The feedback capacity was previously estimated using its formulation as a Markov decision process (MDP) with dynamic programming (DP) algorithms. However, their computational complexity grows exponentially with the channel alphabet size. Therefore, we use RL, and specifically its ability to parameterize value functions and policies with neural networks, to evaluate numerically the feedback capacity of channels with a large alphabet size. The outcome of the RL algorithm is a numerical lower bound on the feedback capacity, which is used to reveal the structure of the optimal solution. The structure is modeled by a graph-based auxiliary random variable that is utilized to derive an analytic upper bound on the feedback capacity with the duality bound. The capacity computation is concluded by verifying the tightness of the upper bound by testing whether it is BCJR invariant. We demonstrate this method on the Ising channel with an arbitrary alphabet size. For an alphabet size smaller than or equal to 8, we derive the analytic solution of the capacity. Next, the structure of the numerical solution is used to deduce a simple coding scheme that achieves the feedback capacity and serves as a lower bound for larger alphabets. For an alphabet size greater than 8, we present an upper bound on the feedback capacity. For an asymptotically large alphabet size, we present an asymptotic optimal coding scheme.
labels: cs.IT
__index_level_0__: 192,276

2309.15224
Collaborative Watermarking for Adversarial Speech Synthesis
Advances in neural speech synthesis have brought us technology that is not only close to human naturalness, but is also capable of instant voice cloning with little data, and is highly accessible with pre-trained models available. Naturally, the potential flood of generated content raises the need for synthetic speech detection and watermarking. Recently, considerable research effort in synthetic speech detection has been related to the Automatic Speaker Verification and Spoofing Countermeasure Challenge (ASVspoof), which focuses on passive countermeasures. This paper takes a complementary view to generated speech detection: a synthesis system should make an active effort to watermark the generated speech in a way that aids detection by another machine, but remains transparent to a human listener. We propose a collaborative training scheme for synthetic speech watermarking and show that a HiFi-GAN neural vocoder collaborating with the ASVspoof 2021 baseline countermeasure models consistently improves detection performance over conventional classifier training. Furthermore, we demonstrate how collaborative training can be paired with augmentation strategies for added robustness against noise and time-stretching. Finally, listening tests demonstrate that collaborative training has little adverse effect on perceptual quality of vocoded speech.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
394,885
2207.02518
Compositional Generalization in Grounded Language Learning via Induced Model Sparsity
We provide a study of how induced model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems. We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations. We show that standard neural architectures do not always yield compositional generalization. To address this, we design an agent that contains a goal identification module that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal. The output of the goal identification module is the input to a value iteration network planner. Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations. We examine the internal representations of our agent and find the correct correspondences between words in its dictionary and attributes in the environment.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
306,540
2302.04024
InMyFace: Inertial and Mechanomyography-Based Sensor Fusion for Wearable Facial Activity Recognition
Recognizing facial activity is a well-understood (but non-trivial) computer vision problem. However, reliable solutions require a camera with a good view of the face, which is often unavailable in wearable settings. Furthermore, in wearable applications, where systems accompany users throughout their daily activities, a permanently running camera can be problematic for privacy (and legal) reasons. This work presents an alternative solution based on the fusion of wearable inertial sensors, planar pressure sensors, and acoustic mechanomyography (muscle sounds). The sensors were placed unobtrusively in a sports cap to monitor facial muscle activities related to facial expressions. We present our integrated wearable sensor system, describe data fusion and analysis methods, and evaluate the system in an experiment with thirteen subjects from different cultural backgrounds (eight countries) and both sexes (six women and seven men). In a one-model-per-user scheme and using a late fusion approach, the system yielded an average F1 score of 85.00% for the case where all sensing modalities are combined. With a cross-user validation and a one-model-for-all-user scheme, an F1 score of 79.00% was obtained for thirteen participants (six females and seven males). Moreover, in a hybrid fusion (cross-user) approach and six classes, an average F1 score of 82.00% was obtained for eight users. The results are competitive with state-of-the-art non-camera-based solutions for a cross-user study. In addition, our unique set of participants demonstrates the inclusiveness and generalizability of the approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
344,559
2103.00766
Computing Prices for Target Profits in Contracts
Price discrimination for maximizing expected profit is a well-studied concept in economics and there are various methods that achieve the maximum given the user type distribution and the budget constraints. In many applications, particularly with regard to engineering and computing, it is often the case that the user type distribution is unknown or not accurately known. In this paper, we therefore propose and study a mathematical framework for price discrimination with \emph{target} profits under the contract-theoretic model. We first consider service providers with a given user type profile and determine sufficient conditions for achieving a target profit. Our proof is constructive in that it also provides a method to compute the quality-price tag menu. Next we consider a dual scenario where the offered service qualities are predetermined and describe an iterative method to obtain nominal demand values that best match the qualities offered by the service provider while achieving a target profit-user satisfaction margin. We also illustrate our methods with design examples in both cases.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
222,391
2101.09451
Error Diffusion Halftoning Against Adversarial Examples
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks (DNNs) into making wrong predictions. Enhancing the adversarial robustness of DNNs has gained considerable interest in recent years. Although image transformation-based defenses were widely considered at an earlier time, most of them have been defeated by adaptive attacks. In this paper, we propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples. Error diffusion halftoning projects an image into a 1-bit space and diffuses quantization error to neighboring pixels. This process can remove adversarial perturbations from a given image while maintaining acceptable image quality in the meantime in favor of recognition. Experimental results demonstrate that the proposed method is able to improve adversarial robustness even under advanced adaptive attacks, while most of the other image transformation-based defenses do not. We show that a proper image transformation can still be an effective defense approach. Code: https://github.com/shaoyuanlo/Halftoning-Defense
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
216,602
1709.01788
Radial Line Fourier Descriptor for Historical Handwritten Text Representation
Automatic recognition of historical handwritten manuscripts is a daunting task due to paper degradation over time. Recognition-free retrieval or word spotting is popularly used for information retrieval and digitization of the historical handwritten documents. However, the performance of word spotting algorithms depends heavily on feature detection and representation methods. Although there exist popular feature descriptors such as Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the invariant properties of these descriptors amplify the noise in the degraded document images, rendering them more sensitive to noise and complex characteristics of historical manuscripts. Therefore, an efficient and relaxed feature descriptor is required as handwritten words across different documents are indeed similar, but not identical. This paper introduces a Radial Line Fourier (RLF) descriptor for handwritten word representation, with a short feature vector of 32 dimensions. A segmentation-free and training-free handwritten word spotting method is studied herein that relies on the proposed RLF descriptor, takes into account different keypoint representations and uses a simple preconditioner-based feature matching algorithm. The effectiveness of the RLF descriptor for segmentation-free handwritten word spotting is empirically evaluated on well-known historical handwritten datasets using standard evaluation measures.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
80,153
2008.12272
Monocular, One-stage, Regression of Multiple 3D People
This paper focuses on the regression of multiple 3D people from a single RGB image. Existing approaches predominantly follow a multi-stage pipeline that first detects people in bounding boxes and then independently regresses their 3D body meshes. In contrast, we propose to Regress all meshes in a One-stage fashion for Multiple 3D People (termed ROMP). The approach is conceptually simple, bounding box-free, and able to learn a per-pixel representation in an end-to-end manner. Our method simultaneously predicts a Body Center heatmap and a Mesh Parameter map, which can jointly describe the 3D body mesh on the pixel level. Through a body-center-guided sampling process, the body mesh parameters of all people in the image are easily extracted from the Mesh Parameter map. Equipped with such a fine-grained representation, our one-stage framework is free of the complex multi-stage process and more robust to occlusion. Compared with state-of-the-art methods, ROMP achieves superior performance on the challenging multi-person benchmarks, including 3DPW and CMU Panoptic. Experiments on crowded/occluded datasets demonstrate the robustness under various types of occlusion. The released code is the first real-time implementation of monocular multi-person 3D mesh regression.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,528
1906.07987
Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on various issues of Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation. The two methods are known to achieve complementary bias-variance trade-off properties, with TD tending to achieve lower variance but potentially higher bias. In this paper, we argue that the larger bias of TD can be a result of the amplification of local approximation errors. We address this by proposing an algorithm that adaptively switches between TD and MC in each state, thus mitigating the propagation of errors. Our method is based on learned confidence intervals that detect biases of TD estimates. We demonstrate in a variety of policy evaluation tasks that this simple adaptive algorithm performs competitively with the best approach in hindsight, suggesting that learned confidence intervals are a powerful technique for adapting policy evaluation to use TD or MC returns in a data-driven way.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
135,752
2401.12745
On the Utility of Probing Trajectories for Algorithm-Selection
Machine-learning approaches to algorithm-selection typically take data describing an instance as input. Input data can take the form of features derived from the instance description or fitness landscape, or can be a direct representation of the instance itself, i.e. an image or textual description. Regardless of the choice of input, there is an implicit assumption that instances that are similar will elicit similar performance from an algorithm, and that a model is capable of learning this relationship. We argue that viewing algorithm-selection purely from an instance perspective can be misleading as it fails to account for how an algorithm `views' similarity between instances. We propose a novel `algorithm-centric' method for describing instances that can be used to train models for algorithm-selection: specifically, we use short probing trajectories calculated by applying a solver to an instance for a very short period of time. The approach is demonstrated to be promising, providing comparable or better results to computationally expensive landscape-based feature-based approaches. Furthermore, projecting the trajectories into a 2-dimensional space illustrates that functions that are similar from an algorithm-perspective do not necessarily correspond to the accepted categorisation of these functions from a human perspective.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
423,486
1905.07553
Which Tasks Should Be Learned Together in Multi-task Learning?
Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives can compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,271
2203.03436
Deep Neural Decision Forest for Acoustic Scene Classification
Acoustic scene classification (ASC) aims to classify an audio clip based on the characteristic of the recording environment. In this regard, deep learning based approaches have emerged as a useful tool for ASC problems. Conventional approaches to improving the classification accuracy include integrating auxiliary methods such as attention mechanisms, pre-trained models, and ensembles of multiple sub-networks. However, due to the complexity of audio clips captured from different environments, it is difficult to distinguish their categories without using any auxiliary methods for existing deep learning models using only a single classifier. In this paper, we propose a novel approach for ASC using deep neural decision forest (DNDF). DNDF combines a fixed number of convolutional layers and a decision forest as the final classifier. The decision forest consists of a fixed number of decision tree classifiers, which have been shown to offer better classification performance than a single classifier in some datasets. In particular, the decision forest differs substantially from traditional random forests as it is stochastic, differentiable, and capable of using the back-propagation to update and learn feature representations in neural networks. Experimental results on the DCASE2019 and ESC-50 datasets demonstrate that our proposed DNDF method improves the ASC performance in terms of classification accuracy and shows competitive performance as compared with state-of-the-art baselines.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
284,086
2303.17243
Shapley Chains: Extending Shapley Values to Classifier Chains
In spite of increased attention on explainable machine learning models, explaining multi-output predictions has not yet been extensively addressed. Methods that use Shapley values to attribute feature contributions to the decision making are one of the most popular approaches to explain local individual and global predictions. By considering each output separately in multi-output tasks, these methods fail to provide complete feature explanations. We propose Shapley Chains to overcome this issue by including label interdependencies in the explanation design process. Shapley Chains assign Shapley values as feature importance scores in multi-output classification using classifier chains, by separating the direct and indirect influence of these feature scores. Compared to existing methods, this approach makes it possible to attribute a more complete feature contribution to the predictions of multi-output classification tasks. We provide a mechanism to distribute the hidden contributions of the outputs with respect to a given chaining order of these outputs. Moreover, we show how our approach can reveal indirect feature contributions missed by existing approaches. Shapley Chains help to emphasize the real learning factors in multi-output applications and allow a better understanding of the flow of information through output interdependencies in synthetic and real-world datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
355,151
2407.03961
Leveraging Latent Diffusion Models for Training-Free In-Distribution Data Augmentation for Surface Defect Detection
Defect detection is the task of identifying defects in production samples. Usually, defect detection classifiers are trained on ground-truth data formed by normal samples (negative data) and samples with defects (positive data), where the latter are consistently fewer than normal samples. State-of-the-art data augmentation procedures add synthetic defect data by superimposing artifacts to normal samples to mitigate problems related to unbalanced training data. These techniques often produce out-of-distribution images, resulting in systems that learn what is not a normal sample but cannot accurately identify what a defect looks like. In this work, we introduce DIAG, a training-free Diffusion-based In-distribution Anomaly Generation pipeline for data augmentation. Unlike conventional image generation techniques, we implement a human-in-the-loop pipeline, where domain experts provide multimodal guidance to the model through text descriptions and region localization of the possible anomalies. This strategic shift enhances the interpretability of results and fosters a more robust human feedback loop, facilitating iterative improvements of the generated outputs. Remarkably, our approach operates in a zero-shot manner, avoiding time-consuming fine-tuning procedures while achieving superior performance. We demonstrate the efficacy and versatility of DIAG with respect to state-of-the-art data augmentation approaches on the challenging KSDD2 dataset, with an improvement in AP of approximately 18% when positive samples are available and 28% when they are missing. The source code is available at https://github.com/intelligolabs/DIAG.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
470,369
cs/0009026
An improved parser for data-oriented lexical-functional analysis
We present an LFG-DOP parser which uses fragments from LFG-annotated sentences to parse new sentences. Experiments with the Verbmobil and Homecentre corpora show that (1) Viterbi n best search performs about 100 times faster than Monte Carlo search while both achieve the same accuracy; (2) the DOP hypothesis which states that parse accuracy increases with increasing fragment size is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator performs worse than a discounted frequency estimator; and (4) LFG-DOP significantly outperforms Tree-DOP if evaluated on tree structures only.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
537,220
2311.16977
Bidirectional Reactive Programming for Machine Learning
Reactive languages are dedicated to the programming of systems which interact continuously and concurrently with their environment. Values take the form of unbounded streams modeling the (discrete) passing of time or the sequence of concurrent interactions. While conventional reactivity models recurrences forward in time, we introduce a symmetric reactive construct enabling backward recurrences. Constraints on the latter allow to make the implementation practical. Machine Learning (ML) systems provide numerous motivations for all of this: we demonstrate that reverse-mode automatic differentiation, backpropagation, batch normalization, bidirectional recurrent neural networks, training and reinforcement learning algorithms, are all naturally captured as bidirectional reactive programs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
411,110
1808.05594
Toward an Expressive Bipedal Robot: Variable Gait Synthesis and Validation in a Planar Model
Humans are efficient, yet expressive in their motion. Human walking behaviors can be used to walk across a great variety of surfaces without falling and to communicate internal state to other humans through variable gait styles. This provides inspiration for creating similarly expressive bipedal robots. To this end, a framework is presented for stylistic gait generation in a compass-like under-actuated planar biped model. The gait design is done using model-based trajectory optimization with variable constraints. For a finite range of optimization parameters, a large set of 360 gaits can be generated for this model. In particular, step length and cost function are varied to produce distinct cyclic walking gaits. From these resulting gaits, 6 gaits are identified and labeled, using embodied movement analysis, with stylistic verbs that correlate with human activity, e.g., "lope" and "saunter". These labels have been validated by conducting user studies in Amazon Mechanical Turk and thus demonstrate that visually distinguishable, meaningful gaits are generated using this framework. This lays groundwork for creating a bipedal humanoid with variable socially competent movement profiles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
105,383
2311.12572
Scheduling Distributed Flexible Assembly Lines using Safe Reinforcement Learning with Soft Shielding
Highly automated assembly lines enable significant productivity gains in the manufacturing industry, particularly under mass production conditions. Nonetheless, challenges persist in job scheduling for make-to-job and mass customization, necessitating further investigation to improve efficiency, reduce tardiness, and promote safety and reliability. In this contribution, an advantage actor-critic based reinforcement learning method is proposed to address scheduling problems of distributed flexible assembly lines in a real-time manner. To enhance the performance, a more condensed environment representation approach is proposed, which is designed to work with the masks made by priority dispatching rules to generate fixed and advantageous action space. Moreover, a Monte-Carlo tree search based soft shielding component is developed to help address long-sequence dependent unsafe behaviors and monitor the risk of overdue scheduling. Finally, the proposed algorithm and its soft shielding component are validated in performance evaluation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
409,384
1212.5620
Topological Analysis and Mitigation Strategies for Cascading Failures in Power Grid Networks
Recently, there has been a growing concern about the overload status of the power grid networks, and the increasing possibility of cascading failures. Many researchers have studied these networks to provide design guidelines for more robust power grids. Topological analysis is one of the components of system analysis for its robustness. This paper presents a complex systems analysis of power grid networks. First, the cascading effect has been simulated on three well known networks: the IEEE 300 bus test system, the IEEE 118 bus test system, and the WSCC 179 bus equivalent model. To extend the analysis to a larger set of networks, we develop a network generator and generate multiple graphs with characteristics similar to the IEEE test networks but with different topologies. The generated graphs are then compared to the test networks to show the effect of topology in determining their robustness with respect to cascading failures. The generated graphs turn out to be more robust than the test graphs, showing the importance of topology in the robust design of power grids. The second part of this paper concerns the discussion of two novel mitigation strategies for cascading failures: Targeted Load Reduction and Islanding using Distributed Sources. These new mitigation strategies are compared with the Homogeneous Load Reduction strategy. Even though the Homogeneous Load Reduction is simpler to implement, the Targeted Load Reduction is much more effective. Additionally, an algorithm is presented for the partitioning of the network for islanding as an effort towards fault isolation to prevent cascading failures. The results for island formation are better if the sources are well distributed, else the algorithm leads to the formation of superislands.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
20,570
2301.04218
Leveraging Diffusion For Strong and High Quality Face Morphing Attacks
Face morphing attacks seek to deceive a Face Recognition (FR) system by presenting a morphed image consisting of the biometric qualities from two different identities with the aim of triggering a false acceptance with one of the two identities, thereby presenting a significant threat to biometric systems. The success of a morphing attack is dependent on the ability of the morphed image to represent the biometric characteristics of both identities that were used to create the image. We present a novel morphing attack that uses a Diffusion-based architecture to improve the visual fidelity of the image and the ability of the morphing attack to represent characteristics from both identities. We demonstrate the effectiveness of the proposed attack by evaluating its visual fidelity via the Frechet Inception Distance (FID). Also, extensive experiments are conducted to measure the vulnerability of FR systems to the proposed attack. The ability of a morphing attack detector to detect the proposed attack is measured and compared against two state-of-the-art GAN-based morphing attacks along with two Landmark-based attacks. Additionally, a novel metric to measure the relative strength between different morphing attacks is introduced and evaluated.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
339,992
2002.01575
Seeing through the smoke : a world-wide comparative study of e-cigarette flavors, brands and markets using data from Reddit and Twitter
The growing popularity of E-cigarettes, an alternative to cigarettes, has motivated us to study trends of the brands, flavors and online market activity using posts from Reddit and Twitter. The main motivation for this world-wide study is to emphasize the difference that laws and regulations have on the usage and availability of different flavors and brands of vapes in different countries. Data has been obtained from subreddits belonging to e-cigarette communities from Australia, Canada, Europe, and the UK. Extensive data cleaning and rigorous text mining operations yield varying results for different countries, and the results also differ between Reddit and Twitter, since the two platforms provide different atmospheres to their users.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
162,682
1608.00255
Continuation semantics for multi-quantifier sentences: operation-based approaches
Classical scope-assignment strategies for multi-quantifier sentences involve quantifier phrase (QP)-movement. More recent continuation-based approaches provide a compelling alternative, for they interpret QP's in situ - without resorting to Logical Forms or any structures beyond the overt syntax. The continuation-based strategies can be divided into two groups: those that locate the source of scope-ambiguity in the rules of semantic composition and those that attribute it to the lexical entries for the quantifier words. In this paper, we focus on the former operation-based approaches and the nature of the semantic operations involved. More specifically, we discuss three such possible operation-based strategies for multi-quantifier sentences, together with their relative merits and costs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
59,251
2410.05206
Studying and Mitigating Biases in Sign Language Understanding Models
Ensuring that the benefits of sign language technologies are distributed equitably among all community members is crucial. Thus, it is important to address potential biases and inequities that may arise from the design or use of these resources. Crowd-sourced sign language datasets, such as the ASL Citizen dataset, are great resources for improving accessibility and preserving linguistic diversity, but they must be used thoughtfully to avoid reinforcing existing biases. In this work, we utilize the rich information about participant demographics and lexical features present in the ASL Citizen dataset to study and document the biases that may result from models trained on crowd-sourced sign datasets. Further, we apply several bias mitigation techniques during model training, and find that these techniques reduce performance disparities without decreasing accuracy. With the publication of this work, we release the demographic information about the participants in the ASL Citizen dataset to encourage future bias mitigation work in this space.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
495,614
2404.00406
TACO -- Twitter Arguments from COnversations
Twitter has emerged as a global hub for engaging in online conversations and as a research corpus for various disciplines that have recognized the significance of its user-generated content. Argument mining is an important analytical task for processing and understanding online discourse. Specifically, it aims to identify the structural elements of arguments, denoted as information and inference. These elements, however, are not static and may require context within the conversation they are in, yet there is a lack of data and annotation frameworks addressing this dynamic aspect on Twitter. We contribute TACO, the first dataset of Twitter Arguments utilizing 1,814 tweets covering 200 entire conversations spanning six heterogeneous topics annotated with an agreement of 0.718 Krippendorff's alpha among six experts. Second, we provide our annotation framework, incorporating definitions from the Cambridge Dictionary, to define and identify argument components on Twitter. Our transformer-based classifier achieves an 85.06\% macro F1 baseline score in detecting arguments. Moreover, our data reveals that Twitter users tend to engage in discussions involving informed inferences and information. TACO serves multiple purposes, such as training tweet classifiers to manage tweets based on inference and information elements, while also providing valuable insights into the conversational reply patterns of tweets.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
442,906
2310.08287
A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors
The distribution of the weights of modern deep neural networks (DNNs) - crucial for uncertainty quantification and robustness - is an eminently complex object due to its extremely high dimensionality. This paper proposes one of the first large-scale explorations of the posterior distribution of deep Bayesian Neural Networks (BNNs), expanding its study to real-world vision tasks and architectures. Specifically, we investigate the optimal approach for approximating the posterior, analyze the connection between posterior quality and uncertainty quantification, delve into the impact of modes on the posterior, and explore methods for visualizing the posterior. Moreover, we uncover weight-space symmetries as a critical aspect for understanding the posterior. To this end, we develop an in-depth assessment of the impact of both permutation and scaling symmetries that tend to obfuscate the Bayesian posterior. While the first type of transformation is known for duplicating modes, we explore the relationship between the latter and L2 regularization, challenging previous misconceptions. Finally, to help the community improve our understanding of the Bayesian posterior, we will shortly release the first large-scale checkpoint dataset, including thousands of real-world models and our codes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,336
1912.01649
Mo' States Mo' Problems: Emergency Stop Mechanisms from Observation
In many environments, only a relatively small subset of the complete state space is necessary in order to accomplish a given task. We develop a simple technique using emergency stops (e-stops) to exploit this phenomenon. Using e-stops significantly improves sample complexity by reducing the amount of required exploration, while retaining a performance bound that efficiently trades off the rate of convergence with a small asymptotic sub-optimality gap. We analyze the regret behavior of e-stops and present empirical results in discrete and continuous settings demonstrating that our reset mechanism can provide order-of-magnitude speedups on top of existing reinforcement learning methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
156,133
2410.20823
Novel Object Synthesis via Adaptive Text-Image Harmony
In this paper, we study an object synthesis task that combines an object text with an object image to create a new object image. However, most diffusion models struggle with this task, \textit{i.e.}, often generating an object that predominantly reflects either the text or the image due to an imbalance between their inputs. To address this issue, we propose a simple yet effective method called Adaptive Text-Image Harmony (ATIH) to generate novel and surprising objects. First, we introduce a scale factor and an injection step to balance text and image features in cross-attention and to preserve image information in self-attention during the text-image inversion diffusion process, respectively. Second, to better integrate object text and image, we design a balanced loss function with a noise parameter, ensuring both optimal editability and fidelity of the object image. Third, to adaptively adjust these parameters, we present a novel similarity score function that not only maximizes the similarities between the generated object image and the input text/image but also balances these similarities to harmonize text and image integration. Extensive experiments demonstrate the effectiveness of our approach, showcasing remarkable object creations such as colobus-glass jar. Project page: https://xzr52.github.io/ATIH/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
502,984
1703.08440
K-Means Clustering using Tabu Search with Quantized Means
The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering as an alternative to Lloyd's algorithm, which for all its ease of implementation and fast runtime, has the major drawback of being trapped at local optima. While the TS approach can yield superior performance, it involves a high computational complexity. Moreover, the difficulty in parameter selection in the existing TS approach does not make it any more attractive. This paper presents an alternative, low-complexity formulation of the TS optimization procedure for K-Means clustering. This approach does not require many parameter settings. We initially constrain the centers to points in the dataset. We then aim at evolving these centers using a unique neighborhood structure that makes use of gradient information of the objective function. This results in an efficient exploration of the search space, after which the means are refined. The proposed scheme is implemented in MATLAB and tested on four real-world datasets, and it achieves a significant improvement over the existing TS approach in terms of the intra-cluster sum of squares and computational time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
70,584
2201.11879
Random Caching Design for Multi-User Multi-Antenna HetNets with Interference Nulling
The strong interference suffered by users can be a severe problem in cache-enabled networks (CENs) due to the content-centric user association mechanism. To tackle this issue, multi-antenna technology may be employed for interference management. In this paper, we consider a user-centric interference nulling (IN) scheme in two-tier multi-user multi-antenna CEN, with a hybrid most-popular and random caching policy at macro base stations (MBSs) and small base stations (SBSs) to provide file diversity. All the interfering SBSs within the IN range of a user are requested to suppress the interference at this user using zero-forcing beamforming. Using stochastic geometry analysis techniques, we derive a tractable expression for the area spectral efficiency (ASE). A lower bound on the ASE is also obtained, with which we then consider ASE maximization, by optimizing the caching policy and IN coefficient. To solve the resultant mixed integer programming problem, we design an alternating optimization algorithm to minimize the lower bound of the ASE. Our numerical results demonstrate that the proposed caching policy yields performance that is close to the optimum, and it outperforms several existing baselines.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
277,434
2203.13277
A Manifold View of Adversarial Risk
The adversarial risk of a machine learning model has been widely studied. Most previous works assume that the data lies in the whole ambient space. We propose to take a new angle and take the manifold assumption into consideration. Assuming data lies in a manifold, we investigate two new types of adversarial risk, the normal adversarial risk due to perturbation along normal direction, and the in-manifold adversarial risk due to perturbation within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also show with a surprisingly pessimistic case that the standard adversarial risk can be nonzero even when both normal and in-manifold risks are zero. We finalize the paper with empirical studies supporting our theoretical results. Our results suggest the possibility of improving the robustness of a classifier by only focusing on the normal adversarial risk.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
287,569
2112.08342
DG2: Data Augmentation Through Document Grounded Dialogue Generation
Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and need for extensive annotation. Especially in document-grounded dialog systems, human experts need to carefully read the unstructured documents to answer the users' questions. As a result, existing document-grounded dialog datasets are relatively small-scale and obstruct the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique grounded on documents through a generative dialogue model. The dialogue model consists of a user bot and agent bot that can synthesize diverse dialogues given an input document, which are then used to train a downstream model. When supplementing the original dataset, our method achieves significant improvement over traditional data augmentation methods. We also achieve great performance in the low-resource setting.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
271,763
2501.03256
AI-ANNE: (A) (N)eural (N)et for (E)xploration: Transferring Deep Learning Models onto Microcontrollers and Embedded Systems
This working paper explores the integration of neural networks onto resource-constrained embedded systems like a Raspberry Pi Pico / Raspberry Pi Pico 2. A TinyML approach transfers neural networks directly onto these microcontrollers, enabling real-time, low-latency, and energy-efficient inference while maintaining data privacy. Therefore, AI-ANNE: (A) (N)eural (N)et for (E)xploration will be presented, which facilitates the transfer of pre-trained models from high-performance platforms like TensorFlow and Keras onto microcontrollers, using a lightweight programming language like MicroPython. This approach demonstrates how neural network architectures, such as neurons, layers, density and activation functions, can be implemented in MicroPython in order to deal with the computational limitations of embedded systems. Based on the Raspberry Pi Pico / Raspberry Pi Pico 2, two different neural networks on microcontrollers are presented for an example of data classification. As a further application example, such a microcontroller can be used for condition monitoring, where immediate corrective measures are triggered on the basis of sensor data. Overall, this working paper presents a very easy-to-implement way of using neural networks on energy-efficient devices such as microcontrollers. This makes AI-ANNE: (A) (N)eural (N)et for (E)xploration not only suited for practical use, but also as an educational tool with clear insights into how neural networks operate.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
522,810
2307.05537
NLP Meets RNA: Unsupervised Embedding Learning for Ribozymes with Word2Vec
Ribozymes, RNA molecules with distinct 3D structures and catalytic activity, have widespread applications in synthetic biology and therapeutics. However, relatively little research has focused on leveraging deep learning to enhance our understanding of ribozymes. This study implements Word2Vec, an unsupervised learning technique for natural language processing, to learn ribozyme embeddings. Ribo2Vec was trained on over 9,000 diverse ribozymes, learning to map sequences to 128 and 256-dimensional vector spaces. Using Ribo2Vec, sequence embeddings for five classes of ribozymes (hatchet, pistol, hairpin, hovlinc, and twister sister) were calculated. Principal component analysis demonstrated the ability of these embeddings to distinguish between ribozyme classes. Furthermore, a simple SVM classifier trained on ribozyme embeddings showed promising results in accurately classifying ribozyme types. Our results suggest that the embedding vectors contained meaningful information about ribozymes. Interestingly, 256-dimensional embeddings behaved similarly to 128-dimensional embeddings, suggesting that a lower dimension vector space is generally sufficient to capture ribozyme features. This approach demonstrates the potential of Word2Vec for bioinformatics, opening new avenues for ribozyme research. Future research includes using a Transformer-based method to learn RNA embeddings, which can capture long-range interactions between nucleotides.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
378,777
1809.09307
Utilizing Class Information for Deep Network Representation Shaping
Statistical characteristics of deep network representations, such as sparsity and correlation, are known to be relevant to the performance and interpretability of deep learning. When a statistical characteristic is desired, often an adequate regularizer can be designed and applied during the training phase. Typically, such a regularizer aims to manipulate a statistical characteristic over all classes together. For classification tasks, however, it might be advantageous to enforce the desired characteristic per class such that different classes can be better distinguished. Motivated by the idea, we design two class-wise regularizers that explicitly utilize class information: class-wise Covariance Regularizer (cw-CR) and class-wise Variance Regularizer (cw-VR). cw-CR targets to reduce the covariance of representations calculated from the same class samples for encouraging feature independence. cw-VR is similar, but variance instead of covariance is targeted to improve feature compactness. For the sake of completeness, their counterparts without using class information, Covariance Regularizer (CR) and Variance Regularizer (VR), are considered together. The four regularizers are conceptually simple and computationally very efficient, and the visualization shows that the regularizers indeed perform distinct representation shaping. In terms of classification performance, significant improvements over the baseline and L1/L2 weight regularization methods were found for 21 out of 22 tasks over popular benchmark datasets. In particular, cw-VR achieved the best performance for 13 tasks including ResNet-32/110.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
108,679
2210.17437
Learning New Tasks from a Few Examples with Soft-Label Prototypes
Existing approaches to few-shot learning in NLP rely on large language models (LLMs) and/or fine-tuning of these to generalise on out-of-distribution data. In this work, we propose a novel few-shot learning approach based on soft-label prototypes (SLPs) designed to collectively capture the distribution of different classes across the input domain space. We focus on learning previously unseen NLP tasks from very few examples (4, 8, 16) per class and experimentally demonstrate that our approach achieves superior performance on the majority of tested tasks in this data-lean setting while being highly parameter efficient. We also show that our few-shot adaptation method can be integrated into more generalised learning settings, primarily meta-learning, to yield superior performance against strong baselines.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
327,691
1710.06952
Asynchronous Decentralized Parallel Stochastic Gradient Descent
Most commonly used distributed machine learning systems are either synchronous or centralized asynchronous. Synchronous algorithms like AllReduce-SGD perform poorly in a heterogeneous environment, while asynchronous algorithms using a parameter server suffer from 1) communication bottleneck at parameter servers when workers are many, and 2) significantly worse convergence when the traffic to parameter server is congested. Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate? In this paper, we propose an asynchronous decentralized stochastic gradient decent algorithm (AD-PSGD) satisfying all above expectations. Our theoretical analysis shows AD-PSGD converges at the optimal $O(1/\sqrt{K})$ rate as SGD and has linear speedup w.r.t. number of workers. Empirically, AD-PSGD outperforms the best of decentralized parallel SGD (D-PSGD), asynchronous parallel SGD (A-PSGD), and standard data parallel SGD (AllReduce-SGD), often by orders of magnitude in a heterogeneous environment. When training ResNet-50 on ImageNet with up to 128 GPUs, AD-PSGD converges (w.r.t epochs) similarly to the AllReduce-SGD, but each epoch can be up to 4-8X faster than its synchronous counterparts in a network-sharing HPC environment. To the best of our knowledge, AD-PSGD is the first asynchronous algorithm that achieves a similar epoch-wise convergence rate as AllReduce-SGD, at an over 100-GPU scale.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
82,857
2105.12544
Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities. However, these features are obtained via open-domain toolkits that are dialog-agnostic or rely heavily on human annotations. In this paper, we show how DialoGPT, a pre-trained model for conversational response generation, can be developed as an unsupervised dialogue annotator, which takes advantage of dialogue background knowledge encoded in DialoGPT. We apply DialoGPT to label three types of features on two dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and non pre-trained models as our summarizers. Experimental results show that our proposed method can obtain remarkable improvements on both datasets and achieves new state-of-the-art performance on the SAMSum dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
237,037
2405.04078
WISER: Weak supervISion and supErvised Representation learning to improve drug response prediction in cancer
Cancer, a leading cause of death globally, occurs due to genomic changes and manifests heterogeneously across patients. To advance research on personalized treatment strategies, the effectiveness of various drugs on cells derived from cancers (`cell lines') is experimentally determined in laboratory settings. Nevertheless, variations in the distribution of genomic data and drug responses between cell lines and humans arise due to biological and environmental differences. Moreover, while genomic profiles of many cancer patients are readily available, the scarcity of corresponding drug response data limits the ability to train machine learning models that can predict drug response in patients effectively. Recent cancer drug response prediction methods have largely followed the paradigm of unsupervised domain-invariant representation learning followed by a downstream drug response classification step. Introducing supervision in both stages is challenging due to heterogeneous patient response to drugs and limited drug response data. This paper addresses these challenges through a novel representation learning method in the first phase and weak supervision in the second. Experimental results on real patient data demonstrate the efficacy of our method (WISER) over state-of-the-art alternatives on predicting personalized drug response.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
452,435
2109.06304
Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration
Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness. In this paper, we propose a contrastive fine-tuning objective that enables BERT to produce more powerful phrase embeddings. Our approach (Phrase-BERT) relies on a dataset of diverse phrasal paraphrases, which is automatically generated using a paraphrase generation model, as well as a large-scale dataset of phrases in context mined from the Books3 corpus. Phrase-BERT outperforms baselines across a variety of phrase-level similarity tasks, while also demonstrating increased lexical diversity between nearest neighbors in the vector space. Finally, as a case study, we show that Phrase-BERT embeddings can be easily integrated with a simple autoencoder to build a phrase-based neural topic model that interprets topics as mixtures of words and phrases by performing a nearest neighbor search in the embedding space. Crowdsourced evaluations demonstrate that this phrase-based topic model produces more coherent and meaningful topics than baseline word and phrase-level topic models, further validating the utility of Phrase-BERT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
255,098
1907.11803
qwLSH: Cache-conscious Indexing for Processing Similarity Search Query Workloads in High-Dimensional Spaces
Similarity search queries in high-dimensional spaces are an important type of queries in many domains such as image processing, machine learning, etc. Since exact similarity search indexing techniques suffer from the well-known curse of dimensionality in high-dimensional spaces, approximate search techniques are often utilized instead. Locality Sensitive Hashing (LSH) has been shown to be an effective approximate search method for solving similarity search queries in high-dimensional spaces. Often times, queries in real-world settings arrive as part of a query workload. LSH and its variants are particularly designed to solve single queries effectively. They suffer from one major drawback while executing query workloads: they do not take into consideration important data characteristics for effective cache utilization while designing the index structures. In this paper, we present qwLSH, an index structure for efficiently processing similarity search query workloads in high-dimensional spaces. We intelligently divide a given cache during processing of a query workload by using novel cost models. Experimental results show that, given a query workload, qwLSH is able to perform faster than existing techniques due to its unique cost models and strategies.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
139,941
2306.16334
On the Identifiability of Quantized Factors
Disentanglement aims to recover meaningful latent ground-truth factors from the observed distribution solely, and is formalized through the theory of identifiability. The identifiability of independent latent factors is proven to be impossible in the unsupervised i.i.d. setting under a general nonlinear map from factors to observations. In this work, however, we demonstrate that it is possible to recover quantized latent factors under a generic nonlinear diffeomorphism. We only assume that the latent factors have independent discontinuities in their density, without requiring the factors to be statistically independent. We introduce this novel form of identifiability, termed quantized factor identifiability, and provide a comprehensive proof of the recovery of the quantized factors.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
376,343
1610.08173
Controlled Barrage Regions: Stochastic Modeling, Analysis, and Optimization
A barrage relay network (BRN) is a broadcast-oriented ad hoc network involving autonomous cooperative communication, a slotted time-division frame format, and coarse slot-level synchronization. While inherently a broadcast protocol, BRNs can support unicast transmission by superimposing a plurality of controlled barrage regions (CBRs) onto the network. Within each CBR, a new packet is injected by the unicast source during the first time slot of each new radio frame. When a CBR is sufficiently long that a packet might not be able to reach the other end within a radio frame, multiple packets can be active at the same time via spatial pipelining, resulting in interference within the CBR. In this paper, the dynamics of packet transmission within a CBR is described as a Markov process, and the outage probability of each link within the CBR is evaluated in closed form, thereby accounting for fading and co-channel interference. In order to account for the linkage between simultaneous active packets and their temporal correlation, a Viterbi-like algorithm is used. Using this accurate analytical framework, a line network is optimized, which identifies the code rate, the number of relays, and the length of a radio frame that maximizes the transport capacity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
62,895
1512.04636
Noise-Compensated, Bias-Corrected Diffusion Weighted Endorectal Magnetic Resonance Imaging via a Stochastically Fully-Connected Joint Conditional Random Field Model
Diffusion weighted magnetic resonance imaging (DW-MR) is a powerful tool in imaging-based prostate cancer screening and detection. Endorectal coils are commonly used in DW-MR imaging to improve the signal-to-noise ratio (SNR) of the acquisition, at the expense of significant intensity inhomogeneities (bias field) that worsens as we move away from the endorectal coil. The presence of bias field can have a significant negative impact on the accuracy of different image analysis tasks, as well as prostate tumor localization, thus leading to increased inter- and intra-observer variability. Retrospective bias correction approaches are introduced as a more efficient way of bias correction compared to the prospective methods such that they correct for both of the scanner and anatomy-related bias fields in MR imaging. Previously proposed retrospective bias field correction methods suffer from undesired noise amplification that can reduce the quality of bias-corrected DW-MR image. Here, we propose a unified data reconstruction approach that enables joint compensation of bias field as well as data noise in DW-MR imaging. The proposed noise-compensated, bias-corrected (NCBC) data reconstruction method takes advantage of a novel stochastically fully connected joint conditional random field (SFC-JCRF) model to mitigate the effects of data noise and bias field in the reconstructed MR data. The proposed NCBC reconstruction method was tested on synthetic DW-MR data, physical DW-phantom as well as real DW-MR data all acquired using endorectal MR coil. Both qualitative and quantitative analysis illustrated that the proposed NCBC method can achieve improved image quality when compared to other tested bias correction methods. As such, the proposed NCBC method may have potential as a useful retrospective approach for improving the consistency of image interpretations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
50,149
2011.09962
Proposing method to Increase the detection accuracy of stomach cancer based on colour and lint features of tongue using CNN and SVM
Today, gastric cancer is one of the diseases which has affected many people's lives. Early detection and accuracy are the main and crucial challenges in finding this kind of cancer. In this paper, a method to increase the accuracy of the diagnosis of detecting cancer using lint and colour features of tongue based on deep convolutional neural networks and support vector machine is proposed. In the proposed method, the region of the tongue is first separated from the face image by a Recursive Convolutional Neural Network (R-CNN). After the necessary preprocessing, the images are provided to the convolutional neural network and the training and test operations are triggered. The results show that the proposed method is correctly able to identify the area of the tongue as well as to distinguish a patient from a non-patient. Based on experiments, the DenseNet network has the highest accuracy compared to other deep architectures. The experimental results show that the accuracy of this network for gastric cancer detection reaches 91%, which shows the superiority of the method in comparison to the state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
207,375
2004.02907
Data-Driven Distributed Stochastic Model Predictive Control with Closed-Loop Chance Constraint Satisfaction
Distributed model predictive control methods for uncertain systems often suffer from considerable conservatism and can tolerate only small uncertainties due to the use of robust formulations that are amenable to distributed design and optimization methods. In this work, we propose a distributed stochastic model predictive control (DSMPC) scheme for dynamically coupled linear discrete-time systems subject to unbounded additive disturbances that are potentially correlated in time. An indirect feedback formulation ensures recursive feasibility of the DSMPC problem, and a data-driven, distributed and optimization-free constraint tightening approach allows for exact satisfaction of chance constraints during closed-loop control, addressing typical sources of conservatism. The computational complexity of the proposed controller is similar to nominal distributed MPC. The approach is demonstrated in simulation for the temperature control of a large-scale data center subject to randomly varying computational loads.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
171,375
2405.16645
Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models
The availability of large-scale multimodal datasets and advancements in diffusion models have significantly accelerated progress in 4D content generation. Most prior approaches rely on multiple image or video diffusion models, utilizing score distillation sampling for optimization or generating pseudo novel views for direct supervision. However, these methods are hindered by slow optimization speeds and multi-view inconsistency issues. Spatial and temporal consistency in 4D geometry has been extensively explored respectively in 3D-aware diffusion models and traditional monocular video diffusion models. Building on this foundation, we propose a strategy to migrate the temporal consistency in video diffusion models to the spatial-temporal consistency required for 4D generation. Specifically, we present a novel framework, \textbf{Diffusion4D}, for efficient and scalable 4D content generation. Leveraging a meticulously curated dynamic 3D dataset, we develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets. To control the dynamic strength of these assets, we introduce a 3D-to-4D motion magnitude metric as guidance. Additionally, we propose a novel motion magnitude reconstruction loss and 3D-aware classifier-free guidance to refine the learning and generation of motion dynamics. After obtaining orbital views of the 4D asset, we perform explicit 4D construction with Gaussian splatting in a coarse-to-fine manner. The synthesized multi-view consistent 4D image set enables us to swiftly generate high-fidelity and diverse 4D assets within just several minutes. Extensive experiments demonstrate that our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency across various prompt modalities.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
457,532
2304.13497
Efficient Circuit-Level Implementation of Knuth-Based Balanced and Nearly-Balanced Codes
Coding schemes are often used in high-speed processor-processor or processor-memory busses in digital systems. In particular, we have introduced (in a 2012 DesignCon paper) a zero sum (ZS) signaling method which uses balanced or nearly-balanced coding to reduce simultaneous switching noise (SSN) in a single-ended bus to a level comparable to that of differential signaling. While several balanced coding schemes are known, few papers exist that describe the necessary digital hardware implementations of (known) balanced coding schemes, and no algorithms had previously been developed for nearly-balanced coding. In this work, we extend a known balanced coding scheme to accommodate nearly-balanced coding and demonstrate a range of coding and decoding circuits through synthesis in 65 nm CMOS. These hardware implementations have minimal impact on the energy efficiency and area when compared to current serializer/deserializers (SerDes) at clock rates which would support SerDes integration.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
360,597
2410.11570
A Data-Driven Aggressive Autonomous Racing Framework Utilizing Local Trajectory Planning with Velocity Prediction
The development of autonomous driving has boosted the research on autonomous racing. However, existing local trajectory planning methods have difficulty planning trajectories with optimal velocity profiles at racetracks with sharp corners, thus weakening the performance of autonomous racing. To address this problem, we propose a local trajectory planning method that integrates Velocity Prediction based on Model Predictive Contour Control (VPMPCC). The optimal parameters of VPMPCC are learned through Bayesian Optimization (BO) based on a proposed novel Objective Function adapted to Racing (OFR). Specifically, VPMPCC achieves velocity prediction by encoding the racetrack as a reference velocity profile and incorporating it into the optimization problem. This method optimizes the velocity profile of local trajectories, especially at corners with significant curvature. The proposed OFR balances racing performance with vehicle safety, ensuring safe and efficient BO training. In the simulation, the number of training iterations for OFR-based BO is reduced by 42.86% compared to the state-of-the-art method. The optimal simulation-trained parameters are then applied to a real-world F1TENTH vehicle without retraining. During prolonged racing on a custom-built racetrack featuring significant sharp corners, the mean velocity of VPMPCC reaches 93.18% of the vehicle's handling limits. The released code is available at https://github.com/zhouhengli/VPMPCC.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
498,624
2111.01564
MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks
We propose a novel way to incorporate expert knowledge into the training of deep neural networks. Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering. In contrast, our approach, called MultiplexNet, represents domain knowledge as a logical formula in disjunctive normal form (DNF) which is easy to encode and to elicit from human experts. It introduces a Categorical latent variable that learns to choose which constraint term optimizes the error function of the network and it compiles the constraints directly into the output of existing learning algorithms. We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. In some cases, MultiplexNet finds better solutions than the baselines; or solutions that could not be achieved with the alternative approaches. Our contribution is in encoding domain knowledge in a way that facilitates inference that is shown to be both efficient and general; and critically, our approach guarantees 100% constraint satisfaction in a network's output.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
264,593
2202.08815
GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs
Most methods for explaining black-box classifiers (e.g. on tabular data, images, or time series) rely on measuring the impact that removing/perturbing features has on the model output. This forces the explanation language to match the classifier's feature space. However, when dealing with graph data, in which the basic features correspond to the edges describing the graph structure, this matching between features space and explanation language might not be appropriate. Decoupling the feature space (edges) from a desired high-level explanation language (such as motifs) is thus a major challenge towards developing actionable explanations for graph classification tasks. In this paper we introduce GRAPHSHAP, a Shapley-based approach able to provide motif-based explanations for identity-aware graph classifiers, assuming no knowledge whatsoever about the model or its training data: the only requirement is that the classifier can be queried as a black-box at will. For the sake of computational efficiency we explore a progressive approximation strategy and show how a simple kernel can efficiently approximate explanation scores, thus allowing GRAPHSHAP to scale on scenarios with a large explanation space (i.e. large number of motifs). We showcase GRAPHSHAP on a real-world brain-network dataset consisting of patients affected by Autism Spectrum Disorder and a control group. Our experiments highlight how the classification provided by a black-box model can be effectively explained by few connectomics patterns.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
280,991
1811.10347
Estimating Causal Effects With Partial Covariates For Clinical Interpretability
Estimating the causal effects of an intervention in the presence of confounding is a frequently occurring problem in applications such as medicine. The task is challenging since there may be multiple confounding factors, some of which may be missing, and inferences must be made from high-dimensional, noisy measurements. In this paper, we propose a decision-theoretic approach to estimate the causal effects of interventions where a subset of the covariates is unavailable for some patients during testing. Our approach uses the information bottleneck principle to perform a discrete, low-dimensional sufficient reduction of the covariate data to estimate a distribution over confounders. In doing so, we can estimate the causal effect of an intervention where only partial covariate information is available. Our results on a causal inference benchmark and a real application for treating sepsis show that our method achieves state-of-the-art performance, without sacrificing interpretability.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
114,469
2411.14743
FOCUS: Knowledge-enhanced Adaptive Visual Compression for Few-shot Whole Slide Image Classification
Few-shot learning presents a critical solution for cancer diagnosis in computational pathology (CPath), addressing fundamental limitations in data availability, particularly the scarcity of expert annotations and patient privacy constraints. A key challenge in this paradigm stems from the inherent disparity between the limited training set of whole slide images (WSIs) and the enormous number of contained patches, where a significant portion of these patches lacks diagnostically relevant information, potentially diluting the model's ability to learn and focus on critical diagnostic features. While recent works attempt to address this by incorporating additional knowledge, several crucial gaps hinder further progress: (1) despite the emergence of powerful pathology foundation models (FMs), their potential remains largely untapped, with most approaches limiting their use to basic feature extraction; (2) current language guidance mechanisms attempt to align text prompts with vast numbers of WSI patches all at once, struggling to leverage rich pathological semantic information. To this end, we introduce the knowledge-enhanced adaptive visual compression framework, dubbed FOCUS, which uniquely combines pathology FMs with language prior knowledge to enable a focused analysis of diagnostically relevant regions by prioritizing discriminative WSI patches. Our approach implements a progressive three-stage compression strategy: we first leverage FMs for global visual redundancy elimination, and integrate compressed features with language prompts for semantic relevance assessment, then perform neighbor-aware visual token filtering while preserving spatial coherence. Extensive experiments on pathological datasets spanning breast, lung, and ovarian cancers demonstrate its superior performance in few-shot pathology diagnosis. Code will be made available at https://github.com/dddavid4real/FOCUS.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
510,302
1602.08829
Access Time Tradeoffs in Archive Compression
Web archives, query and proxy logs, and so on, can all be very large and highly repetitive; and are accessed only sporadically and partially, rather than continually and holistically. This type of data is ideal for compression-based archiving, provided that random-access to small fragments of the original data can be achieved without needing to decompress everything. The recent RLZ (relative Lempel Ziv) compression approach uses a semi-static model extracted from the text to be compressed, together with a greedy factorization of the whole text encoded using static integer codes. Here we demonstrate more precisely than before the scenarios in which RLZ excels. We contrast RLZ with alternatives based on block-based adaptive methods, including approaches that "prime" the encoding for each block, and measure a range of implementation options using both hard-disk (HDD) and solid-state disk (SSD) drives. For HDD, the dominant factor affecting access speed is the compression rate achieved, even when this involves larger dictionaries and larger blocks. When the data is on SSD the same effects are present, but not as markedly, and more complex trade-offs apply.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
52,696
2311.15106
Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion
As the immense opportunities enabled by large language models become more apparent, NLP systems will be increasingly expected to excel in real-world settings. However, in many instances, powerful models alone will not yield translational NLP solutions, especially if the formulated problem is not well aligned with the real-world task. In this work, we study the case of UMLS vocabulary insertion, an important real-world task in which hundreds of thousands of new terms, referred to as atoms, are added to the UMLS, one of the most comprehensive open-source biomedical knowledge bases. Previous work aimed to develop an automated NLP system to make this time-consuming, costly, and error-prone task more efficient. Nevertheless, practical progress in this direction has been difficult to achieve due to a problem formulation and evaluation gap between research output and the real-world task. In order to address this gap, we introduce a new formulation for UMLS vocabulary insertion which mirrors the real-world task, datasets which faithfully represent it and several strong baselines we developed through re-purposing existing solutions. Additionally, we propose an effective rule-enhanced biomedical language model which enables important new model behavior, outperforms all strong baselines and provides measurable qualitative improvements to editors who carry out the UVI task. We hope this case study provides insight into the considerable importance of problem formulation for the success of translational NLP solutions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
410,384
2407.08735
Real-Time Anomaly Detection and Reactive Planning with Large Language Models
Foundation models, e.g., large language models (LLMs), trained on internet-scale data possess zero-shot generalization capabilities that make them a promising technology towards detecting and mitigating out-of-distribution failure modes of robotic systems. Fully realizing this promise, however, poses two challenges: (i) mitigating the considerable computational expense of these models such that they may be applied online, and (ii) incorporating their judgement regarding potential anomalies into a safe control framework. In this work, we present a two-stage reasoning framework: First is a fast binary anomaly classifier that analyzes observations in an LLM embedding space, which may then trigger a slower fallback selection stage that utilizes the reasoning capabilities of generative LLMs. These stages correspond to branch points in a model predictive control strategy that maintains the joint feasibility of continuing along various fallback plans to account for the slow reasoner's latency as soon as an anomaly is detected, thus ensuring safety. We show that our fast anomaly classifier outperforms autoregressive reasoning with state-of-the-art GPT models, even when instantiated with relatively small language models. This enables our runtime monitor to improve the trustworthiness of dynamic robotic systems, such as quadrotors or autonomous vehicles, under resource and time constraints. Videos illustrating our approach in both simulation and real-world experiments are available on this project page: https://sites.google.com/view/aesop-llm.
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
472,291
2406.09882
Harm Mitigation in Recommender Systems under User Preference Dynamics
We consider a recommender system that takes into account the interplay between recommendations, the evolution of user interests, and harmful content. We model the impact of recommendations on user behavior, particularly the tendency to consume harmful content. We seek recommendation policies that establish a tradeoff between maximizing click-through rate (CTR) and mitigating harm. We establish conditions under which the user profile dynamics have a stationary point, and propose algorithms for finding an optimal recommendation policy at stationarity. We experiment on a semi-synthetic movie recommendation setting initialized with real data and observe that our policies outperform baselines at simultaneously maximizing CTR and mitigating harm.
false
false
false
false
false
true
true
false
false
false
false
false
false
true
false
false
false
false
464,126
2412.10373
GaussianWorld: Gaussian World Model for Streaming 3D Occupancy Prediction
3D occupancy prediction is important for autonomous driving due to its comprehensive perception of the surroundings. To incorporate sequential inputs, most existing methods fuse representations from previous frames to infer the current 3D occupancy. However, they fail to consider the continuity of driving scenarios and ignore the strong prior provided by the evolution of 3D scenes (e.g., only dynamic objects move). In this paper, we propose a world-model-based framework to exploit the scene evolution for perception. We reformulate 3D occupancy prediction as a 4D occupancy forecasting problem conditioned on the current sensor input. We decompose the scene evolution into three factors: 1) ego motion alignment of static scenes; 2) local movements of dynamic objects; and 3) completion of newly-observed scenes. We then employ a Gaussian world model (GaussianWorld) to explicitly exploit these priors and infer the scene evolution in the 3D Gaussian space considering the current RGB observation. We evaluate the effectiveness of our framework on the widely used nuScenes dataset. Our GaussianWorld improves the performance of the single-frame counterpart by over 2% in mIoU without introducing additional computations. Code: https://github.com/zuosc19/GaussianWorld.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
516,891
2010.06847
Fusing electrical and elasticity imaging
Electrical and elasticity imaging are promising modalities for a suite of different applications including medical tomography, non-destructive testing, and structural health monitoring. These emerging modalities are capable of providing remote, non-invasive, and low cost opportunities. Unfortunately, both modalities are severely ill-posed nonlinear inverse problems, susceptible to noise and modelling errors. Nevertheless, the ability to incorporate complementary data sets obtained simultaneously offers mutually-beneficial information. By fusing electrical and elastic modalities as a joint problem we are afforded the possibility to stabilise the inversion process via the utilisation of auxiliary information from both modalities as well as joint structural operators. In this study, we will discuss a possible approach to combine electrical and elasticity imaging in a joint reconstruction problem giving rise to novel multi-modality applications for use in both medical and structural engineering.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
200,619
2305.01054
CHIC: Corporate Document for Visual question Answering
The massive use of digital documents, driven by the substantial trend of paperless initiatives, has confronted some companies with the need to process thousands of documents per day automatically. To achieve this, they use automatic information retrieval (IR), allowing them to extract useful information from large datasets quickly. In order to have effective IR methods, it is first necessary to have an adequate dataset. Although companies have enough data to take into account their needs, there is also a need for a public database to compare contributions between state-of-the-art methods. Public document datasets exist, such as DocVQA [2] and XFUND [10], but these do not fully satisfy the needs of companies. XFUND contains only form documents, while companies use several types of documents (i.e. structured documents like forms, but also semi-structured ones such as invoices, and unstructured ones such as emails). Compared to XFUND, DocVQA has several types of documents, but only 4.5% of them are corporate documents (i.e. invoices, purchase orders, etc.). This 4.5% of documents does not meet the diversity of documents required by companies. We propose CHIC, a public visual question-answering dataset. This dataset contains different types of corporate documents, and the information extracted from these documents meets the expectations of companies.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
361,535
2204.05968
Automated Surface Texture Analysis via Discrete Cosine Transform and Discrete Wavelet Transform
Surface roughness and texture are critical to the functional performance of engineering components. The ability to analyze roughness and texture effectively and efficiently is much needed to ensure surface quality in many surface generation processes, such as machining, surface mechanical treatment, etc. Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) are two commonly used signal decomposition tools for surface roughness and texture analysis. Both methods require selecting a threshold to decompose a given surface into its three main components: form, waviness, and roughness. However, although DWT and DCT are part of the ISO surface finish standards, there exists no systematic guidance on how to compute these thresholds, and they are often manually selected on a case-by-case basis. This makes utilizing these methods for studying surfaces dependent on the user's judgment and limits their automation potential. Therefore, we present two automatic threshold selection algorithms based on information theory and signal energy. We use machine learning to validate the success of our algorithms both using simulated surfaces as well as digital microscopy images of machined surfaces. Specifically, we generate feature vectors for each surface area or profile and apply supervised classification. Comparing our results with the heuristic threshold selection approach shows good agreement with mean accuracies as high as 95\%. We also compare our results with Gaussian filtering (GF) and show that while GF results for areas can yield slightly higher accuracies, our results outperform GF for surface profiles. We further show that our automatic threshold selection has significant advantages in terms of computational time as evidenced by decreasing the number of mode computations by an order of magnitude compared to the heuristic thresholding for DCT.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
291,198
2205.13422
Opinion Spam Detection: A New Approach Using Machine Learning and Network-Based Algorithms
E-commerce is the fastest-growing segment of the economy. Online reviews play a crucial role in helping consumers evaluate and compare products and services. As a result, fake reviews (opinion spam) are becoming more prevalent and negatively impacting customers and service providers. There are many reasons why it is hard to identify opinion spammers automatically, including the absence of reliable labeled data. This limitation precludes an off-the-shelf application of a machine learning pipeline. We propose a new method for classifying reviewers as spammers or benign, combining machine learning with a message-passing algorithm that capitalizes on the users' graph structure to compensate for the possible scarcity of labeled data. We devise a new way of sampling the labels for the training step (active learning), replacing the typical uniform sampling. Experiments on three large real-world datasets from Yelp.com show that our method outperforms state-of-the-art active learning approaches and also machine learning methods that use a much larger set of labeled data for training.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
298,929
2403.06664
Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System
The recent huge advance of Large Language Models (LLMs) is mainly driven by the increase in the number of parameters. This has led to substantial memory capacity requirements, necessitating the use of dozens of GPUs just to meet the capacity. One popular solution to this is storage-offloaded training, which uses host memory and storage as an extended memory hierarchy. However, this obviously comes at the cost of storage bandwidth bottleneck because storage devices have orders of magnitude lower bandwidth compared to that of GPU device memories. Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system. The main component of Smart-Infinity is SmartUpdate, which performs parameter updates on custom near-storage accelerators. We identify that moving parameter updates to the storage side removes most of the storage traffic. In addition, we propose an efficient data transfer handler structure to address the system integration issues for Smart-Infinity. The handler allows overlapping data transfers with fixed memory consumption by reusing the device buffer. Lastly, we propose accelerator-assisted gradient compression/decompression to enhance the scalability of Smart-Infinity. When scaling to multiple near-storage processing devices, the write traffic on the shared channel becomes the bottleneck. To alleviate this, we compress the gradients on the GPU and decompress them on the accelerators. It provides further acceleration from reduced traffic. As a result, Smart-Infinity achieves a significant speedup compared to the baseline. Notably, Smart-Infinity is a ready-to-use approach that is fully integrated into PyTorch on a real system. We will open-source Smart-Infinity to facilitate its use.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
436,553
1907.03697
Prediction of Soil Moisture Content Based On Satellite Data and Sequence-to-Sequence Networks
The main objective of this study is to combine remote sensing and machine learning to detect soil moisture content. Growing population and food consumption has led to the need to improve agricultural yield and to reduce wastage of natural resources. In this paper, we propose a neural network architecture, based on recent work by the research community, that can make a strong social impact and aid United Nations Sustainable Development Goal of Zero Hunger. The main aims here are to: improve efficiency of water usage; reduce dependence on irrigation; increase overall crop yield; minimise risk of crop loss due to drought and extreme weather conditions. We achieve this by applying satellite imagery, crop segmentation, soil classification and NDVI and soil moisture prediction on satellite data, ground truth and climate data records. By applying machine learning to sensor data and ground data, farm management systems can evolve into a real time AI enabled platform that can provide actionable recommendations and decision support tools to the farmers.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
137,910
2111.09281
An Experimental Study of Latency for IEEE 802.11be Multi-link Operation
Will Multi-Link Operation (MLO) be able to improve the latency of Wi-Fi networks? MLO is one of the most disruptive MAC-layer techniques included in the IEEE 802.11be amendment. It allows a device to use multiple radios simultaneously and in a coordinated way, providing a new framework to improve the WLAN throughput and latency. In this paper, we investigate the potential latency benefits of MLO by using a large dataset containing 5 GHz spectrum occupancy measurements. Experimental results show that when the channels are symmetrically occupied, MLO can improve latency by one order of magnitude. In contrast, in asymmetrically occupied channels, MLO can sometimes be detrimental and increase latency. To address this case, we introduce Opportunistic Simultaneous Transmit and Receive (STR+) channel access and study its benefits.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
266,967
2112.09496
Towards Launching AI Algorithms for Cellular Pathology into Clinical & Pharmaceutical Orbits
Computational Pathology (CPath) is an emerging field concerned with the study of tissue pathology via computational algorithms for the processing and analysis of digitized high-resolution images of tissue slides. Recent deep learning based developments in CPath have successfully leveraged sheer volume of raw pixel data in histology images for predicting target parameters in the domains of diagnostics, prognostics, treatment sensitivity and patient stratification -- heralding the promise of a new data-driven AI era for both histopathology and oncology. With data serving as the fuel and AI as the engine, CPath algorithms are poised to be ready for takeoff and eventual launch into clinical and pharmaceutical orbits. In this paper, we discuss CPath limitations and associated challenges to enable the readers distinguish hope from hype and provide directions for future research to overcome some of the major challenges faced by this budding field to enable its launch into the two orbits.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
272,173
2103.14873
Equivariant Filtering Framework for Inertial-Integrated Navigation
This paper proposes an equivariant filtering (EqF) framework for the inertial-integrated state estimation problem. As the kinematic system of inertial-integrated navigation can be naturally modeled on the matrix Lie group $SE_2(3)$, the symmetry of the Lie group can be exploited to design an equivariant filter which extends invariant extended Kalman filtering on the group-affine system. Furthermore, details of the analytic state transition matrices for the left-invariant error and the right-invariant error are given.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
227,001
2303.00492
Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices
Graph neural networks (GNN) have been widely deployed in real-world networked applications and systems due to their capability to handle graph-structured data. However, the growing awareness of data privacy severely challenges the traditional centralized model training paradigm, where a server holds all the graph information. Federated learning is an emerging collaborative computing paradigm that allows model training without data centralization. Existing federated GNN studies mainly focus on systems where clients hold distinctive graphs or sub-graphs. The practical node-level federated situation, where each client is only aware of its direct neighbors, has yet to be studied. In this paper, we propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning with feature and degree protection on node-level federated graphs. We first design a tree constructor to improve the representation capability given the limited structural information. We further present a Monte Carlo Markov Chain-based algorithm to mitigate the workload imbalance caused by degree heterogeneity with theoretically-guaranteed performance. Based on the constructed tree for each client, a decentralized tree-based GNN trainer is proposed to support versatile training. Extensive experiments demonstrate that Lumos outperforms the baseline with significantly higher accuracy and greatly reduced communication cost and training time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
348,614
2211.06218
Total Variation Graph Neural Networks
Recently proposed Graph Neural Networks (GNNs) for vertex clustering are trained with an unsupervised minimum cut objective, approximated by a Spectral Clustering (SC) relaxation. However, the SC relaxation is loose and, while it offers a closed-form solution, it also yields overly smooth cluster assignments that poorly separate the vertices. In this paper, we propose a GNN model that computes cluster assignments by optimizing a tighter relaxation of the minimum cut based on graph total variation (GTV). The cluster assignments can be used directly to perform vertex clustering or to implement graph pooling in a graph classification framework. Our model consists of two core components: i) a message-passing layer that minimizes the $\ell_1$ distance in the features of adjacent vertices, which is key to achieving sharp transitions between clusters; ii) an unsupervised loss function that minimizes the GTV of the cluster assignments while ensuring balanced partitions. Experimental results show that our model outperforms other GNNs for vertex clustering and graph classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
329,816
2411.13441
A Case Study of API Design for Interoperability and Security of the Internet of Things
Heterogeneous distributed systems, including the Internet of Things (IoT) or distributed cyber-physical systems (CPS), often suffer a lack of interoperability and security, which hinders the wider deployment of such systems. Specifically, the different levels of security requirements and the heterogeneity in terms of communication models, for instance, point-to-point vs. publish-subscribe, are the example challenges of IoT and distributed CPS consisting of heterogeneous devices and applications. In this paper, we propose a working application programming interface (API) and runtime to enhance interoperability and security while addressing the challenges that stem from the heterogeneity in the IoT and distributed CPS. In our case study, we design and implement our application programming interface (API) design approach using open-source software, and with our working implementation, we evaluate the effectiveness of our proposed approach. Our experimental results suggest that our approach can achieve both interoperability and security in the IoT and distributed CPS with a reasonably small overhead and better-managed software.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
509,786
1702.07189
Analyzing Learned Convnet Features with Dirichlet Process Gaussian Mixture Models
Convolutional Neural Networks (Convnets) have achieved good results in a range of computer vision tasks in recent years. Though given a lot of attention, visualizing the learned representations to interpret Convnets still remains a challenging task. The high dimensionality of internal representations and the high abstraction of deep layers are the main challenges when visualizing Convnet functionality. We present in this paper a technique based on clustering internal Convnet representations with a Dirichlet Process Gaussian Mixture Model, for visualization of learned representations in Convnets. Our method copes with the high dimensionality of a Convnet by clustering representations across all nodes of each layer. We will discuss how this application is useful when considering transfer learning, i.e. transferring a model trained on one dataset to solve a task on a different one.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
68,742
2402.11242
Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection
Learning with noisy labels has gained increasing attention because the inevitable imperfect labels in real-world scenarios can substantially hurt the deep model performance. Recent studies tend to regard low-loss samples as clean ones and discard high-loss ones to alleviate the negative impact of noisy labels. However, real-world datasets contain not only noisy labels but also class imbalance. The imbalance issue is prone to causing failure in the loss-based sample selection since the under-learning of tail classes also tends to produce high losses. To this end, we propose a simple yet effective method to address noisy labels in imbalanced datasets. Specifically, we propose Class-Balance-based sample Selection (CBS) to prevent the tail class samples from being neglected during training. We propose Confidence-based Sample Augmentation (CSA) for the chosen clean samples to enhance their reliability in the training process. To exploit selected noisy samples, we resort to prediction history to rectify labels of noisy samples. Moreover, we introduce the Average Confidence Margin (ACM) metric to measure the quality of corrected labels by leveraging the model's evolving training dynamics, thereby ensuring that low-quality corrected noisy samples are appropriately masked out. Lastly, consistency regularization is imposed on filtered label-corrected noisy samples to boost model performance. Comprehensive experimental results on synthetic and real-world datasets demonstrate the effectiveness and superiority of our proposed method, especially in imbalanced scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
430,303
2106.07453
Efficient Data-specific Model Search for Collaborative Filtering
Collaborative filtering (CF), as a fundamental approach for recommender systems, is usually built on the latent factor model with learnable parameters to predict users' preferences towards items. However, designing a proper CF model for a given dataset is not easy, since the properties of datasets are highly diverse. In this paper, motivated by the recent advances in automated machine learning (AutoML), we propose to design a data-specific CF model by AutoML techniques. The key here is a new framework that unifies state-of-the-art (SOTA) CF methods and splits them into disjoint stages of input encoding, embedding function, interaction function, and prediction function. We further develop an easy-to-use, robust, and efficient search strategy, which utilizes random search and a performance predictor for efficient searching within the above framework. In this way, we can combinatorially generalize data-specific CF models, which have not been visited in the literature, from SOTA ones. Extensive experiments on five real-world datasets demonstrate that our method can consistently outperform SOTA ones for various CF tasks. Further experiments verify the rationality of the proposed framework and the efficiency of the search strategy. The searched CF models can also provide insights for exploring more effective methods in the future.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
240,929
2109.03585
Matching in the Dark: A Dataset for Matching Image Pairs of Low-light Scenes
This paper considers matching images of low-light scenes, aiming to widen the frontier of SfM and visual SLAM applications. Recent image sensors can record the brightness of scenes with more than eight-bit precision, available in their RAW-format image. We are interested in making full use of such high-precision information to match extremely low-light scene images that conventional methods cannot handle. For extreme low-light scenes, even if some of their brightness information exists in the RAW format images' low bits, the standard raw image processing on cameras fails to utilize them properly. As was recently shown by Chen et al., CNNs can learn to produce images with a natural appearance from such RAW-format images. To consider if and how well we can utilize such information stored in RAW-format images for image matching, we have created a new dataset named MID (matching in the dark). Using it, we experimentally evaluated combinations of eight image-enhancing methods and eleven image matching methods consisting of classical/neural local descriptors and classical/neural initial point-matching methods. The results show the advantage of using the RAW-format images and the strengths and weaknesses of the above component methods. They also imply there is room for further research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
254,116
2002.09615
Preference Modeling with Context-Dependent Salient Features
We consider the problem of estimating a ranking on a set of items from noisy pairwise comparisons given item features. We address the fact that pairwise comparison data often reflects irrational choice, e.g. intransitivity. Our key observation is that two items compared in isolation from other items may be compared based on only a salient subset of features. Formalizing this framework, we propose the salient feature preference model and prove a finite sample complexity result for learning the parameters of our model and the underlying ranking with maximum likelihood estimation. We also provide empirical results that support our theoretical bounds and illustrate how our model explains systematic intransitivity. Finally we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
165,119
1803.04321
Structure-based Sybil Detection in Social Networks via Local Rule-based Propagation
Sybil detection in social networks is a basic security research problem. Structure-based methods have been shown to be promising at detecting Sybils. Existing structure-based methods can be classified into Random Walk (RW)-based methods and Loopy Belief Propagation (LBP)-based methods. RW-based methods cannot leverage labeled Sybils and labeled benign users simultaneously, which limits their detection accuracy, and/or they are not robust to noisy labels. LBP-based methods are not scalable and cannot guarantee convergence. In this work, we propose SybilSCAR, a novel structure-based method to detect Sybils in social networks. SybilSCAR is Scalable, Convergent, Accurate, and Robust to label noise. We first propose a framework to unify RW-based and LBP-based methods. Under our framework, these methods can be viewed as iteratively applying a (different) local rule to every user, which propagates label information among a social graph. Second, we design a new local rule, which SybilSCAR iteratively applies to every user to detect Sybils. We compare SybilSCAR with state-of-the-art RW-based and LBP-based methods theoretically and empirically. Theoretically, we show that, with proper parameter settings, SybilSCAR has a tighter asymptotical bound on the number of Sybils that are falsely accepted into a social network than existing structure-based methods. Empirically, we perform evaluation using both social networks with synthesized Sybils and a large-scale Twitter dataset (41.7M nodes and 1.2B edges) with real Sybils. Our results show that 1) SybilSCAR is substantially more accurate and more robust to label noise than state-of-the-art RW-based methods; 2) SybilSCAR is more accurate and one order of magnitude more scalable than state-of-the-art LBP-based methods.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
92,434
1506.07032
Coverage centralities for temporal networks
The structure of real networked systems, such as social relationships, can be modeled as temporal networks in which each edge appears only at the prescribed time. Understanding the structure of temporal networks requires quantifying the importance of a temporal vertex, which is a pair of vertex index and time. In this paper, we define two centrality measures of a temporal vertex based on the fastest temporal paths which use the temporal vertex. The definition is free from parameters and robust against the change in time scale on which we focus. In addition, we can efficiently compute these centrality values for all temporal vertices. Using the two centrality measures, we reveal that distributions of these centrality values of real-world temporal networks are heterogeneous. For various datasets, we also demonstrate that a majority of the highly central temporal vertices are located within a narrow time window around a particular time. In other words, there is a bottleneck time at which most information sent in the temporal network passes through a small number of temporal vertices, which suggests an important role of these temporal vertices in spreading phenomena.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
44,471