Dataset columns:

  id                   string (lengths 9 to 16)
  title                string (lengths 4 to 278)
  abstract             string (lengths 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool (2 classes each); one label column per arXiv category
  __index_level_0__    int64 (0 to 541k)

Each record below gives the id, title, and abstract, followed by "Labels:" listing the label columns that are true (all other label columns are false) and "Index:" giving the __index_level_0__ value.
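As a rough illustration of how the flattened fields of one row fit the schema above, here is a minimal Python sketch. The `parse_record` helper and the exact row layout are illustrative assumptions based only on the column order listed above; they are not code or an API shipped with the dataset.

```python
# Minimal sketch: map one flattened row (id, title, abstract, 18 bools, index)
# onto the schema above. Column order follows the listing in this preview.

LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def parse_record(fields):
    """Turn one flattened row into a dict with a list of active labels."""
    arxiv_id, title, abstract = fields[0], fields[1], fields[2]
    flags = fields[3:3 + len(LABEL_COLUMNS)]
    index = fields[3 + len(LABEL_COLUMNS)]
    return {
        "id": arxiv_id,
        "title": title,
        "abstract": abstract,
        # Keep only the categories whose boolean flag is set.
        "labels": [col for col, flag in zip(LABEL_COLUMNS, flags) if flag],
        "__index_level_0__": index,
    }

# Example: the first record in this preview (abstract shortened here).
row = ["1308.5906",
       "Biological effects and equivalent doses in radiotherapy: a software solution",
       "...",
       False, True] + [False] * 16 + [26672]
print(parse_record(row)["labels"])  # ['cs.CE']
```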
1308.5906
Biological effects and equivalent doses in radiotherapy: a software solution
The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding the delivered doses or any future prescriptions relating to treatment changes. We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. The results are obtained from an algorithm that minimizes an ad-hoc cost function, and then compared to the equivalent dose computed using standard calculators in seven French radiotherapy centers.
Labels: cs.CE
Index: 26,672
2309.07822
CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration
In recent years, large language models (LLMs) have shown remarkable capabilities at scale, particularly at generating text conditioned on a prompt. In our work, we investigate the use of LLMs to augment training data of small language models~(SLMs) with automatically generated counterfactual~(CF) instances -- i.e. minimally altered inputs -- in order to improve out-of-domain~(OOD) performance of SLMs in the extractive question answering~(QA) setup. We show that, across various LLM generators, such data augmentation consistently enhances OOD performance and improves model calibration for both confidence-based and rationale-augmented calibrator models. Furthermore, these performance improvements correlate with higher diversity of CF instances in terms of their surface form and semantic content. Finally, we show that CF augmented models which are easier to calibrate also exhibit much lower entropy when assigning importance, indicating that rationale-augmented calibrators prefer concise explanations.
Labels: cs.CL
Index: 391,918
2111.07524
PatchGraph: In-hand tactile tracking with learned surface normals
We address the problem of tracking 3D object poses from touch during in-hand manipulations. Specifically, we look at tracking small objects using vision-based tactile sensors that provide high-dimensional tactile image measurements at the point of contact. While prior work has relied on a-priori information about the object being localized, we remove this requirement. Our key insight is that an object is composed of several local surface patches, each informative enough to achieve reliable object tracking. Moreover, we can recover the geometry of this local patch online by extracting local surface normal information embedded in each tactile image. We propose a novel two-stage approach. First, we learn a mapping from tactile images to surface normals using an image translation network. Second, we use these surface normals within a factor graph to both reconstruct a local patch map and use it to infer 3D object poses. We demonstrate reliable object tracking for over $100$ contact sequences across unique shapes with four objects in simulation and two objects in the real-world. Supplementary video: https://youtu.be/FHks--haOGY
Labels: cs.RO
Index: 266,403
2309.15681
Tactile-based Active Inference for Force-Controlled Peg-in-Hole Insertions
Reinforcement Learning (RL) has shown great promise for efficiently learning force control policies in peg-in-hole tasks. However, robots often face difficulties due to visual occlusions by the gripper and uncertainties in the initial grasping pose of the peg. These challenges often restrict force-controlled insertion policies to situations where the peg is rigidly fixed to the end-effector. While vision-based tactile sensors offer rich tactile feedback that could potentially address these issues, utilizing them to learn effective tactile policies is both computationally intensive and difficult to generalize. In this paper, we propose a robust tactile insertion policy that can align the tilted peg with the hole using active inference, without the need for extensive training on large datasets. Our approach employs a dual-policy architecture: one policy focuses on insertion, integrating force control and RL to guide the object into the hole, while the other policy performs active inference based on tactile feedback to align the tilted peg with the hole. In real-world experiments, our dual-policy architecture achieved a 90% success rate for insertion into a hole with a clearance of less than 0.1 mm, significantly outperforming previous methods that lack tactile sensory feedback (5%). To assess the generalizability of our alignment policy, we conducted experiments with five different pegs, demonstrating its effective adaptation to multiple objects.
Labels: cs.RO
Index: 395,069
1212.2547
Information spreading with aging in heterogeneous populations
We study the critical properties of a model of information spreading based on the SIS epidemic model. Spreading rates decay with time, as ruled by two parameters, $\epsilon$ and $l$, that can be either constant or randomly distributed in the population. The spreading dynamics is developed on top of Erd\"os-Renyi networks. We present the mean-field analytical solution of the model in its simplest formulation, and Monte Carlo simulations are performed for the more heterogeneous cases. The outcomes show that the system undergoes a nonequilibrium phase transition whose critical point depends on the parameters $\epsilon$ and $l$. In addition, we conclude that the more heterogeneous the population, the more favored the information spreading over the network.
Labels: cs.SI
Index: 20,326
2307.03200
Transcribing Educational Videos Using Whisper: A preliminary study on using AI for transcribing educational videos
Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience. The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems. In this article, we evaluate the transcripts generated by Whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos.
Labels: cs.AI, cs.CY, Other
Index: 377,958
2401.11143
Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities
We propose the Multi-Head Density Adaptive Attention Mechanism (DAAM), a novel probabilistic attention framework that can be used for Parameter-Efficient Fine-tuning (PEFT), and the Density Adaptive Transformer (DAT), designed to enhance information aggregation across multiple modalities, including Speech, Text, and Vision. DAAM integrates learnable mean and variance into its attention mechanism, implemented in a multi-head framework, enabling it to collectively model any probability distribution for dynamic recalibration of feature significance. This method demonstrates significant improvements, especially with highly non-stationary data, surpassing the state-of-the-art attention techniques in model performance, up to approximately +20% (abs.) in accuracy. Empirically, DAAM exhibits superior adaptability and efficacy across a diverse range of tasks, including emotion recognition in speech, image classification, and text classification, thereby establishing its robustness and versatility in handling data across multiple modalities. Furthermore, we introduce the Importance Factor, a new learning-based metric that enhances the explainability of models trained with DAAM-based methods.
Labels: cs.SD, cs.AI, cs.LG, cs.CL, cs.CV
Index: 422,888
2501.17755
AI Governance through Markets
This paper argues that market governance mechanisms should be considered a key approach in the governance of artificial intelligence (AI), alongside traditional regulatory frameworks. While current governance approaches have predominantly focused on regulation, we contend that market-based mechanisms offer effective incentives for responsible AI development. We examine four emerging vectors of market governance: insurance, auditing, procurement, and due diligence, demonstrating how these mechanisms can affirm the relationship between AI risk and financial risk while addressing capital allocation inefficiencies. While we do not claim that market forces alone can adequately protect societal interests, we maintain that standardised AI disclosures and market mechanisms can create powerful incentives for safe and responsible AI development. This paper urges regulators, economists, and machine learning researchers to investigate and implement market-based approaches to AI governance.
Labels: cs.AI
Index: 528,441
2408.02632
SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models
As large language models (LLMs) continue to advance in capability and influence, ensuring their security and preventing harmful outputs has become crucial. A promising approach to address these concerns involves training models to automatically generate adversarial prompts for red teaming. However, the evolving subtlety of vulnerabilities in LLMs challenges the effectiveness of current adversarial methods, which struggle to specifically target and explore the weaknesses of these models. To tackle these challenges, we introduce the $\mathbf{S}\text{elf-}\mathbf{E}\text{volving }\mathbf{A}\text{dversarial }\mathbf{S}\text{afety }\mathbf{(SEAS)}$ optimization framework, which enhances security by leveraging data generated by the model itself. SEAS operates through three iterative stages: Initialization, Attack, and Adversarial Optimization, refining both the Red Team and Target models to improve robustness and safety. This framework reduces reliance on manual testing and significantly enhances the security capabilities of LLMs. Our contributions include a novel adversarial framework and a comprehensive safety dataset; after three iterations, the Target model achieves a security level comparable to that of GPT-4, while the Red Team model shows a marked increase in attack success rate (ASR) against advanced models. Our code and datasets are released at https://SEAS-LLM.github.io/.
Labels: cs.AI, cs.CL
Index: 478,700
2301.06289
Strong Converses using Typical Changes of Measures and Asymptotic Markov Chains
The paper presents exponentially-strong converses for source-coding, channel coding, and hypothesis testing problems. More specifically, it presents alternative proofs for the well-known exponentially-strong converse bounds for almost lossless source-coding with side-information and for channel coding over a discrete memoryless channel (DMC). These alternative proofs are solely based on a change of measure argument on the sets of conditionally or jointly typical sequences that result in a correct decision, and on the analysis of these measures in the asymptotic regime of infinite blocklengths. The paper also presents a new exponentially-strong converse for the K-hop hypothesis testing against independence problem with certain Markov chains and a strong converse for the two-terminal L-round interactive compression problem with multiple distortion constraints that depend on both sources and both reconstructions. This latter problem includes as special cases the Wyner-Ziv problem, the interactive function computation problem, and the compression with lossy common reconstruction problem. These new strong converse proofs are derived using similar change of measure arguments as described above and by additionally proving that certain Markov chains involving auxiliary random variables hold in the asymptotic regime of infinite blocklengths. As shown in related publications, the same method also yields converse bounds under expected resource constraints.
Labels: cs.IT
Index: 340,611
1701.03647
Restricted Boltzmann Machines with Gaussian Visible Units Guided by Pairwise Constraints
Restricted Boltzmann machines (RBMs) and their variants are usually trained by contrastive divergence (CD) learning, but the training procedure is an unsupervised learning approach, without any guidance from background knowledge. To enhance the expression ability of traditional RBMs, in this paper, we propose a pairwise constraints restricted Boltzmann machine with Gaussian visible units (pcGRBM) model, in which the learning procedure is guided by pairwise constraints and the process of encoding is conducted under this guidance. The pairwise constraints are encoded in the hidden-layer features of pcGRBM. Then, some pairwise hidden features of pcGRBM flock together and the others are separated according to this guidance. In order to deal with real-valued data, the binary visible units are replaced by linear units with Gaussian noise in the pcGRBM model. In the learning process of pcGRBM, the pairwise constraints are iterated through transitions between visible and hidden units during the CD learning procedure. Then, the proposed model is inferred by an approximate gradient descent method and the corresponding learning algorithm is designed in this paper. In order to compare the effectiveness of pcGRBM and traditional RBMs with Gaussian visible units, the features of the pcGRBM and RBM hidden layers are used as input 'data' for the K-means, spectral clustering (SP), and affinity propagation (AP) algorithms, respectively. A thorough experimental evaluation is performed with sixteen image datasets of Microsoft Research Asia Multimedia (MSRA-MM). The experimental results show that the clustering performance of the K-means, SP, and AP algorithms based on the pcGRBM model is significantly better than that of traditional RBMs. In addition, the pcGRBM model for the clustering task shows better performance than some semi-supervised clustering algorithms.
Labels: cs.LG
Index: 66,742
1311.1013
Interference Alignment (IA) and Coordinated Multi-Point (CoMP) with IEEE802.11ac feedback compression: testbed results
We have implemented interference alignment (IA) and joint transmission coordinated multipoint (CoMP) on a wireless testbed using the feedback compression scheme of the new 802.11ac standard. The performance as a function of the frequency domain granularity is assessed. Realistic throughput gains are obtained by probing each spatial modulation stream with ten different coding and modulation schemes. The gain of IA and CoMP over TDMA MIMO is found to be 26% and 71%, respectively under stationary conditions. In our dense indoor office deployment, the frequency domain granularity of the feedback can be reduced down to every 8th subcarrier (2.5MHz), without sacrificing performance.
Labels: cs.IT
Index: 28,200
1802.06058
Variance-based Gradient Compression for Efficient Distributed Deep Learning
Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradients for efficient communication, but they either suffer from a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance with negligible additional cost. We experimentally show that our method can achieve a very high compression ratio while maintaining the resulting model accuracy. We also analyze the efficiency using computation and communication cost models and provide evidence that this method enables distributed deep learning for many scenarios with commodity environments.
Labels: cs.LG
Index: 90,579
2309.09916
Learning Nonparametric High-Dimensional Generative Models: The Empirical-Beta-Copula Autoencoder
By sampling from the latent space of an autoencoder and decoding the latent space samples to the original data space, any autoencoder can simply be turned into a generative model. For this to work, it is necessary to model the autoencoder's latent space with a distribution from which samples can be obtained. Several simple possibilities (kernel density estimates, Gaussian distribution) and more sophisticated ones (Gaussian mixture models, copula models, normalizing flows) can be thought of and have been tried recently. This study aims to discuss, assess, and compare various techniques that can be used to capture the latent space so that an autoencoder can become a generative model while striving for simplicity. Among them, a new copula-based method, the Empirical Beta Copula Autoencoder, is considered. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling or synthesizing new data with specific features.
Labels: cs.LG
Index: 392,792
2307.10337
Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks
As the capabilities of Large Language Models (LLMs) emerge, they not only assist in accomplishing traditional tasks within more efficient paradigms but also stimulate the evolution of social bots. Researchers have begun exploring the implementation of LLMs as the driving core of social bots, enabling more efficient and user-friendly completion of tasks like profile completion, social behavior decision-making, and social content generation. However, there is currently a lack of systematic research on the behavioral characteristics of LLMs-driven social bots and their impact on social networks. We have curated data from Chirper, a Twitter-like social network populated by LLMs-driven social bots and embarked on an exploratory study. Our findings indicate that: (1) LLMs-driven social bots possess enhanced individual-level camouflage while exhibiting certain collective characteristics; (2) these bots have the ability to exert influence on online communities through toxic behaviors; (3) existing detection methods are applicable to the activity environment of LLMs-driven social bots but may be subject to certain limitations in effectiveness. Moreover, we have organized the data collected in our study into the Masquerade-23 dataset, which we have publicly released, thus addressing the data void in the subfield of LLMs-driven social bots behavior datasets. Our research outcomes provide primary insights for the research and governance of LLMs-driven social bots within the research community.
Labels: cs.SI
Index: 380,523
2402.16153
ChatMusician: Understanding and Generating Music Intrinsically with LLM
While Large Language Models (LLMs) demonstrate impressive capabilities in text generation, we find that their ability has yet to be generalized to music, humanity's creative language. We introduce ChatMusician, an open-source LLM that integrates intrinsic musical abilities. It is based on continual pre-training and finetuning of LLaMA2 on a text-compatible music representation, ABC notation, and music is treated as a second language. ChatMusician can understand and generate music with a pure text tokenizer without any external multi-modal neural structures or tokenizers. Interestingly, endowing musical abilities does not harm language abilities, even achieving a slightly higher MMLU score. Our model is capable of composing well-structured, full-length music, conditioned on texts, chords, melodies, motifs, musical forms, etc., surpassing the GPT-4 baseline. On our meticulously curated college-level music understanding benchmark, MusicTheoryBench, ChatMusician surpasses LLaMA2 and GPT-3.5 in a zero-shot setting by a noticeable margin. Our work reveals that LLMs can be an excellent compressor for music, but there remains significant territory to be conquered. We release our 4B-token music-language corpus MusicPile, the collected MusicTheoryBench, code, model, and demo on GitHub.
Labels: cs.SD, cs.AI, cs.LG, cs.CL, Other
Index: 432,445
1705.01040
Maximum Resilience of Artificial Neural Networks
The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We address these challenges by defining resilience properties of ANN-based classifiers as the maximal amount of input or sensor perturbation which is still tolerated. This problem of computing maximal perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number of computing cores (up to a certain limit) in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximal resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.
Labels: cs.AI, cs.LG, Other
Index: 72,786
2010.10150
Local Knowledge Powered Conversational Agents
State-of-the-art conversational agents have advanced significantly in conjunction with the use of large transformer-based language models. However, even with these advancements, conversational agents still lack the ability to produce responses that are informative and coherent with the local context. In this work, we propose a dialog framework that incorporates both local knowledge as well as users' past dialogues to generate high quality conversations. We introduce an approach to build a dataset based on Reddit conversations, where outbound URL links are widely available in the conversations and the hyperlinked documents can be naturally included as local external knowledge. Using our framework and dataset, we demonstrate that incorporating local knowledge can largely improve informativeness, coherency and realisticness measures using human evaluations. In particular, our approach consistently outperforms the state-of-the-art conversational model on the Reddit dataset across all three measures. We also find that scaling the size of our models from 117M to 8.3B parameters yields consistent improvement of validation perplexity as well as human evaluated metrics. Our model with 8.3B parameters can generate human-like responses as rated by various human evaluations in a single-turn dialog setting.
Labels: cs.HC, cs.AI, cs.LG, cs.CL
Index: 201,797
2011.10100
Efficient Consensus Model based on Proximal Gradient Method applied to Convolutional Sparse Problems
Convolutional sparse representation (CSR), a shift-invariant model for inverse problems, has gained much attention in the fields of signal/image processing, machine learning and computer vision. The most challenging problems in CSR imply the minimization of a composite function of the form $\min_x \sum_i f_i(x) + g(x)$, where a direct and low-cost solution can be difficult to achieve. However, it has been reported that semi-distributed formulations such as ADMM consensus can provide important computational benefits. In the present work, we derive and detail a thorough theoretical analysis of an efficient consensus algorithm based on the proximal gradient (PG) approach. The effectiveness of the proposed algorithm with respect to its ADMM counterpart is primarily assessed in the classic convolutional dictionary learning problem. Furthermore, our consensus method, which is generically structured, can be used to solve other optimization problems, where a sum of convex functions with a regularization term share a single global variable. As an example, the proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
Labels: cs.LG
Index: 207,412
1012.5208
Texture feature extraction in the spatial-frequency domain for content-based image retrieval
The advent of large scale multimedia databases has led to great challenges in content-based image retrieval (CBIR). Even though CBIR is considered an emerging field of research, it constitutes a strong background for new methodologies and systems implementations. Therefore, many research contributions are focusing on techniques enabling higher image retrieval accuracy while preserving a low level of computational complexity. Image retrieval based on texture features is receiving special attention because of the omnipresence of this visual feature in most real-world images. This paper highlights the state-of-the-art and current progress relevant to texture-based image retrieval and spatial-frequency image representations. In particular, it gives an overview of statistical methodologies and techniques employed for texture feature extraction using the most popular spatial-frequency image transforms, namely discrete wavelets, Gabor wavelets, the dual-tree complex wavelet and contourlets. Indications are also given about the similarity measurement functions used and the most important results achieved.
Labels: cs.IR, cs.CV, Other
Index: 8,635
1204.5431
Robust Head Pose Estimation Using Contourlet Transform
Estimating the pose of the head is an important preprocessing step in many pattern recognition and computer vision systems such as face recognition. Since the performance of face recognition systems is greatly affected by the pose of the face, how to estimate the accurate pose of the face in a human face image is still a challenging problem. In this paper, we present a novel method for head pose estimation. To enhance the efficiency of the estimation we use the contourlet transform for feature extraction. The contourlet transform is a multi-resolution, multi-directional transform. In order to reduce the feature space dimension and obtain appropriate features we use LDA (Linear Discriminant Analysis) and PCA (Principal Component Analysis) to remove inefficient features. Then, we apply different classifiers such as k-nearest neighbor (kNN) and minimum distance. We use the publicly available FERET database to evaluate the performance of the proposed method. Simulation results indicate the superior robustness of the proposed method.
Labels: cs.CV
Index: 15,652
1909.00482
A Semi-Automated Usability Evaluation Framework for Interactive Image Segmentation Systems
For complex segmentation tasks, the achievable accuracy of fully automated systems is inherently limited. Specifically, when a precise segmentation result is desired for a small amount of given data sets, semi-automatic methods exhibit a clear benefit for the user. The optimization of human computer interaction (HCI) is an essential part of interactive image segmentation. Nevertheless, publications introducing novel interactive segmentation systems (ISS) often lack an objective comparison of HCI aspects. It is demonstrated, that even when the underlying segmentation algorithm is the same throughout interactive prototypes, their user experience may vary substantially. As a result, users prefer simple interfaces as well as a considerable degree of freedom to control each iterative step of the segmentation. In this article, an objective method for the comparison of ISS is proposed, based on extensive user studies. A summative qualitative content analysis is conducted via abstraction of visual and verbal feedback given by the participants. A direct assessment of the segmentation system is executed by the users via the system usability scale (SUS) and AttrakDiff-2 questionnaires. Furthermore, an approximation of the findings regarding usability aspects in those studies is introduced, conducted solely from the system-measurable user actions during their usage of interactive segmentation prototypes. The prediction of all questionnaire results has an average relative error of 8.9%, which is close to the expected precision of the questionnaire results themselves. This automated evaluation scheme may significantly reduce the resources necessary to investigate each variation of a prototype's user interface (UI) features and segmentation methodologies.
Labels: cs.HC, cs.LG, cs.CV
Index: 143,643
1711.05929
Defense against Universal Adversarial Perturbations
Recent advances in Deep Learning show the existence of image-agnostic quasi-imperceptible perturbations that when applied to `any' image can fool a state-of-the-art network classifier to change its prediction about the image label. These `Universal Adversarial Perturbations' pose a serious threat to the success of Deep Learning in practice. We present the first dedicated framework to effectively defend the networks against such perturbations. Our approach learns a Perturbation Rectifying Network (PRN) as `pre-input' layers to a targeted model, such that the targeted model needs no modification. The PRN is learned from real and synthetic image-agnostic perturbations, where an efficient method to compute the latter is also proposed. A perturbation detector is separately trained on the Discrete Cosine Transform of the input-output difference of the PRN. A query image is first passed through the PRN and verified by the detector. If a perturbation is detected, the output of the PRN is used for label prediction instead of the actual image. A rigorous evaluation shows that our framework can defend the network classifiers against unseen adversarial perturbations in the real-world scenarios with up to 97.5% success rate. The PRN also generalizes well in the sense that training for one targeted network defends another network with a comparable success rate.
Labels: cs.CV
Index: 84,678
2208.14536
MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition
We present MultiCoNER, a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixing subsets. This dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We applied two NER models to our dataset: a baseline XLM-RoBERTa model, and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1=54%), highlighting the difficulty of our data. GEMNET, which uses gazetteers, improves significantly (average improvement of macro-F1=+30%). MultiCoNER poses challenges even for large pre-trained language models, and we believe that it can help further research in building robust NER systems. MultiCoNER is publicly available at https://registry.opendata.aws/multiconer/ and we hope that this resource will help advance research in various aspects of NER.
Labels: cs.CL
Index: 315,343
2310.03739
Aligning Text-to-Image Diffusion Models with Reward Backpropagation
Text-to-image diffusion models have recently emerged at the forefront of image generation, powered by very large-scale unsupervised or weakly supervised text-to-image training datasets. Due to their unsupervised training, controlling their behavior in downstream tasks, such as maximizing human-perceived image quality, image-text alignment, or ethical image generation, is difficult. Recent works finetune diffusion models to downstream reward functions using vanilla reinforcement learning, notorious for the high variance of the gradient estimators. In this paper, we propose AlignProp, a method that aligns diffusion models to downstream reward functions using end-to-end backpropagation of the reward gradient through the denoising process. While naive implementation of such backpropagation would require prohibitive memory resources for storing the partial derivatives of modern text-to-image models, AlignProp finetunes low-rank adapter weight modules and uses gradient checkpointing, to render its memory usage viable. We test AlignProp in finetuning diffusion models to various objectives, such as image-text semantic alignment, aesthetics, compressibility and controllability of the number of objects present, as well as their combinations. We show AlignProp achieves higher rewards in fewer training steps than alternatives, while being conceptually simpler, making it a straightforward choice for optimizing diffusion models for differentiable reward functions of interest. Code and Visualization results are available at https://align-prop.github.io/.
Labels: cs.AI, cs.LG, cs.RO, cs.CV
Index: 397,395
2410.13085
MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models
Artificial Intelligence (AI) has demonstrated significant potential in healthcare, particularly in disease diagnosis and treatment planning. Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for interactive diagnostic tools. However, these models often suffer from factual hallucination, which can lead to incorrect diagnoses. Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to address these issues. However, the amount of high-quality data and distribution shifts between training data and deployment data limit the application of fine-tuning methods. Although RAG is lightweight and effective, existing RAG-based approaches are not sufficiently general to different medical domains and can potentially cause misalignment issues, both between modalities and between the model and the ground truth. In this paper, we propose a versatile multimodal RAG system, MMed-RAG, designed to enhance the factuality of Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an adaptive retrieved contexts selection method, and a provable RAG-based preference fine-tuning strategy. These innovations make the RAG process sufficiently general and reliable, significantly improving alignment when introducing retrieved contexts. Experimental results across five medical datasets (involving radiology, ophthalmology, pathology) on medical VQA and report generation demonstrate that MMed-RAG can achieve an average improvement of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available in https://github.com/richard-peng-xia/MMed-RAG.
Labels: cs.LG, cs.CL, cs.CV
Index: 499,360
1905.10568
Scalable Block-Diagonal Locality-Constrained Projective Dictionary Learning
We propose a novel structured discriminative block-diagonal dictionary learning method, referred to as scalable Locality-Constrained Projective Dictionary Learning (LC-PDL), for efficient representation and classification. To improve scalability by saving both training and testing time, our LC-PDL aims at learning a structured discriminative dictionary and a block-diagonal representation without using the costly l0/l1-norm. Besides, it avoids the extra time-consuming sparse reconstruction process with the well-trained dictionary for new samples that many existing models require. More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary over each class. To enhance the performance, we incorporate a locality constraint of atoms into the DL procedures to keep local information and obtain the codes of samples over each class separately. A block-diagonal discriminative approximation term is also derived to learn a discriminative projection to bridge data with their codes by extracting the special block-diagonal features from data, which can ensure that the approximate coefficients are clearly associated with their label information. Then, a robust multiclass classifier is trained over the extracted block-diagonal codes for accurate label predictions. Experimental results verify the effectiveness of our algorithm.
Labels: cs.CV
Index: 132,104
2107.05235
Position-enhanced and Time-aware Graph Convolutional Network for Sequential Recommendations
Most of the existing deep learning-based sequential recommendation approaches utilize the recurrent neural network architecture or self-attention to model the sequential patterns and temporal influence among a user's historical behavior and learn the user's preference at a specific time. However, these methods have two main drawbacks. First, they focus on modeling users' dynamic states from a user-centric perspective and always neglect the dynamics of items over time. Second, most of them deal with only the first-order user-item interactions and do not consider the high-order connectivity between users and items, which has recently been proved helpful for the sequential recommendation. To address the above problems, in this article, we attempt to model user-item interactions by a bipartite graph structure and propose a new recommendation approach based on a Position-enhanced and Time-aware Graph Convolutional Network (PTGCN) for the sequential recommendation. PTGCN models the sequential patterns and temporal dynamics between user-item interactions by defining a position-enhanced and time-aware graph convolution operation and learning the dynamic representations of users and items simultaneously on the bipartite graph with a self-attention aggregator. Also, it realizes the high-order connectivity between users and items by stacking multi-layer graph convolutions. To demonstrate the effectiveness of PTGCN, we carried out a comprehensive evaluation of PTGCN on three real-world datasets of different sizes compared with a few competitive baselines. Experimental results indicate that PTGCN outperforms several state-of-the-art models in terms of two commonly-used evaluation metrics for ranking.
Labels: cs.IR, cs.LG
Index: 245,715
2010.02840
Semantic Evaluation for Text-to-SQL with Distilled Test Suites
We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models. Our method distills a small test suite of databases that achieves high code coverage for the gold query from a large number of randomly generated databases. At evaluation time, it computes the denotation accuracy of the predicted queries on the distilled test suite, hence calculating a tight upper-bound for semantic accuracy efficiently. We use our proposed method to evaluate 21 models submitted to the Spider leader board and manually verify that our method is always correct on 100 examples. In contrast, the current Spider metric leads to a 2.5% false negative rate on average and 8.1% in the worst case, indicating that test suite accuracy is needed. Our implementation, along with distilled test suites for eleven Text-to-SQL datasets, is publicly available.
Labels: cs.AI, cs.CL
Index: 199,185
1405.6296
Four Classes of Morphogenetic Collective Systems
We studied the roles of morphogenetic principles---heterogeneity of components, dynamic differentiation/re-differentiation of components, and local information sharing among components---in the self-organization of morphogenetic collective systems. By incrementally introducing these principles to collectives, we defined four distinct classes of morphogenetic collective systems. Monte Carlo simulations were conducted using an extended version of the Swarm Chemistry model that was equipped with dynamic differentiation/re-differentiation and local information sharing capabilities. Self-organization of swarms was characterized by several kinetic and topological measurements, the latter of which were facilitated by a newly developed network-based method. Results of simulations revealed that, while heterogeneity of components had a strong impact on the structure and behavior of the swarms, dynamic differentiation/re-differentiation of components and local information sharing helped the swarms maintain spatially adjacent, coherent organization.
Labels: cs.NE
Index: 33,369
1804.06964
GNAS: A Greedy Neural Architecture Search Method for Multi-Attribute Learning
A key problem in deep multi-attribute learning is to effectively discover the inter-attribute correlation structures. Typically, the conventional deep multi-attribute learning approaches follow the pipeline of manually designing the network architectures based on task-specific expertise, prior knowledge, and careful network tunings, leading to inflexibility in various complicated scenarios in practice. Motivated by this problem, we propose an efficient greedy neural architecture search approach (GNAS) to automatically discover the optimal tree-like deep architecture for multi-attribute learning. In a greedy manner, GNAS divides the optimization of the global architecture into the optimizations of individual connections step by step. By iteratively updating the local architectures, the global tree-like architecture converges to a state where the bottom layers are shared across relevant attributes and the branches in the top layers encode more attribute-specific features. Experiments on three benchmark multi-attribute datasets show the effectiveness and compactness of the neural architectures derived by GNAS, and also demonstrate the efficiency of GNAS in searching neural architectures.
Labels: cs.LG, cs.CV, cs.NE
Index: 95,421
1903.08552
Traversing the noise of dynamic mini-batch sub-sampled loss functions: A visual guide
Mini-batch sub-sampling in neural network training is unavoidable, due to growing data demands, memory-limited computational resources such as graphical processing units (GPUs), and the dynamics of on-line learning. In this study we specifically distinguish between static mini-batch sub-sampled loss functions, where mini-batches are intermittently fixed during training, resulting in smooth but biased loss functions; and the dynamic sub-sampling equivalent, where new mini-batches are sampled at every loss evaluation, trading bias for variance in sampling induced discontinuities. These render automated optimization strategies such as minimization line searches ineffective, since critical points may not exist and function minimizers find spurious, discontinuity induced minima. This paper suggests recasting the optimization problem to find stochastic non-negative associated gradient projection points (SNN-GPPs). We demonstrate that the SNN-GPP optimality criterion is less susceptible to sub-sampling induced discontinuities than critical points or minimizers. We conduct a visual investigation, comparing local minimum and SNN-GPP optimality criteria in the loss functions of a simple neural network training problem for a variety of popular activation functions. Since SNN-GPPs better approximate the location of true optima, particularly when using smooth activation functions with high curvature characteristics, we postulate that line searches locating SNN-GPPs can contribute significantly to automating neural network training
Labels: cs.LG
Index: 124,853
2310.16331
Brain-Inspired Reservoir Computing Using Memristors with Tunable Dynamics and Short-Term Plasticity
Recent advancements in reservoir computing research have created a demand for analog devices with dynamics that can facilitate the physical implementation of reservoirs, promising faster information processing while consuming less energy and occupying a smaller area footprint. Studies have demonstrated that dynamic memristors, with nonlinear and short-term memory dynamics, are excellent candidates as information-processing devices or reservoirs for temporal classification and prediction tasks. Previous implementations relied on nominally identical memristors that applied the same nonlinear transformation to the input data, which is not enough to achieve a rich state space. To address this limitation, researchers either diversified the data encoding across multiple memristors or harnessed the stochastic device-to-device variability among the memristors. However, this approach requires additional pre-processing steps and leads to synchronization issues. Instead, it is preferable to encode the data once and pass it through a reservoir layer consisting of memristors with distinct dynamics. Here, we demonstrate that ion-channel-based memristors with voltage-dependent dynamics can be controllably and predictively tuned through voltage or adjustment of the ion channel concentration to exhibit diverse dynamic properties. We show, through experiments and simulations, that reservoir layers constructed with a small number of distinct memristors exhibit significantly higher predictive and classification accuracies with a single data encoding. We found that for a second-order nonlinear dynamical system prediction task, the varied memristor reservoir experimentally achieved a normalized mean square error of 0.0015 using only five distinct memristors. Moreover, in a neural activity classification task, a reservoir of just three distinct memristors experimentally attained an accuracy of 96.5%.
Labels: cs.LG
Index: 402,679
1811.11686
Compliant Fluidic Control Structures: Concept and Synthesis Approach
The concept and synthesis approach for planar Compliant Fluidic Control Structures (CFCSs), monolithic flexible continua with embedded functional pores, is presented in this manuscript. Such structures are envisioned to find application in biomedicine as tunable microfluidic devices for drug/nutrient delivery. The functional pores enlarge and/or contract upon deformation of the compliant structure in response to external stimuli, facilitating the regulated control of fluid/nutrient/drug transport. A thickness design variable based topology optimization problem is formulated to generate effective designs of these structures. An objective based on hydraulic diameter(s) is conceptualized, and it is extremized using a gradient based optimizer. Both geometrical and material nonlinearities are considered. The nonlinear behaviour of the employed hyperelastic material is modeled via the Arruda-Boyce constitutive material model. Large-displacement finite element analysis is performed using the updated Lagrangian formulation in a plane-stress setting. The proposed synthesis approach is applied to various CFCSs for a variety of fluidic control functionalities. The optimized designs of various CFCSs with single and/or multiple functional pores are fabricated via a Polydimethylsiloxane (PDMS) soft lithography process, using a high precision 3D printed mold, and their performances are compared with the numerical predictions.
Labels: cs.CE
Index: 114,834
1802.03275
Slice Sampling Particle Belief Propagation
Inference in continuous label Markov random fields is a challenging task. We use particle belief propagation (PBP) for solving the inference problem in continuous label space. Sampling particles from the belief distribution is typically done by using Metropolis-Hastings Markov chain Monte Carlo methods which involves sampling from a proposal distribution. This proposal distribution has to be carefully designed depending on the particular model and input data to achieve fast convergence. We propose to avoid dependence on a proposal distribution by introducing a slice sampling based PBP algorithm. The proposed approach shows superior convergence performance on an image denoising toy example. Our findings are validated on a challenging relational 2D feature tracking application.
Labels: cs.AI, cs.CV
Index: 89,939
2402.02307
Joint Activity and Data Detection for Massive Grant-Free Access Using Deterministic Non-Orthogonal Signatures
Grant-free access is a key enabler for connecting wireless devices with low latency and low signaling overhead in massive machine-type communications (mMTC). For massive grant-free access, user-specific signatures are uniquely assigned to mMTC devices. In this paper, we first derive a sufficient condition for the successful identification of active devices through maximum likelihood (ML) estimation in massive grant-free access. The condition is represented by the coherence of a signature sequence matrix containing the signatures of all devices. Then, we present a design framework of non-orthogonal signature sequences in a deterministic fashion. The design principle relies on unimodular masking sequences with low correlation, which are applied as masking sequences to the columns of the discrete Fourier transform (DFT) matrix. For example constructions, we use four polyphase masking sequences represented by characters over finite fields. Leveraging algebraic techniques, we show that the signature sequence matrix of proposed non-orthogonal sequences has theoretically bounded low coherence. Simulation results demonstrate that the deterministic non-orthogonal signatures achieve the excellent performance of joint activity and data detection by ML- and approximate message passing (AMP)-based algorithms for massive grant-free access in mMTC.
Labels: cs.IT
Index: 426,481
0903.0952
Definition of Strange Attractor in Benard problem for Generalized Couette Cell
For motions of a viscous continuous flow in a generalized Couette cell, the dynamical system describing the central limiting variety is obtained.
Labels: cs.CE
Index: 3,289
2311.13469
Span-Based Optimal Sample Complexity for Average Reward MDPs
We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. We establish the complexity bound $\widetilde{O}\left(SA\frac{H}{\varepsilon^2} \right)$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,H$ and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. Our result is based on reducing the average-reward MDP to a discounted MDP. To establish the optimality of this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\widetilde{O}\left(SA\frac{H}{(1-\gamma)^2\varepsilon^2} \right)$ samples suffice to learn a $\varepsilon$-optimal policy in weakly communicating MDPs under the regime that $\gamma \geq 1 - \frac{1}{H}$, circumventing the well-known lower bound of $\widetilde{\Omega}\left(SA\frac{1}{(1-\gamma)^3\varepsilon^2} \right)$ for general $\gamma$-discounted MDPs. Our analysis develops upper bounds on certain instance-dependent variance parameters in terms of the span parameter. These bounds are tighter than those based on the mixing time or diameter of the MDP and may be of broader use.
Labels: cs.LG, cs.IT
Index: 409,747
2007.03203
Learning Combined Set Covering and Traveling Salesman Problem
The Traveling Salesman Problem is one of the most intensively studied combinatorial optimization problems due both to its range of real-world applications and its computational complexity. When combined with the Set Covering Problem, it raises even more issues related to tractability and scalability. We study a combined Set Covering and Traveling Salesman problem and provide a mixed integer programming formulation to solve the problem. Motivated by applications where the optimal policy needs to be updated on a regular basis and repetitively solving this via MIP can be computationally expensive, we propose a machine learning approach to effectively deal with this problem by providing an opportunity to learn from historical optimal solutions that are derived from the MIP formulation. We also present a case study using the vaccine distribution chain of the World Health Organization, and provide numerical results with data derived from four countries in sub-Saharan Africa.
Labels: cs.AI, cs.LG
Index: 185,985
2103.01658
Minimizing Information Leakage of Abrupt Changes in Stochastic Systems
This work investigates the problem of analyzing privacy of abrupt changes for general Markov processes. These processes may be affected by changes, or exogenous signals, that need to remain private. Privacy refers to the disclosure of information of these changes through observations of the underlying Markov chain. In contrast to previous work on privacy, we study the problem for an online sequence of data. We use theoretical tools from optimal detection theory to motivate a definition of online privacy based on the average amount of information per observation of the stochastic system in consideration. Two cases are considered: the full-information case, where the eavesdropper measures all but the signals that indicate a change, and the limited-information case, where the eavesdropper only measures the state of the Markov process. For both cases, we provide ways to derive privacy upper-bounds and compute policies that attain a higher privacy level. It turns out that the problem of computing privacy-aware policies is concave, and we conclude with some examples and numerical simulations for both cases.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
222,706
2209.10222
Fairness Reprogramming
Despite a surge of recent advances in promoting machine learning (ML) fairness, the existing mainstream approaches mostly require retraining or finetuning the entire weights of the neural network to meet the fairness criteria. However, this is often infeasible in practice for those large-scale trained models due to large computational and storage costs, low data efficiency, and model privacy issues. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram considers the case where models cannot be changed and appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models by providing false demographic information that hinders the model from utilizing the correct demographic information to make the prediction. Extensive experiments on both NLP and CV datasets demonstrate that our method can achieve better fairness improvements than retraining-based methods with far less data dependency under two widely-used fairness criteria. Codes are available at https://github.com/UCSB-NLP-Chang/Fairness-Reprogramming.git.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
318,792
1808.08575
Title-Guided Encoding for Keyphrase Generation
Keyphrase generation (KG) aims to generate a set of keyphrases given a document, which is a fundamental task in natural language processing (NLP). Most previous methods solve this problem in an extractive manner, while recently, several attempts have been made under the generative setting using deep neural networks. However, the state-of-the-art generative methods simply treat the document title and the document main body equally, ignoring the leading role of the title in the overall document. To solve this problem, we introduce a new model called Title-Guided Network (TG-Net) for the automatic keyphrase generation task based on the encoder-decoder architecture with two new features: (i) the title is additionally employed as a query-like input, and (ii) a title-guided encoder gathers the relevant information from the title to each word in the document. Experiments on a range of KG datasets demonstrate that our model outperforms the state-of-the-art models by a large margin, especially for documents with either very low or very high title length ratios.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
105,983
2405.15002
Private Regression via Data-Dependent Sufficient Statistic Perturbation
Sufficient statistic perturbation (SSP) is a widely used method for differentially private linear regression. SSP adopts a data-independent approach where privacy noise from a simple distribution is added to the sufficient statistics. However, sufficient statistics can often be expressed as linear queries and better approximated by data-dependent mechanisms. In this paper we introduce data-dependent SSP for linear regression based on post-processing privately released marginals, and find that it outperforms state-of-the-art data-independent SSP. We extend this result to logistic regression by developing an approximate objective that can be expressed in terms of sufficient statistics, resulting in a novel and highly competitive SSP approach for logistic regression. We also make a connection to synthetic data for machine learning: for models with sufficient statistics, training on synthetic data corresponds to data-dependent SSP, with the overall utility determined by how well the mechanism answers these linear queries.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,693
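The abstract for 2405.15002 contrasts data-dependent SSP against the classical data-independent baseline. The sketch below shows only that baseline idea: perturb the sufficient statistics of linear regression with Gaussian noise and solve the noisy normal equations. The noise scale, the small ridge term, and the synthetic data are placeholders; a real differentially private implementation would calibrate the noise to data bounds and a privacy budget, which is omitted here.

```python
import numpy as np

def ssp_linear_regression(X, y, sigma, seed=0):
    """Data-independent SSP sketch: noise on X^T X and X^T y, then solve."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    XtX = X.T @ X + rng.normal(0.0, sigma, size=(d, d))
    Xty = X.T @ y + rng.normal(0.0, sigma, size=d)
    XtX = (XtX + XtX.T) / 2.0                          # keep the perturbed matrix symmetric
    return np.linalg.solve(XtX + 1e-3 * np.eye(d), Xty)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
print(ssp_linear_regression(X, y, sigma=5.0))          # close to [1, -2, 0.5] for mild noise
```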
2201.09451
Emotion-based Modeling of Mental Disorders on Social Media
According to the World Health Organization (WHO), one in four people will be affected by mental disorders at some point in their lives. However, in many parts of the world, patients do not actively seek professional diagnosis because of stigma attached to mental illness, ignorance of mental health and its associated symptoms. In this paper, we propose a model for passively detecting mental disorders using conversations on Reddit. Specifically, we focus on a subset of mental disorders that are characterized by distinct emotional patterns (henceforth called emotional disorders): major depressive, anxiety, and bipolar disorders. Through passive (i.e., unprompted) detection, we can encourage patients to seek diagnosis and treatment for mental disorders. Our proposed model differs from other work in this area in that it is based entirely on the emotional states of users on Reddit, and the transitions between these states, whereas prior work is typically based on content-based representations (e.g., n-grams, language model embeddings, etc). We show that content-based representation is affected by domain and topic bias and thus does not generalize, while our model, on the other hand, suppresses topic-specific information and thus generalizes well across different topics and times. We conduct experiments on our model's ability to detect different emotional disorders and on the generalizability of our model. Our experiments show that while our model performs comparably to content-based models, such as BERT, it generalizes much better across time and topic.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
276,679
1802.02223
Seeded Ising Model and Statistical Natures of Human Iris Templates
We propose a variant of the Ising model, called the Seeded Ising Model, to model the probabilistic nature of human iris templates. This model is an Ising model in which the values at certain lattice points are held fixed throughout Ising model evolution. Using this, we show how to reconstruct the full iris template from partial information, and we show that about 1/6 of the given template is needed to recover almost all information content of the original one in the sense that the resulting Hamming distance is well within the range to assert correctly the identity of the subject. This leads us to propose the concept of effective statistical degree of freedom of iris templates and show it is about 1/6 of the total number of bits. In particular, for a template of $2048$ bits, its effective statistical degree of freedom is about $342$ bits, which coincides very well with the degree of freedom computed by the completely different method proposed by Daugman.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,737
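A toy rendering of the clamping idea in the abstract for 1802.02223: a standard Metropolis sweep over an Ising lattice in which spins at "seed" sites are held fixed throughout. The lattice size, temperature, and random seed mask are invented for illustration; this is not the paper's iris-template model.

```python
import numpy as np

def seeded_ising_sweep(spins, seed_mask, beta, rng):
    """One Metropolis sweep; spins where seed_mask is True are never flipped."""
    n, m = spins.shape
    for i in range(n):
        for j in range(m):
            if seed_mask[i, j]:
                continue                              # seed spins stay fixed
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % m] + spins[i, (j - 1) % m])
            dE = 2.0 * spins[i, j] * nb               # energy change if (i, j) flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
seed_mask = rng.random((32, 32)) < 1 / 6              # ~1/6 of sites clamped, echoing the abstract
for _ in range(50):
    seeded_ising_sweep(spins, seed_mask, beta=0.6, rng=rng)
print(spins[:4, :4])
```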
2103.02729
Linear Bandit Algorithms with Sublinear Time Complexity
We propose two linear bandit algorithms with per-step complexity sublinear in the number of arms $K$. The algorithms are designed for applications where the arm set is extremely large and slowly changing. Our key realization is that choosing an arm reduces to a maximum inner product search (MIPS) problem, which can be solved approximately without breaking regret guarantees. Existing approximate MIPS solvers run in sublinear time. We extend those solvers and present theoretical guarantees for online learning problems, where adaptivity (i.e., a later step depends on the feedback in previous steps) becomes a unique challenge. We then explicitly characterize the tradeoff between the per-step complexity and regret. For sufficiently large $K$, our algorithms have sublinear per-step complexity and $\tilde O(\sqrt{T})$ regret. Empirically, we evaluate our proposed algorithms in a synthetic environment and a real-world online movie recommendation problem. Our proposed algorithms can deliver a more than 72 times speedup compared to the linear time baselines while retaining similar regret.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
223,043
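The key reduction in the abstract for 2103.02729 is that arm selection in a linear bandit amounts to a maximum inner product search (MIPS). The sketch below performs that search exactly, in time linear in $K$, purely to show the interface a sublinear approximate-MIPS index would replace; no regret guarantees are implied, and the arm set and parameter vector are made-up placeholders.

```python
import numpy as np

def select_arm_exact(theta_hat, arm_features):
    """Exact MIPS: argmax over inner products, O(K d) per step."""
    scores = arm_features @ theta_hat
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
K, d = 100_000, 16
arm_features = rng.normal(size=(K, d))     # a large, slowly changing arm set
theta_hat = rng.normal(size=d)             # stand-in for the current parameter estimate
print(select_arm_exact(theta_hat, arm_features))
```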
1407.4765
Ark: A Real-World Consensus Implementation
Ark is an implementation of a consensus algorithm similar to Paxos and Raft, designed as an improvement over the existing consensus algorithm used by MongoDB and TokuMX. Ark was designed from first principles, improving on the election algorithm used by TokuMX, to fix deficiencies in MongoDB's consensus algorithms that can cause data loss. It ultimately has many similarities with Raft, but diverges in a few ways, mainly to support other features like chained replication and unacknowledged writes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
34,730
2211.10284
Estimating more camera poses for ego-centric videos is essential for VQ3D
Visual queries 3D localization (VQ3D) is a task in the Ego4D Episodic Memory Benchmark. Given an egocentric video, the goal is to answer queries of the form "Where did I last see object X?", where the query object X is specified as a static image, and the answer should be a 3D displacement vector pointing to object X. However, current techniques use naive ways to estimate the camera poses of video frames, resulting in a low query with pose (QwP) ratio and thus a poor overall success rate. In this work, we design a new pipeline for the challenging problem of egocentric video camera pose estimation. Moreover, we revisit the current VQ3D framework and optimize it in terms of performance and efficiency. As a result, we achieve the top-1 overall success rate of 25.8% on the VQ3D leaderboard, which is two times better than the 8.7% reported by the baseline.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,261
1808.05403
A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning
In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norm are convenient as the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields in signal processing, statistics and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signals separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrices estimation, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
105,346
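As a small worked contrast of the convex-versus-nonconvex trade-off surveyed in the abstract for 1808.05403, the sketch below runs an iterative-thresholding loop with either soft thresholding (the $\ell_1$ proximal map) or a simple hard-thresholding variant. The step size, penalty level, and the synthetic sparse problem are illustrative choices, not code from the paper's repository.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # l1 proximal map

def hard(z, t):
    return np.where(np.abs(z) > t, z, 0.0)               # simple nonconvex alternative

def iterative_thresholding(A, y, lam=0.05, steps=500, prox=soft):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                         # gradient Lipschitz constant
    for _ in range(steps):
        x = prox(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[:5] = rng.normal(size=5)
y = A @ x_true
for prox in (soft, hard):
    x_hat = iterative_thresholding(A, y, prox=prox)
    print(prox.__name__, np.linalg.norm(x_hat - x_true))  # reconstruction error per variant
```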
2304.11576
Exact Worst-Case Execution-Time Analysis for Implicit Model Predictive Control
We propose the first method that determines the exact worst-case execution time (WCET) for implicit linear model predictive control (MPC). Such WCET bounds are imperative when MPC is used in real time to control safety-critical systems. The proposed method applies when the quadratic programming solver in the MPC controller belongs to a family of well-established active-set solvers. For such solvers, we leverage a previously proposed complexity certification framework to generate a finite set of archetypal optimization problems; we prove that these archetypal problems form an execution-time equivalent cover of all possible problems; that is, that they capture the execution time for solving any possible optimization problem that can be encountered online. Hence, by solving just these archetypal problems on the hardware on which the MPC is to be deployed, and by recording the execution times, we obtain the exact WCET. In addition to providing formal proofs of the method's efficacy, we validate the method on an MPC example where an inverted pendulum on a cart is stabilized. The experiments highlight the following advantages compared with classical WCET methods: (i) in contrast to classical static methods, our method gives the exact WCET; (ii) in contrast to classical measurement-based methods, our method guarantees a correct WCET estimate and requires fewer measurements on the hardware.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
359,874
1707.04406
Inner-Scene Similarities as a Contextual Cue for Object Detection
Using image context is an effective approach for improving object detection. Previously proposed methods used contextual cues that rely on semantic or spatial information. In this work, we explore a different kind of contextual information: inner-scene similarity. We present the CISS (Context by Inner Scene Similarity) algorithm, which is based on the observation that two visually similar sub-image patches are likely to share semantic identities, especially when both appear in the same image. CISS uses base-scores provided by a base detector and performs as a post-detection stage. For each candidate sub-image (denoted anchor), the CISS algorithm finds a few similar sub-images (denoted supporters), and, using them, calculates a new enhanced score for the anchor. This is done by utilizing the base-scores of the supporters and a pre-trained dependency model. The new scores are modeled as a linear function of the base scores of the anchor and the supporters and are estimated using a minimum mean square error optimization. This approach results in: (a) improved detection of partly occluded objects (when there are similar non-occluded objects in the scene), and (b) fewer false alarms (when the base detector mistakenly classifies a background patch as an object). This work relates to Duncan and Humphreys' "similarity theory," a psychophysical study, which suggested that the human visual system perceptually groups similar image regions and that the classification of one region is affected by the estimated identity of the other. Experimental results demonstrate the enhancement of a base detector's scores on the PASCAL VOC dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
77,036
2104.12021
Explainable Artificial Intelligence Reveals Novel Insight into Tumor Microenvironment Conditions Linked with Better Prognosis in Patients with Breast Cancer
We investigated the data-driven relationship between features in the tumor microenvironment (TME) and the overall and 5-year survival in triple-negative breast cancer (TNBC) and non-TNBC (NTNBC) patients by using Explainable Artificial Intelligence (XAI) models. We used clinical information from patients with invasive breast carcinoma from The Cancer Genome Atlas and from two studies from the cbioPortal, the PanCanAtlas project and the GDAC Firehose study. In this study, we used a normalized RNA sequencing data-driven cohort from 1,015 breast cancer patients, alive or deceased, from the UCSC Xena data set and performed integrated deconvolution with the EPIC method to estimate the percentage of seven different immune and stromal cells from RNA sequencing data. Novel insights derived from our XAI model showed that CD4+ T cells and B cells are more critical than other TME features for enhanced prognosis for both TNBC and NTNBC patients. Our XAI model revealed the critical inflection points (i.e., threshold fractions) of CD4+ T cells and B cells above or below which 5-year survival rates improve. Subsequently, we ascertained the conditional probabilities of $\geq$ 5-year survival in both TNBC and NTNBC patients under specific conditions inferred from the inflection points. In particular, the XAI models revealed that a B-cell fraction exceeding 0.018 in the TME could ensure 100% 5-year survival for NTNBC patients. The findings from this research could lead to more accurate clinical predictions and enhanced immunotherapies and to the design of innovative strategies to reprogram the TME of breast cancer patients.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
232,086
2003.03685
Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets
The paper introduces a novel conditional independence (CI) based method for linear and nonlinear, lagged and contemporaneous causal discovery from observational time series in the causally sufficient case. Existing CI-based methods such as the PC algorithm and also common methods from other frameworks suffer from low recall and partially inflated false positives for strong autocorrelation, which is a ubiquitous challenge in time series. The novel method, PCMCI$^+$, extends PCMCI [Runge et al., 2019b] to include discovery of contemporaneous links. PCMCI$^+$ improves the reliability of CI tests by optimizing the choice of conditioning sets and even benefits from autocorrelation. The method is order-independent and consistent in the oracle case. A broad range of numerical experiments demonstrates that PCMCI$^+$ has higher adjacency detection power and especially more contemporaneous orientation recall compared to other methods while better controlling false positives. Optimized conditioning sets also lead to much shorter runtimes than the PC algorithm. PCMCI$^+$ can be of considerable use in many real-world application scenarios where often time resolutions are too coarse to resolve time delays and strong autocorrelation is present.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
167,320
2305.18362
Statistically Significant Concept-based Explanation of Image Classifiers via Model Knockoffs
A concept-based classifier can explain the decision process of a deep learning model by human-understandable concepts in image classification problems. However, concept-based explanations may sometimes cause false positives, which mistakenly regard unrelated concepts as important for the prediction task. Our goal is to find the statistically significant concepts for classification to prevent misinterpretation. In this study, we propose a method using a deep learning model to learn the image concepts and then using Knockoff samples to select the important concepts for prediction by controlling the False Discovery Rate (FDR) under a certain value. We evaluate the proposed method in synthetic and real data experiments. The results show that our method can control the FDR properly while selecting highly interpretable concepts to improve the trustworthiness of the model.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
368,985
2401.13941
AC-Driven Series Elastic Electrohydraulic Actuator for Stable and Smooth Displacement Output
Soft electrohydraulic actuators known as HASEL actuators have attracted widespread research interest due to their outstanding dynamic performance and high output power. However, the displacement of electrohydraulic actuators usually declines with time under constant DC voltage, which hampers their prospective application. A mathematical model is first established to not only explain the decrease in displacement under DC voltage but also predict the relatively stable displacement with oscillation under AC square wave voltage. The mathematical model is validated since the actual displacement confirms the trend predicted by our model. To smooth the displacement oscillation introduced by AC voltage, a serial elastic component is incorporated to form a SE-HASEL actuator. A feedback control with a proportional-integral algorithm enables the SE-HASEL actuator to eliminate the obstinate displacement hysteresis. Our results revealed that, through our methodology, the SE-HASEL actuator can give stable and smooth displacement and is capable of absorbing external impact disturbance simultaneously. A rotary joint based on the SE-HASEL actuator is developed to reflect its possibility to generate a common rotary motion for wide robotic applications. More importantly, this paper also proposes a highly accurate needle biopsy robot that can be utilized in MRI-guided surgical procedures. Overall, we have achieved AC-driven series elastic electrohydraulic actuators that can exhibit stable and smooth displacement output.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
423,907
2001.07708
Towards Comparability in Non-Intrusive Load Monitoring: On Data and Performance Evaluation
Non-Intrusive Load Monitoring (NILM) comprises a set of techniques that provide insights into the energy consumption of households and industrial facilities. The latest contributions show significant improvements in terms of accuracy and generalisation abilities. Despite all progress made concerning disaggregation techniques, performance evaluation and comparability remain an open research question. The lack of standardisation and consensus on evaluation procedures makes reproducibility and comparability extremely difficult. In this paper, we draw attention to comparability in NILM with a focus on highlighting the considerable differences amongst common energy datasets used to test the performance of algorithms. We divide the discussion on comparability into data aspects, performance metrics, and give a close view on evaluation processes. Detailed information on pre-processing as well as data cleaning methods, the importance of unified performance reporting, and the need for complexity measures in load disaggregation are found to be the most urgent issues in NILM-related research. In addition, our evaluation suggests that datasets should be chosen carefully. We conclude by formulating suggestions for future work to enhance comparability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,104
2111.05307
Machine-learning custom-made basis functions for partial differential equations
Spectral methods are an important part of scientific computing's arsenal for solving partial differential equations (PDEs). However, their applicability and effectiveness depend crucially on the choice of basis functions used to expand the solution of a PDE. The last decade has seen the emergence of deep learning as a strong contender in providing efficient representations of complex functions. In the current work, we present an approach for combining deep neural networks with spectral methods to solve PDEs. In particular, we use a deep learning technique known as the Deep Operator Network (DeepONet), to identify candidate functions on which to expand the solution of PDEs. We have devised an approach which uses the candidate functions provided by the DeepONet as a starting point to construct a set of functions which have the following properties: (i) they constitute a basis, (ii) they are orthonormal, and (iii) they are hierarchical, i.e., akin to Fourier series or orthogonal polynomials. We have exploited the favorable properties of our custom-made basis functions to both study their approximation capability and use them to expand the solution of linear and nonlinear time-dependent PDEs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
265,754
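One ingredient described in the abstract for 2111.05307 is turning candidate functions into an orthonormal, hierarchical set. The sketch below does this with a QR step on grid samples of some placeholder functions standing in for DeepONet outputs; the grid, the candidate functions, and the discrete inner product are assumptions, and this is not the authors' pipeline.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]

# Placeholder "candidate functions" sampled on the grid (one per column).
candidates = np.stack(
    [np.exp(-k * x) * np.sin((k + 1) * np.pi * x) for k in range(6)], axis=1
)

# QR orthonormalizes the columns in order, so earlier candidates are kept first,
# giving a hierarchical, orthonormal set w.r.t. the grid inner product sum(f*g)*dx.
Q, _ = np.linalg.qr(candidates)
basis = Q / np.sqrt(dx)

print(np.round(basis.T @ basis * dx, 3))   # ~ identity: orthonormality check
```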
1110.0425
Hybrid Codes Needed for Coordination over the Point-to-Point Channel
We consider a new fundamental question regarding the point-to-point memoryless channel. The source-channel separation theorem indicates that random codebook construction for lossy source compression and channel coding can be independently constructed and paired to achieve optimal performance for coordinating a source sequence with a reconstruction sequence. But what if we want the channel input to also be coordinated with the source and reconstruction? Such situations arise in network communication problems, where the correlation inherent in the information sources can be used to correlate channel inputs. Hybrid codes have been shown to be useful in a number of network communication problems. In this work we highlight their advantages over purely digital codebook construction by applying them to the point-to-point setting, coordinating both the channel input and the reconstruction with the source.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
12,460
1712.08425
Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging - with data from the Osteoarthritis Initiative
Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured that the cartilage volume change quantifications became consistent with manual expert scores.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
87,195
1808.01199
Generation Meets Recommendation: Proposing Novel Items for Groups of Users
Consider a movie studio aiming to produce a set of new movies for summer release: What types of movies should it produce? Who would the movies appeal to? How many movies should it make? Similar issues are encountered by a variety of organizations, e.g., mobile-phone manufacturers and online magazines, that have to create new (non-existent) items to satisfy groups of users with different preferences. In this paper, we present a joint problem formalization of these interrelated issues, and propose generative methods that address these questions simultaneously. Specifically, we leverage the latent space obtained by training a deep generative model---the Variational Autoencoder (VAE)---via a loss function that incorporates both rating performance and item reconstruction terms. We then apply a greedy search algorithm that utilizes this learned latent space to jointly obtain K plausible new items, and user groups that would find the items appealing. An evaluation of our methods on a synthetic dataset indicates that our approach is able to generate novel items similar to highly-desirable unobserved items. As case studies on real-world data, we applied our method on the MART abstract art and Movielens Tag Genome dataset, which resulted in promising results: small and diverse sets of novel items.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
104,532
2204.03748
Energy self-sufficient systems for monitoring sewer networks
Underground infrastructure networks form the backbone of vital supply and disposal systems. However, they are under-monitored in comparison to their value. This is due, in large part, to the lack of energy supply for monitoring and data transmission. In this paper, we investigate a novel, energy harvesting system used to power underground sewer infrastructure monitoring networks. The system collects the required energy from ambient sources, such as temperature differences or residual light in sewer networks. A prototype was developed that could use either a thermoelectric generator (TEG) or a solar cell to capture the energy needed to acquire and transmit ultrasonic water level data via LoRaWAN. Real-world field trials were satisfactory and showed the potential power output, as well as possibilities to improve the system. Using an extrapolation model, we showed that the developed solution could work reliably throughout the year.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
290,422
1607.03105
Systholic Boolean Orthonormalizer Network in Wavelet Domain for SAR Image Despeckling
We describe a novel method for removing speckle (in the wavelet domain) of unknown variance from SAR images. The method is based on the following procedure: We apply 1) Bidimensional Discrete Wavelet Transform (DWT-2D) to the speckled image, 2) scaling and rounding to the coefficients of the highest subbands (to obtain integer and positive coefficients), 3) bit-slicing to the new highest subbands (to obtain bit-planes), 4) then we apply the Systholic Boolean Orthonormalizer Network (SBON) to the input bit-plane set and we obtain two orthonormal output bit-plane sets (in a Boolean sense), we project a set on the other one, by means of an AND operation, and then, 5) we apply re-assembling, and, 6) re-scaling. Finally, 7) we apply Inverse DWT-2D and reconstruct a SAR image from the modified wavelet coefficients. Despeckling results compare favorably to most of the methods currently in use.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,459
2009.04547
Optimal Inspection and Maintenance Planning for Deteriorating Structural Components through Dynamic Bayesian Networks and Markov Decision Processes
Civil and maritime engineering systems, among others, from bridges to offshore platforms and wind turbines, must be efficiently managed as they are exposed to deterioration mechanisms throughout their operational life, such as fatigue or corrosion. Identifying optimal inspection and maintenance policies demands the solution of a complex sequential decision-making problem under uncertainty, with the main objective of efficiently controlling the risk associated with structural failures. Addressing this complexity, risk-based inspection planning methodologies, supported often by dynamic Bayesian networks, evaluate a set of pre-defined heuristic decision rules to reasonably simplify the decision problem. However, the resulting policies may be compromised by the limited space considered in the definition of the decision rules. Avoiding this limitation, Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical methodology for stochastic optimal control under uncertain action outcomes and observations, in which the optimal actions are prescribed as a function of the entire, dynamically updated, state probability distribution. In this paper, we combine dynamic Bayesian networks with POMDPs in a joint framework for optimal inspection and maintenance planning, and we provide the formulation for developing both infinite and finite horizon POMDPs in a structural reliability context. The proposed methodology is implemented and tested for the case of a structural component subject to fatigue deterioration, demonstrating the capability of state-of-the-art point-based POMDP solvers for solving the underlying planning optimization problem. Within the numerical experiments, POMDP and heuristic-based policies are thoroughly compared, and results showcase that POMDPs achieve substantially lower costs as compared to their counterparts, even for traditional problem settings.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
195,072
2206.10146
KE-RCNN: Unifying Knowledge based Reasoning into Part-level Attribute Parsing
Part-level attribute parsing is a fundamental but challenging task, which requires region-level visual understanding to provide explainable details of body parts. Most existing approaches address this problem by adding a regional convolutional neural network (RCNN) with an attribute prediction head to a two-stage detector, in which attributes of body parts are identified from local-wise part boxes. However, local-wise part boxes with limited visual cues (i.e., part appearance only) lead to unsatisfying parsing results, since attributes of body parts are highly dependent on comprehensive relations among them. In this article, we propose a Knowledge Embedded RCNN (KE-RCNN) to identify attributes by leveraging rich knowledge, including implicit knowledge (e.g., the attribute ``above-the-hip'' for a shirt requires visual/geometry relations of shirt-hip) and explicit knowledge (e.g., the part of ``shorts'' cannot have the attribute of ``hoodie'' or ``lining''). Specifically, the KE-RCNN consists of two novel components, i.e., Implicit Knowledge based Encoder (IK-En) and Explicit Knowledge based Decoder (EK-De). The former is designed to enhance part-level representation by encoding part-part relational contexts into part boxes, and the latter is proposed to decode attributes with the guidance of prior knowledge about \textit{part-attribute} relations. In this way, the KE-RCNN is plug-and-play, and can be integrated into any two-stage detectors, e.g., Attribute-RCNN, Cascade-RCNN, HRNet based RCNN and SwinTransformer based RCNN. Extensive experiments conducted on two challenging benchmarks, e.g., Fashionpedia and Kinetics-TPS, demonstrate the effectiveness and generalizability of the KE-RCNN. In particular, it achieves improvements over all existing methods, reaching around 3% of AP on Fashionpedia and around 4% of Acc on Kinetics-TPS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
303,815
2401.12416
Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations
Bayesian Neural Networks (BayNNs) naturally provide uncertainty in their predictions, making them a suitable choice in safety-critical applications. Additionally, their realization using memristor-based in-memory computing (IMC) architectures enables them for resource-constrained edge applications. In addition to predictive uncertainty, however, the ability to be inherently robust to noise in computation is also essential to ensure functional safety. In particular, memristor-based IMCs are susceptible to various sources of non-idealities such as manufacturing and runtime variations, drift, and failure, which can significantly reduce inference accuracy. In this paper, we propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in IMC architectures. To achieve this, we introduce a novel normalization layer combined with stochastic affine transformations. Empirical results in various benchmark datasets show a graceful degradation in inference accuracy, with an improvement of up to $58.11\%$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
423,362
1812.04700
Predictive Learning on Hidden Tree-Structured Ising Models
We provide high-probability sample complexity guarantees for exact structure recovery and accurate predictive learning using noise-corrupted samples from an acyclic (tree-shaped) graphical model. The hidden variables follow a tree-structured Ising model distribution, whereas the observable variables are generated by a binary symmetric channel taking the hidden variables as its input (flipping each bit independently with some constant probability $q\in [0,1/2)$). In the absence of noise, predictive learning on Ising models was recently studied by Bresler and Karzand (2020); this paper quantifies how noise in the hidden model impacts the tasks of structure recovery and marginal distribution estimation by proving upper and lower bounds on the sample complexity. Our results generalize state-of-the-art bounds reported in prior work, and they exactly recover the noiseless case ($q=0$). In fact, for any tree with $p$ vertices and probability of incorrect recovery $\delta>0$, the sufficient number of samples remains logarithmic as in the noiseless case, i.e., $\mathcal{O}(\log(p/\delta))$, while the dependence on $q$ is $\mathcal{O}\big( 1/(1-2q)^{4} \big)$, for both aforementioned tasks. We also present a new equivalent of Isserlis' Theorem for sign-valued tree-structured distributions, yielding a new low-complexity algorithm for higher-order moment estimation.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
116,260
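The observation model in the abstract for 1812.04700 is a binary symmetric channel applied to hidden $\pm 1$ spins. The sketch below implements just that flipping step; the hidden samples are i.i.d. placeholders rather than draws from a tree-structured Ising model, which is the part of the setup omitted here.

```python
import numpy as np

def binary_symmetric_channel(hidden, q, seed=0):
    """Flip each +/-1 entry independently with probability q."""
    rng = np.random.default_rng(seed)
    flips = rng.random(hidden.shape) < q
    return np.where(flips, -hidden, hidden)

rng = np.random.default_rng(1)
hidden = rng.choice([-1, 1], size=(1000, 10))     # placeholder for tree-Ising samples
observed = binary_symmetric_channel(hidden, q=0.1)
print((observed != hidden).mean())                # empirical flip rate ~ 0.1
```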
2501.09857
Efficient Probabilistic Assessment of Power System Resilience Using the Polynomial Chaos Expansion Method with Enhanced Stability
Increasing frequency and intensity of extreme weather events motivates the assessment of power system resilience. The random nature of these events and the resulting failures mandates probabilistic resilience assessment, but state-of-the-art methods (e.g., Monte Carlo simulation) are computationally inefficient. This paper leverages the polynomial chaos expansion (PCE) method to efficiently quantify uncertainty in power system resilience. To address repeatability issues arising from PCE computation with different sample sets, we propose the integration of the Maximin-LHS experiment design method with the PCE method. Numerical studies on the IEEE 39-bus system illustrate the improved repeatability and convergence of the proposed method. The enhanced PCE method is then used to assess the resilience of the system and propose adaptation measures to improve it.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
525,296
2107.02643
Detecting Hypo-plastic Left Heart Syndrome in Fetal Ultrasound via Disease-specific Atlas Maps
Fetal ultrasound screening during pregnancy plays a vital role in the early detection of fetal malformations which have potential long-term health impacts. The level of skill required to diagnose such malformations from live ultrasound during examination is high and resources for screening are often limited. We present an interpretable, atlas-learning segmentation method for automatic diagnosis of Hypo-plastic Left Heart Syndrome (HLHS) from a single `4 Chamber Heart' view image. We propose to extend the recently introduced Image-and-Spatial Transformer Networks (Atlas-ISTN) into a framework that enables sensitising atlas generation to disease. In this framework we can jointly learn image segmentation, registration, atlas construction and disease prediction while providing a maximum level of clinical interpretability compared to direct image classification methods. As a result our segmentation allows diagnoses competitive with expert-derived manual diagnosis and yields an AUC-ROC of 0.978 (1043 cases for training, 260 for validation and 325 for testing).
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
244,892
2107.02569
Self-training with noisy student model and semi-supervised loss function for dcase 2021 challenge task 4
This report proposes a polyphonic sound event detection (SED) method for the DCASE 2021 Challenge Task 4. The proposed SED model consists of two stages: a mean-teacher model for providing target labels regarding weakly labeled or unlabeled data and a self-training-based noisy student model for predicting strong labels for sound events. The mean-teacher model, which is based on the residual convolutional recurrent neural network (RCRNN) for the teacher and student model, is first trained using all the training data from a weakly labeled dataset, an unlabeled dataset, and a strongly labeled synthetic dataset. Then, the trained mean-teacher model predicts the strong label to each of the weakly labeled and unlabeled datasets, which is brought to the noisy student model in the second stage of the proposed SED model. Here, the structure of the noisy student model is identical to the RCRNN-based student model of the mean-teacher model in the first stage. Then, it is self-trained by adding feature noises, such as time-frequency shift, mixup, SpecAugment, and dropout-based model noise. In addition, a semi-supervised loss function is applied to train the noisy student model, which acts as label noise injection. The performance of the proposed SED model is evaluated on the validation set of the DCASE 2021 Challenge Task 4, and then, several ensemble models that combine five-fold validation models with different hyperparameters of the semi-supervised loss function are finally selected as our final models.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
244,870
1911.07185
Towards the Automation of Deep Image Prior
The single image inverse problem is a notoriously challenging ill-posed problem that aims to restore the original image from one of its corrupted versions. Recently, this field has been immensely influenced by the emergence of deep-learning techniques. Deep Image Prior (DIP) offers a new approach that forces the recovered image to be synthesized from a given deep architecture. While DIP is quite an effective unsupervised approach, it is impractical in real-world applications because of the requirement of human assistance. In this work, we aim to find the best-recovered image without the assistance of humans by adding a stopping criterion, which reaches its maximum when the iteration no longer improves the image quality. More specifically, we propose to add a pseudo noise to the corrupted image and measure the pseudo-noise component in the recovered image by the orthogonality between signal and noise. The accuracy of the orthogonal stopping criterion has been demonstrated for several tested problems such as denoising, super-resolution, and inpainting, in which 38 out of 40 experiments achieve accuracy higher than 95%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
153,757
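A sketch of one plausible reading of the stopping criterion in the abstract for 1911.07185: inject a known pseudo-noise into the corrupted image and track how strongly the current reconstruction correlates with it. The "reconstructions" below are synthetic stand-ins rather than DIP outputs at successive iterations, and the paper's exact measurement may differ from this normalized inner product.

```python
import numpy as np

def pseudo_noise_component(reconstruction, pseudo_noise):
    """Normalized inner product between the reconstruction and the injected noise."""
    num = float(np.sum(reconstruction * pseudo_noise))
    den = np.linalg.norm(reconstruction) * np.linalg.norm(pseudo_noise) + 1e-12
    return num / den

rng = np.random.default_rng(0)
clean = rng.normal(size=(64, 64))
pseudo = 0.1 * rng.normal(size=(64, 64))
corrupted = clean + pseudo                    # what would actually be fed to the network

good_fit = clean                              # stand-in for a well-restored image
overfit = corrupted                           # stand-in for an output that re-absorbed the noise
print(pseudo_noise_component(good_fit, pseudo))   # close to 0: nearly orthogonal to the noise
print(pseudo_noise_component(overfit, pseudo))    # noticeably larger
```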
2312.08511
The Relative Value of Prediction in Algorithmic Decision Making
Algorithmic predictions are increasingly used to inform the allocations of goods and interventions in the public sphere. In these domains, predictions serve as a means to an end. They provide stakeholders with insights into the likelihood of future events as a means to improve decision-making quality and enhance social welfare. However, if maximizing welfare is the ultimate goal, prediction is only a small piece of the puzzle. There are various other policy levers a social planner might pursue in order to improve bottom-line outcomes, such as expanding access to available goods, or increasing the effect sizes of interventions. Given this broad range of design decisions, a basic question to ask is: What is the relative value of prediction in algorithmic decision making? How do the improvements in welfare arising from better predictions compare to those of other policy levers? The goal of our work is to initiate the formal study of these questions. Our main results are theoretical in nature. We identify simple, sharp conditions determining the relative value of prediction vis-\`a-vis expanding access, within several statistical models that are popular amongst quantitative social scientists. Furthermore, we illustrate how these theoretical insights may be used to guide the design of algorithmic decision making systems in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
415,319
1808.09397
MedSTS: A Resource for Clinical Semantic Textual Similarity
The wide adoption of electronic health records (EHRs) has enabled a wide range of applications leveraging EHR data. However, the meaningful use of EHR data largely depends on our ability to efficiently extract and consolidate information embedded in clinical text where natural language processing (NLP) techniques are essential. Semantic textual similarity (STS) that measures the semantic similarity between text snippets plays a significant role in many NLP applications. In the general NLP domain, STS shared tasks have made available a huge collection of text snippet pairs with manual annotations in various domains. In the clinical domain, STS can enable us to detect and eliminate redundant information that may lead to a reduction in cognitive burden and an improvement in the clinical decision-making process. This paper elaborates our efforts to assemble a resource for STS in the medical domain, MedSTS. It consists of a total of 174,629 sentence pairs gathered from a clinical corpus at Mayo Clinic. A subset of MedSTS (MedSTS_ann) containing 1,068 sentence pairs was annotated by two medical experts with semantic similarity scores of 0-5 (low to high similarity). We further analyzed the medical concepts in the MedSTS corpus, and tested four STS systems on the MedSTS_ann corpus. In the future, we will organize a shared task by releasing the MedSTS_ann corpus to motivate the community to tackle the real world clinical problems.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
106,178
1809.02782
Sentiment analysis for Arabic language: A brief survey of approaches and techniques
With the emergence of Web 2.0 technology and the expansion of on-line social networks, current Internet users have the ability to add their reviews, ratings and opinions on social media and on commercial and news web sites. Sentiment analysis aims to classify these reviews automatically. In the literature, there are numerous approaches proposed for automatic sentiment analysis for different language contexts. Each language has its own properties that make sentiment analysis more challenging. In this regard, this work presents a comprehensive survey of existing Arabic sentiment analysis studies, and covers the various approaches and techniques proposed in the literature. Moreover, we highlight the main difficulties and challenges of Arabic sentiment analysis, and the techniques proposed in the literature to overcome these barriers.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
107,132
1908.00975
Y-Net: A Hybrid Deep Learning Reconstruction Framework for Photoacoustic Imaging in vivo
Photoacoustic imaging (PAI) is an emerging non-invasive imaging modality combining the advantages of deep ultrasound penetration and high optical contrast. Image reconstruction is an essential topic in PAI, which is unfortunately an ill-posed problem due to the complex and unknown optical/acoustic parameters in tissue. Conventional algorithms used in PAI (e.g., delay-and-sum) provide a fast solution while many artifacts remain, especially for linear array probe with limited-view issue. Convolutional neural network (CNN) has shown state-of-the-art results in computer vision, and more and more work based on CNN has been studied in medical image processing recently. In this paper, we present a non-iterative scheme filling the gap between existing direct-processing and post-processing methods, and propose a new framework Y-Net: a CNN architecture to reconstruct the PA image by optimizing both raw data and beamformed images once. The network connected two encoders with one decoder path, which optimally utilizes more information from raw data and beamformed image. The results of the test set showed good performance compared with conventional reconstruction algorithms and other deep learning methods. Our method is also validated with experiments both in-vitro and in vivo, which still performs better than other existing methods. The proposed Y-Net architecture also has high potential in medical image reconstruction for other imaging modalities beyond PAI.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
140,641
2309.09530
Adapting Large Language Models to Domains via Reading Comprehension
We explore how continued pre-training on domain-specific corpora influences large language models, revealing that training on the raw corpora endows the model with domain knowledge, but drastically hurts its prompting ability for question answering. Taking inspiration from human learning via reading comprehension--practice after reading improves the ability to answer questions based on the learned knowledge--we propose a simple method for transforming raw corpora into reading comprehension texts. Each raw text is enriched with a series of tasks related to its content. Our method, highly scalable and applicable to any pre-training corpora, consistently enhances performance across various tasks in three different domains: biomedicine, finance, and law. Notably, our 7B language model achieves competitive performance with domain-specific models of much larger scales, such as BloombergGPT-50B. Furthermore, we demonstrate that domain-specific reading comprehension texts can improve the model's performance even on general benchmarks, showing the potential to develop a general model across even more domains. Our model, code, and data are available at https://github.com/microsoft/LMOps.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
392,652
2203.03931
PASS: Part-Aware Self-Supervised Pre-Training for Person Re-Identification
In person re-identification (ReID), very recent research has validated that pre-training models on unlabelled person images is much better than pre-training on ImageNet. However, these studies directly apply the existing self-supervised learning (SSL) methods designed for image classification to ReID without any adaption in the framework. These SSL methods match the outputs of local views (e.g., red T-shirt, blue shorts) to those of the global views at the same time, losing lots of details. In this paper, we propose a ReID-specific pre-training method, Part-Aware Self-Supervised pre-training (PASS), which can generate part-level features to offer fine-grained information and is more suitable for ReID. PASS divides the images into several local areas, and the local views randomly cropped from each area are assigned with a specific learnable [PART] token. On the other hand, the [PART]s of all local areas are also appended to the global views. PASS learns to match the output of the local views and global views on the same [PART]. That is, the learned [PART] of the local views from a local area is only matched with the corresponding [PART] learned from the global views. As a result, each [PART] can focus on a specific local area of the image and extract fine-grained information of this area. Experiments show PASS sets new state-of-the-art performance on Market1501 and MSMT17 on various ReID tasks, e.g., vanilla ViT-S/16 pre-trained by PASS achieves 92.2\%/90.2\%/88.5\% mAP accuracy on Market1501 for supervised/UDA/USL ReID. Our codes are available at https://github.com/CASIA-IVA-Lab/PASS-reID.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,282
2203.00836
CandidateDrug4Cancer: An Open Molecular Graph Learning Benchmark on Drug Discovery for Cancer
Anti-cancer drug discoveries have been serendipitous; we therefore present the Open Molecular Graph Learning Benchmark, named CandidateDrug4Cancer, a challenging and realistic benchmark dataset to facilitate scalable, robust, and reproducible graph machine learning research for anti-cancer drug discovery. The CandidateDrug4Cancer dataset encompasses the 29 most-mentioned targets for cancer, covering 54,869 cancer-related drug molecules ranging from pre-clinical and clinical to FDA-approved. Besides building the datasets, we also perform benchmark experiments with effective Drug Target Interaction (DTI) prediction baselines using descriptors and expressive graph neural networks. Experimental results suggest that CandidateDrug4Cancer presents significant challenges for learning molecular graphs and targets in practical application, indicating opportunities for future research on developing candidate drugs for treating cancers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
283,140
2202.10324
VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning
We propose VRL3, a powerful data-driven framework with a simple design for solving challenging visual deep reinforcement learning (DRL) tasks. We analyze a number of major obstacles in taking a data-driven approach, and present a suite of design principles, novel findings, and critical insights about data-driven visual DRL. Our framework has three stages: in stage 1, we leverage non-RL datasets (e.g. ImageNet) to learn task-agnostic visual representations; in stage 2, we use offline RL data (e.g. a limited number of expert demonstrations) to convert the task-agnostic representations into more powerful task-specific representations; in stage 3, we fine-tune the agent with online RL. On a set of challenging hand manipulation tasks with sparse reward and realistic visual inputs, compared to the previous SOTA, VRL3 achieves an average of 780% better sample efficiency. And on the hardest task, VRL3 is 1220% more sample efficient (2440% when using a wider encoder) and solves the task with only 10% of the computation. These significant results clearly demonstrate the great potential of data-driven deep reinforcement learning.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
281,490
2502.14648
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
In today's world, machine learning is hard to imagine without large training datasets and models. This has led to the use of stochastic methods for training, such as stochastic gradient descent (SGD). SGD provides weak theoretical guarantees of convergence, but there are modifications, such as Stochastic Variance Reduced Gradient (SVRG) and StochAstic Recursive grAdient algoritHm (SARAH), that can reduce the variance. These methods require the computation of the full gradient occasionally, which can be time-consuming. In this paper, we explore variants of variance reduction algorithms that eliminate the need for full gradient computations. To make our approach memory-efficient and avoid full gradient computations, we use two key techniques: the shuffling heuristic and the idea behind SAG/SAGA methods. As a result, we improve existing estimates for variance reduction algorithms without the full gradient computations. Additionally, for the non-convex objective function, our estimate matches that of classic shuffling methods, while for the strongly convex one, it is an improvement. We conduct comprehensive theoretical analysis and provide extensive experimental results to validate the efficiency and practicality of our methods for large-scale machine learning problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
535,914
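For reference, the sketch below is a bare-bones SVRG loop on a least-squares objective; the full-gradient snapshot at the start of each epoch is exactly the step the abstract for 2502.14648 seeks to avoid via shuffling and SAG/SAGA-style memory. The step size, epoch length, and synthetic data are arbitrary choices for illustration, not the paper's method.

```python
import numpy as np

def svrg_least_squares(A, y, lr=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        snapshot = w.copy()
        full_grad = A.T @ (A @ snapshot - y) / n          # full pass: the costly step
        for _ in range(n):
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ w - y[i])
            g_i_snap = A[i] * (A[i] @ snapshot - y[i])
            w -= lr * (g_i - g_i_snap + full_grad)        # variance-reduced update
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = A @ w_true
print(np.linalg.norm(svrg_least_squares(A, y) - w_true))  # should be close to zero
```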
2502.00190
On the Effectiveness of Random Weights in Graph Neural Networks
Graph Neural Networks (GNNs) have achieved remarkable success across diverse tasks on graph-structured data, primarily through the use of learned weights in message passing layers. In this paper, we demonstrate that random weights can be surprisingly effective, achieving performance comparable to end-to-end training counterparts, across various tasks and datasets. Specifically, we show that by replacing learnable weights with random weights, GNNs can retain strong predictive power, while significantly reducing training time by up to 6$\times$ and memory usage by up to 3$\times$. Moreover, the random weights combined with our construction yield random graph propagation operators, which we show to reduce the problem of feature rank collapse in GNNs. These understandings and empirical results highlight random weights as a lightweight and efficient alternative, offering a compelling perspective on the design and training of GNN architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
529,245
1309.1585
Network-Level Cooperation in Energy Harvesting Wireless Networks
We consider a two-hop communication network consisting of a source node, a relay, and a destination node, in which the source and the relay node have external traffic arrivals. The relay forwards a fraction of the source node's traffic to the destination, and the cooperation is performed at the network level. In addition, both the source and relay nodes have energy harvesting capabilities and an unlimited battery to store the harvested energy. We study the impact of the energy constraints on the stability region. Specifically, we provide inner and outer bounds on the stability region of the two-hop network with an energy harvesting source and relay.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
26,879
2010.03760
Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference
While discriminative neural network classifiers are generally preferred, recent work has shown advantages of generative classifiers in terms of data efficiency and robustness. In this paper, we focus on natural language inference (NLI). We propose GenNLI, a generative classifier for NLI tasks, and empirically characterize its performance by comparing it to five baselines, including discriminative models and large-scale pretrained language representation models like BERT. We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work. In particular, we find strong results with a simple unbounded modification to log loss, which we call the "infinilog loss". Our experiments show that GenNLI outperforms both discriminative and pretrained baselines across several challenging NLI experimental settings, including small training sets, imbalanced label distributions, and label noise.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
199,521
2002.04525
Industry 4.0: contributions of holonic manufacturing control architectures and future challenges
The flexibility required by next-generation production systems induces a deep modification of the behaviour, and of the very core, of control systems. The over-connectivity and data management abilities targeted by the Industry 4.0 paradigm enable the emergence of more flexible and reactive control systems, based on the cooperation of autonomous and connected entities in the decision-making process. From the most relevant articles extracted from the existing literature, a list of 10 key enablers for Industry 4.0 is first presented. During the last 20 years, the holonic paradigm has become a major paradigm of Intelligent Manufacturing Systems. After presenting the holonic paradigm and holon properties, this article highlights how historical and current holonic control architectures can partly fulfil the Industry 4.0 key enablers. The remaining unfulfilled key enablers are then the subject of an extensive discussion on the research perspectives on holonic architectures needed to achieve complete support of Industry 4.0.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
163,634
2410.18565
Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation
We introduce Bielik 7B v0.1, a 7-billion-parameter generative text model for Polish language processing. Trained on curated Polish corpora, this model addresses key challenges in language model development through innovative techniques. These include Weighted Instruction Cross-Entropy Loss, which balances the learning of different instruction types, and Adaptive Learning Rate, which dynamically adjusts the learning rate based on training progress. To evaluate performance, we created the Open PL LLM Leaderboard and Polish MT-Bench, novel frameworks assessing various NLP tasks and conversational abilities. Bielik 7B v0.1 demonstrates significant improvements, achieving a 9 percentage point increase in average score compared to Mistral-7B-v0.1 on the RAG Reader task. It also excels in the Polish MT-Bench, particularly in Reasoning (6.15/10) and Role-playing (7.83/10) categories. This model represents a substantial advancement in Polish language AI, offering a powerful tool for diverse linguistic applications and setting new benchmarks in the field.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
501,939
1301.3698
Modeling human dynamics of face-to-face interaction networks
Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of inter-conversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents which perform a random walk in a two dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
21,132
2405.00924
Zonotope-based Symbolic Controller Synthesis for Linear Temporal Logic Specifications
This paper studies the controller synthesis problem for nonlinear control systems under linear temporal logic (LTL) specifications using zonotope techniques. A local-to-global control strategy is proposed for the desired specification expressed as an LTL formula. First, a novel approach is developed to divide the state space into finite zonotopes and constrained zonotopes, which are called cells and allowed to intersect with the neighbor cells. Second, from the intersection relation, a graph among all cells is generated to verify the realization of the accepting path for the LTL formula. The realization verification determines if there is a need for the control design, and also results in finite local LTL formulas. Third, once the accepting path is realized, a novel abstraction-based method is derived for the controller design. In particular, we only focus on the cells from the realization verification and approximate each cell thanks to properties of zonotopes. Based on local symbolic models and local LTL formulas, an iterative synthesis algorithm is proposed to design all local abstract controllers, whose existence and combination establish the global controller for the LTL formula. Finally, the proposed framework is illustrated via a path planning problem of mobile robots.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
451,132
1911.03801
Human Driver Behavior Prediction based on UrbanFlow
How autonomous vehicles and human drivers share public transportation systems is an important problem, as fully automatic transportation environments are still a long way off. Understanding human drivers' behavior can be beneficial for autonomous vehicle decision making and planning, especially when the autonomous vehicle is surrounded by human drivers who have various driving behaviors and patterns of interaction with other vehicles. In this paper, we propose an LSTM-based trajectory prediction method for human drivers which can help the autonomous vehicle make better decisions, especially in urban intersection scenarios. Meanwhile, in order to collect human drivers' driving behavior data in the urban scenario, we describe a system called UrbanFlow which includes the whole procedure from raw bird's-eye view data collection via drone to the final processed trajectories. The system is mainly intended for urban scenarios but can be extended to be used for any traffic scenarios.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
152,763
2005.07006
Foreground-Background Ambient Sound Scene Separation
Ambient sound scenes typically comprise multiple short events occurring on top of a somewhat stationary background. We consider the task of separating these events from the background, which we call foreground-background ambient sound scene separation. We propose a deep learning-based separation framework with a suitable feature normalization scheme and an optional auxiliary network capturing the background statistics, and we investigate its ability to handle the great variety of sound classes encountered in ambient sound scenes, which have often not been seen in training. To do so, we create single-channel foreground-background mixtures using isolated sounds from the DESED and Audioset datasets, and we conduct extensive experiments with mixtures of seen or unseen sound classes at various signal-to-noise ratios. Our experimental findings demonstrate the generalization ability of the proposed approach.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,163
2409.12096
An Efficient Projection-Based Next-best-view Planning Framework for Reconstruction of Unknown Objects
Efficiently and completely capturing the three-dimensional data of an object is a fundamental problem in industrial and robotic applications. The task of next-best-view (NBV) planning is to infer the pose of the next viewpoint based on the current data, and gradually realize the complete three-dimensional reconstruction. Many existing algorithms, however, suffer from a large computational burden due to the use of ray-casting. To address this, this paper proposes a projection-based NBV planning framework. It can select the next best view at an extremely fast speed while ensuring the complete scanning of the object. Specifically, this framework refits different types of voxel clusters into ellipsoids based on the voxel structure. Then, the next best view is selected from the candidate views using a projection-based viewpoint quality evaluation function in conjunction with a global partitioning strategy. This process replaces the ray-casting in voxel structures, significantly improving the computational efficiency. Comparative experiments with other algorithms in a simulation environment show that the proposed framework can achieve a 10-fold efficiency improvement while capturing roughly the same coverage. The real-world experimental results also prove the efficiency and feasibility of the framework.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
489,435
1510.06153
Creating Scalable and Interactive Web Applications Using High Performance Latent Variable Models
In this project we outline a modularized, scalable system for comparing Amazon products in an interactive and informative way using efficient latent variable models and dynamic visualization. We demonstrate how our system can build on the structure and rich review information of Amazon products in order to provide a fast, multifaceted, and intuitive comparison. By providing a condensed per-topic comparison visualization to the user, we are able to display aggregate information from the entire set of reviews while providing an interface that is at least as compact as the "most helpful reviews" currently displayed by Amazon, yet far more informative.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
48,091
2308.13506
Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level
As research on machine translation moves to translating text beyond the sentence level, it remains unclear how effective automatic evaluation metrics are at scoring longer translations. In this work, we first propose a method for creating paragraph-level data for training and meta-evaluating metrics from existing sentence-level data. Then, we use these new datasets to benchmark existing sentence-level metrics as well as train learned metrics at the paragraph level. Interestingly, our experimental results demonstrate that using sentence-level metrics to score entire paragraphs is equally as effective as using a metric designed to work at the paragraph level. We speculate this result can be attributed to properties of the task of reference-based evaluation as well as limitations of our datasets with respect to capturing all types of phenomena that occur in paragraph-level translations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
387,948
1804.08302
Deep cross-domain building extraction for selective depth estimation from oblique aerial imagery
With the technological advancements of aerial imagery and accurate 3d reconstruction of urban environments, more and more attention has been paid to the automated analyses of urban areas. In our work, we examine two important aspects that allow live analysis of building structures in city models given oblique aerial imagery, namely automatic building extraction with convolutional neural networks (CNNs) and selective real-time depth estimation from aerial imagery. We use transfer learning to train the Faster R-CNN method for real-time deep object detection, by combining a large ground-based dataset for urban scene understanding with a smaller number of images from an aerial dataset. We achieve an average precision (AP) of about 80% for the task of building extraction on a selected evaluation dataset. Our evaluation focuses on both dataset-specific learning and transfer learning. Furthermore, we present an algorithm that allows for multi-view depth estimation from aerial imagery in real-time. We adopt the semi-global matching (SGM) optimization strategy to preserve sharp edges at object boundaries. In combination with the Faster R-CNN, it allows a selective reconstruction of buildings, identified with regions of interest (RoIs), from oblique aerial imagery.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
95,740
2209.06300
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models
Adversarial extraction attacks constitute an insidious threat against Deep Learning (DL) models, in which an adversary aims to steal the architecture, parameters, and hyper-parameters of a targeted DL model. The existing extraction attack literature has observed varying levels of attack success for different DL models and datasets, yet the underlying cause(s) behind their susceptibility often remain unclear; understanding them would help facilitate the creation of secure DL systems. In this paper we present PINCH: an efficient and automated extraction attack framework capable of designing, deploying, and analyzing extraction attack scenarios across heterogeneous hardware platforms. Using PINCH, we perform an extensive experimental evaluation of extraction attacks against 21 model architectures to explore new extraction attack scenarios and further attack staging. Our findings show (1) key extraction characteristics whereby particular model configurations exhibit strong resilience against specific attacks, (2) even partial extraction success enables further staging for other adversarial attacks, and (3) equivalent stolen models uncover differences in expressive power, yet exhibit similar captured knowledge.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
317,346
1009.5268
General Scaled Support Vector Machines
Support Vector Machines (SVMs) are popular tools for data mining tasks such as classification, regression, and density estimation. However, the original SVM (C-SVM) only considers local information from data points on or over the margin, and therefore loses robustness. To solve this problem, one approach is to translate (i.e., to move without rotation or change of shape) the hyperplane according to the distribution of the entire data. However, existing work can only be applied to the 1-D case. In this paper, we propose a simple and efficient method called General Scaled SVM (GS-SVM) to extend the existing approach to the multi-dimensional case. Our method translates the hyperplane according to the distribution of the data projected on the normal vector of the hyperplane. Compared with C-SVM, GS-SVM has better performance on several data sets.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
7,693
2403.16336
Predictive Inference in Multi-environment Scenarios
We address the challenge of constructing valid confidence intervals and sets in problems of prediction across multiple environments. We investigate two types of coverage suitable for these problems, extending the jackknife and split-conformal methods to show how to obtain distribution-free coverage in such non-traditional, potentially hierarchical data-generating scenarios. We demonstrate a novel resizing method to adapt to problem difficulty, which applies both to existing approaches for predictive inference and the methods we develop; this reduces prediction set sizes using limited information from the test environment, a key to the methods' practical performance, which we evaluate through neurochemical sensing and species classification datasets. Our contributions also include extensions for settings with non-real-valued responses, a theory of consistency for predictive inference in these general problems, and insights on the limits of conditional coverage.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
440,983
1904.11481
Age of Information in Multicast Networks with Multiple Update Streams
We consider the age of information in a multicast network where there is a single source node that sends time-sensitive updates to $n$ receiver nodes. Each status update is one of two kinds: type I or type II. To study the age of information experienced by the receiver nodes for both types of updates, we consider two cases: update streams are generated by the source node at-will and update streams arrive exogenously to the source node. We show that using an earliest $k_1$ and $k_2$ transmission scheme for type I and type II updates, respectively, the age of information of both update streams at the receiver nodes can be made a constant independent of $n$. In particular, the source node transmits each type I update packet to the earliest $k_1$ and each type II update packet to the earliest $k_2$ of $n$ receiver nodes. We determine the optimum $k_1$ and $k_2$ stopping thresholds for arbitrary shifted exponential link delays to individually and jointly minimize the average age of both update streams, and characterize the Pareto-optimal curve for the two ages.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
128,867
2011.03199
Secure Performance Analysis and Optimization for FD-NOMA Vehicular Communications
Vehicle-to-vehicle (V2V) communication attracts increasing research interest as a result of its applications to provide safety information as well as infotainment services. The increasing demand for transmit rates and the various quality of service (QoS) requirements in vehicular communication scenarios call for the integration of V2V communication systems with potential techniques in future wireless communications, such as full duplex (FD) and non-orthogonal multiple access (NOMA), which enhance spectral efficiency and provide massive connectivity. However, the large amount of data transmission and user connectivity gives rise to concerns about security and personal privacy. In order to analyze the security performance of V2V communications, we introduce a cooperative NOMA V2V system model with an FD relay. This paper focuses on the security performance of the FD-NOMA based V2V system from the physical layer perspective. We first derive several analytical results for the ergodic secrecy capacity. Then, we propose a secrecy sum rate optimization scheme utilizing the instantaneous channel state information (CSI), which is formulated as a non-convex optimization problem. Based on the differential structure of the non-convex constraints, the original problem is approximated and solved by a series of convex optimization problems. Simulation results validate the analytical results and the effectiveness of the secrecy sum rate optimization algorithm.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
205,172
1810.09233
Scalable NoC-based Neuromorphic Hardware Learning and Inference
Bio-inspired neuromorphic hardware is a research direction that aims to approach the brain's computational power and energy efficiency. Spiking neural networks (SNN) encode information as sparsely distributed spike trains and employ the spike-timing-dependent plasticity (STDP) mechanism for learning. Existing hardware implementations of SNN are limited in scale or do not have in-hardware learning capability. In this work, we propose a low-cost scalable Network-on-Chip (NoC) based SNN hardware architecture with fully distributed in-hardware STDP learning capability. All hardware neurons work in parallel and communicate through the NoC. This enables the chip-level interconnection, scalability, and reconfigurability necessary for deploying different applications. The hardware is applied to learn MNIST digits as an evaluation of its learning capability. We explore the design space to study the trade-offs between speed, area, and energy. We also discuss how to use this procedure to find the optimal architecture configuration.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
111,020
1807.11367
Fairly Allocating Many Goods with Few Queries
We investigate the query complexity of the fair allocation of indivisible goods. For two agents with arbitrary monotonic utilities, we design an algorithm that computes an allocation satisfying envy-freeness up to one good (EF1), a relaxation of envy-freeness, using a logarithmic number of queries. We show that the logarithmic query complexity bound also holds for three agents with additive utilities, and that a polylogarithmic bound holds for three agents with monotonic utilities. These results suggest that it is possible to fairly allocate goods in practice even when the number of goods is extremely large. By contrast, we prove that computing an allocation satisfying envy-freeness and another of its relaxations, envy-freeness up to any good (EFX), requires a linear number of queries even when there are only two agents with identical additive utilities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
true
104,163
1211.4422
Continuous Models of Epidemic Spreading in Heterogeneous Dynamically Changing Random Networks
Modeling spreading processes in complex random networks plays an essential role in understanding and predicting many real phenomena such as epidemics or rumor spreading. The dynamics of such systems may be represented algorithmically by Monte-Carlo simulations on graphs or by ordinary differential equations (ODEs). Despite many results in the area of network modeling, the selection of the best computational representation of the model dynamics remains a challenge. While a closed-form description is often straightforward to derive, it generally cannot be solved analytically; as a consequence, the network dynamics require either a numerical solution of the ODEs or a direct Monte-Carlo simulation on the networks. Moreover, Monte-Carlo simulations and ODE solutions are not equivalent, since ODEs produce a deterministic solution while Monte-Carlo simulations are stochastic by nature. Despite some recent advances in Monte-Carlo simulations, particularly in the flexibility of implementation, the computational cost of an ODE solution is much lower and supports accurate and detailed output analysis such as uncertainty or sensitivity analyses, parameter identification, etc. In this paper we propose a novel approach to modeling spreading processes in complex random heterogeneous networks using systems of nonlinear ordinary differential equations. We successfully apply this approach to predict the dynamics of HIV-AIDS spreading in sexual networks, and compare it to historical data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
19,812