id (string, 9–16 chars) | title (string, 4–278 chars) | abstract (string, 3–4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2404.16359 | An Improved Graph Pooling Network for Skeleton-Based Action Recognition | Pooling is a crucial operation in computer vision, yet the unique structure of skeletons hinders the application of existing pooling strategies to skeleton graph modelling. In this paper, we propose an Improved Graph Pooling Network, referred to as IGPN. The main innovations include: Our method incorporates a region-awareness pooling strategy based on structural partitioning. The correlation matrix of the original feature is used to adaptively adjust the weight of information in different regions of the newly generated features, resulting in more flexible and effective processing. To prevent the irreversible loss of discriminative information, we propose a cross fusion module and an information supplement module to provide block-level and input-level information respectively. As a plug-and-play structure, the proposed operation can be seamlessly combined with existing GCN-based models. We conducted extensive evaluations on several challenging benchmarks, and the experimental results indicate the effectiveness of our proposed solutions. For example, in the cross-subject evaluation of the NTU-RGB+D 60 dataset, IGPN achieves a significant improvement in accuracy compared to the baseline while reducing FLOPs by nearly 70%; a heavier version has also been introduced to further boost accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 449,458 |
2407.20859 | Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification | Recently, autonomous agents built on large language models (LLMs) have experienced significant development and are being deployed in real-world applications. These agents can extend the base LLM's capabilities in multiple ways. For example, a well-built agent using GPT-3.5-Turbo as its core can outperform the more advanced GPT-4 model by leveraging external components. More importantly, the usage of tools enables these systems to perform actions in the real world, moving from merely generating text to actively interacting with their environment. Given the agents' practical applications and their ability to execute consequential actions, it is crucial to assess potential vulnerabilities. Such autonomous systems can cause more severe damage than a standalone language model if compromised. While some existing research has explored harmful actions by LLM agents, our study approaches the vulnerability from a different perspective. We introduce a new type of attack that causes malfunctions by misleading the agent into executing repetitive or irrelevant actions. We conduct comprehensive evaluations using various attack methods, surfaces, and properties to pinpoint areas of susceptibility. Our experiments reveal that these attacks can induce failure rates exceeding 80\% in multiple scenarios. Through attacks on implemented and deployable agents in multi-agent scenarios, we accentuate the realistic risks associated with these vulnerabilities. To mitigate such attacks, we propose self-examination detection methods. However, our findings indicate these attacks are difficult to detect effectively using LLMs alone, highlighting the substantial risks associated with this vulnerability. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 477,310 |
0902.0320 | Planar Graphical Models which are Easy | We describe a rich family of binary-variable statistical mechanics models on a given planar graph which are equivalent to Gaussian Grassmann Graphical models (free fermions) defined on the same graph. Calculation of the partition function (weighted counting) for such a model is easy (of polynomial complexity) as it reduces to evaluation of a Pfaffian of a matrix of size equal to twice the number of edges in the graph. In particular, this approach touches upon Holographic Algorithms of Valiant and utilizes the Gauge Transformations discussed in our previous works. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 3,095 |
1109.5240 | A Continuous Feedback Optimal Control based on Second-Variations for Problems with Control Constraints | The paper describes a continuous second-variation algorithm to solve optimal control problems where the control is defined on a closed set. A second order expansion of a Lagrangian provides linear updates of the control to construct a locally feedback optimal control of the problem. Since the process involves a backward and a forward stage, which require storing trajectories, a method has been devised to accurately store continuous solutions of ordinary differential equations. Thanks to the continuous approach, the method adapts implicitly the numerical time mesh. The novel method is demonstrated on bang-bang optimal control problems, showing the suitability of the method to identify automatically optimal switching points in the control. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 12,297 |
1902.05023 | Simultaneous Sparse Recovery and Blind Demodulation | The task of finding a sparse signal decomposition in an overcomplete dictionary is made more complicated when the signal undergoes an unknown modulation (or convolution in the complementary Fourier domain). Such simultaneous sparse recovery and blind demodulation problems appear in many applications including medical imaging, super resolution, self-calibration, etc. In this paper, we consider a more general sparse recovery and blind demodulation problem in which each atom comprising the signal undergoes a distinct modulation process. Under the assumption that the modulating waveforms live in a known common subspace, we employ the lifting technique and recast this problem as the recovery of a column-wise sparse matrix from structured linear measurements. In this framework, we accomplish sparse recovery and blind demodulation simultaneously by minimizing the induced atomic norm, which in this problem corresponds to the block $\ell_1$ norm minimization. For perfect recovery in the noiseless case, we derive near optimal sample complexity bounds for Gaussian and random Fourier overcomplete dictionaries. We also provide bounds on recovering the column-wise sparse matrix in the noisy case. Numerical simulations illustrate and support our theoretical results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 121,461 |
1303.0350 | Structure-semantics interplay in complex networks and its effects on the predictability of similarity in texts | There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts mean those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between the various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of quality of machine translated texts and authorship recognition. We shall show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the golden standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which involves semantic and structure in the comparison of texts, achieved the highest correlation with the NIST measurement, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, again the topological features were relevant in some contexts, though for the books and authors analyzed good results were obtained with semantic features as well. Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may be useful to enhance text classification considerably, as it combines well-established strategies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 22,556 |
1311.6184 | Bounding the Test Log-Likelihood of Generative Models | Several interesting generative learning algorithms involve a complex probability distribution over many random variables, involving intractable normalization constants or latent variable normalization. Some of them may even not have an analytic expression for the unnormalized probability function and no tractable approximation. This makes it difficult to estimate the quality of these models, once they have been trained, or to monitor their quality (e.g. for early stopping) while training. A previously proposed method is based on constructing a non-parametric density estimator of the model's probability function from samples generated by the model. We revisit this idea, propose a more efficient estimator, and prove that it provides a lower bound on the true test log-likelihood, and an unbiased estimator as the number of generated samples goes to infinity, although one that incorporates the effect of poor mixing. We further propose a biased variant of the estimator that can be used reliably with a finite number of samples for the purpose of model comparison. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 28,630 |
2205.09072 | On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias | We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,145 |
2206.00332 | Smart Channel State Information Pre-processing for Joint Authentication and Secret Key Distillation | While the literature on RF fingerprinting-based authentication and key distillation is vast, the two topics have customarily been studied separately. In this paper, starting from the observation that the wireless channel is a composite, deterministic / stochastic process, we propose a power domain decomposition that allows performing the two tasks simultaneously. We devise intelligent pre-processing schemes to decompose channel state information (CSI) observation vectors into "predictable" and "unpredictable" components. The former, primarily due to large-scale fading, can be used for node authentication through RF fingerprinting. The latter, primarily due to small-scale fading, could be used for semantically secure secret key generation (SKG). To perform the decomposition, we propose: (i) a fingerprint "separability" criterion, expressed through the maximisation of the total variation distance between the empirical fingerprint measures; (ii) a statistical independence metric for observations collected at different users, expressed through a normalised version of the $d$-dimensional Hilbert Schmidt independence criterion (dHSIC) test statistic. We propose both explicit implementations, using principal component analysis (PCA) and kernel PCA and black-box, unsupervised learning, using autoencoders. Our experiments on synthetic and real CSI datasets showcase that the incorporation of RF fingerprinting and SKG, with explicit security guarantees, is tangible in future generations of wireless. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 300,081 |
2308.14841 | Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction | Ergonomic efficiency is essential to the mass and prolonged adoption of VR/AR experiences. While VR/AR head-mounted displays unlock users' natural wide-range head movements during viewing, their neck muscle comfort is inevitably compromised by the added hardware weight. Unfortunately, little quantitative knowledge for understanding and addressing such an issue is available so far. Leveraging electromyography devices, we measure, model, and predict VR users' neck muscle contraction levels (MCL) while they move their heads to interact with the virtual environment. Specifically, by learning from collected physiological data, we develop a bio-physically inspired computational model to predict neck MCL under diverse head kinematic states. Beyond quantifying the cumulative MCL of completed head movements, our model can also predict potential MCL requirements with target head poses only. A series of objective evaluations and user studies demonstrate its prediction accuracy and generality, as well as its ability in reducing users' neck discomfort by optimizing the layout of visual targets. We hope this research will motivate new ergonomic-centered designs for VR/AR and interactive graphics applications. Source code is released at: https://github.com/NYU-ICL/xr-ergonomics-neck-comfort. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 388,464 |
2410.03731 | Unsupervised Human Preference Learning | Large language models demonstrate impressive reasoning abilities but struggle to provide personalized content due to their lack of individual user preference information. Existing methods, such as in-context learning and parameter-efficient fine-tuning, fall short in capturing the complexity of human preferences, especially given the small, personal datasets individuals possess. In this paper, we propose a novel approach utilizing small parameter models as preference agents to generate natural language rules that guide a larger, pre-trained model, enabling efficient personalization. Our method involves a small, local "steering wheel" model that directs the outputs of a much larger foundation model, producing content tailored to an individual's preferences while leveraging the extensive knowledge and capabilities of the large model. Importantly, this personalization is achieved without the need to fine-tune the large model. Experimental results on email and article datasets demonstrate that our technique significantly outperforms baseline personalization methods. By allowing foundation models to adapt to individual preferences in a data and compute-efficient manner, our approach paves the way for highly personalized language model applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 494,922 |
2105.01583 | Riemannian Geometry with differentiable ambient space and metric operator | We show Riemannian geometry could be studied by identifying the tangent bundle of a Riemannian manifold $\mathcal{M}$ with a subbundle of the trivial bundle $\mathcal{M} \times \mathcal{E}$, obtained by embedding $\mathcal{M}$ differentiably in a Euclidean space $\mathcal{E}$. Given such an embedding, we can extend the metric tensor on $\mathcal{M}$ to a (positive-definite) operator-valued function acting on $\mathcal{E}$, giving us an embedded ambient structure. The formulas for the Christoffel symbols and Riemannian curvature in local coordinates have simple generalizations to this setup. For a Riemannian submersion $\mathfrak{q}:\mathcal{M}\to \mathcal{B}$ from an embedded manifold $\mathcal{M}\subset \mathcal{E}$, we define a submersed ambient structure and obtain similar formulas, with the O'Neil tensor expressed in terms of the projection to the horizontal bundle $\mathcal{H}\mathcal{M}$. Using this framework, we provide the embedded and submersed ambient structures for the double tangent bundle $\mathcal{T}\mathcal{T}\mathcal{M}$ and the tangent of the horizontal bundle $\mathcal{T}\mathcal{H}\mathcal{M}$, describe the fibration of a horizontal bundle over the tangent bundle of the base manifold and extend the notion of a canonical flip to the submersion case. We obtain a formula for horizontal lifts of Jacobi fields, and a new closed-form formula for Jacobi fields of naturally reductive homogeneous spaces. We construct natural metrics on these double tangent bundles, in particular, extending Sasaki and other natural metrics to the submersion case. We illustrate by providing explicit calculations for several manifolds. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 233,559 |
2410.12413 | Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding | In this study, we provide constructive proof that Transformers can recognize and generate hierarchical language efficiently with respect to model size, even without the need for a specific positional encoding. Specifically, we show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures. We demonstrate that Transformers without positional encoding can generate hierarchical languages. Furthermore, we suggest that explicit positional encoding might have a detrimental effect on generalization with respect to sequence length. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 499,017 |
2310.09615 | STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning | Recently, model-based reinforcement learning algorithms have demonstrated remarkable efficacy in visual input environments. These approaches begin by constructing a parameterized simulation world model of the real environment through self-supervised learning. By leveraging the imagination of the world model, the agent's policy is enhanced without the constraints of sampling from the real environment. The performance of these algorithms heavily relies on the sequence modeling and generation capabilities of the world model. However, constructing a perfectly accurate model of a complex unknown environment is nearly impossible. Discrepancies between the model and reality may cause the agent to pursue virtual goals, resulting in subpar performance in the real environment. Introducing random noise into model-based reinforcement learning has been proven beneficial. In this work, we introduce Stochastic Transformer-based wORld Model (STORM), an efficient world model architecture that combines the strong sequence modeling and generation capabilities of Transformers with the stochastic nature of variational autoencoders. STORM achieves a mean human performance of $126.7\%$ on the Atari $100$k benchmark, setting a new record among state-of-the-art methods that do not employ lookahead search techniques. Moreover, training an agent with $1.85$ hours of real-time interaction experience on a single NVIDIA GeForce RTX 3090 graphics card requires only $4.3$ hours, showcasing improved efficiency compared to previous methodologies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 399,850 |
2403.10499 | Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study | Pre-training image representations from the raw text about images enables zero-shot vision transfer to downstream tasks. Through pre-training on millions of samples collected from the internet, multimodal foundation models, such as CLIP, produce state-of-the-art zero-shot results that often reach competitiveness with fully supervised methods without the need for task-specific training. Besides the encouraging performance on classification accuracy, it is reported that these models close the robustness gap by matching the performance of supervised models trained on ImageNet under natural distribution shift. Because robustness is critical to real-world applications, especially safety-critical ones, in this paper, we present a comprehensive evaluation based on a large-scale robustness benchmark covering 7 natural, 3 synthetic distribution shifts, and 11 adversarial attacks. We use CLIP as a pilot study. We show that CLIP leads to a significant robustness drop compared to supervised ImageNet models on our benchmark, especially under synthetic distribution shift and adversarial attacks. Furthermore, data overlap analysis suggests that the observed robustness under natural distribution shifts could be attributed, at least in part, to data overlap. In summary, our evaluation shows a comprehensive evaluation of robustness is necessary; and there is a significant need to improve the robustness of zero-shot multimodal models. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 438,218 |
2502.08960 | A Comprehensive Survey on Imbalanced Data Learning | With the expansion of data availability, machine learning (ML) has achieved remarkable breakthroughs in both academia and industry. However, imbalanced data distributions are prevalent in various types of raw data and severely hinder the performance of ML by biasing the decision-making processes. To deepen the understanding of imbalanced data and facilitate the related research and applications, this survey systematically analyzes various real-world data formats and organizes existing research for different data formats into four distinct categories: data re-balancing, feature representation, training strategy, and ensemble learning. This structured analysis helps researchers comprehensively understand the pervasive nature of imbalance across diverse data formats, thereby paving a clearer path toward achieving specific research goals. We also provide an overview of relevant open-source libraries, spotlight current challenges, and offer novel insights aimed at fostering future advancements in this critical area of study. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,244 |
1707.05932 | Effects of Feedback on the One-sided Secrecy of Two-way Wiretap through Multiple Transmissions | In this paper, the one-sided secrecy of the two-way wiretap channel with feedback is investigated, where the confidential messages of one user through multiple transmissions are guaranteed secure against an external eavesdropper. For one thing, one-sided secrecy satisfies the secure demand of many practical scenarios. For another, the secrecy is measured over many blocks, since the eavesdropper's observation is correlated with the confidential messages in successive blocks, instead of the one-block secrecy measurement of previous works. Thus, firstly, an achievable secrecy rate region is derived for the general two-way wiretap channel with feedback through multiple transmissions under one-sided secrecy. Secondly, outer bounds on the secrecy capacity region are also obtained. The gap between inner and outer bounds on the secrecy capacity region is explored via the binary input two-way wiretap channels. Most notably, the secrecy capacity regions are established for the XOR channel. Furthermore, the result shows that the achievable rate region with feedback is larger than that without feedback. Therefore, the beneficial role of feedback is precisely characterized for the two-way wiretap channel with feedback under one-sided secrecy. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,320 |
2210.16782 | Unsupervised Learning of Structured Representations via Closed-Loop Transcription | This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes. While most existing unsupervised learning approaches focus on a representation for only one of these two goals, we show that a unified representation can enjoy the mutual benefits of having both. Such a representation is attainable by generalizing the recently proposed \textit{closed-loop transcription} framework, known as CTRL, to the unsupervised setting. This entails solving a constrained maximin game over a rate reduction objective that expands features of all samples while compressing features of augmentations of each sample. Through this process, we see discriminative low-dimensional structures emerge in the resulting representations. Under comparable experimental conditions and network complexities, we demonstrate that these structured representations enable classification performance close to state-of-the-art unsupervised discriminative representations, and conditionally generated image quality significantly higher than that of state-of-the-art unsupervised generative models. Source code can be found at https://github.com/Delay-Xili/uCTRL. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 327,454 |
2409.12576 | StoryMaker: Towards Holistic Consistent Characters in Text-to-image Generation | Tuning-free personalized image generation methods have achieved significant success in maintaining facial consistency, i.e., identities, even with multiple characters. However, the lack of holistic consistency in scenes with multiple characters hampers these methods' ability to create a cohesive narrative. In this paper, we introduce StoryMaker, a personalization solution that preserves not only facial consistency but also clothing, hairstyles, and body consistency, thus facilitating the creation of a story through a series of images. StoryMaker incorporates conditions based on face identities and cropped character images, which include clothing, hairstyles, and bodies. Specifically, we integrate the facial identity information with the cropped character images using the Positional-aware Perceiver Resampler (PPR) to obtain distinct character features. To prevent intermingling of multiple characters and the background, we separately constrain the cross-attention impact regions of different characters and the background using MSE loss with segmentation masks. Additionally, we train the generation network conditioned on poses to promote decoupling from poses. A LoRA is also employed to enhance fidelity and quality. Experiments underscore the effectiveness of our approach. StoryMaker supports numerous applications and is compatible with other societal plug-ins. Our source codes and model weights are available at https://github.com/RedAIGC/StoryMaker. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,644 |
2204.01672 | Residual-guided Personalized Speech Synthesis based on Face Image | Previous works derive personalized speech features by training a model on a large dataset of the target speaker's audio. It has been reported that face information has a strong link with the speech sound. Thus in this work, we innovatively extract personalized speech features from human faces to synthesize personalized speech using a neural vocoder. A Face-based Residual Personalized Speech Synthesis Model (FR-PSS) containing a speech encoder, a speech synthesizer and a face encoder is designed for PSS. In this model, by designing two speech priors, a residual-guided strategy is introduced to guide the face feature to approach the true speech feature in the training. Moreover, considering the error in the features' absolute values and their directional bias, we formulate a novel tri-item loss function for the face encoder. Experimental results show that the speech synthesized by our model is comparable to the personalized speech synthesized by training on a large amount of audio data in previous works. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 289,677 |
2108.07247 | Robust Hierarchical Clustering for Directed Networks: An Axiomatic Approach | We provide a complete taxonomic characterization of robust hierarchical clustering methods for directed networks following an axiomatic approach. We begin by introducing three practical properties associated with the notion of robustness in hierarchical clustering: linear scale preservation, stability, and excisiveness. Linear scale preservation enforces imperviousness to change in units of measure whereas stability ensures that a bounded perturbation in the input network entails a bounded perturbation in the clustering output. Excisiveness refers to the local consistency of the clustering outcome. Algorithmically, excisiveness implies that we can reduce computational complexity by only clustering a subset of our data while theoretically guaranteeing that the same hierarchical outcome would be observed when clustering the whole dataset. In parallel to these three properties, we introduce the concept of representability, a generative model for describing clustering methods through the specification of their action on a collection of networks. Our main result is to leverage this generative model to give a precise characterization of all robust -- i.e., excisive, linear scale preserving, and stable -- hierarchical clustering methods for directed networks. We also address the implementation of our methods and describe an application to real data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 250,866 |
2408.11760 | SBDet: A Symmetry-Breaking Object Detector via Relaxed Rotation-Equivariance | Introducing Group Equivariant Convolution (GConv) empowers models to explore symmetries hidden in visual data, improving their performance. However, in real-world scenarios, objects or scenes often exhibit perturbations of a symmetric system, specifically a deviation from a symmetric architecture, which can be characterized by a non-trivial action of a symmetry group, known as Symmetry-Breaking. Traditional GConv methods are limited by the strict operation rules in the group space, only ensuring features remain strictly equivariant under limited group transformations, making it difficult to adapt to Symmetry-Breaking or non-rigid transformations. Motivated by this, we introduce a novel Relaxed Rotation GConv (R2GConv) with our defined Relaxed Rotation-Equivariant group $\mathbf{R}_4$. Furthermore, we propose a Relaxed Rotation-Equivariant Network (R2Net) as the backbone and further develop the Symmetry-Breaking Object Detector (SBDet) for 2D object detection built upon it. Experiments demonstrate the effectiveness of our proposed R2GConv in natural image classification tasks, and SBDet achieves excellent performance in object detection tasks with improved generalization capabilities and robustness. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,414 |
2411.12701 | When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | Large Language Models (LLMs) are known to be vulnerable to backdoor attacks, where triggers embedded in poisoned samples can maliciously alter LLMs' behaviors. In this paper, we move beyond attacking LLMs and instead examine backdoor attacks through the novel lens of natural language explanations. Specifically, we leverage LLMs' generative capabilities to produce human-readable explanations for their decisions, enabling direct comparisons between explanations for clean and poisoned samples. Our results show that backdoored models produce coherent explanations for clean inputs but diverse and logically flawed explanations for poisoned data, a pattern consistent across classification and generation tasks for different backdoor attacks. Further analysis reveals key insights into the explanation generation process. At the token level, explanation tokens associated with poisoned samples only appear in the final few transformer layers. At the sentence level, attention dynamics indicate that poisoned inputs shift attention away from the original input context during explanation generation. These findings enhance our understanding of backdoor mechanisms in LLMs and present a promising framework for detecting vulnerabilities through explainability. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 509,504 |
1907.10697 | Deep Generative Quantile-Copula Models for Probabilistic Forecasting | We introduce a new category of multivariate conditional generative models and demonstrate its performance and versatility in probabilistic time series forecasting and simulation. Specifically, the output of quantile regression networks is expanded from a set of fixed quantiles to the whole Quantile Function by a univariate mapping from a latent uniform distribution to the target distribution. Then the multivariate case is solved by learning such quantile functions for each dimension's marginal distribution, followed by estimating a conditional Copula to associate these latent uniform random variables. The quantile functions and copula, together defining the joint predictive distribution, can be parameterized by a single implicit generative Deep Neural Network. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 139,681 |
2010.05542 | The National Corpus of Contemporary Welsh: Project Report \| Y Corpws Cenedlaethol Cymraeg Cyfoes: Adroddiad y Prosiect | This report provides an overview of the CorCenCC project and the online corpus resource that was developed as a result of work on the project. The report lays out the theoretical underpinnings of the research, demonstrating how the project has built on and extended this theory. We also raise and discuss some of the key operational questions that arose during the course of the project, outlining the ways in which they were answered, the impact of these decisions on the resource that has been produced and the longer-term contribution they will make to practices in corpus-building. Finally, we discuss some of the applications and the utility of the work, outlining the impact that CorCenCC is set to have on a range of different individuals and user groups. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 200,178 |
1906.10689 | Soft computing methods for multiobjective location of garbage accumulation points in smart cities | This article describes the application of soft computing methods for solving the problem of locating garbage accumulation points in urban scenarios. This is a relevant problem in modern smart cities, in order to reduce negative environmental and social impacts in the waste management process, and also to optimize the available budget from the city administration to install waste bins. A specific problem model is presented, which accounts for reducing the investment costs, enhancing the number of citizens served by the installed bins, and improving the accessibility to the system. A family of single- and multi-objective heuristics based on the PageRank method and two multiobjective evolutionary algorithms are proposed. Experimental evaluation performed on real scenarios on the cities of Montevideo (Uruguay) and Bahia Blanca (Argentina) demonstrates the effectiveness of the proposed approaches. The methods allow computing plannings with different trade-offs between the problem objectives. The computed results improve over the current planning in Montevideo and provide a reasonable budget cost and quality of service for Bahia Blanca. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 136,491 |
1606.04467 | Outer Bounds on the Storage-Repair Bandwidth Tradeoff of Exact-Repair Regenerating Codes | In this paper, three outer bounds on the normalized storage-repair bandwidth (S-RB) tradeoff of regenerating codes having parameter set $\{(n,k,d),(\alpha,\beta)\}$ under the exact-repair (ER) setting are presented. The first outer bound is applicable for every parameter set $(n,k,d)$ and in conjunction with a code construction known as {\em improved layered codes}, it characterizes the normalized ER tradeoff for the case $(n,k=3,d=n-1)$. It establishes a non-vanishing gap between the ER and functional-repair (FR) tradeoffs for every $(n,k,d)$. The second bound is an improvement upon an existing bound due to Mohajer et al. and is tighter than the first bound, in a regime away from the Minimum Storage Regenerating (MSR) point. The third bound is for the case of $k=d$, under the linear setting. This outer bound matches with the achievable region of {\em layered codes} thereby characterizing the normalized ER tradeoff of linear ER codes when $k=d=n-1$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 57,259 |
2112.08637 | Analyzing the Limits of Self-Supervision in Handling Bias in Language | Prompting inputs with natural language task descriptions has emerged as a popular mechanism to elicit reasonably accurate outputs from large-scale generative language models with little to no in-context supervision. This also helps gain insight into how well language models capture the semantics of a wide range of downstream tasks purely from self-supervised pre-training on massive corpora of unlabeled text. Such models have naturally also been exposed to a lot of undesirable content like racist and sexist language and there is limited work on awareness of models along these dimensions. In this paper, we define and comprehensively evaluate how well such language models capture the semantics of four tasks for bias: diagnosis, identification, extraction and rephrasing. We define three broad classes of task descriptions for these tasks: statement, question, and completion, with numerous lexical variants within each class. We study the efficacy of prompting for each task using these classes and the null task description across several decoding methods and few-shot examples. Our analyses indicate that language models are capable of performing these tasks to widely varying degrees across different bias dimensions, such as gender and political affiliation. We believe our work is an important step towards unbiased language models by quantifying the limits of current self-supervision objectives at accomplishing such sociologically challenging tasks. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 271,874 |
2303.18200 | PADME-SoSci: A Platform for Analytics and Distributed Machine Learning for the Social Sciences | Data privacy and ownership are significant in social data science, raising legal and ethical concerns. Sharing and analyzing data is difficult when different parties own different parts of it. An approach to this challenge is to apply de-identification or anonymization techniques to the data before collecting it for analysis. However, this can reduce data utility and increase the risk of re-identification. To address these limitations, we present PADME, a distributed analytics tool that federates model implementation and training. PADME uses a federated approach where the model is implemented and deployed by all parties and visits each data location incrementally for training. This enables the analysis of data across locations while still allowing the model to be trained as if all data were in a single location. Training the model on data in its original location preserves data ownership. Furthermore, the results are not provided until the analysis is completed on all data locations to ensure privacy and avoid bias in the results. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 355,499 |
1810.07322 | Functionality-Oriented Convolutional Filter Pruning | The sophisticated structure of Convolutional Neural Network (CNN) allows for outstanding performance, but at the cost of intensive computation. As significant redundancies are inevitably present in such a structure, many works have been proposed to prune the convolutional filters for computation cost reduction. Although extremely effective, most works are based only on quantitative characteristics of the convolutional filters, and highly overlook the qualitative interpretation of an individual filter's specific functionality. In this work, we interpreted the functionality and redundancy of the convolutional filters from different perspectives, and proposed a functionality-oriented filter pruning method. With extensive experiment results, we proved the convolutional filters' qualitative significance regardless of magnitude, demonstrated significant neural network redundancy due to repetitive filter functions, and analyzed the filter functionality defection under an inappropriate retraining process. Such an interpretable pruning approach not only offers outstanding computation cost optimization over previous filter pruning methods, but also interprets the filter pruning process. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 110,619 |
2305.15542 | TOAST: Transfer Learning via Attention Steering | Transfer learning involves adapting a pre-trained model to novel downstream tasks. However, we observe that current transfer learning methods often fail to focus on task-relevant features. In this work, we explore refocusing model attention for transfer learning. We introduce Top-Down Attention Steering (TOAST), a novel transfer learning algorithm that keeps the pre-trained backbone frozen, selects task-relevant features in the output, and feeds those features back to the model to steer the attention to the task-specific features. By refocusing the attention only, TOAST achieves state-of-the-art results on a number of transfer learning benchmarks, while having a small number of tunable parameters. Compared to fully fine-tuning, LoRA, and prompt tuning, TOAST substantially improves performance across a range of fine-grained visual classification datasets (e.g., 81.1% -> 86.2% on FGVC). TOAST also outperforms the fully fine-tuned Alpaca and Vicuna models on instruction-following language generation. Code is available at https://github.com/bfshi/TOAST. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 367,652 |
1701.07524 | The Role of Transmitter Cooperation in Linear Interference Networks with Block Erasures | In this work, we explore the potential and optimal use of transmitter cooperation in wireless interference networks with deep fading conditions. We consider a linear interference network with K transmitter-receiver pairs, where each transmitter can be connected to two neighboring receivers. Long-term fluctuations (shadow fading) in the wireless channel can lead to any link being erased with probability p. Each receiver is interested in one unique message that can be available at two transmitters. The considered rate criterion is the average per user degrees of freedom (puDoF) as K goes to infinity. Prior to this work, the optimal assignment of messages to transmitters was identified in the two limits as p goes to 0 and as p goes to 1. We identify new schemes that achieve average puDoF values that are higher than the state of the art for a significant part of the range 0 < p < 1. The key idea to our results is to understand that the role of cooperation shifts from increasing the probability of delivering a message to its intended destination at high values of p, to interference cancellation at low values of p. Our schemes are based on an algorithm that achieves the optimal DoF value in any network realization, when restricted to a given message assignment as well as the use of zero-forcing schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,308 |
2409.18568 | Experimental Evaluation of Machine Learning Models for Goal-oriented Customer Service Chatbot with Pipeline Architecture | Integrating machine learning (ML) into customer service chatbots enhances their ability to understand and respond to user queries, ultimately improving service performance. However, they may appear artificial to some users, affecting customer experience. Hence, meticulous evaluation of ML models for each pipeline component is crucial for optimizing performance, though differences in functionalities can lead to unfair comparisons. In this paper, we present a tailored experimental evaluation approach for goal-oriented customer service chatbots with pipeline architecture, focusing on three key components: Natural Language Understanding (NLU), dialogue management (DM), and Natural Language Generation (NLG). Our methodology emphasizes individual assessment to determine optimal ML models. Specifically, we focus on optimizing hyperparameters and evaluating candidate models for NLU (utilizing BERT and LSTM), DM (employing DQN and DDQN), and NLG (leveraging GPT-2 and DialoGPT). The results show that for the NLU component, BERT excelled in intent detection whereas LSTM was superior for slot filling. For the DM component, the DDQN model outperformed DQN by achieving fewer turns, higher rewards, as well as greater success rates. For NLG, the large language model GPT-2 surpassed DialoGPT in BLEU, METEOR, and ROUGE metrics. These findings aim to provide a benchmark for future research in developing and optimizing customer service chatbots, offering valuable insights into model performance and optimal hyperparameters. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 492,311 |
2501.12380 | MMVU: Measuring Expert-Level Multi-Discipline Video Understanding | We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark for evaluating foundation models in video understanding. MMVU includes 3,000 expert-annotated questions spanning 27 subjects across four core disciplines: Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to prior benchmarks, MMVU features three key advancements. First, it challenges models to apply domain-specific knowledge and perform expert-level reasoning to analyze specialized-domain videos, moving beyond the basic visual perception typically assessed in current video benchmarks. Second, each example is annotated by human experts from scratch. We implement strict data quality controls to ensure the high quality of the dataset. Finally, each example is enriched with expert-annotated reasoning rationales and relevant domain knowledge, facilitating in-depth analysis. We conduct an extensive evaluation of 32 frontier multimodal foundation models on MMVU. The latest System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest performance among the tested models. However, they still fall short of matching human expertise. Through in-depth error analyses and case studies, we offer actionable insights for future advancements in expert-level, knowledge-intensive video understanding for specialized domains. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 526,286 |
2402.01915 | Robust Inverse Graphics via Probabilistic Inference | How do we infer a 3D scene from a single image in the presence of corruptions like rain, snow or fog? Straightforward domain randomization relies on knowing the family of corruptions ahead of time. Here, we propose a Bayesian approach, dubbed robust inverse graphics (RIG), that relies on a strong scene prior and an uninformative uniform corruption prior, making it applicable to a wide range of corruptions. Given a single image, RIG performs posterior inference jointly over the scene and the corruption. We demonstrate this idea by training a neural radiance field (NeRF) scene prior and using a secondary NeRF to represent the corruptions over which we place an uninformative prior. RIG, trained only on clean data, outperforms depth estimators and alternative NeRF approaches that perform point estimation instead of full inference. The results hold for a number of scene prior architectures based on normalizing flows and diffusion models. For the latter, we develop reconstruction-guidance with auxiliary latents (ReGAL), a diffusion conditioning algorithm that is applicable in the presence of auxiliary latent variables such as the corruption. RIG demonstrates how scene priors can be used beyond generation tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 426,284 |
1309.1274 | A Small Universal Petri Net | A universal deterministic inhibitor Petri net with 14 places, 29 transitions and 138 arcs was constructed via simulation of Neary and Woods' weakly universal Turing machine with 2 states and 4 symbols; the total time complexity is exponential in the running time of their weak machine. To simulate the blank words of the weakly universal Turing machine, a couple of dedicated transitions insert their codes when reaching edges of the working zone. To complete a chain of a given Petri net encoding to be executed by the universal Petri net, a translation of a bi-tag system into a Turing machine was constructed. The constructed Petri net is universal in the standard sense; a weaker form of universality for Petri nets was not introduced in this work. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 26,847 |
1904.12694 | ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions | Networks are powerful data structures, but are challenging to work with for conventional machine learning methods. Network Embedding (NE) methods attempt to resolve this by learning vector representations for the nodes, for subsequent use in downstream machine learning tasks. Link Prediction (LP) is one such downstream machine learning task that is an important use case and popular benchmark for NE methods. Unfortunately, while NE methods perform exceedingly well at this task, they are lacking in transparency as compared to simpler LP approaches. We introduce ExplaiNE, an approach to offer counterfactual explanations for NE-based LP methods, by identifying existing links in the network that explain the predicted links. ExplaiNE is applicable to a broad class of NE algorithms. An extensive empirical evaluation for the NE method `Conditional Network Embedding' in particular demonstrates its accuracy and scalability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 129,203 |
2410.07278 | PAR: Prompt-Aware Token Reduction Method for Efficient Large Multimodal Models | Multimodal large language models (MLLMs) demonstrate strong performance across visual tasks, but their efficiency is hindered by significant computational and memory demands from processing long contexts in multimodal inputs. To address this, we introduce PAR (Prompt-Aware Token Reduction), a novel and plug-and-play approach that reduces visual tokens efficiently without compromising model performance. Unlike previous methods that rely heavily on attention mechanisms and overlook cross-modal interactions, we use a prompt-aware strategy to adaptively identify and cluster essential visual tokens. PAR categorizes visual context redundancy into two types: external and internal. External redundancy is minimized through semantic retrieval, while internal redundancy is addressed using a token routing mechanism. This method substantially reduces computational load without requiring additional training or complex architectural modifications. Experimental results demonstrate that across various visual question answering tasks, PAR reduces FLOPs by 83\% with a compression ratio of 89\%, while retaining 97\% of baseline accuracy. The adaptive design of PAR achieves a 2x token reduction ratio compared to prior approaches, enabling a better balance between performance and efficiency. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 496,568 |
2410.09342 | LLM$\times$MapReduce: Simplified Long-Sequence Processing using Large Language Models | Enlarging the context window of large language models (LLMs) has become a crucial research area, particularly for applications involving extremely long texts. In this work, we propose a novel training-free framework for processing long texts, utilizing a divide-and-conquer strategy to achieve comprehensive document understanding. The proposed LLM$\times$MapReduce framework splits the entire document into several chunks for LLMs to read and then aggregates the intermediate answers to produce the final output. The main challenge for divide-and-conquer long text processing frameworks lies in the risk of losing essential long-range information when splitting the document, which can lead the model to produce incomplete or incorrect answers based on the segmented texts. Disrupted long-range information can be classified into two categories: inter-chunk dependency and inter-chunk conflict. We design a structured information protocol to better cope with inter-chunk dependency and an in-context confidence calibration mechanism to resolve inter-chunk conflicts. Experimental results demonstrate that LLM$\times$MapReduce can outperform representative open-source and commercial long-context LLMs, and is applicable to several different models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 497,546 |
1411.7964 | Effective Face Frontalization in Unconstrained Images | "Frontalization" is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. It does so by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints into the easier problem of recognizing faces in constrained, forward-facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 37,981 |
1912.13421 | Risk of the Least Squares Minimum Norm Estimator under the Spike
Covariance Model | We study the risk of the minimum norm linear least squares estimator when the number of parameters $d$ depends on $n$ and $\frac{d}{n} \rightarrow \infty$. We assume that the data has an underlying low rank structure by restricting ourselves to spike covariance matrices, where a fixed finite number of eigenvalues grow with $n$ and are much larger than the rest of the eigenvalues, which are (asymptotically) of the same order. We show that in this setting the risk of the minimum norm least squares estimator vanishes compared to the risk of the null estimator. We give asymptotic and non-asymptotic upper bounds for this risk, and also leverage the spike model assumption to give an analysis of the bias that leads to tighter bounds compared to previous works. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 159,076
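The regime in this abstract is easy to reproduce numerically. The sketch below, under assumed parameter choices, draws data from a diagonal spike covariance with d much larger than n, computes the minimum-norm least squares solution via the pseudoinverse, and compares its prediction risk to that of the null estimator.

```python
# Numerical illustration (assumed parameters): min-norm least squares under a
# diagonal spike covariance with d >> n, risk compared to the null estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d, spikes = 100, 5000, 3
eigs = np.ones(d); eigs[:spikes] = d           # a few eigenvalues grow with d
theta = rng.normal(size=d); theta /= np.linalg.norm(theta)
X = rng.normal(size=(n, d)) * np.sqrt(eigs)    # rows have covariance diag(eigs)
y = X @ theta + 0.1 * rng.normal(size=n)

theta_hat = np.linalg.pinv(X) @ y              # minimum-norm LS solution
risk = np.sum(eigs * (theta_hat - theta) ** 2)   # excess prediction risk
null_risk = np.sum(eigs * theta ** 2)            # risk of the null estimator
print(f"min-norm risk / null risk = {risk / null_risk:.3f}")
```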
2410.14573 | Building Trust in Black-box Optimization: A Comprehensive Framework for
Explainability | Optimizing costly black-box functions within a constrained evaluation budget presents significant challenges in many real-world applications. Surrogate Optimization (SO) is a common approach, yet the opacity introduced by the complexity of surrogate models and the sampling core (e.g., acquisition functions) often leads to a lack of explainability and transparency. While existing literature has primarily concentrated on enhancing convergence to global optima, the practical interpretation of newly proposed strategies remains underexplored, especially in batch evaluation settings. In this paper, we propose \emph{Inclusive} Explainability Metrics for Surrogate Optimization (IEMSO), a comprehensive set of model-agnostic metrics designed to enhance the transparency, trustworthiness, and explainability of SO approaches. Through these metrics, we provide both intermediate and post-hoc explanations to practitioners before and after performing expensive evaluations to gain trust. We consider four primary categories of metrics, each targeting a specific aspect of the SO process: Sampling Core Metrics, Batch Properties Metrics, Optimization Process Metrics, and Feature Importance. Our experimental evaluations demonstrate the significant potential of the proposed metrics across different benchmarks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,078
2207.05342 | Video Graph Transformer for Video Question Answering | This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT's uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers for relevance comparison between the video and text to perform QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With a more reasonable video encoding and QA solution, we show that VGT can achieve much better performances on VideoQA tasks that challenge dynamic relation reasoning than prior arts in the pretraining-free scenario. Its performance even surpasses that of models pretrained with millions of external data. We further show that VGT can also benefit a lot from self-supervised cross-modal pretraining, yet with orders of magnitude smaller data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope that VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/VGT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 307,508
1910.12010 | Dense Dilated Network with Probability Regularized Walk for Vessel
Detection | The detection of retinal vessels is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection. However, most of the algorithms neglect the connectivity of the vessels, which plays an important role in the diagnosis. In this paper, we propose a novel method for retinal vessel detection. The proposed method includes a dense dilated network to get an initial detection of the vessels and a probability regularized walk algorithm to address the fracture issue in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales. A multiscale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, we also introduce a probability regularized walk algorithm to connect the broken vessels. The proposed method has been applied on three public data sets: DRIVE, STARE and CHASE_DB1. The results show that the proposed method outperforms the state-of-the-art methods in accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 150,935
2409.15067 | SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks | Federated Learning (FL) is a distributed machine learning paradigm designed for privacy-sensitive applications that run on resource-constrained devices with non-Independent and Identically Distributed (non-IID) data. Traditional FL frameworks adopt the client-server model with a single-level aggregation (AGR) process, where the server builds the global model by aggregating all trained local models received from client devices. However, this conventional approach encounters challenges, including susceptibility to model/data poisoning attacks. In recent years, advancements in the Internet of Things (IoT) and edge computing have enabled the development of hierarchical FL systems with a two-level AGR process running at edge and cloud servers. In this paper, we propose a Secure Hierarchical FL (SHFL) framework to address poisoning attacks in hierarchical edge networks. By aggregating trained models at the edge, SHFL employs two novel methods to address model/data poisoning attacks in the presence of client adversaries: 1) a client selection algorithm running at the edge for choosing IoT devices to participate in training, and 2) a model AGR method designed based on convex optimization theory to reduce the impact of edge models from networks with adversaries in the process of computing the global model (at the cloud level). The evaluation results reveal that compared to state-of-the-art methods, SHFL significantly increases the maximum accuracy achieved by the global model in the presence of client adversaries applying model/data poisoning attacks. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 490,741
1903.02982 | Correction of Electron Back-scattered Diffraction datasets using an
evolutionary algorithm | In materials science and particularly electron microscopy, Electron Back-scatter Diffraction (EBSD) is a common and powerful mapping technique for collecting local crystallographic data at the sub-micron scale. The quality of the reconstruction of the maps is critical to study the spatial distribution of phases and crystallographic orientation relationships between phases, a key interest in materials science. However, EBSD data is known to suffer from distortions that arise from several instrument and detector artifacts. In this paper, we present an unsupervised method that corrects those distortions, and enables or enhances phase differentiation in EBSD data. The method uses a segmented electron image of the phases of interest (laths, precipitates, voids, inclusions), gathered using detectors that generate less distorted data over the same area as the EBSD map, and then searches for the best transformation to correct the distortions of the initial EBSD data. To do so, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is implemented to distort the EBSD data until it matches the reference electron image. Fast and versatile, this method does not require any human annotation and can be applied to large datasets and wide areas, where the distortions are significant. Besides, this method requires very few assumptions concerning the shape of the distortion function. Some application examples in multiphase materials with feature sizes down to 1 $\mu$m are presented, including a Titanium alloy and a Nickel-base superalloy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 123,613
cs/0412052 | WebotsTM: Professional Mobile Robot Simulation | Cyberbotics Ltd. develops WebotsTM, mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. WebotsTM lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. WebotsTM has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 538,433
2212.02071 | AMORETTO: A Method for Deriving IoT-enriched Event Logs | Process analytics aims to gain insights into the behaviour and performance of business processes through the analysis of event logs, which record the execution of processes. With the widespread use of the Internet of Things (IoT), IoT data has become readily available and can provide valuable context information about business processes. As such, process analytics can benefit from incorporating IoT data into event logs to support more comprehensive, context-aware analyses. However, most existing studies focus on enhancing business process models with IoT data, whereas little attention has been paid to incorporating IoT data into event logs for process analytics. Hence, this paper aims to systematically integrate IoT data into event logs to support context-aware process analytics. To this end, we propose AMORETTO - a method for deriving IoT-enriched event logs. Firstly, we provide a classification of context data, referred to as the IoT-Pro context classification, which encompasses two context dimensions: IoT context and process context. Next, we present a method for integrating IoT data with event logs, guided by IoT-Pro, to yield IoT-enriched event logs. To demonstrate the applicability of AMORETTO, we applied it to a real-life use case and examined whether the derived IoT-enriched event log sufficed to address certain specific analytical questions. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 334,672 |
2207.06399 | Pattern recognition in the nucleation kinetics of non-equilibrium
self-assembly | Inspired by biology's most sophisticated computer, the brain, neural networks constitute a profound reformulation of computational principles. Remarkably, analogous high-dimensional, highly-interconnected computational architectures also arise within information-processing molecular systems inside living cells, such as signal transduction cascades and genetic regulatory networks. Might neuromorphic collective modes be found more broadly in other physical and chemical processes, even those that ostensibly play non-information-processing roles such as protein synthesis, metabolism, or structural self-assembly? Here we examine nucleation during self-assembly of multicomponent structures, showing that high-dimensional patterns of concentrations can be discriminated and classified in a manner similar to neural network computation. Specifically, we design a set of 917 DNA tiles that can self-assemble in three alternative ways such that competitive nucleation depends sensitively on the extent of co-localization of high-concentration tiles within the three structures. The system was trained in-silico to classify a set of 18 grayscale 30 x 30 pixel images into three categories. Experimentally, fluorescence and atomic force microscopy monitoring during and after a 150-hour anneal established that all trained images were correctly classified, while a test set of image variations probed the robustness of the results. While slow compared to prior biochemical neural networks, our approach is surprisingly compact, robust, and scalable. This success suggests that ubiquitous physical phenomena, such as nucleation, may hold powerful information processing capabilities when scaled up as high-dimensional multicomponent systems. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 307,867 |
2306.04116 | Unbalanced Optimal Transport for Unbalanced Word Alignment | Monolingual word alignment is crucial to model semantic interactions between sentences. In particular, null alignment, a phenomenon in which words have no corresponding counterparts, is pervasive and critical in handling semantically divergent sentences. Identification of null alignment is useful on its own to reason about the semantic similarity of sentences by indicating that information inequality exists. To achieve unbalanced word alignment that values both alignment and null alignment, this study shows that the family of optimal transport (OT) approaches, i.e., balanced, partial, and unbalanced OT, is natural and powerful even without tailor-made techniques. Our extensive experiments covering unsupervised and supervised settings indicate that our generic OT-based alignment methods are competitive with state-of-the-art methods specially designed for word alignment, remarkably on challenging datasets with high null alignment frequencies. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 371,609
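For readers unfamiliar with entropic OT, here is a plain balanced Sinkhorn sketch on a cosine-distance cost between toy word embeddings; the unbalanced and partial variants the paper studies relax the marginal constraints so that mass (i.e., null-aligned words) can be left untransported.

```python
# Plain balanced Sinkhorn on a cosine-distance cost between toy embeddings;
# unbalanced/partial OT relax the marginals to permit null alignment.
import numpy as np

def sinkhorn(C, reg=0.1, iters=200):
    """C: (n, m) cost matrix; returns a transport plan with uniform marginals."""
    a = np.full(C.shape[0], 1 / C.shape[0])
    b = np.full(C.shape[1], 1 / C.shape[1])
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

E1, E2 = np.random.randn(5, 32), np.random.randn(7, 32)   # two toy "sentences"
E1 /= np.linalg.norm(E1, axis=1, keepdims=True)
E2 /= np.linalg.norm(E2, axis=1, keepdims=True)
P = sinkhorn(1 - E1 @ E2.T)
print(P.shape, P.sum())   # read alignments off the plan, e.g. via row-wise argmax
```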
2405.10700 | SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation
Tasks | Diaspora communities are disproportionately impacted by off-the-radar misinformation and often neglected by mainstream fact-checking efforts, creating a critical need to scale up the efforts of nascent fact-checking initiatives. In this paper we present SynDy, a framework for Synthetic Dynamic Dataset Generation to leverage the capabilities of the largest frontier Large Language Models (LLMs) to train local, specialized language models. To the best of our knowledge, SynDy is the first paper utilizing LLMs to create fine-grained synthetic labels for tasks of direct relevance to misinformation mitigation, namely Claim Matching, Topical Clustering, and Claim Relationship Classification. SynDy utilizes LLMs and social media queries to automatically generate distantly-supervised, topically-focused datasets with synthetic labels on these three tasks, providing essential tools to scale up human-led fact-checking at a fraction of the cost of human-annotated data. Training on SynDy's generated labels shows improvement over a standard baseline and is not significantly worse compared to training on human labels (which may be infeasible to acquire). SynDy is being integrated into Meedan's chatbot tiplines that are used by over 50 organizations, serve over 230K users annually, and automatically distribute human-written fact-checks via messaging apps such as WhatsApp. SynDy will also be integrated into our deployed Co-Insights toolkit, enabling low-resource organizations to launch tiplines for their communities. Finally, we envision SynDy enabling additional fact-checking tools such as matching new misinformation claims to high-quality explainers on common misinformation topics. | false | false | false | false | true | true | false | false | true | false | false | false | false | true | false | false | false | false | 454,858
2408.17133 | iCPS-DL: A Description Language for Autonomic Industrial Cyber-Physical
Systems | Modern industrial systems require frequent updates to their cyber and physical infrastructures, which often demand considerable reconfiguration effort. This paper introduces a framework to automate this process, implemented as the industrial Cyber-Physical Systems Description Language, iCPSDL. This framework maps an industrial process as a knowledge graph, which includes information about physical and cyber-physical components, a state estimation model, and software component interaction. A novel aspect is the use of communication semantics to ensure correct interaction among distributed entities. Reasoning on the knowledge graph facilitates the configuration of cyber-physical elements in an industrial system. A case study in the Water Distribution Networks domain demonstrates the framework's application. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 484,587 |
2106.10581 | Supervised learning for crop/weed classification based on color and
texture features | Computer vision techniques have recently attracted great interest in precision agriculture. The common goal of all computer vision-based precision agriculture tasks is to detect the objects of interest (e.g., crop, weed) and discriminate them from the background. Weeds are unwanted plants growing among crops, competing for nutrients, water, and sunlight and causing losses to crop yields. Weed detection and mapping are critical for site-specific weed management to reduce the cost of labor and the impact of herbicides. This paper investigates the use of color and texture features for discrimination of soybean crops and weeds. Feature extraction methods including two color spaces (RGB, HSV), the gray-level co-occurrence matrix (GLCM), and the Local Binary Pattern (LBP) are used to train a Support Vector Machine (SVM) classifier. The experiment was carried out on an image dataset of a soybean crop obtained from an unmanned aerial vehicle (UAV), which is publicly available. The results from the experiment showed that the highest accuracy (above 96%) was obtained from the combination of color and LBP features. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 242,073
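The feature pipeline here maps directly onto standard library calls. Below is a compact sketch using scikit-image (the `graycomatrix` spelling assumes version 0.19 or later) and scikit-learn, with synthetic patches standing in for the UAV soybean imagery.

```python
# Sketch of the GLCM/LBP + SVM pipeline with synthetic stand-in patches;
# assumes scikit-image >= 0.19 and scikit-learn.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def texture_features(gray_u8):
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.array(glcm_feats + list(hist))

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in patches
labels = rng.integers(0, 2, size=40)                            # crop=0 / weed=1
X = np.stack([texture_features(im) for im in imgs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```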
2010.10949 | DiSCO: Differentiable Scan Context with Orientation | Global localization is essential for robot navigation, the first step of which is to retrieve a match for a query from the map database. This problem is called place recognition. In recent years, LiDAR scan-based place recognition has drawn attention as it is robust against appearance change. In this paper, we propose a LiDAR-based place recognition method, named Differentiable Scan Context with Orientation (DiSCO), which simultaneously finds the scan at a similar place and estimates their relative orientation. The orientation can further be used as the initial value for the downstream local optimal metric pose estimation, improving the pose estimation especially when a large orientation difference exists between the current scan and the retrieved scan. Our key idea is to transform the feature into the frequency domain. We utilize the magnitude of the spectrum as the place signature, which is theoretically rotation-invariant. In addition, based on the differentiable phase correlation, we can efficiently estimate the global optimal relative orientation using the spectrum. With such structural constraints, the network can be learned in an end-to-end manner, and the backbone is fully shared by the two tasks, achieving interpretability and a lightweight design. Finally, DiSCO is validated on three datasets with long-term outdoor conditions, showing better performance than the compared methods. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 202,063
2307.02462 | Expert-Agnostic Ultrasound Image Quality Assessment using Deep
Variational Clustering | Ultrasound imaging is a commonly used modality for several diagnostic and therapeutic procedures. However, the diagnosis by ultrasound relies heavily on the quality of images assessed manually by sonographers, which diminishes the objectivity of the diagnosis and makes it operator-dependent. The supervised learning-based methods for automated quality assessment require manually annotated datasets, which are highly labour-intensive to acquire. These ultrasound images are low in quality and suffer from noisy annotations caused by inter-observer perceptual variations, which hampers learning efficiency. We propose an UnSupervised UltraSound image Quality assessment Network, US2QNet, that eliminates the burden and uncertainty of manual annotations. US2QNet uses the variational autoencoder embedded with the three modules, pre-processing, clustering and post-processing, to jointly enhance, extract, cluster and visualize the quality feature representation of ultrasound images. The pre-processing module uses filtering of images to point the network's attention towards salient quality features, rather than getting distracted by noise. Post-processing is proposed for visualizing the clusters of feature representations in 2D space. We validated the proposed framework for quality assessment of the urinary bladder ultrasound images. The proposed framework achieved 78% accuracy and superior performance to state-of-the-art clustering methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 377,699 |
1407.6404 | An autoregressive (AR) model based stochastic unknown input realization
and filtering technique | This paper studies the state estimation problem of linear discrete-time systems with stochastic unknown inputs. The unknown input is a wide-sense stationary process and no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique that allows us to recover the input statistics from the output data by solving an appropriate least squares problem; we then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm (ERA). An augmented state system is constructed and the standard Kalman filter is applied for state estimation. A reduced order model (ROM) filter is also introduced to reduce the computational cost of the Kalman filter. Two numerical examples are given to illustrate the procedure. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 34,863
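One concrete building block of such a technique is the least-squares AR fit itself. The sketch below fits AR coefficients to a simulated wide-sense stationary signal; the ERA-based innovations model and the augmented-state Kalman filter are omitted.

```python
# Least-squares AR(p) fit, one building block of the realization step above;
# the ERA innovations model and augmented-state Kalman filter are omitted.
import numpy as np

def fit_ar(x, p):
    """Fit x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t by least squares."""
    rows = np.stack([x[p - k - 1:len(x) - k - 1] for k in range(p)], axis=1)
    a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return a

rng = np.random.default_rng(1)
true_a = np.array([0.6, -0.2])
x = np.zeros(2000)
for t in range(2, len(x)):                     # simulate a stable AR(2) process
    x[t] = true_a @ x[t - 2:t][::-1] + 0.1 * rng.normal()
print(fit_ar(x, 2))                            # should be close to [0.6, -0.2]
```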
1907.10726 | Cross-Attention End-to-End ASR for Two-Party Conversations | We present an end-to-end speech recognition model that learns interaction between two speakers based on the turn-changing information. Unlike conventional speech recognition models, our model exploits the two speakers' history of conversational-context information that spans multiple turns within an end-to-end framework. Specifically, we propose a speaker-specific cross-attention mechanism that can look at the output of the other speaker as well as that of the current speaker to better recognize long conversations. We evaluated the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 139,689
1103.0632 | An Agent Based Architecture (Using Planning) for Dynamic and Semantic
Web Services Composition in an EBXML Context | The process-based semantic composition of Web Services is gaining considerable momentum as an approach for the effective integration of distributed, heterogeneous, and autonomous applications. To compose Web Services semantically, we need an ontology. There are several ways of inserting semantics in Web Services. One of them consists of using description languages like OWL-S. In this paper, we introduce our work, which proposes a new model and uses semantic matching technology for the semantic and dynamic composition of ebXML business processes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 9,460
2002.03807 | Automatic image-based identification and biomass estimation of
invertebrates | Understanding how biological communities respond to environmental changes is a key challenge in ecology and ecosystem management. The apparent decline of insect populations necessitates more biomonitoring, but the time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed. In turn, this affects the scale of efforts to map invertebrate diversity altogether. Given recent advances in computer vision, we propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology. We describe a robot-enabled image-based identification machine, which can automate the process of invertebrate identification, biomass estimation and sample sorting. We use the imaging device to generate a comprehensive image database of terrestrial arthropod species. We use this database to test the classification accuracy, i.e., how well the species identity of a specimen can be predicted from images taken by the machine. We also test the sensitivity of the classification accuracy to the camera settings (aperture and exposure time) in order to move forward with the best possible image quality. We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task. The results for the initial dataset are very promising ($\overline{ACC}=0.980$). The system is general and can easily be used for other groups of invertebrates as well. As such, our results pave the way for generating more data on spatial and temporal variation in invertebrate abundance, diversity and biomass. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 163,416
2411.14630 | ACE-Net: AutofoCus-Enhanced Convolutional Network for Field Imperfection
Estimation with application to high b-value spiral Diffusion MRI | Spatiotemporal magnetic field variations from B0-inhomogeneity and diffusion-encoding-induced eddy-currents can be detrimental to rapid image-encoding schemes such as spiral, EPI and 3D-cones, resulting in undesirable image artifacts. In this work, a data-driven approach for automatic estimation of these field imperfections is developed by combining autofocus metrics with deep learning, and by leveraging a compact basis representation of the expected field imperfections. The method was applied to single-shot spiral diffusion MRI at high b-values, where accurate estimates of B0 and eddy-current fields were obtained, resulting in high quality image reconstruction without the need for additional external calibrations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,254
1802.06485 | Robust Estimation via Robust Gradient Estimation | We provide a new computationally-efficient class of estimators for risk minimization. We show that these estimators are robust for general statistical models: in the classical Huber epsilon-contamination model and in heavy-tailed settings. Our workhorse is a novel robust variant of gradient descent, and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem. We provide specific consequences of our theory for linear regression, logistic regression and for estimation of the canonical parameters in an exponential family. These results provide some of the first computationally tractable and provably robust estimators for these canonical statistical models. Finally, we study the empirical performance of our proposed methods on synthetic and real datasets, and find that our methods convincingly outperform a variety of baselines. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,689 |
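A simple member of the robust-gradient family is gradient descent with a coordinate-wise trimmed-mean gradient; the sketch below applies it to epsilon-contaminated linear regression. The trimming fraction and step size are illustrative choices, not the paper's.

```python
# Gradient descent with a coordinate-wise trimmed-mean gradient, one simple
# robust-gradient variant; trimming level and step size are illustrative.
import numpy as np

def trimmed_mean(G, trim=0.1):
    """Coordinate-wise mean of per-sample gradients G (n, d) after trimming."""
    k = int(trim * len(G))
    Gs = np.sort(G, axis=0)
    return Gs[k:len(G) - k].mean(axis=0)

rng = np.random.default_rng(0)
n, d = 500, 5
X, theta = rng.normal(size=(n, d)), np.ones(d)
y = X @ theta + 0.1 * rng.normal(size=n)
y[:25] += 50.0                                  # 5% grossly contaminated responses

w = np.zeros(d)
for _ in range(300):
    G = (X @ w - y)[:, None] * X                # per-sample squared-loss gradients
    w -= 0.5 * trimmed_mean(G)
print(np.linalg.norm(w - theta))                # small despite the outliers
```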
1501.06218 | Infinite Edge Partition Models for Overlapping Community Detection and
Link Prediction | A hierarchical gamma process infinite edge partition model is proposed to factorize the binary adjacency matrix of an unweighted undirected relational network under a Bernoulli-Poisson link. The model describes both homophily and stochastic equivalence, and is scalable to big sparse networks by focusing its computation on pairs of linked nodes. It can not only discover overlapping communities and inter-community interactions, but also predict missing edges. A simplified version omitting inter-community interactions is also provided and we reveal its interesting connections to existing models. The number of communities is automatically inferred in a nonparametric Bayesian manner, and efficient inference via Gibbs sampling is derived using novel data augmentation techniques. Experimental results on four real networks demonstrate the models' scalability and state-of-the-art performance. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 39,588 |
2407.02218 | Multi-Modal Video Dialog State Tracking in the Wild | We present MST-MIXER - a novel video dialog model operating over a generic multi-modal state tracking scheme. Current models that claim to perform multi-modal state tracking fall short in two major aspects: (1) They either track only one modality (mostly the visual input) or (2) they target synthetic datasets that do not reflect the complexity of real-world in-the-wild scenarios. Our model addresses these two limitations in an attempt to close this crucial research gap. Specifically, MST-MIXER first tracks the most important constituents of each input modality. Then, it predicts the missing underlying structure of the selected constituents of each modality by learning local latent graphs using a novel multi-modal graph structure learning method. Subsequently, the learned local graphs and features are parsed together to form a global graph operating on the mix of all modalities which further refines its structure and node embeddings. Finally, the fine-grained graph node features are used to enhance the hidden states of the backbone Vision-Language Model (VLM). MST-MIXER achieves new state-of-the-art results on five challenging benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 469,630
2302.10503 | Reusable Slotwise Mechanisms | Agents with the ability to comprehend and reason about the dynamics of objects would be expected to exhibit improved robustness and generalization in novel scenarios. However, achieving this capability necessitates not only an effective scene representation but also an understanding of the mechanisms governing interactions among object subsets. Recent studies have made significant progress in representing scenes using object slots. In this work, we introduce Reusable Slotwise Mechanisms, or RSM, a framework that models object dynamics by leveraging communication among slots along with a modular architecture capable of dynamically selecting reusable mechanisms for predicting the future states of each object slot. Crucially, RSM leverages the Central Contextual Information (CCI), enabling selected mechanisms to access the remaining slots through a bottleneck, effectively allowing for modeling of higher order and complex interactions that might require a sparse subset of objects. Experimental results demonstrate the superior performance of RSM compared to state-of-the-art methods across various future prediction and related downstream tasks, including Visual Question Answering and action planning. Furthermore, we showcase RSM's Out-of-Distribution generalization ability to handle scenes in intricate scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 346,835 |
2307.16321 | Self-Supervised Learning of Gait-Based Biomarkers | Markerless motion capture (MMC) is revolutionizing gait analysis in clinical settings by making it more accessible, raising the question of how to extract the most clinically meaningful information from gait data. In multiple fields ranging from image processing to natural language processing, self-supervised learning (SSL) from large amounts of unannotated data produces very effective representations for downstream tasks. However, there has only been limited use of SSL to learn effective representations of gait and movement, and it has not been applied to gait analysis with MMC. One SSL objective that has not been applied to gait is contrastive learning, which finds representations that place similar samples closer together in the learned space. If the learned similarity metric captures clinically meaningful differences, this could produce a useful representation for many downstream clinical tasks. Contrastive learning can also be combined with causal masking to predict future timesteps, which is an appealing SSL objective given the dynamical nature of gait. We applied these techniques to gait analyses performed with MMC in a rehabilitation hospital from a diverse clinical population. We find that contrastive learning on unannotated gait data learns a representation that captures clinically meaningful information. We probe this learned representation using the framework of biomarkers and show it holds promise as both a diagnostic and response biomarker, by showing it can accurately classify diagnosis from gait and is responsive to inpatient therapy, respectively. We ultimately hope these learned representations will enable predictive and prognostic gait-based biomarkers that can facilitate precision rehabilitation through greater use of MMC to quantify movement in rehabilitation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,559 |
1808.04859 | GestureGAN for Hand Gesture-to-Gesture Translation in the Wild | Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator $G$ and a discriminator $D$, which takes as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly, and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of "channel pollution" while back-propagating the gradients. In addition, we present the Fr\'echet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are high-quality and photo-realistic, allowing them to be used as data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https://github.com/Ha0Tang/GestureGAN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 105,245
2002.11669 | Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from
Sensing to Tracking | Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 165,794 |
2110.11323 | StyleAlign: Analysis and Applications of Aligned StyleGAN Models | In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation. Here, we perform the first detailed exploration of model alignment, also focusing on StyleGAN. First, we empirically analyze aligned models and provide answers to important questions regarding their nature. In particular, we find that the child model's latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing. We further show that zero-shot vision tasks may be performed in the child domain, while relying exclusively on supervision in the parent domain. We demonstrate qualitatively and quantitatively that our approach yields state-of-the-art results, while requiring only simple fine-tuning and inversion. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 262,442 |
0705.3693 | Morphing Ensemble Kalman Filters | A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for nonlinear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modeling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration method is used that requires only gridded data, so the features in the model state do not need to be identified by the user. The morphing EnKF operates on a transformed state consisting of the registration mapping and the residual. Essentially, the morphing EnKF uses intermediate states obtained by morphing instead of linear combinations of the states. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 282 |
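For reference, the standard perturbed-observation EnKF analysis step looks as follows; the morphing variant described above applies the same kind of update to a transformed state (registration mapping plus residual) rather than to raw fields.

```python
# Standard perturbed-observation EnKF analysis step; the morphing EnKF applies
# this update to registration mappings and residuals instead of raw states.
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """ensemble: (N, n) states; H: (m, n) observation operator; y: (m,) obs."""
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)
    P = A.T @ A / (N - 1)                         # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (Y - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 4))                    # 50 members, 4 state variables
H, R = np.eye(2, 4), 0.1 * np.eye(2)              # observe the first two variables
print(enkf_update(ens, H, np.array([1.0, -1.0]), R, rng).mean(axis=0))
```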
2111.09046 | Multi-Robot Object Transport Motion Planning with a Deformable Sheet | Using a deformable sheet to handle objects is convenient and found in many practical applications. For object manipulation through a deformable sheet that is held by multiple mobile robots, it is a challenging task to model the object-sheet interactions. We present a computational model and algorithm to capture the object position on the deformable sheet with changing robotic team formations. A virtual variable cables model (VVCM) is proposed to simplify the modeling of the robot-sheet-object system. With the VVCM, we further present a motion planner for the robotic team to transport the object in a three-dimensional (3D) cluttered environment. Simulation and experimental results with different robot team sizes show the effectiveness and versatility of the proposed VVCM. We also compare the planning results for obstacle avoidance in 3D space against a benchmark planner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,886
1811.04422 | An Optimal Control View of Adversarial Machine Learning | I describe an optimal control view of adversarial machine learning, where the dynamical system is the machine learner, the inputs are adversarial actions, and the control costs are defined by the adversary's goals to do harm and be hard to detect. This view encompasses many types of adversarial machine learning, including test-item attacks, training-data poisoning, and adversarial reward shaping. The view encourages adversarial machine learning researchers to utilize advances in control theory and reinforcement learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 113,088
1512.01192 | Prototypical Priors: From Improving Classification to Zero-Shot Learning | Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time. In a novel extension to this idea, we propose the use of visual prototypical concepts as side information. For most real-world visual object categories, it may be difficult to establish a unique prototype. However, in cases such as traffic signs, brand logos, flags, and even natural language characters, these prototypical templates are available and can be leveraged for an improved recognition performance. The present work proposes a way to incorporate this prototypical information in a deep learning framework. Using prototypes as prior information, the deepnet pipeline learns the input image projections into the prototypical embedding space subject to minimization of the final classification loss. Based on our experiments with two different datasets of traffic signs and brand logos, prototypical embeddings incorporated in a conventional convolutional neural network improve the recognition performance. Recognition accuracy on the Belga logo dataset is especially noteworthy and establishes a new state-of-the-art. In zero-shot learning scenarios, the same system can be directly deployed to draw inference on unseen classes by simply adding the prototypical information for these new classes at test time. Thus, unlike earlier approaches, testing on seen and unseen classes is handled using the same pipeline, and the system can be tuned for a trade-off of seen and unseen class performance as per task requirement. Comparison with one of the latest works in the zero-shot learning domain yields top results on the two datasets mentioned above. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 49,782 |
2303.02234 | Hindsight States: Blending Sim and Real Task Elements for Efficient
Reinforcement Learning | Reinforcement learning has shown great potential in solving complex tasks when large amounts of data can be generated with little effort. In robotics, one approach to generate training data builds on simulations based on dynamics models derived from first principles. However, for tasks that, for instance, involve complex soft robots, devising such models is substantially more challenging. Being able to train effectively in increasingly complicated scenarios with reinforcement learning makes it possible to take advantage of complex systems such as soft robots. Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently. We (i) abstract the task into distinct components, (ii) off-load the simple dynamics parts into the simulation, and (iii) multiply these virtual parts to generate more data in hindsight. Our new method, Hindsight States (HiS), uses this data and selects the most useful transitions for training. It can be used with an arbitrary off-policy algorithm. We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm, Hindsight Experience Replay (HER). Finally, we evaluate HiS on a physical system and show that it boosts performance on a complex table tennis task with a muscular robot. Videos and code of the experiments can be found on webdav.tuebingen.mpg.de/his/. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | true | false | false | 349,266
2405.07483 | A Class of Convex Optimization-Based Recursive Algorithms for
Identification of Stochastic Systems | Focusing on identification, this paper develops a class of convex optimization-based criteria and corresponding recursive algorithms to estimate the parameter vector $\theta^{*}$ of a stochastic dynamic system. The criteria include not only the classical least-squares estimator but also the $L_l=|\cdot|^l$ ($l\geq 1$), Huber, Log-cosh, and Quantile costs as special cases. First, we prove that the minimizers of the convex optimization-based criteria converge to $\theta^{*}$ with probability one. Second, recursive algorithms are proposed to find the estimates that minimize the convex optimization-based criteria, and it is shown that these estimates also converge to the true parameter vector with probability one. Numerical examples are given, justifying the performance of the proposed algorithms, including the strong consistency of the estimates, the robustness against outliers in the observations, and higher efficiency in online computation compared with the kernel-based regularization method due to the recursive nature. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 453,733
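A minimal instance of the recursive idea, assuming a Huber criterion and a decaying-gain stochastic-gradient recursion; the paper's actual algorithms and convergence conditions are more general.

```python
# Decaying-gain stochastic-gradient recursion for a Huber criterion, a minimal
# instance of the convex family above (not the paper's exact algorithm).
import numpy as np

def huber_grad(r, delta=1.0):
    return r if abs(r) <= delta else delta * np.sign(r)

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for t in range(1, 20001):
    phi = rng.normal(size=3)                     # regressor
    y = phi @ theta_true + 0.1 * rng.normal()
    if t % 50 == 0:
        y += 30.0                                # occasional gross outlier
    r = y - phi @ theta
    theta += (1.0 / t) * huber_grad(r) * phi     # recursive update
print(theta)                                     # roughly recovers theta_true
```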
1707.08243 | A Graphical Characterization of Structurally Controllable Linear Systems
with Dependent Parameters | One version of the concept of structural controllability defined for single-input systems by Lin and subsequently generalized to multi-input systems by others, states that a parameterized matrix pair $(A, B)$ whose nonzero entries are distinct parameters, is structurally controllable if values can be assigned to the parameters which cause the resulting matrix pair to be controllable. In this paper the concept of structural controllability is broadened to allow for the possibility that a parameter may appear in more than one location in the pair $(A, B)$. Subject to a certain condition on the parameterization called the "binary assumption", an explicit graph-theoretic characterization of such matrix pairs is derived. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 77,782 |
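The definition of structural controllability suggests a simple randomized check: assign random values to the parameters and test the Kalman rank condition, since a structurally controllable pair passes with probability one. A sketch follows (with independent parameters, so the binary assumption on shared parameters is not exercised).

```python
# Randomized structural-controllability check: give the parameters random
# values and test the Kalman rank condition.
import numpy as np

def controllable(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

rng = np.random.default_rng(0)
A_mask = np.array([[0.0, 1.0], [0.0, 0.0]])   # nonzero entries mark parameters
B_mask = np.array([[0.0], [1.0]])
A = A_mask * rng.normal(size=A_mask.shape)
B = B_mask * rng.normal(size=B_mask.shape)
print(controllable(A, B))                      # True for this structure
```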
2410.14400 | Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance | Bokeh rendering is one of the most popular techniques in photography. It can make photographs visually appealing by drawing the viewer's attention to a particular area of the image. However, achieving a satisfactory bokeh effect is usually challenging, since mobile cameras are constrained by restricted optical systems, while expensive high-end DSLR lenses with large apertures are needed. Therefore, many deep learning-based computational photography methods have been developed to mimic the bokeh effect in recent years. Nevertheless, most of these methods are limited to rendering the bokeh effect at a single fixed aperture. There is a lack of user-friendly bokeh rendering methods that provide precise focal plane control and customized bokeh generation, as well as a lack of authentic, realistic bokeh datasets that could promote bokeh learning across variable apertures. To address these two issues, in this paper we propose an effective controllable bokeh rendering method and contribute a Variable Aperture Bokeh Dataset (VABD). In the proposed method, the user can customize the focal plane to accurately locate the subjects of interest and select target aperture information for bokeh rendering. Experimental results on the public EBB! benchmark dataset and our constructed dataset VABD demonstrate that the customized focal plane, together with the aperture prompt, can bootstrap the model to simulate realistic bokeh effects. The proposed method achieves competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models. The contributed dataset and source code will be released on GitHub: https://github.com/MoTong-AI-studio/VABM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 500,014
1805.08693 | High throughput quantitative metallography for complex microstructures
using deep learning: A case study in ultrahigh carbon steel | We apply a deep convolutional neural network segmentation model to enable novel automated microstructure segmentation applications for complex microstructures typically evaluated manually and subjectively. We explore two microstructure segmentation tasks in an openly-available ultrahigh carbon steel microstructure dataset: segmenting cementite particles in the spheroidized matrix, and segmenting larger fields of view featuring grain boundary carbide, spheroidized particle matrix, particle-free grain boundary denuded zone, and Widmanst\"atten cementite. We also demonstrate how to combine these data-driven microstructure segmentation models to obtain empirical cementite particle size and denuded zone width distributions from more complex micrographs containing multiple microconstituents. The full annotated dataset is available on materialsdata.nist.gov (https://materialsdata.nist.gov/handle/11256/964). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 98,219 |
2211.06567 | Online Search with Predictions: Pareto-optimal Algorithm and its
Applications in Energy Markets | This paper develops learning-augmented algorithms for energy trading in volatile electricity markets. The basic problem is to sell (or buy) $k$ units of energy for the highest revenue (lowest cost) over uncertain time-varying prices, which can be framed as a classic online search problem in the literature of competitive analysis. State-of-the-art algorithms assume no knowledge about future market prices when they make trading decisions in each time slot, and aim to guarantee performance for the worst-case price sequence. In practice, however, predictions about future prices have become commonly available by leveraging machine learning. This paper aims to incorporate machine-learned predictions to design competitive algorithms for online search problems. An important property of our algorithms is that they achieve performances competitive with the offline algorithm in hindsight when the predictions are accurate (i.e., consistency) and also provide worst-case guarantees when the predictions are arbitrarily wrong (i.e., robustness). The proposed algorithms achieve the Pareto-optimal trade-off between consistency and robustness, where no other algorithms for online search can improve on the consistency for a given robustness. Further, we extend the basic online search problem to a more general inventory management setting that can capture storage-assisted energy trading in electricity markets. In empirical evaluations using traces from real-world applications, our learning-augmented algorithms improve the average empirical performance compared to benchmark algorithms, while also providing improved worst-case performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 329,937
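As context, the prediction-free baseline for this problem is the classic reservation-price rule for one-way search with prices in $[m, M]$: trade at the first price of at least $\sqrt{mM}$, which is $\sqrt{M/m}$-competitive. The paper's contribution is to augment such rules with predictions; the sketch below shows only the baseline.

```python
# Classic reservation-price rule for one-way search with prices in [m, M]:
# trade at the first price >= sqrt(m*M). This is the prediction-free baseline;
# learning-augmented algorithms adjust such thresholds using predictions.
import math

def one_way_search(prices, m, M):
    threshold = math.sqrt(m * M)
    for t, p in enumerate(prices):
        if p >= threshold:
            return t, p                         # trade now
    return len(prices) - 1, prices[-1]          # forced trade at the deadline

print(one_way_search([12, 18, 25, 41, 30], m=10, M=100))  # threshold ~31.6 -> 41
```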
2107.14428 | Dynamic Neural Representational Decoders for High-Resolution Semantic
Segmentation | Semantic segmentation requires per-pixel prediction for a given image. Typically, the output resolution of a segmentation network is severely reduced due to the downsampling operations in the CNN backbone. Most previous methods employ upsampling decoders to recover the spatial resolution. Various decoders were designed in the literature. Here, we propose a novel decoder, termed dynamic neural representational decoder (NRD), which is simple yet significantly more efficient. As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks. This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient. Furthermore, these neural representations are dynamically generated and conditioned on the outputs of the encoder networks. The desired semantic labels can be efficiently decoded from the neural representations, resulting in high-resolution semantic segmentation predictions. We empirically show that our proposed decoder can outperform the decoder in DeeplabV3+ with only 30% computational complexity, and achieve competitive performance with the methods using dilated encoders with only 15% computation. Experiments on the Cityscapes, ADE20K, and PASCAL Context datasets demonstrate the effectiveness and efficiency of our proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 248,467 |
2210.10769 | "Why did the Model Fail?": Attributing Model Performance Changes to
Distribution Shifts | Machine learning models frequently experience performance drops under distribution shifts. The underlying cause of such shifts may be multiple simultaneous factors such as changes in data quality, differences in specific covariate distributions, or changes in the relationship between label and features. When a model does fail during deployment, attributing performance change to these factors is critical for the model developer to identify the root cause and take mitigating actions. In this work, we introduce the problem of attributing performance differences between environments to distribution shifts in the underlying data generating mechanisms. We formulate the problem as a cooperative game where the players are distributions. We define the value of a set of distributions to be the change in model performance when only this set of distributions has changed between environments, and derive an importance weighting method for computing the value of an arbitrary set of distributions. The contribution of each distribution to the total performance change is then quantified as its Shapley value. We demonstrate the correctness and utility of our method on synthetic, semi-synthetic, and real-world case studies, showing its effectiveness in attributing performance changes to a wide range of distribution shifts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 325,055 |
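The attribution itself reduces to an exact Shapley computation once the value of each subset of shifted distributions is available. Below is a brute-force sketch over a toy three-player value function; the importance-weighting estimate of `val` described in the abstract is stubbed out with hand-picked numbers.

```python
# Exact Shapley attribution over a small set of "players" (distributions);
# val(S) stands for the performance change when only the distributions in S
# shift, which the paper estimates via importance weighting (stubbed here).
from itertools import permutations

def shapley(players, val):
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = set()
        for p in order:
            phi[p] += val(seen | {p}) - val(seen)
            seen.add(p)
    return {p: v / len(perms) for p, v in phi.items()}

v = {frozenset(): 0.0, frozenset("x"): 1.0, frozenset("y"): 2.0,
     frozenset("z"): 0.0, frozenset("xy"): 4.0, frozenset("xz"): 1.0,
     frozenset("yz"): 2.0, frozenset("xyz"): 4.0}   # hand-picked toy values
print(shapley("xyz", lambda s: v[frozenset(s)]))    # contributions sum to 4.0
```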
1210.6198 | Network Localization by Shadow Edges | Localization is a fundamental task for sensor networks. Traditional network construction approaches make it possible to obtain localized networks by requiring the nodes to be at least tri-connected (in 2D), i.e., the communication graph needs to be globally rigid. In this paper we exploit not only the information on the neighbors sensed by each robot/sensor, but also the information about the lack of communication among nodes. The result is a framework where the nodes are required to be bi-connected and the communication graph has to be rigid. This is made possible by considering a novel type of link, namely Shadow Edges, which account for the lack of communication among nodes and allow the uncertainty associated with the positions of the nodes to be reduced. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 19,345
2307.09642 | Skin Lesion Correspondence Localization in Total Body Photography | Longitudinal tracking of skin lesions - finding correspondence, changes in morphology, and texture - is beneficial to the early detection of melanoma. However, it has not been well investigated in the context of full-body imaging. We propose a novel framework combining geometric and texture information to localize skin lesion correspondence from a source scan to a target scan in total body photography (TBP). Body landmarks or sparse correspondence are first created on the source and target 3D textured meshes. Every vertex on each of the meshes is then mapped to a feature vector characterizing the geodesic distances to the landmarks on that mesh. Then, for each lesion of interest (LOI) on the source, its corresponding location on the target is first coarsely estimated using the geometric information encoded in the feature vectors and then refined using the texture information. We evaluated the framework quantitatively on both a public and a private dataset, for which our success rates (at the 10 mm criterion) are comparable to those of the only reported longitudinal study. As full-body 3D capture becomes more prevalent and of higher quality, we expect the proposed method to constitute a valuable step in the longitudinal tracking of skin lesions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 380,235
2010.06986 | On the Problem of Underranking in Group-Fair Ranking | Search and recommendation systems, such as search engines, recruiting tools, online marketplaces, news, and social media, output ranked lists of content, products, and sometimes, people. Credit ratings, standardized tests, and risk assessments output only a score, but are also used implicitly for ranking. Bias in such ranking systems, especially among the top ranks, can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes. On the other hand, a bias correction for minority groups can cause more harm if perceived as favoring group-fair outcomes over meritocracy. In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work. Most group-fair ranking algorithms post-process a given ranking and output a group-fair ranking. We define underranking based on how close the group-fair rank of each item is to its original rank, and prove a lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking. We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove. Our algorithm works with group fairness constraints for any number of groups. Our experimental results confirm the theoretical trade-off between underranking and group fairness, and also show that our algorithm achieves the best of both when compared to the state-of-the-art baselines. | false | false | false | false | false | true | true | false | false | false | false | false | false | true | false | false | false | true | 200,665
1611.00684 | Wearable Vision Detection of Environmental Fall Risks using
Convolutional Neural Networks | In this paper, a method to detect environmental hazards related to fall risk using a mobile vision system is proposed. First-person perspective videos are proposed to provide objective evidence on the causes and circumstances of perturbed balance during activities of daily living, targeted at seniors. A classification problem was defined with 12 total classes of potential fall risks, including slope changes (e.g., stairs, curbs, ramps) and surfaces (e.g., gravel, grass, concrete). Data was collected using a chest-mounted GoPro camera. We developed a convolutional neural network for automatic feature extraction, reduction, and classification of frames. Initial results, with a mean square error of 8%, are promising. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 63,267
2311.12852 | Cell-free Terahertz Networks: A Spatial-spectral Approach | Cell-free network architecture plays a promising role in terahertz (THz) networks, since it provides better link reliability and uniformly good service for all users compared to the co-located massive MIMO counterpart, and the spatial-spectral THz link has the advantages of lower initial access latency and fast beam operations. To this end, this work studies cell-free spatial-spectral THz networks with leaky-wave antennas, to exploit the benefits of leveraging both cell-free and spatial-spectral THz technologies. By addressing the coupling effects between propagation angles and frequencies, we propose novel frequency-dependent THz transmit antenna selection schemes to maximize the transmission rate. Numerical results confirm that the proposed antenna selection schemes can achieve a much larger transmission rate than the maximal ratio transmission using all the transmit antennas with equal subchannel bandwidth allocation at higher THz frequencies. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 409,514
2408.03592 | HistoSPACE: Histology-Inspired Spatial Transcriptome Prediction And
Characterization Engine | Spatial transcriptomics (ST) enables the visualization of gene expression within the context of tissue morphology. This emerging discipline has the potential to serve as a foundation for developing tools to design precision medicines. However, due to the higher costs and expertise required for such experiments, its translation into regular clinical practice might be challenging. Despite the implementation of modern deep learning to enhance information obtained from histological images using AI, efforts have been constrained by limitations in the diversity of information. In this paper, we developed a model, HistoSPACE, that explores the diversity of histological images available with ST data to extract molecular insights from tissue images. Our proposed study built an image encoder derived from a universal image autoencoder. This image encoder was connected to convolution blocks to build the final model. It was further fine-tuned with the help of ST data. This model is notably lightweight compared to traditional histological models. Our developed model demonstrates significant efficiency compared to contemporary algorithms, revealing a correlation of 0.56 in leave-one-out cross-validation. Finally, its robustness was validated through an independent dataset, showing a well-matched prediction with predefined disease pathology. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 479,080
1603.08976 | Local Search Yields a PTAS for k-Means in Doubling Metrics | The most well-known and ubiquitous clustering problem encountered in nearly every branch of science is undoubtedly $k$-means: given a set of data points and a parameter $k$, select $k$ centres and partition the data points into $k$ clusters around these centres so that the sum of squares of distances of the points to their cluster centre is minimized. Typically these data points lie in $\mathbb{R}^d$ for some $d\geq 2$. $k$-means and the first algorithms for it were introduced in the 1950s. Since then, hundreds of papers have studied this problem and many algorithms have been proposed for it. The most commonly used algorithm is known as Lloyd-Forgy, which is also referred to as "the" $k$-means algorithm, and various extensions of it often work very well in practice. However, they may produce solutions whose cost is arbitrarily large compared to the optimum solution. Kanungo et al. [2004] analyzed a simple local search heuristic to get a polynomial-time algorithm with approximation ratio $9+\epsilon$ for any fixed $\epsilon>0$ for $k$-means in Euclidean space. Finding an algorithm with a better approximation guarantee has remained one of the biggest open questions in this area, in particular whether one can get a true PTAS for fixed-dimension Euclidean space. We settle this problem by showing that a simple local search algorithm provides a PTAS for $k$-means in $\mathbb{R}^d$ for any fixed $d$. More precisely, for any error parameter $\epsilon>0$, the local search algorithm that considers swaps of up to $\rho=d^{O(d)}\cdot{\epsilon}^{-O(d/\epsilon)}$ centres at a time finds a solution using exactly $k$ centres whose cost is at most a $(1+\epsilon)$-factor greater than the optimum. Finally, we provide the first demonstration that local search yields a PTAS for the uncapacitated facility location problem and $k$-median with non-uniform opening costs in doubling metrics. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 53,850
1708.04728 | DeepRebirth: Accelerating Deep Neural Network Execution on Mobile
Devices | Deploying deep neural networks on mobile devices is a challenging task. Current model compression methods such as matrix decomposition effectively reduce the deployed model size, but still cannot satisfy real-time processing requirements. This paper first discovers that the major obstacle is the excessive execution time of non-tensor layers, such as pooling and normalization, which lack tensor-like trainable parameters. This motivates us to design a novel acceleration framework, DeepRebirth, which "slims" existing consecutive and parallel non-tensor and tensor layers. The layer slimming is executed at different substructures: (a) streamline slimming, which merges consecutive non-tensor and tensor layers vertically; (b) branch slimming, which merges non-tensor and tensor branches horizontally. The proposed optimization operations significantly accelerate model execution and also greatly reduce the run-time memory cost, since the slimmed model architecture contains fewer hidden layers. To maximally avoid accuracy loss, the parameters in newly generated layers are learned with layer-wise fine-tuning based on both theoretical analysis and empirical verification. As observed in the experiments, DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on GoogLeNet with only a 0.4% drop in top-5 accuracy on ImageNet. Furthermore, by combining with other model compression techniques, DeepRebirth offers an average inference time of 65ms on the CPU of a Samsung Galaxy S6 with 86.5% top-5 accuracy, 14% faster than SqueezeNet, which only has a top-5 accuracy of 80.5%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 79,008
2011.00766 | I Know What You Asked: Graph Path Learning using AMR for Commonsense
Reasoning | CommonsenseQA is a task in which a correct answer is predicted through commonsense reasoning with pre-defined knowledge. Most previous works have aimed to improve the performance with distributed representations without considering the process of predicting the answer from the semantic representation of the question. To shed light upon the semantic interpretation of the question, we propose an AMR-ConceptNet-Pruned (ACP) graph. The ACP graph is pruned from a fully integrated graph encompassing the Abstract Meaning Representation (AMR) graph generated from input questions and an external commonsense knowledge graph, ConceptNet (CN). The ACP graph is then exploited to interpret the reasoning path as well as to predict the correct answer on the CommonsenseQA task. This paper presents the manner in which the commonsense reasoning process can be interpreted with the relations and concepts provided by the ACP graph. Moreover, ACP-based models are shown to outperform the baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 204,358
2409.10578 | GLEAN: Generative Learning for Eliminating Adversarial Noise | In the age of powerful diffusion models such as DALL-E and Stable Diffusion, many in the digital art community have suffered style mimicry attacks due to the fine-tuning of these models on their works. The ability to mimic an artist's style via text-to-image diffusion models raises serious ethical issues, especially without explicit consent. Glaze, a tool that applies various ranges of perturbations to digital art, has shown significant success in preventing style mimicry attacks, at the cost of artifacts ranging from imperceptible noise to severe quality degradation. The release of Glaze has sparked further discussions regarding the effectiveness of similar protection methods. In this paper, we propose GLEAN: applying image-to-image (I2I) generative networks to strip perturbations from Glazed images, and evaluate the performance of style mimicry attacks before and after GLEAN on the results of Glaze. GLEAN aims to support and enhance Glaze by highlighting its limitations and encouraging further development. | false | false | false | false | true | false | true | false | false | false | false | true | true | false | false | false | false | false | 488,807
2205.10077 | Error Probability Bounds for Coded-Index DNA Storage | The DNA storage channel is considered, in which a codeword is comprised of $M$ unordered DNA molecules. At reading time, $N$ molecules are sampled with replacement, and then each molecule is sequenced. A coded-index concatenated-coding scheme is considered, in which the $m$th molecule of the codeword is restricted to a subset of all possible molecules (an inner code), which is unique for each $m$. The decoder has low complexity and is based on first decoding each molecule separately (the inner code), and then decoding the sequence of molecules (an outer code). Only mild assumptions are made on the sequencing channel, in the form of the existence of an inner code and decoder with vanishing error. The error probability of a random code as well as an expurgated code is analyzed and shown to decay exponentially with $N$. This establishes the importance of increasing the coverage depth $N/M$ in order to obtain low error probability. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 297,547
2411.02943 | Capturing research literature attitude towards Sustainable Development
Goals: an LLM-based topic modeling approach | The world is facing a multitude of challenges that hinder the development of human civilization and the well-being of humanity on the planet. The Sustainable Development Goals (SDGs) were formulated by the United Nations in 2015 to address these global challenges by 2030. Natural language processing techniques can help uncover discussions on SDGs within research literature. We propose a completely automated pipeline to 1) fetch content from the Scopus database and prepare datasets dedicated to five groups of SDGs; 2) perform topic modeling, a statistical technique used to identify topics in large collections of textual data; and 3) enable topic exploration through keyword-based search and topic frequency time series extraction. For topic modeling, we leverage the stack of BERTopic scaled up to be applied on large corpora of textual documents (we find hundreds of topics on hundreds of thousands of documents), introducing i) a novel LLM-based embedding computation for representing scientific abstracts in the continuous space and ii) a hyperparameter optimizer to efficiently find the best configuration for any new big dataset. We additionally produce visualizations of the results on interactive dashboards reporting the topics' temporal evolution. Results are made inspectable and explorable, contributing to the interpretability of the topic modeling process. Our proposed LLM-based topic modeling pipeline for big-text datasets allows users to capture insights on the evolution of the attitude toward SDGs within scientific abstracts in the 2006-2023 time span. All the results are reproducible by using our system; the workflow can be generalized to be applied at any point in time to any big corpus of textual documents. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 505,730
2408.00973 | META-ANOVA: Screening interactions for interpretable machine learning | There are two things to be considered when we evaluate predictive models. One is prediction accuracy, and the other is interpretability. Over recent decades, many prediction models of high performance, such as ensemble-based models and deep neural networks, have been developed. However, these models are often too complex, making it difficult to intuitively interpret their predictions. This complexity in interpretation limits their use in many real-world fields that require accountability, such as medicine, finance, and college admissions. In this study, we develop a novel method called Meta-ANOVA to provide an interpretable model for any given prediction model. The basic idea of Meta-ANOVA is to transform a given black-box prediction model into the functional ANOVA model. A novel technical contribution of Meta-ANOVA is a procedure for screening out unnecessary interactions before transforming a given black-box model into the functional ANOVA model. This screening procedure allows the inclusion of higher-order interactions in the transformed functional ANOVA model without computational difficulties. We prove that the screening procedure is asymptotically consistent. Through various experiments with synthetic and real-world datasets, we empirically demonstrate the superiority of Meta-ANOVA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 478,053
2405.16368 | Qsco: A Quantum Scoring Module for Open-set Supervised Anomaly Detection | Open set anomaly detection (OSAD) is a crucial task that aims to identify abnormal patterns or behaviors in data sets, especially when the anomalies observed during training do not represent all possible classes of anomalies. The recent advances in quantum computing in handling complex data structures and improving machine learning models herald a paradigm shift in anomaly detection methodologies. This study proposes a Quantum Scoring Module (Qsco), embedding quantum variational circuits into neural networks to enhance the model's processing capabilities in handling uncertainty and unlabeled data. Extensive experiments conducted across eight real-world anomaly detection datasets demonstrate our model's superior performance in detecting anomalies across varied settings and reveal that integrating quantum simulators does not result in prohibitive time complexities. Our study validates the feasibility of quantum-enhanced anomaly detection methods in practical applications. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,381 |
1711.00956 | Running Time Analysis of the (1+1)-EA for OneMax and LeadingOnes under
Bit-wise Noise | In many real-world optimization problems, the objective function evaluation is subject to noise, and we cannot obtain the exact objective value. Evolutionary algorithms (EAs), a type of general-purpose randomized optimization algorithm, have been shown to be able to solve noisy optimization problems well. However, previous theoretical analyses of EAs mainly focused on noise-free optimization, which makes the theoretical understanding largely insufficient for the noisy case. Meanwhile, the few existing theoretical studies under noise often considered the one-bit noise model, which flips a randomly chosen bit of a solution before evaluation; while in many realistic applications, several bits of a solution can be changed simultaneously. In this paper, we study a natural extension of one-bit noise, the bit-wise noise model, which independently flips each bit of a solution with some probability. We analyze the running time of the (1+1)-EA solving OneMax and LeadingOnes under bit-wise noise for the first time, and derive the ranges of the noise level for polynomial and super-polynomial running time bounds. The analysis on LeadingOnes under bit-wise noise can be easily transferred to one-bit noise, and improves the previously known results. Since our analysis discloses that the (1+1)-EA can be efficient only under low noise levels, we also study whether the sampling strategy can bring robustness to noise. We prove that using sampling can significantly increase the largest noise level allowing a polynomial running time, that is, sampling is robust to noise. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | true | 83,806 |
2109.03389 | An Optimal Resource Allocator of Elastic Training for Deep Learning Jobs
on Cloud | Cloud training platforms, such as Amazon Web Services and Huawei Cloud, provide users with computational resources to train their deep learning jobs. Elastic training is a service embedded in cloud training platforms that dynamically scales up or down the resources allocated to a job. The core technique of an elastic training system is to best allocate limited resources among heterogeneous jobs in terms of shorter queueing delay and higher training efficiency. This paper presents an optimal resource allocator for an elastic training system that leverages a mixed-integer programming (MIP) model to maximize the training progress of deep learning jobs. We take advantage of the real-world job data obtained from ModelArts, the deep learning training platform of Huawei Cloud, and conduct simulation experiments to compare the optimal resource allocator with a greedy one as the benchmark. Numerical results show that the proposed allocator can reduce queueing time by up to 32% and accelerate training efficiency by up to 24% relative to the greedy resource allocator, thereby greatly improving the user experience with Huawei ModelArts and potentially enabling the realization of higher profits for the product. Also, the optimal resource allocator is fast in decision-making, taking merely 0.4 seconds on average. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 254,046
1710.03953 | One-step Estimation of Networked Population Size: Respondent-Driven
Capture-Recapture with Anonymity | Population size estimates for hidden and hard-to-reach populations are particularly important when members are known to suffer from disproportionate health issues or to pose health risks to the larger ambient population in which they are embedded. Efforts to derive size estimates are often frustrated by a range of factors that preclude conventional survey strategies, including social stigma associated with group membership or members' involvement in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, to be used in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. Provably sufficient conditions for the consistency of these estimators (in large configuration networks) are given. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which are seen to perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population estimates can be derived from anonymous respondent-driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. Limitations and future work are discussed in the concluding section. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 82,396
2407.06152 | Uni-ELF: A Multi-Level Representation Learning Framework for Electrolyte
Formulation Design | Advancements in lithium battery technology heavily rely on the design and engineering of electrolytes. However, current schemes for molecular design and recipe optimization of electrolytes lack an effective computational-experimental closed loop and often fall short in accurately predicting diverse electrolyte formulation properties. In this work, we introduce Uni-ELF, a novel multi-level representation learning framework to advance electrolyte design. Our approach involves two-stage pretraining: reconstructing three-dimensional molecular structures at the molecular level using the Uni-Mol model, and predicting statistical structural properties (e.g., radial distribution functions) from molecular dynamics simulations at the mixture level. Through this comprehensive pretraining, Uni-ELF is able to capture intricate molecular and mixture-level information, which significantly enhances its predictive capability. As a result, Uni-ELF substantially outperforms state-of-the-art methods in predicting both molecular properties (e.g., melting point, boiling point, synthesizability) and formulation properties (e.g., conductivity, Coulombic efficiency). Moreover, Uni-ELF can be seamlessly integrated into an automatic experimental design workflow. We believe this innovative framework will pave the way for automated AI-based electrolyte design and engineering. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 471,282 |
2010.08580 | Linguistically-Informed Transformations (LIT): A Method for
Automatically Generating Contrast Sets | Although large-scale pretrained language models, such as BERT and RoBERTa, have achieved superhuman performance on in-distribution test sets, their performance suffers on out-of-distribution test sets (e.g., on contrast sets). Building contrast sets often requires human-expert annotation, which is expensive and hard to create on a large scale. In this work, we propose a Linguistically-Informed Transformation (LIT) method to automatically generate contrast sets, which enables practitioners to explore linguistic phenomena of interest as well as compose different phenomena. Experimenting with our method on SNLI and MNLI shows that current pretrained language models, although claimed to contain sufficient linguistic knowledge, struggle on our automatically generated contrast sets. Furthermore, we improve models' performance on the contrast sets by applying LIT to augment the training data, without affecting performance on the original data. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 201,218
2303.12295 | Chance Constrained Stochastic Optimal Control for Arbitrarily Disturbed
LTI Systems Via the One-Sided Vysochanskij-Petunin Inequality | While many techniques have been developed for chance constrained stochastic optimal control with Gaussian disturbance processes, far less is known about computationally efficient methods to handle non-Gaussian processes. In this paper, we develop a method for solving chance constrained stochastic optimal control problems for linear time-invariant systems with general additive disturbances with finite moments and unimodal chance constraints. We propose an open-loop control scheme for multi-vehicle planning, with both target sets and collision avoidance constraints. Our method relies on the one-sided Vysochanskij-Petunin inequality, a tool from statistics used to bound tail probabilities of unimodal random variables. Using the one-sided Vysochanskij-Petunin inequality, we reformulate each chance constraint in terms of the expectation and standard deviation. While the reformulated bounds are conservative with respect to the original bounds, they have a simple and closed form, and are amenable to difference of convex optimization techniques. We demonstrate our approach on a multi-satellite rendezvous problem. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 353,207 |