Dataset schema (column, type, observed range):

id: string, length 9 to 16
title: string, length 4 to 278
abstract: string, length 3 to 4.08k
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (per-category label flags)
__index_level_0__: int64, 0 to 541k
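Each record carries one boolean flag per category column, so its label set is simply the list of columns that are true; the labels field shown with each record below is exactly this decoding. A minimal sketch of the decoding, assuming the table is loaded as a pandas DataFrame (the file name records.parquet is hypothetical):

```python
import pandas as pd

# Category columns in the schema order given above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

df = pd.read_parquet("records.parquet")  # hypothetical source file

# Collect the names of all category columns that are True for each row.
df["labels"] = df[CATEGORY_COLUMNS].apply(
    lambda row: [c for c in CATEGORY_COLUMNS if row[c]], axis=1
)

print(df[["id", "title", "labels"]].head())
```
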
id: 1708.08136
title: An Ensemble Framework for Detecting Community Changes in Dynamic Networks
abstract:
Dynamic networks, especially those representing social networks, undergo constant evolution of their community structure over time. Nodes can migrate between communities, communities can split into multiple new communities, communities can merge, etc. To represent dynamic networks with evolving communities, it is essential to use a dynamic model rather than a static one. Here we use a dynamic stochastic block model in which the underlying block model differs across time. To represent the structural changes expressed by this dynamic model, the network is split into discrete time segments and a clustering algorithm assigns block memberships for each segment. In this paper we show that using an ensemble of clustering assignments accommodates the variance of scalable clustering algorithms and produces superior results in terms of pairwise precision and pairwise recall. We also demonstrate that the dynamic clustering produced by the ensemble can be visualized as a flowchart that encapsulates the community evolution succinctly.
labels: cs.SI, cs.LG
__index_level_0__: 79,597

id: 2502.08985
title: Few is More: Task-Efficient Skill-Discovery for Multi-Task Offline Multi-Agent Reinforcement Learning
abstract:
As a data-driven approach, offline MARL learns superior policies solely from offline datasets, making it ideal for domains rich in historical data but with high interaction costs and risks. However, most existing methods are task-specific, requiring retraining for new tasks and leading to redundancy and inefficiency. To address this issue, in this paper, we propose a task-efficient multi-task offline MARL algorithm, Skill-Discovery Conservative Q-Learning (SD-CQL). Unlike existing offline skill-discovery methods, SD-CQL discovers skills by reconstructing the next observation. It then evaluates fixed and variable actions separately and employs behavior-regularized conservative Q-learning to execute the optimal action for each skill. This approach eliminates the need for local-global alignment and enables strong multi-task generalization from a limited number of small-scale source tasks. Substantial experiments on StarCraftII demonstrate the superior generalization performance and task efficiency of SD-CQL. It achieves the best performance on $\textbf{10}$ out of $14$ task sets, with up to $\textbf{65\%}$ improvement on individual task sets, and is within $4\%$ of the best baseline on the remaining four.
labels: cs.AI, cs.LG, cs.MA
__index_level_0__: 533,255

id: 1807.11470
title: Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis
abstract:
Generating versatile and appropriate synthetic speech requires control over the output expression separate from the spoken text. Important non-textual speech variation is seldom annotated, in which case output control must be learned in an unsupervised fashion. In this paper, we perform an in-depth study of methods for unsupervised learning of control in statistical speech synthesis. For example, we show that popular unsupervised training heuristics can be interpreted as variational inference in certain autoencoder models. We additionally connect these models to VQ-VAEs, another, recently-proposed class of deep variational autoencoders, which we show can be derived from a very similar mathematical argument. The implications of these new probabilistic interpretations are discussed. We illustrate the utility of the various approaches with an application to acoustic modelling for emotional speech synthesis, where the unsupervised methods for learning expression control (without access to emotional labels) are found to give results that in many aspects match or surpass the previous best supervised approach.
labels: cs.SD, cs.LG
__index_level_0__: 104,184

id: 2412.04057
title: From Code to Play: Benchmarking Program Search for Games Using Large Language Models
abstract:
Large language models (LLMs) have shown impressive capabilities in generating program code, opening exciting opportunities for applying program synthesis to games. In this work, we explore the potential of LLMs to directly synthesize usable code for a wide range of gaming applications, focusing on two programming languages, Python and Java. We use an evolutionary hill-climbing algorithm, where the mutations and seeds of the initial programs are controlled by LLMs. For Python, the framework covers various game-related tasks, including five miniature versions of Atari games, ten levels of Baba is You, an environment inspired by Asteroids, and a maze generation task. For Java, the framework contains 12 games from the TAG tabletop games framework. Across 29 tasks, we evaluated 12 language models for Python and 8 for Java. Our findings suggest that the performance of LLMs depends more on the task than on model size. While larger models generate more executable programs, these do not always result in higher-quality solutions but are much more expensive. No model has a clear advantage, although on any specific task, one model may be better. Trying many models on a problem and using the best results across them is more reliable than using just one.
labels: cs.AI
__index_level_0__: 514,236

id: 2109.01275
title: A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
abstract:
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan. AdvTrojan is stealthy because it can be activated only when: 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor is implanted during the training process of the model. We leverage adversarial noise in the input space to move Trojan-infected examples across the model decision boundary, making the attack difficult to detect. This stealthy behavior fools users into mistakenly trusting the infected model as a robust classifier against adversarial examples. AdvTrojan can be implemented by only poisoning the training data, as in conventional Trojan backdoor attacks. Our thorough analysis and extensive experiments on several benchmark datasets show that AdvTrojan can bypass existing defenses with a success rate close to 100% in most of our experimental scenarios, and can be extended to attack federated learning tasks as well.
labels: cs.LG, cs.CR
__index_level_0__: 253,386

id: 2305.04059
title: Decentralised Semi-supervised Onboard Learning for Scene Classification in Low-Earth Orbit
abstract:
Onboard machine learning on the latest satellite hardware offers the potential for significant savings in communication and operational costs. We showcase the training of a machine learning model on a satellite constellation for scene classification using semi-supervised learning while accounting for operational constraints such as temperature and limited power budgets based on satellite processor benchmarks of the neural network. We evaluate mission scenarios employing both decentralised and federated learning approaches. All scenarios achieve convergence to high accuracy (around 91% on EuroSAT RGB dataset) within a one-day mission timeframe.
labels: cs.LG, cs.MA, Other
__index_level_0__: 362,616

id: 2106.12931
title: Spatial-Temporal Graph ODE Networks for Traffic Flow Forecasting
abstract:
Spatial-temporal forecasting has attracted tremendous attention in a wide range of applications, and traffic flow prediction is a canonical example. The complex and long-range spatial-temporal correlations of traffic flow make it a most intractable challenge. Existing works typically utilize shallow graph convolution networks (GNNs) and temporal extraction modules to model spatial and temporal dependencies respectively. However, the representation ability of such models is limited because: (1) shallow GNNs are incapable of capturing long-range spatial correlations, and (2) only spatial connections are considered, while a mass of semantic connections, which are of great importance for a comprehensive understanding of traffic networks, are ignored. To this end, we propose Spatial-Temporal Graph Ordinary Differential Equation Networks (STGODE). Specifically, we capture spatial-temporal dynamics through a tensor-based ordinary differential equation (ODE); as a result, deeper networks can be constructed and spatial-temporal features are utilized synchronously. To understand the network more comprehensively, a semantic adjacency matrix is considered in our model, and a well-designed temporal dilated convolution structure is used to capture long-term temporal dependencies. We evaluate our model on multiple real-world traffic datasets, and superior performance is achieved over state-of-the-art baselines.
labels: cs.LG
__index_level_0__: 242,894

id: 2104.08388
title: Neural String Edit Distance
abstract:
We propose the neural string edit distance model for string-pair matching and string transduction based on learnable string edit distance. We modify the original expectation-maximization learned edit distance algorithm into a differentiable loss function, allowing us to integrate it into a neural network providing a contextual representation of the input. We evaluate on cognate detection, transliteration, and grapheme-to-phoneme conversion, and show that we can trade off between performance and interpretability in a single framework. Using contextual representations, which are difficult to interpret, we match the performance of state-of-the-art string-pair matching models. Using static embeddings and a slightly different loss function, we force interpretability, at the expense of an accuracy drop.
labels: cs.LG, cs.CL
__index_level_0__: 230,772

id: 2501.00744
title: A Distributional Evaluation of Generative Image Models
abstract:
Generative models are ubiquitous in modern artificial intelligence (AI) applications. Recent advances have led to a variety of generative modeling approaches that are capable of synthesizing highly realistic samples. Despite these developments, evaluating the distributional match between the synthetic samples and the target distribution in a statistically principled way remains a core challenge. We focus on evaluating image generative models, where studies often treat human evaluation as the gold standard. Commonly adopted metrics, such as the Fr\'echet Inception Distance (FID), do not sufficiently capture the differences between the learned and target distributions, because the assumption of normality ignores differences in the tails. We propose the Embedded Characteristic Score (ECS), a comprehensive metric for evaluating the distributional match between the learned and target sample distributions, and explore its connection with moments and tail behavior. We derive natural properties of ECS and show its practical use via simulations and an empirical study.
labels: cs.LG
__index_level_0__: 521,782

id: 1304.8132
title: Local Graph Clustering Beyond Cheeger's Inequality
abstract:
Motivated by applications of large-scale graph clustering, we study random-walk-based LOCAL algorithms whose running times depend only on the size of the output cluster, rather than the entire graph. All previously known such algorithms guarantee an output conductance of $\tilde{O}(\sqrt{\phi(A)})$ when the target set $A$ has conductance $\phi(A)\in[0,1]$. In this paper, we improve it to $$\tilde{O}\bigg( \min\Big\{\sqrt{\phi(A)}, \frac{\phi(A)}{\sqrt{\mathsf{Conn}(A)}} \Big\} \bigg)\enspace, $$ where the internal connectivity parameter $\mathsf{Conn}(A) \in [0,1]$ is defined as the reciprocal of the mixing time of the random walk over the induced subgraph on $A$. For instance, using $\mathsf{Conn}(A) = \Omega(\lambda(A) / \log n)$ where $\lambda$ is the second eigenvalue of the Laplacian of the induced subgraph on $A$, our conductance guarantee can be as good as $\tilde{O}(\phi(A)/\sqrt{\lambda(A)})$. This builds an interesting connection to the recent advance of the so-called improved Cheeger's Inequality [KKL+13], which says that global spectral algorithms can provide a conductance guarantee of $O(\phi_{\mathsf{opt}}/\sqrt{\lambda_3})$ instead of $O(\sqrt{\phi_{\mathsf{opt}}})$. In addition, we provide a theoretical guarantee on the clustering accuracy (in terms of precision and recall) of the output set. We also prove that our analysis is tight, and perform empirical evaluation to support our theory on both synthetic and real data. It is worth noting that our analysis outperforms prior work when the cluster is well connected; in fact, the better connected the cluster is internally, the more significant the improvement (in terms of both conductance and accuracy) we obtain. Our results shed light on why, in practice, some random-walk-based algorithms perform better than their previous theory suggests, and help guide future research on local clustering.
labels: cs.LG, Other
__index_level_0__: 24,317

id: 2410.06456
title: From Generalist to Specialist: Adapting Vision Language Models via Task-Specific Visual Instruction Tuning
abstract:
Large vision language models (VLMs) combine large language models with vision encoders, demonstrating promise across various tasks. However, they often underperform in task-specific applications due to domain gaps between pre-training and fine-tuning. We introduce VITask, a novel framework that enhances task-specific adaptability of VLMs by integrating task-specific models (TSMs). VITask employs three key strategies: exemplar prompting (EP), response distribution alignment (RDA), and contrastive response tuning (CRT) to improve the task-specific performance of VLMs by adjusting their response distributions. EP allows TSM features to guide VLMs, while RDA enables VLMs to adapt without TSMs during inference by learning from exemplar-prompted models. CRT further optimizes the ranking of correct image-response pairs, thereby reducing the risk of generating undesired responses. Experiments on 12 medical diagnosis datasets across 9 imaging modalities show that VITask outperforms both vanilla instruction-tuned VLMs and TSMs, showcasing its ability to integrate complementary features from both models effectively. Additionally, VITask offers practical advantages such as flexible TSM integration and robustness to incomplete instructions, making it a versatile and efficient solution for task-specific VLM tuning. Our code is available at https://github.com/baiyang4/VITask.
labels: cs.CV
__index_level_0__: 496,207

id: 2201.12342
title: Error-Correcting Neural Networks for Two-Dimensional Curvature Computation in the Level-Set Method
abstract:
We present an error-neural-modeling-based strategy for approximating two-dimensional curvature in the level-set method. Our main contribution is a redesigned hybrid solver [Larios-C\'ardenas and Gibou, J. Comput. Phys. (May 2022), 10.1016/j.jcp.2022.111291] that relies on numerical schemes to enable machine-learning operations on demand. In particular, our routine features double predicting to harness curvature symmetry invariance in favor of precision and stability. The core of this solver is a multilayer perceptron trained on circular- and sinusoidal-interface samples. Its role is to quantify the error in numerical curvature approximations and emit corrected estimates for select grid vertices along the free boundary. These corrections arise in response to preprocessed context level-set, curvature, and gradient data. To promote neural capacity, we have adopted sample negative-curvature normalization, reorientation, and reflection-based augmentation. In the same manner, our system incorporates dimensionality reduction, well-balancedness, and regularization to minimize outlying effects. Our training approach is likewise scalable across mesh sizes. For this purpose, we have introduced dimensionless parametrization and probabilistic subsampling during data production. Together, all these elements have improved the accuracy and efficiency of curvature calculations around under-resolved regions. In most experiments, our strategy has outperformed the numerical baseline at twice the number of redistancing steps while requiring only a fraction of the cost.
labels: cs.LG, Other
__index_level_0__: 277,608

id: 2201.04462
title: Chaos and order in event-triggered control
abstract:
Event-triggered control (ETC) is claimed to provide enormous reductions in sampling frequency when compared to periodic sampling, but little is formally known about its generated traffic. This work shows that ETC can exhibit very complex, even chaotic traffic, especially when the triggering condition is aggressive in reducing communications. First, we characterize limit traffic patterns by observing invariant lines and planes through the origin, as well as their attractivity. Then, we present abstraction-based methods to compute limit metrics, such as limit average and limit inferior inter-sample time (IST) of periodic ETC (PETC), with considerations to the robustness of such metrics, as well as measuring the emergence of chaos. The methodology and tools allow us to find ETC examples that provably outperform periodic sampling in terms of average IST. In particular for PETC, we prove that this requires aperiodic or chaotic traffic.
labels: cs.SY
__index_level_0__: 275,110

id: 2302.03793
title: Self-Supervised Unseen Object Instance Segmentation via Long-Term Robot Interaction
abstract:
We introduce a novel robotic system for improving unseen object instance segmentation in the real world by leveraging long-term robot interaction with objects. Previous approaches either grasp or push an object and then obtain the segmentation mask of the grasped or pushed object after one action. Instead, our system defers the decision on segmenting objects after a sequence of robot pushing actions. By applying multi-object tracking and video object segmentation on the images collected via robot pushing, our system can generate segmentation masks of all the objects in these images in a self-supervised way. These include images where objects are very close to each other, and segmentation errors usually occur on these images for existing object segmentation networks. We demonstrate the usefulness of our system by fine-tuning segmentation networks trained on synthetic data with real-world data collected by our system. We show that, after fine-tuning, the segmentation accuracy of the networks is significantly improved both in the same domain and across different domains. In addition, we verify that the fine-tuned networks improve top-down robotic grasping of unseen objects in the real world.
labels: cs.LG, cs.RO, cs.CV
__index_level_0__: 344,470

id: 2305.13226
title: Sequential Transfer Learning to Decode Heard and Imagined Timbre from fMRI Data
abstract:
We present a sequential transfer learning framework for transformers on functional Magnetic Resonance Imaging (fMRI) data and demonstrate its significant benefits for decoding musical timbre. In the first of two phases, we pre-train our stacked-encoder transformer architecture on Next Thought Prediction, a self-supervised task of predicting whether or not one sequence of fMRI data follows another. This phase imparts a general understanding of the temporal and spatial dynamics of neural activity, and can be applied to any fMRI dataset. In the second phase, we fine-tune the pre-trained models and train additional fresh models on the supervised task of predicting whether or not two sequences of fMRI data were recorded while listening to the same musical timbre. The fine-tuned models achieve significantly higher accuracy with shorter training times than the fresh models, demonstrating the efficacy of our framework for facilitating transfer learning on fMRI data. Additionally, our fine-tuning task achieves a level of classification granularity beyond standard methods. This work contributes to the growing literature on transformer architectures for sequential transfer learning on fMRI data, and provides evidence that our framework is an improvement over current methods for decoding timbre.
labels: cs.LG
__index_level_0__: 366,407

id: 1902.09381
title: EAT: a simple and versatile semantic representation format for multi-purpose NLP
abstract:
Semantic representations are central in many NLP tasks that require human-interpretable data. The conjunctivist framework - primarily developed by Pietroski (2005, 2018) - obtains expressive representations with only a few basic semantic types and relations systematically linked to syntactic positions. While representational simplicity is crucial for computational applications, such findings have not yet had major influence on NLP. We present the first generic semantic representation format for NLP directly based on these insights. We name the format EAT due to its basis in the Event-, Agent-, and Theme arguments in Neo-Davidsonian logical forms. It builds on the idea that similar tripartite argument relations are ubiquitous across categories, and can be constructed from grammatical structure without additional lexical information. We present a detailed exposition of EAT and how it relates to other prevalent formats used in prior work, such as Abstract Meaning Representation (AMR) and Minimal Recursion Semantics (MRS). EAT stands out in two respects: simplicity and versatility. Uniquely, EAT discards semantic metapredicates, and instead represents semantic roles entirely via positional encoding. This is made possible by limiting the number of roles to only three; a major decrease from the many dozens recognized in e.g. AMR and MRS. EAT's simplicity makes it exceptionally versatile in application. First, we show that drastically reducing semantic roles based on EAT benefits text generation from MRS in the test settings of Hajdik et al. (2019). Second, we implement the derivation of EAT from a syntactic parse, and apply this for parallel corpus generation between grammatical classes. Third, we train an encoder-decoder LSTM network to map EAT to English. Finally, we use both the encoder-decoder network and a rule-based alternative to conduct grammatical transformation from EAT-input.
labels: cs.CL
__index_level_0__: 122,403

id: 2501.18802
title: Agile and Cooperative Aerial Manipulation of a Cable-Suspended Load
abstract:
Quadrotors can carry slung loads to hard-to-reach locations at high speed. Since a single quadrotor has limited payload capacities, using a team of quadrotors to collaboratively manipulate a heavy object is a scalable and promising solution. However, existing control algorithms for multi-lifting systems only enable low-speed and low-acceleration operations due to the complex dynamic coupling between quadrotors and the load, limiting their use in time-critical missions such as search and rescue. In this work, we present a solution to significantly enhance the agility of cable-suspended multi-lifting systems. Unlike traditional cascaded solutions, we introduce a trajectory-based framework that solves the whole-body kinodynamic motion planning problem online, accounting for the dynamic coupling effects and constraints between the quadrotors and the load. The planned trajectory is provided to the quadrotors as a reference in a receding-horizon fashion and is tracked by an onboard controller that observes and compensates for the cable tension. Real-world experiments demonstrate that our framework can achieve at least eight times greater acceleration than state-of-the-art methods to follow agile trajectories. Our method can even perform complex maneuvers such as flying through narrow passages at high speed. Additionally, it exhibits high robustness against load uncertainties and does not require adding any sensors to the load, demonstrating strong practicality.
labels: cs.RO, cs.SY
__index_level_0__: 528,864

id: 2007.08668
title: BRP-NAS: Prediction-based NAS using GCNs
abstract:
Neural architecture search (NAS) enables researchers to automatically explore broad design spaces in order to improve the efficiency of neural networks. This efficiency is especially important for on-device deployment, where improvements in accuracy must be balanced against the computational demands of a model. In practice, the performance metrics of a model are computationally expensive to obtain. Previous work uses a proxy (e.g., number of operations) or layer-wise measurements of neural network layers to estimate end-to-end hardware performance, but the imprecise prediction diminishes the quality of NAS. To address this problem, we propose BRP-NAS, an efficient hardware-aware NAS enabled by an accurate performance predictor based on a graph convolutional network (GCN). What is more, we investigate prediction quality on different metrics and show that the sample efficiency of predictor-based NAS can be improved by considering binary relations of models and an iterative data selection strategy. We show that our proposed method outperforms all prior methods on NAS-Bench-101 and NAS-Bench-201, and that our predictor can consistently learn to extract useful features from the DARTS search space, improving upon the second-order baseline. Finally, to raise awareness of the fact that accurate latency estimation is not a trivial task, we release LatBench -- a latency dataset of NAS-Bench-201 models running on a broad range of devices.
labels: cs.LG
__index_level_0__: 187,702

id: 2112.12084
title: Input-Specific Robustness Certification for Randomized Smoothing
abstract:
Although randomized smoothing has demonstrated high certified robustness and superior scalability to other certified defenses, the high computational overhead of the robustness certification bottlenecks the practical applicability, as it depends heavily on the large sample approximation for estimating the confidence interval. In existing works, the sample size for the confidence interval is universally set and agnostic to the input for prediction. This Input-Agnostic Sampling (IAS) scheme may yield a poor Average Certified Radius (ACR)-runtime trade-off which calls for improvement. In this paper, we propose Input-Specific Sampling (ISS) acceleration to achieve the cost-effectiveness for robustness certification, in an adaptive way of reducing the sampling size based on the input characteristic. Furthermore, our method universally controls the certified radius decline from the ISS sample size reduction. The empirical results on CIFAR-10 and ImageNet show that ISS can speed up the certification by more than three times at a limited cost of 0.05 certified radius. Meanwhile, ISS surpasses IAS on the average certified radius across the extensive hyperparameter settings. Specifically, ISS achieves ACR=0.958 on ImageNet ($\sigma=1.0$) in 250 minutes, compared to ACR=0.917 by IAS under the same condition. We release our code in \url{https://github.com/roy-ch/Input-Specific-Certification}.
labels: cs.CV
__index_level_0__: 272,878

id: 1804.01351
title: Attack vulnerability of power systems under an equal load redistribution model
abstract:
This paper studies the vulnerability of flow networks against adversarial attacks. In particular, consider a power system (or, any system carrying a physical flow) consisting of $N$ transmission lines with initial loads $L_1, \ldots , L_N$ and capacities $C_1, \ldots, C_N$, respectively; the capacity $C_i$ defines the maximum flow allowed on line $i$. Under an equal load redistribution model, where load of failed lines is redistributed equally among all remaining lines, we study the {\em optimization} problem of finding the best $k$ lines to attack so as to minimize the number of {\em alive} lines at the steady-state (i.e., when cascades stop). This is done to reveal the worst-case attack vulnerability of the system as well as to reveal its most vulnerable lines. We derive optimal attack strategies in several special cases of load-capacity distributions that are practically relevant. We then consider a modified optimization problem where the adversary is also constrained by the {\em total} load (in addition to the number) of the initial attack set, and prove that this problem is NP-Hard. Finally, we develop heuristic algorithms for selecting the attack set for both the original and modified problems. Through extensive simulations, we show that these heuristics outperform benchmark algorithms under a wide range of settings.
labels: cs.SY
__index_level_0__: 94,212

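The equal-load-redistribution cascade described in this abstract is simple to state procedurally; below is a minimal sketch of the steady-state computation, not the paper's code, with made-up loads, capacities, and attack set:

```python
# Minimal sketch of the equal-load-redistribution cascade: when lines fail,
# their total load is split equally among all surviving lines; a line fails
# when its load exceeds its capacity. Repeats until no further failures.
def cascade_survivors(loads, capacities, attacked):
    alive = {i for i in range(len(loads)) if i not in attacked}
    load = dict(enumerate(loads))
    shed = sum(load[i] for i in attacked)  # load released by the attack
    while alive and shed > 0:
        share = shed / len(alive)          # equal redistribution
        for i in alive:
            load[i] += share
        failed = {i for i in alive if load[i] > capacities[i]}
        alive -= failed
        shed = sum(load[i] for i in failed)
    return alive

# Example: attacking line 0 topples lines 2 and then 1; line 3 survives.
print(cascade_survivors([3.0, 2.0, 1.0, 1.0], [3.5, 3.2, 1.8, 9.0], {0}))
```
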
id: 2010.09892
title: Understanding YouTube Communities via Subscription-based Channel Embeddings
abstract:
YouTube is an important source of news and entertainment worldwide, but its scale makes it challenging to study the ideas and topics being discussed on the platform. This paper presents new methods to discover and classify YouTube channels which enable the analysis of communities and categories on the platform using orders of magnitude more channels than have been used in previous studies. Instead of using channel and video data as features for classification as other researchers have, these methods use a self-supervised learning approach that leverages the public subscription pages of commenters. We test the classification method on the task of predicting the political lean of YouTube news channels and find that it outperforms the previous best model on the task. Further experiments also show that there are important advantages to using commenter subscriptions to discover channels. The subscription data, along with an iterative approach, is applied to discover, to our current understanding, the most comprehensive set of English-language socio-political YouTube channels yet to be analyzed. We experiment with predicting more fine-grained political tags for channels using a previously annotated dataset and find that our model performs better than the average individual human reviewer for most of the top tags. This fine-grained political tag model is then applied to the newly discovered English-language socio-political channels to create a new dataset to analyze the amount of traffic going to different political content. The data shows that some tags, such as "Partisan Right" and "Conspiracy", are significantly underrepresented when looking only at the most popular socio-political channels. Through the use of our methods, we are able to get a much more accurate picture of the size of these communities on YouTube.
labels: cs.SI, cs.IR, cs.LG
__index_level_0__: 201,691

id: 1805.12518
title: Incremental Natural Language Processing: Challenges, Strategies, and Evaluation
abstract:
Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. It has been a topic of research in different parts of the NLP community, mostly with focus on the specific topic at hand even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show trade-offs that have to be considered. A focus lies on evaluating incremental systems because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial.
labels: cs.CL
__index_level_0__: 99,196

id: 1508.00144
title: Quantitative evaluation of the performance of discrete-time reservoir computers in the forecasting, filtering, and reconstruction of stochastic stationary signals
abstract:
This paper extends the notion of information processing capacity to non-independent input signals in the context of reservoir computing (RC). The presence of input autocorrelation makes it worthwhile to treat forecasting and filtering problems, for which we explicitly compute this generalized capacity as a function of the reservoir parameter values using a streamlined model. The reservoir model leading to these developments is used to show that, whenever that approximation is valid, this computational paradigm satisfies the so-called separation and fading memory properties that are usually associated with good information processing performance. We show that several standard memory, forecasting, and filtering problems that appear in the parametric stochastic time series context can be readily formulated and tackled via RC which, as we show, significantly outperforms standard techniques in some instances.
labels: cs.NE, Other
__index_level_0__: 45,637

id: 2311.08319
title: Resource Efficient Over-the-Air Fronthaul Signaling for Uplink Cell-Free Massive MIMO Systems
abstract:
We propose a novel resource-efficient analog over-the-air (OTA) computation framework to address the demanding requirements of the uplink (UL) fronthaul between the access points (APs) and the central processing unit (CPU) in cell-free massive multiple-input multiple-output (MIMO) systems. We discuss the drawbacks of wired and wireless fronthaul solutions, and show that our proposed mechanism is efficient and scalable as the number of APs increases. We present the transmit precoding and two-phase power assignment strategies at the APs to coherently combine the signals OTA in a spectrally efficient manner. We derive the statistics of the APs' locally available signals, which enable us to obtain analytical expressions for the Bayesian and classical estimators of the OTA combined signals. We empirically evaluate the normalized mean square error (NMSE), symbol error rate (SER), and coded bit error rate (BER) of our developed solution and benchmark it against a state-of-the-art wired-fronthaul-based system.
labels: cs.IT
__index_level_0__: 407,673

id: 2408.00998
title: FBSDiff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation
abstract:
Large-scale text-to-image diffusion models have been a revolutionary milestone in the evolution of generative AI and multimodal technology, allowing wonderful image generation with natural-language text prompts. However, the lack of controllability of such models restricts their practical applicability for real-life content creation. Thus, attention has been focused on leveraging a reference image to control text-to-image synthesis, which is also regarded as manipulating (or editing) a reference image as per a text prompt, namely, text-driven image-to-image translation. This paper contributes a novel, concise, and efficient approach that adapts a pre-trained large-scale text-to-image (T2I) diffusion model to the image-to-image (I2I) paradigm in a plug-and-play manner, realizing high-quality and versatile text-driven I2I translation without any model training, model fine-tuning, or online optimization process. To guide T2I generation with a reference image, we propose to decompose diverse guiding factors with different frequency bands of diffusion features in the DCT spectral space, and accordingly devise a novel frequency band substitution layer which realizes dynamic control of the reference image over the T2I generation result in a plug-and-play manner. We demonstrate that our method allows flexible control over both the guiding factor and guiding intensity of the reference image simply by tuning the type and bandwidth of the substituted frequency band, respectively. Extensive qualitative and quantitative experiments verify the superiority of our approach over related methods in I2I translation visual quality, versatility, and controllability. The code is publicly available at: https://github.com/XiangGao1102/FBSDiff.
labels: cs.AI, cs.CV
__index_level_0__: 478,063

id: 2407.14845
title: Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models
abstract:
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established. Therefore, understanding how LLMs reason and make decisions is crucial for their safe deployment. This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt. Leveraging the insight that LLMs learn to infer latent concepts during pretraining, we propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty. We show that the uncertainty decreases as the prompt's informativeness increases, similar to epistemic uncertainty. Our detailed experimental results on real datasets validate our proposed model.
labels: cs.LG, cs.CL
__index_level_0__: 474,929

id: 2412.10004
title: NeRF-Texture: Synthesizing Neural Radiance Field Textures
abstract:
Texture synthesis is a fundamental problem in computer graphics that would benefit various applications. Existing methods are effective in handling 2D image textures. In contrast, many real-world textures contain meso-structure in the 3D geometry space, such as grass, leaves, and fabrics, which cannot be effectively modeled using only 2D image textures. We propose a novel texture synthesis method with Neural Radiance Fields (NeRF) to capture and synthesize textures from given multi-view images. In the proposed NeRF texture representation, a scene with fine geometric details is disentangled into the meso-structure textures and the underlying base shape. This allows textures with meso-structure to be effectively learned as latent features situated on the base shape, which are fed into a NeRF decoder trained simultaneously to represent the rich view-dependent appearance. Using this implicit representation, we can synthesize NeRF-based textures through patch matching of latent features. However, inconsistencies between the metrics of the reconstructed content space and the latent feature space may compromise the synthesis quality. To enhance matching performance, we further regularize the distribution of latent features by incorporating a clustering constraint. In addition to generating NeRF textures over a planar domain, our method can also synthesize NeRF textures over curved surfaces, which are practically useful. Experimental results and evaluations demonstrate the effectiveness of our approach.
labels: cs.CV, Other
__index_level_0__: 516,750

id: 2303.06334
title: SEM-CS: Semantic CLIPStyler for Text-Based Image Style Transfer
abstract:
CLIPStyler demonstrated image style transfer with realistic textures using only the style text description (instead of requiring a reference style image). However, the ground semantics of objects in style transfer output is lost due to style spillover on salient and background objects (content mismatch) or over-stylization. To solve this, we propose Semantic CLIPStyler (Sem-CS) that performs semantic style transfer. Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description. The semantic style transfer is achieved using global foreground loss (for salient objects) and global background loss (for non-salient objects). Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance.
labels: cs.CV
__index_level_0__: 350,799

id: 2301.08956
title: Using deterministic tourist walk as a small-world metric on Watts-Strogatz networks
abstract:
The Watts-Strogatz model (WS) has been demonstrated to effectively describe real-world networks due to its ability to reproduce the small-world properties commonly observed in a variety of systems, including social networks, computer networks, biochemical reactions, and neural networks. As the presence of small-world properties is a prevalent characteristic in many real-world networks, the measurement of "small-worldness" has become a crucial metric in the field of network science, leading to the development of various methods for its assessment over the past two decades. In contrast, the deterministic tourist walk (DTW) method has emerged as a prominent technique for texture analysis and network classification. In this paper, we propose the use of a modified version of the DTW method to classify networks into three categories: regular networks, random networks, and small-world networks. Additionally, we construct a small-world metric, denoted by the coefficient $\chi$, from the DTW method. Results indicate that the proposed method demonstrates excellent performance in the task of network classification, achieving over $90\%$ accuracy. Furthermore, the results obtained using the coefficient $\chi$ on real-world networks provide evidence that the proposed method effectively serves as a satisfactory small-world metric.
labels: cs.SI
__index_level_0__: 341,353

id: 1811.05863
title: Robust low-rank multilinear tensor approximation for a joint estimation of the multilinear rank and the loading matrices
abstract:
In order to compute the best low-rank tensor approximation using the Multilinear Tensor Decomposition (MTD) model, it is essential to estimate the rank of the underlying multilinear tensor from the noisy observation tensor. In this paper, we propose a Robust MTD (R-MTD) method, which jointly estimates the multilinear rank and the loading matrices. Based on the low-rank property and an over-estimation of the core tensor, this joint estimation problem is solved by promoting (group) sparsity of the over-estimated core tensor. Group sparsity is promoted using mixed-norms. Then we establish a link between the mixed-norms and the nuclear norm, showing that mixed-norms are better candidates for a convex envelope of the rank. After several iterations of the Alternating Direction Method of Multipliers (ADMM), the Minimum Description Length (MDL) criterion computed from the eigenvalues of the unfolding matrices of the estimated core tensor is minimized in order to estimate the multilinear rank. The latter is then used to estimate more accurately the loading matrices. We further develop another R-MTD method, called R-OMTD, by imposing an orthonormality constraint on each loading matrix in order to decrease the computation complexity. A series of simulated noisy tensor and real-world data are used to show the effectiveness of the proposed methods compared with state-of-the-art methods.
labels: cs.CV, Other
__index_level_0__: 113,401

id: 2405.14486
title: RefChecker: Reference-based Fine-grained Hallucination Checker and Benchmark for Large Language Models
abstract:
Large Language Models (LLMs) have shown impressive capabilities but also a concerning tendency to hallucinate. This paper presents RefChecker, a framework that introduces claim-triplets to represent claims in LLM responses, aiming to detect fine-grained hallucinations. In RefChecker, an extractor generates claim-triplets from a response, which are then evaluated by a checker against a reference. We delineate three task settings: Zero, Noisy and Accurate Context, to reflect various real-world use cases. We curated a benchmark spanning various NLP tasks and annotated 11k claim-triplets from 2.1k responses by seven LLMs. RefChecker supports both proprietary and open-source models as the extractor and checker. Experiments demonstrate that claim-triplets enable superior hallucination detection, compared to other granularities such as response, sentence and sub-sentence level claims. RefChecker outperforms prior methods by 6.8 to 26.1 points on our benchmark and the checking results of RefChecker are strongly aligned with human judgments. This work is open sourced at https://github.com/amazon-science/RefChecker
labels: cs.CL
__index_level_0__: 456,452

id: 1909.08869
title: PgNN: Physics-guided Neural Network for Fourier Ptychographic Microscopy
abstract:
Fourier ptychography (FP) is a newly developed computational imaging approach that achieves both high resolution and wide field of view by stitching a series of low-resolution images captured under angle-varied illumination. So far, many supervised data-driven models have been applied to solve inverse imaging problems. These models need massive amounts of data to train, and are limited by the dataset characteristics. In FP problems, generic datasets are always scarce, and the optical aberration varies greatly under different acquisition conditions. To address these dilemmas, we model the forward physical imaging process as an interpretable physics-guided neural network (PgNN), where the reconstructed image in the complex domain is considered as the learnable parameters of the neural network. Since the optimal parameters of the PgNN can be derived by minimizing the difference between the model-generated images and real captured angle-varied images corresponding to the same scene, the proposed PgNN can get rid of the problem of massive training data as in traditional supervised methods. Applying the alternate updating mechanism and the total variation regularization, PgNN can flexibly reconstruct images with improved performance. In addition, the Zernike mode is incorporated to compensate for optical aberrations to enhance the robustness of FP reconstructions. As a demonstration, we show our method can reconstruct images with smooth performance and detailed information in both simulated and experimental datasets. In particular, when validated in an extension of a high-defocus, high-exposure tissue section dataset, PgNN outperforms traditional FP methods with fewer artifacts and distinguishable structures.
labels: cs.CV
__index_level_0__: 146,084

id: 1909.00948
title: HarDNet: A Low Memory Traffic Network
abstract:
State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge.
labels: cs.CV
__index_level_0__: 143,763

id: 2002.12161
title: On the Capacity of Fractal D2D Social Networks with Hierarchical Communications
abstract:
The maximum capacity of fractal D2D (device-to-device) social networks with both direct and hierarchical communications is studied in this paper. Specifically, the fractal networks are characterized by the direct social connection and the self-similarity. Firstly, for a fractal D2D social network with direct social communications, it is proved that the maximum capacity is $ \Theta\left(\frac{1}{\sqrt{n\log n}}\right) $ if a user communicates with one of his/her direct contacts randomly, where $ n $ denotes the total number of users in the network, and it can reach up to $ \Theta\left(\frac{1}{\log n}\right) $ if any pair of social contacts with distance $ d $ communicate according to the probability in proportion to $ d^{-\beta} $. Secondly, since users might get in touch with others without direct social connections through the inter-connected multiple users, the fractal D2D social network with these hierarchical communications is studied as well, and the related capacity is further derived. Our results show that this capacity is mainly affected by the correlation exponent $\epsilon$ of the fractal structure. The capacity is reduced in proportional to $ \frac{1}{{\log n}} $ if $ 2<\epsilon<3 $, while the reduction coefficient is $ \frac{1}{n} $ if $ \epsilon>3 $.
labels: cs.IT
__index_level_0__: 165,944

id: 1802.04451
title: Blockchain and Artificial Intelligence
abstract:
It is undeniable that artificial intelligence (AI) and blockchain concepts are spreading at a phenomenal rate. Both technologies have distinct degrees of technological complexity and multi-dimensional business implications. However, a common misunderstanding about the blockchain concept, in particular, is that blockchain is decentralized and is not controlled by anyone. In reality, the underlying development of a blockchain system is still attributed to a cluster of core developers. Take smart contracts as an example: a smart contract is essentially a collection of codes (or functions) and data (or states) that are programmed and deployed on a blockchain (say, Ethereum) by different human programmers. It is thus, unfortunately, unlikely to be free of loopholes and flaws. In this article, through a brief overview of how artificial intelligence could be used to deliver bug-free smart contracts so as to achieve the goal of blockchain 2.0, we emphasize that blockchain implementation can be assisted or enhanced via various AI techniques. The alliance of AI and blockchain is expected to create numerous possibilities.
labels: cs.AI
__index_level_0__: 90,224

id: 1210.4907
title: From imprecise probability assessments to conditional probabilities with quasi additive classes of conditioning events
abstract:
In this paper, starting from a generalized coherent (i.e., avoiding uniform loss) interval-valued probability assessment on a finite family of conditional events, we construct conditional probabilities with quasi-additive classes of conditioning events which are consistent with the given initial assessment. Quasi-additivity assures coherence for the obtained conditional probabilities. In order to reach our goal, we define a finite sequence of conditional probabilities by exploiting some theoretical results on g-coherence. In particular, we use solutions of a finite sequence of linear systems.
labels: cs.AI
__index_level_0__: 19,231

id: 2409.06827
title: Cross-Modal Self-Supervised Learning with Effective Contrastive Units for LiDAR Point Clouds
abstract:
3D perception in LiDAR point clouds is crucial for a self-driving vehicle to properly act in 3D environment. However, manually labeling point clouds is hard and costly. There has been a growing interest in self-supervised pre-training of 3D perception models. Following the success of contrastive learning in images, current methods mostly conduct contrastive pre-training on point clouds only. Yet an autonomous driving vehicle is typically supplied with multiple sensors including cameras and LiDAR. In this context, we systematically study single modality, cross-modality, and multi-modality for contrastive learning of point clouds, and show that cross-modality wins over other alternatives. In addition, considering the huge difference between the training sources in 2D images and 3D point clouds, it remains unclear how to design more effective contrastive units for LiDAR. We therefore propose the instance-aware and similarity-balanced contrastive units that are tailored for self-driving point clouds. Extensive experiments reveal that our approach achieves remarkable performance gains over various point cloud models across the downstream perception tasks of LiDAR based 3D object detection and 3D semantic segmentation on the four popular benchmarks including Waymo Open Dataset, nuScenes, SemanticKITTI and ONCE.
labels: cs.CV
__index_level_0__: 487,286

id: 1505.05770
title: Variational Inference with Normalizing Flows
abstract:
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
labels: cs.AI, cs.LG
__index_level_0__: 43,344

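The flow construction summarized in this abstract admits a compact illustration. Below is a minimal numpy sketch of a single planar flow step (one of the flow families introduced in the paper), computing both the transformed samples and the log-determinant term required by the change of variables; shapes and values are illustrative, not the authors' implementation:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w.z + b) and log|det Jacobian|."""
    a = z @ w + b                          # pre-activation, shape (n,)
    f_z = z + np.outer(np.tanh(a), u)      # transformed samples, shape (n, d)
    psi = np.outer(1.0 - np.tanh(a) ** 2, w)   # tanh'(a) * w, shape (n, d)
    log_det = np.log(np.abs(1.0 + psi @ u))    # |1 + u . psi(z)|, shape (n,)
    return f_z, log_det

# Push 5 two-dimensional base samples through one flow step; the log-density
# of f(z) is the base log-density minus log_det (change of variables).
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 2))
f_z, log_det = planar_flow(z, u=np.array([0.5, -0.3]), w=np.array([1.0, 1.0]), b=0.1)
```
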
id: 1906.04157
title: Global optimization of dielectric metasurfaces using a physics-driven neural network
abstract:
We present a global optimizer, based on a conditional generative neural network, which can output ensembles of highly efficient topology-optimized metasurfaces operating across a range of parameters. A key feature of the network is that it initially generates a distribution of devices that broadly samples the design space, and then shifts and refines this distribution towards favorable design space regions over the course of optimization. Training is performed by calculating the forward and adjoint electromagnetic simulations of outputted devices and using the subsequent efficiency gradients for backpropagation. With metagratings operating across a range of wavelengths and angles as a model system, we show that devices produced from the trained generative network have efficiencies comparable to or better than the best devices produced by adjoint-based topology optimization, while requiring less computational cost. Our reframing of adjoint-based optimization to the training of a generative neural network applies generally to physical systems that can utilize gradients to improve performance.
labels: cs.LG
__index_level_0__: 134,620

2304.00731
An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner
The interpretability of a model has become one of the obstacles to its wide application in high-stakes fields. The usual way to obtain interpretability is to build a black-box model first and then explain it using post-hoc methods. However, the explanations provided by post-hoc methods are not always reliable. Instead, we design an intrinsically interpretable model based on RRL (Rule Representation Learner) for the Lending Club dataset. Specifically, features can be divided into three categories according to their characteristics, and we build three corresponding sub-networks, each of which is similar to a neural network with a single hidden layer but can be equivalently converted into a set of rules. During training, we adopt techniques from previous research to effectively train the binary weights. Finally, our model is compared with tree-based models. The results show that our model performs much better than an interpretable decision tree and close to other black-box models, which is of practical significance to both financial institutions and borrowers. More importantly, our model is used to test the correctness of the explanations generated by the post-hoc method; the results show that the post-hoc method is not always reliable.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
355,788
2211.15474
Unsupervised Superpixel Generation using Edge-Sparse Embedding
Partitioning an image into superpixels based on the similarity of pixels with respect to features such as colour or spatial location can significantly reduce data complexity and improve subsequent image processing tasks. Initial algorithms for unsupervised superpixel generation solely relied on local cues without prioritizing significant edges over arbitrary ones. On the other hand, more recent methods based on unsupervised deep learning either fail to properly address the trade-off between superpixel edge adherence and compactness or lack control over the generated number of superpixels. By using random images with strong spatial correlation as input, i.e., blurred noise images, in a non-convolutional image decoder we can reduce the expected number of contrasts and enforce smooth, connected edges in the reconstructed image. We generate edge-sparse pixel embeddings by encoding additional spatial information into the piece-wise smooth activation maps from the decoder's last hidden layer and use a standard clustering algorithm to extract high-quality superpixels. Our proposed method reaches state-of-the-art performance on the BSDS500, PASCAL-Context and a microscopy dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
333,283
1912.02079
FocusNet++: Attentive Aggregated Transformations for Efficient and Accurate Medical Image Segmentation
We propose a new residual block for convolutional neural networks and demonstrate its state-of-the-art performance in medical image segmentation. We combine attention mechanisms with group convolutions to create our group attention mechanism, which forms the fundamental building block of our network, FocusNet++. We employ a hybrid loss based on balanced cross entropy, Tversky loss and the adaptive logarithmic loss to enhance the performance along with fast convergence. Our results show that FocusNet++ achieves state-of-the-art results across various benchmark metrics for the ISIC 2018 melanoma segmentation and the cell nuclei segmentation datasets with fewer parameters and FLOPs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
156,251
1908.00778
A Structural Graph-Based Method for MRI Analysis
The importance of imaging exams, such as Magnetic Resonance Imaging (MRI), for the diagnostic and follow-up of pediatric pathologies and the assessment of anatomical structures' development has been increasingly highlighted in recent times. Manual analysis of MRIs is time-consuming, subjective, and requires significant expertise. To mitigate this, automatic techniques are necessary. Most techniques focus on adult subjects, while pediatric MRI has specific challenges such as the ongoing anatomical and histological changes related to normal development of the organs, reduced signal-to-noise ratio due to the smaller bodies, motion artifacts and cooperation issues, especially in long exams, which can in many cases preclude common analysis methods developed for use in adults. Therefore, the development of a robust technique to aid in pediatric MRI analysis is necessary. This paper presents the current development of a new method based on the learning and matching of structural relational graphs (SRGs). The experiments were performed on liver MRI sequences of one patient from ICr-HC-FMUSP, and preliminary results showcased the viability of the project. Future experiments are expected to culminate with an application for pediatric liver substructure and brain tumor segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
140,598
2411.13588
Unveiling Redundancy in Diffusion Transformers (DiTs): A Systematic Study
The increased model capacity of Diffusion Transformers (DiTs) and the demand for generating higher resolutions of images and videos have led to a significant rise in inference latency, impacting real-time performance adversely. While prior research has highlighted the presence of high similarity in activation values between adjacent diffusion steps (referred to as redundancy) and proposed various caching mechanisms to mitigate computational overhead, the exploration of redundancy in existing literature remains limited, with findings often not generalizable across different DiT models. This study aims to address this gap by conducting a comprehensive investigation into redundancy across a broad spectrum of mainstream DiT models. Our experimental analysis reveals substantial variations in the distribution of redundancy across diffusion steps among different DiT models. Interestingly, within a single model, the redundancy distribution remains stable regardless of variations in input prompts, step counts, or scheduling strategies. Given the lack of a consistent pattern across diverse models, caching strategies designed for a specific group of models may not easily transfer to others. To overcome this challenge, we introduce a tool for analyzing the redundancy of individual models, enabling subsequent research to develop tailored caching strategies for specific model architectures. The project is publicly available at https://github.com/xdit-project/DiTCacheAnalysis.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
509,844
2102.11032
Performance of Automatic De-identification Across Different Note Types
Free-text clinical notes detail all aspects of patient care and have great potential to facilitate quality improvement and assurance initiatives as well as advance clinical research. However, concerns about patient privacy and confidentiality limit the use of clinical notes for research. As a result, the information documented in these notes remains unavailable for most researchers. De-identification (de-id), i.e., locating and removing personally identifying protected health information (PHI), is one way of improving access to clinical narratives. However, few off-the-shelf de-identification systems are able to consistently detect PHI across different data sources and medical specialties. In this abstract, we present the performance of a state-of-the-art de-id system called NeuroNER on a diverse set of notes from the University of Washington (UW) when the models are trained on data from an external institution (Partners Healthcare) vs. from the same institution (UW). We present results at the level of PHI and note types.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
221,302
2409.17591
Conjugate Bayesian Two-step Change Point Detection for Hawkes Process
The Bayesian two-step change point detection method is popular for the Hawkes process due to its simplicity and intuitiveness. However, the non-conjugacy between the point process likelihood and the prior requires most existing Bayesian two-step change point detection methods to rely on non-conjugate inference methods. These methods lack analytical expressions, leading to low computational efficiency and impeding timely change point detection. To address this issue, this work employs data augmentation to propose a conjugate Bayesian two-step change point detection method for the Hawkes process, which proves to be more accurate and efficient. Extensive experiments on both synthetic and real data demonstrate the superior effectiveness and efficiency of our method compared to baseline methods. Additionally, we conduct ablation studies to explore the robustness of our method concerning various hyperparameters. Our code is publicly available at https://github.com/Aurora2050/CoBay-CPD.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
491,883
1910.10380
Online Synthesis for Runtime Enforcement of Safety in Multi-Agent Systems
A shield is attached to a system to guarantee safety by correcting the system's behavior at runtime. Existing methods that employ design-time synthesis of shields do not scale to multi-agent systems. Moreover, such shields are typically implemented in a centralized manner, requiring global information on the state of all agents in the system. We address these limitations through a new approach where the shields are synthesized at runtime and do not require global information. There is a shield onboard every agent, which can only modify the behavior of the corresponding agent. In this approach, which is fundamentally decentralized, the shield on every agent has two components: a pathfinder that corrects the behavior of the agent and an ordering mechanism that dynamically modifies the priority of the agent. The current priority determines if the shield uses the pathfinder to modify behavior of the agent. We derive an upper bound on the maximum deviation for any agent from its original behavior. We prove that the worst-case synthesis time is quadratic in the number of agents at runtime as opposed to exponential at design-time for existing methods. We test the performance of the decentralized, runtime shield synthesis approach on a collision-avoidance problem. For 50 agents in a 50x50 grid, the synthesis at runtime requires a few seconds per agent whenever a potential collision is detected. In contrast, the centralized design-time synthesis of shields for a similar setting is intractable beyond 4 agents in a 5x5 grid.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
false
false
150,477
2405.17708
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators
Offline policy evaluation (OPE) allows us to evaluate and estimate a new sequential decision-making policy's performance by leveraging historical interaction data collected from other policies. Evaluating a new policy online without a confident estimate of its performance can lead to costly, unsafe, or hazardous outcomes, especially in education and healthcare. Several OPE estimators have been proposed in the last decade, many of which have hyperparameters and require training. Unfortunately, how to choose the best OPE algorithm for each task and domain remains unclear. In this paper, we propose a new algorithm that adaptively blends a set of OPE estimators given a dataset without relying on an explicit selection using a statistical procedure. We prove that our estimator is consistent and satisfies several desirable properties for policy evaluation. Additionally, we demonstrate that when compared to alternative approaches, our estimator can be used to select higher-performing policies in healthcare and robotics. Our work contributes to improving ease of use for a general-purpose, estimator-agnostic, off-policy evaluation framework for offline RL.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
458,073
1503.05508
Exploration of the scalability of LocFaults approach for error localization with While-loops programs
A model checker can produce a counterexample trace for an erroneous program, which is often long and difficult to understand. In general, the part concerning loops is the largest among the instructions in this trace. This makes localizing errors in loops critical for analyzing errors in the overall program. In this paper, we explore the scalability of LocFaults, our error localization approach, which exploits paths of the CFG (Control Flow Graph) from a counterexample to calculate the MCDs (Minimal Correction Deviations), and the MCSs (Minimal Correction Subsets) from each found MCD. We present the running times of our approach on programs with While-loops unfolded b times and a number of deviated conditions ranging from 0 to n. Our preliminary results show that the times of our approach, which is constraint-based and flow-driven, compare favorably to BugAssist, which is based on SAT and transforms the entire program into a Boolean formula; furthermore, the information provided by LocFaults is more expressive for the user.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
41,252
2009.02878
Benchmarking off-the-shelf statistical shape modeling tools in clinical applications
Statistical shape modeling (SSM) is widely used in biology and medicine as a new generation of morphometric approaches for the quantitative analysis of anatomical shapes. Technological advancements of in vivo imaging have led to the development of open-source computational tools that automate the modeling of anatomical shapes and their population-level variability. However, little work has been done on the evaluation and validation of such tools in clinical applications that rely on morphometric quantifications (e.g., implant design and lesion screening). Here, we systematically assess the outcome of widely used, state-of-the-art SSM tools, namely ShapeWorks, Deformetrica, and SPHARM-PDM. We use both quantitative and qualitative metrics to evaluate shape models from different tools. We propose validation frameworks for anatomical landmark/measurement inference and lesion screening. We also present a lesion screening method to objectively characterize subtle abnormal shape changes with respect to learned population-level statistics of controls. Results demonstrate that SSM tools display different levels of consistency, where ShapeWorks and Deformetrica models are more consistent compared to models from SPHARM-PDM due to the groupwise approach of estimating surface correspondences. Furthermore, ShapeWorks and Deformetrica shape models are found to capture clinically relevant population-level variability compared to SPHARM-PDM models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
194,688
2305.11442
Zero-Shot Text Classification via Self-Supervised Tuning
Existing solutions to zero-shot text classification either conduct prompting with pre-trained language models, which is sensitive to the choices of templates, or rely on large-scale annotated data of relevant tasks for meta-tuning. In this work, we propose a new paradigm based on self-supervised learning to solve zero-shot text classification tasks by tuning the language models with unlabeled data, called self-supervised tuning. By exploring the inherent structure of free texts, we propose a new learning objective called first sentence prediction to bridge the gap between unlabeled data and text classification tasks. After tuning the model to learn to predict the first sentence in a paragraph based on the rest, the model is able to conduct zero-shot inference on unseen tasks such as topic classification and sentiment analysis. Experimental results show that our model outperforms the state-of-the-art baselines on 7 out of 10 tasks. Moreover, the analysis reveals that our model is less sensitive to the prompt design. Our code and pre-trained models are publicly available at https://github.com/DAMO-NLP-SG/SSTuning.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
365,530
1911.06663
MMGAN: Generative Adversarial Networks for Multi-Modal Distributions
Over the past years, Generative Adversarial Networks (GANs) have shown remarkable generation performance, especially in image synthesis. Unfortunately, they are also known for having an unstable training process and might lose parts of the data distribution for heterogeneous input data. In this paper, we propose a novel GAN extension for multi-modal distribution learning (MMGAN). In our approach, we model the latent space as a Gaussian mixture model with a number of clusters referring to the number of disconnected data manifolds in the observation space, and include a clustering network, which relates each data manifold to one Gaussian cluster. Thus, the training gets more stable. Moreover, MMGAN allows for clustering real data according to the learned data manifold in the latent space. By a series of benchmark experiments, we illustrate that MMGAN outperforms competitive state-of-the-art models in terms of clustering performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
153,599
1910.02125
Requirements for Developing Robust Neural Networks
Validation accuracy is a necessary, but not sufficient, measure of a neural network classifier's quality. High validation accuracy during development does not guarantee that a model is free of serious flaws, such as vulnerability to adversarial attacks or a tendency to misclassify (with high confidence) data it was not trained on. The model may also be incomprehensible to a human or base its decisions on unreasonable criteria. These problems, which are not unique to classifiers, have been the focus of a substantial amount of recent research. However, they are not prioritized during model development, which almost always optimizes on validation accuracy to the exclusion of everything else. The product of this approach is likely to fail in unexpected ways outside of the training environment. We believe that, in addition to validation accuracy, the model development process must give added weight to other performance metrics such as explainability, resistance to adversarial attacks, and overconfidence on out-of-distribution data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,143
1909.00532
Semantic Segmentation of Panoramic Images Using a Synthetic Dataset
Panoramic images have advantages in information capacity and scene stability due to their large field of view (FoV). In this paper, we propose a method to synthesize a new dataset of panoramic images. We managed to stitch images taken from different directions into panoramic images, together with their labeled images, to yield the panoramic semantic segmentation dataset denominated SYNTHIA-PANO. To find out the effect of using panoramic images as the training dataset, we designed and performed a comprehensive set of experiments. Experimental results show that using panoramic images as training data is beneficial to the segmentation result. In addition, it has been shown that by using panoramic images with a 180 degree FoV as training data the model achieves better performance. Furthermore, the model trained with panoramic images also has a better capacity to resist image distortion.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
143,660
2308.04791
PETformer: Long-term Time Series Forecasting via Placeholder-enhanced Transformer
Recently, the superiority of Transformer for long-term time series forecasting (LTSF) tasks has been challenged, particularly since recent work has shown that simple models can outperform numerous Transformer-based approaches. This suggests that a notable gap remains in fully leveraging the potential of Transformer in LTSF tasks. Consequently, this study investigates key issues when applying Transformer to LTSF, encompassing aspects of temporal continuity, information density, and multi-channel relationships. We introduce the Placeholder-enhanced Technique (PET) to enhance the computational efficiency and predictive accuracy of Transformer in LTSF tasks. Furthermore, we delve into the impact of larger patch strategies and channel interaction strategies on Transformer's performance, specifically Long Sub-sequence Division (LSD) and Multi-channel Separation and Interaction (MSI). These strategies collectively constitute a novel model termed PETformer. Extensive experiments have demonstrated that PETformer achieves state-of-the-art performance on eight commonly used public datasets for LTSF, surpassing all existing models. The insights and enhancement methodologies presented in this paper serve as valuable reference points and sources of inspiration for future research endeavors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
384,559
2003.01247
Iterative Averaging in the Quest for Best Test Error
We analyse and explain the increased generalisation performance of iterate averaging using a Gaussian process perturbation model between the true and batch risk surface on the high dimensional quadratic. We derive three phenomena from our theoretical results: (1) The importance of combining iterate averaging (IA) with large learning rates and regularisation for improved regularisation. (2) Justification for less frequent averaging. (3) That we expect adaptive gradient methods to work equally well, or better, with iterate averaging than their non-adaptive counterparts. Inspired by these results, together with empirical investigations of the importance of appropriate regularisation for the solution diversity of the iterates, we propose two adaptive algorithms with iterate averaging. These give significantly better results compared to stochastic gradient descent (SGD), require less tuning and do not require early stopping or validation set monitoring. We showcase the efficacy of our approach on the CIFAR-10/100, ImageNet and Penn Treebank datasets on a variety of modern and classical network architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
166,582
1704.02536
Wireless Information and Power Transfer in Full-Duplex Systems with Massive Antenna Arrays
We consider a multiuser wireless system with a full-duplex hybrid access point (HAP) that transmits to a set of users in the downlink channel, while receiving data from a set of energy-constrained sensors in the uplink channel. We assume that the HAP is equipped with a massive antenna array, while all users and sensor nodes have a single antenna. We adopt a time-switching protocol where in the first phase, sensors are powered through wireless energy transfer from HAP and HAP estimates the downlink channel of the users. In the second phase, sensors use the harvested energy to transmit to the HAP. The downlink-uplink sum-rate region is obtained by solving downlink sum-rate maximization problem under a constraint on uplink sum-rate. Moreover, assuming perfect and imperfect channel state information, we derive expressions for the achievable uplink and downlink rates in the large-antenna limit and approximate results that hold for any finite number of antennas. Based on these analytical results, we obtain the power-scaling law and analyze the effect of the number of antennas on the cancellation of intra-user interference and the self-interference.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
71,464
2211.12281
BESS: Balanced Entity Sampling and Sharing for Large-Scale Knowledge Graph Completion
We present the award-winning submission to the WikiKG90Mv2 track of OGB-LSC@NeurIPS 2022. The task is link-prediction on the large-scale knowledge graph WikiKG90Mv2, consisting of 90M+ nodes and 600M+ edges. Our solution uses a diverse ensemble of $85$ Knowledge Graph Embedding models combining five different scoring functions (TransE, TransH, RotatE, DistMult, ComplEx) and two different loss functions (log-sigmoid, sampled softmax cross-entropy). Each individual model is trained in parallel on a Graphcore Bow Pod$_{16}$ using BESS (Balanced Entity Sampling and Sharing), a new distribution framework for KGE training and inference based on balanced collective communications between workers. Our final model achieves a validation MRR of 0.2922 and a test-challenge MRR of 0.2562, winning the first place in the competition. The code is publicly available at: https://github.com/graphcore/distributed-kge-poplar/tree/2022-ogb-submission.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
332,057
1301.0082
CloudSVM : Training an SVM Classifier in Cloud Computing Systems
In the conventional approach, distributed support vector machine (SVM) algorithms are trained over pre-configured intranet/internet environments to find an optimal classifier. These methods are complicated and costly for large datasets. Hence, we propose a method, referred to as the Cloud SVM training mechanism (CloudSVM), in a cloud computing environment with the MapReduce technique for distributed machine learning applications. Accordingly, (i) the SVM algorithm is trained in distributed cloud storage servers that work concurrently; (ii) all support vectors are merged in every trained cloud node; and (iii) these two steps are iterated until the SVM converges to the optimal classifier function. Large-scale datasets cannot be trained with the SVM algorithm on a single computer. The results of this study are important for the training of large-scale datasets for machine learning applications. We show that iterative training of split datasets in a cloud computing environment using SVM converges to a global optimal classifier in a finite number of iterations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
20,693
1903.11286
Deformable kernel networks for guided depth map upsampling
We address the problem of upsampling a low-resolution (LR) depth map using a registered high-resolution (HR) color image of the same scene. Previous methods based on convolutional neural networks (CNNs) combine nonlinear activations of spatially-invariant kernels to estimate structural details from LR depth and HR color images, and regress upsampling results directly from the networks. In this paper, we revisit the weighted averaging process that has been widely used to transfer structural details from hand-crafted visual features to LR depth maps. We instead learn explicitly sparse and spatially-variant kernels for this task. To this end, we propose a CNN architecture and its efficient implementation, called the deformable kernel network (DKN), that outputs sparse sets of neighbors and the corresponding weights adaptively for each pixel. We also propose a fast version of DKN (FDKN) that runs about 17 times faster (0.01 seconds for a HR image of size 640 x 480). Experimental results on standard benchmarks demonstrate the effectiveness of our approach. In particular, we show that the weighted averaging process with 3 x 3 kernels (i.e., aggregating 9 samples sparsely chosen) outperforms the state of the art by a significant margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
125,478
1803.01370
A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization
We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving ERM problems with a nonsmooth regularization term. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to maintain an approximation of the Hessian and solve subproblems efficiently in a distributed manner. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems that includes the most commonly used ERMs, thus requiring lower communication complexity. It also converges on non-convex problems, so has the potential to be used on applications such as deep learning. Initial computational results on convex problems demonstrate that our method significantly improves on communication cost and running time over the current state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
91,861
2212.02745
Sources of Noise in Dialogue and How to Deal with Them
Training dialogue systems often entails dealing with noisy training examples and unexpected user inputs. Despite their prevalence, there is currently no accurate survey of dialogue noise, nor a clear sense of the impact of each noise type on task performance. This paper addresses this gap by first constructing a taxonomy of noise encountered by dialogue systems. In addition, we run a series of experiments to show how different models behave when subjected to varying levels and types of noise. Our results reveal that models are quite robust to label errors commonly tackled by existing denoising algorithms, but that performance suffers from dialogue-specific noise. Driven by these observations, we design a data cleaning algorithm specialized for conversational settings and apply it as a proof-of-concept for targeted dialogue denoising.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
334,866
2106.15968
The Impact of Disinformation on a Controversial Debate on Social Media
In this work we study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter, and the role of automated accounts in the diffusion of such content. By characterising Twitter users with an Untrustworthiness score, which tells us how frequently they engage with disinformation content, we are able to see that such bad information consumption habits are not equally distributed across the users; adopting a network analysis approach, we can identify communities characterised by a very high presence of users that frequently share content from unreliable news sources. Within this context, social bots tend to inject into the network more malicious content, which often remains confined to a limited number of clusters; instead, they target reliable content in order to diversify their reach. The evidence we gather suggests that, at least in this particular case study, there is a strong interplay between social bots and users engaging with unreliable content, influencing the diffusion of the latter across the network.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
243,928
2109.04453
Tube-Certified Trajectory Tracking for Nonlinear Systems With Robust Control Contraction Metrics
This paper presents an approach towards guaranteed trajectory tracking for nonlinear control-affine systems subject to external disturbances based on robust control contraction metrics (CCM) that aims to minimize the $\mathcal L_\infty$ gain from the disturbances to nominal-actual trajectory deviations. The guarantee is in the form of invariant tubes, computed offline and valid for any nominal trajectories, in which the actual states and inputs of the system are guaranteed to stay despite disturbances. Under mild assumptions, we prove that the proposed robust CCM (RCCM) approach yields tighter tubes than an existing approach based on CCM and input-to-state stability analysis. We show how the RCCM-based tracking controller together with tubes can be incorporated into a feedback motion planning framework to plan safe trajectories for robotic systems. Simulation results illustrate the effectiveness of the proposed method and empirically demonstrate reduced conservatism compared to the CCM-based approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
254,411
1707.05497
Differentially Private Identity and Closeness Testing of Discrete Distributions
We investigate the problems of identity and closeness testing over a discrete population from random samples. Our goal is to develop efficient testers while guaranteeing Differential Privacy to the individuals of the population. We describe an approach that yields sample-efficient differentially private testers for these problems. Our theoretical results show that there exist private identity and closeness testers that are nearly as sample-efficient as their non-private counterparts. We perform an experimental evaluation of our algorithms on synthetic data. Our experiments illustrate that our private testers achieve small type I and type II errors with sample size sublinear in the domain size of the underlying distributions.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
77,248
2502.07200
Color-Quality Invariance for Robust Medical Image Segmentation
Single-source domain generalization (SDG) in medical image segmentation remains a significant challenge, particularly for images with varying color distributions and qualities. Previous approaches often struggle when models trained on high-quality images fail to generalize to low-quality test images due to these color and quality shifts. In this work, we propose two novel techniques to enhance generalization: dynamic color image normalization (DCIN) module and color-quality generalization (CQG) loss. The DCIN dynamically normalizes the color of test images using two reference image selection strategies. Specifically, the DCIN utilizes a global reference image selection (GRIS), which finds a universal reference image, and a local reference image selection (LRIS), which selects a semantically similar reference image per test sample. Additionally, CQG loss enforces invariance to color and quality variations by ensuring consistent segmentation predictions across transformed image pairs. Experimental results show that our proposals significantly improve segmentation performance over the baseline on two target domain datasets, despite being trained solely on a single source domain. Notably, our model achieved up to a 32.3-point increase in Dice score compared to the baseline, consistently producing robust and usable results even under substantial domain shifts. Our work contributes to the development of more robust medical image segmentation models that generalize across unseen domains. The implementation code is available at https://github.com/RaviShah1/DCIN-CQG.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
532,480
1811.12295
Regression by clustering using Metropolis-Hastings
High quality risk adjustment in health insurance markets weakens insurer incentives to engage in inefficient behavior to attract lower-cost enrollees. We propose a novel methodology based on Markov Chain Monte Carlo methods to improve risk adjustment by clustering diagnostic codes into risk groups optimal for health expenditure prediction. We test the performance of our methodology against common alternatives using panel data from 500 thousand enrollees of the Colombian Healthcare System. Results show that our methodology outperforms common alternatives and suggest that it has potential to improve access to quality healthcare for the chronically ill.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
114,985
1910.08952
i-RIM applied to the fastMRI challenge
We, team AImsterdam, summarize our submission to the fastMRI challenge (Zbontar et al., 2018). Our approach builds on recent advances in invertible learning to infer models as presented in Putzky and Welling (2019). Both, our single-coil and our multi-coil model share the same basic architecture.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
150,028
2410.11815
SGEdit: Bridging LLM with Text2Image Generative Model for Scene Graph-based Image Editing
Scene graphs offer a structured, hierarchical representation of images, with nodes and edges symbolizing objects and the relationships among them. They can serve as a natural interface for image editing, dramatically improving precision and flexibility. Leveraging this benefit, we introduce a new framework that integrates a large language model (LLM) with a Text2Image generative model for scene graph-based image editing. This integration enables precise modifications at the object level and creative recomposition of scenes without compromising overall image integrity. Our approach involves two primary stages: 1) Utilizing an LLM-driven scene parser, we construct an image's scene graph, capturing key objects and their interrelationships, as well as parsing fine-grained attributes such as object masks and descriptions. These annotations facilitate concept learning with a fine-tuned diffusion model, representing each object with an optimized token and detailed description prompt. 2) During the image editing phase, an LLM editing controller guides the edits towards specific areas. These edits are then implemented by an attention-modulated diffusion editor, utilizing the fine-tuned model to perform object additions, deletions, replacements, and adjustments. Through extensive experiments, we demonstrate that our framework significantly outperforms existing image editing methods in terms of editing precision and scene aesthetics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
498,726
2002.10560
Triplet Online Instance Matching Loss for Person Re-identification
Mining the shared features of the same identity in different scenes, and the unique features of different identities in the same scene, are among the most significant challenges in the field of person re-identification (ReID). The Online Instance Matching (OIM) loss function and the Triplet loss function are the main methods for person ReID. Unfortunately, both of them have drawbacks. OIM loss treats all samples equally and puts no emphasis on hard samples. Triplet loss processes batch construction in a complicated and fussy way and converges slowly. To address these problems, we propose a Triplet Online Instance Matching (TOIM) loss function, which lays emphasis on the hard samples and effectively improves the accuracy of person ReID. It combines the advantages of OIM loss and Triplet loss and simplifies the process of batch construction, which leads to more rapid convergence. It can be trained online when handling the joint detection and identification task. To validate our loss function, we collect and annotate a large-scale benchmark dataset (UESTC-PR) based on images taken from surveillance cameras, which contains 499 identities and 60,437 images. We evaluated our proposed loss function on Duke, Market-1501 and UESTC-PR using ResNet-50, and the results show that our proposed loss function outperforms the baseline methods, including Softmax loss, OIM loss and Triplet loss, by a maximum of 21.7%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
165,437
2211.03253
Soft Robotic Link with Controllable Transparency for Vision-based Tactile and Proximity Sensing
Robots have been brought to work close to humans in many scenarios. For coexistence and collaboration, robots should be safe and pleasant for humans to interact with. To this end, robots could be both physically soft and equipped with multimodal sensing/perception, so that they have better awareness of the surrounding environment and can respond properly to humans' actions and intentions. This paper introduces a novel soft robotic link, named ProTac, that possesses multiple sensing modes: tactile and proximity sensing, based on computer vision and a functional material. These modalities come from a layered structure of a soft transparent silicone skin, a polymer dispersed liquid crystal (PDLC) film, and reflective markers. Here, the PDLC film can switch actively between the opaque and the transparent state, from which tactile sensing and proximity sensing can be obtained using only cameras built inside the ProTac link. In this paper, inference algorithms for tactile and proximity perception are introduced. Evaluation results of the two sensing modalities demonstrated that, with a simple activation strategy, the ProTac link could effectively perceive useful information from both approaching and in-contact obstacles. The proposed sensing device is expected to bring ultimate solutions for the design of robots with softness, whole-body and multimodal sensing, and safety control strategies.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
328,881
1006.3780
Least Squares Superposition Codes of Moderate Dictionary Size, Reliable at Rates up to Capacity
For the additive white Gaussian noise channel with average codeword power constraint, new coding methods are devised in which the codewords are sparse superpositions, that is, linear combinations of subsets of vectors from a given design, with the possible messages indexed by the choice of subset. Decoding is by least squares, tailored to the assumed form of linear combination. Communication is shown to be reliable with error probability exponentially small for all rates up to the Shannon capacity.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
6,836
2109.14333
Distribution Knowledge Embedding for Graph Pooling
Graph-level representation learning is the pivotal step for downstream tasks that operate on the whole graph. The most common approach to this problem heretofore is graph pooling, where node features are typically averaged or summed to obtain the graph representations. However, pooling operations like averaging or summing inevitably cause massive information loss, which may severely downgrade the final performance. In this paper, we argue that what is crucial to graph-level downstream tasks includes not only the topological structure but also the distribution from which nodes are sampled. Therefore, powered by existing Graph Neural Networks (GNN), we propose a new plug-and-play pooling module, termed Distribution Knowledge Embedding (DKEPool), where graphs are rephrased as distributions on top of GNNs and the pooling goal is to summarize the entire distribution information instead of retaining a certain feature vector by simple predefined pooling operations. A DKEPool network de facto disassembles representation learning into two stages, structure learning and distribution learning. Structure learning follows a recursive neighborhood aggregation scheme to update node features where structure information is obtained. Distribution learning, on the other hand, omits node interconnections and focuses more on the distribution depicted by all the nodes. Extensive experiments demonstrate that the proposed DKEPool significantly and consistently outperforms the state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
257,936
2305.19521
Incremental Randomized Smoothing Certification
Randomized smoothing-based certification is an effective approach for obtaining robustness certificates of deep neural networks (DNNs) against adversarial attacks. This method constructs a smoothed DNN model and certifies its robustness through statistical sampling, but it is computationally expensive, especially when certifying with a large number of samples. Furthermore, when the smoothed model is modified (e.g., quantized or pruned), certification guarantees may not hold for the modified DNN, and recertifying from scratch can be prohibitively expensive. We present the first approach for incremental robustness certification for randomized smoothing, IRS. We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples. IRS significantly reduces the computational cost of certifying modified DNNs while maintaining strong robustness guarantees. We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over the certification that applies randomized smoothing of the approximate model from scratch.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
369,564
2211.07378
Multiresolution Dual-Polynomial Decomposition Approach for Optimized Characterization of Motor Intent in Myoelectric Control Systems
Surface electromyogram (sEMG) is arguably the most sought-after physiological signal with a broad spectrum of biomedical applications, especially in miniaturized rehabilitation robots such as multifunctional prostheses. The widespread use of sEMG to drive pattern recognition (PR)-based control schemes is primarily due to its rich motor information content and non-invasiveness. Moreover, sEMG recordings exhibit non-linear and non-uniform properties with inevitable interferences that distort the intrinsic characteristics of the signal, precluding existing signal processing methods from yielding the requisite motor control information. Therefore, we propose a multiresolution decomposition driven by dual-polynomial interpolation (MRDPI) technique for adequate denoising and reconstruction of multi-class EMG signals to guarantee the dual advantage of enhanced signal quality and motor information preservation. Parameters for optimal MRDPI configuration were constructed across combinations of thresholding estimation schemes and signal resolution levels using EMG datasets of amputees who performed up to 22 predefined upper-limb motions, acquired in-house and from the public NinaPro database. Experimental results showed that the proposed method yielded signals that led to consistent and significantly better decoding performance for all metrics compared to existing methods across features, classifiers, and datasets, offering a potential solution for practical deployment of intuitive EMG-PR-based control schemes for multifunctional prostheses and other miniaturized rehabilitation robotic systems that utilize myoelectric signals as control inputs.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
330,221
2303.16704
TraVaG: Differentially Private Trace Variant Generation Using GANs
Process mining is rapidly growing in the industry. Consequently, privacy concerns regarding sensitive and private information included in event data, used by process mining algorithms, are becoming increasingly relevant. State-of-the-art research mainly focuses on providing privacy guarantees, e.g., differential privacy, for trace variants that are used by the main process mining techniques, e.g., process discovery. However, privacy preservation techniques for releasing trace variants still do not fulfill all the requirements of industry-scale usage. Moreover, providing privacy guarantees when there exists a high rate of infrequent trace variants is still a challenge. In this paper, we introduce TraVaG as a new approach for releasing differentially private trace variants based on Generative Adversarial Networks (GANs) that provides industry-scale benefits and enhances the level of privacy guarantees when there exists a high ratio of infrequent variants. Moreover, TraVaG overcomes shortcomings of conventional privacy preservation techniques such as bounding the length of variants and introducing fake variants. Experimental results on real-life event data show that our approach outperforms state-of-the-art techniques in terms of privacy guarantees, plain data utility preservation, and result utility preservation.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
354,952
1809.01281
BOLD5000: A public fMRI dataset of 5000 images
Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches that integrate neuroscience, the number of images used in neuroimaging must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science - the intertwined study of biological and computer vision.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
106,765
2306.04695
ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models
The ability to understand visual concepts and replicate and compose these concepts from images is a central goal for computer vision. Recent advances in text-to-image (T2I) models have led to high definition and realistic image quality generation by learning from large databases of images and their descriptions. However, the evaluation of T2I models has focused on photorealism and limited qualitative measures of visual understanding. To quantify the ability of T2I models in learning and synthesizing novel visual concepts (a.k.a. personalized T2I), we introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts. Along with the dataset, we propose an evaluation metric, Concept Confidence Deviation (CCD), that uses the confidence of oracle concept classifiers to measure the alignment between concepts generated by T2I generators and concepts contained in target images. We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions. Our human study shows that CCD is highly correlated with human understanding of concepts. Our results point to a trade-off between learning the concepts and preserving the compositionality, which existing approaches struggle to overcome. The data, code, and interactive demo are available at: https://conceptbed.github.io/
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
371,856
2405.16507
Causal Concept Graph Models: Beyond Causal Opacity in Deep Learning
Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying the decisions of deep neural network (DNN) models. This leads to the inability to rely on and verify state-of-the-art DNN-based systems, especially in high-stakes scenarios. For this reason, circumventing causal opacity in DNNs represents a key open challenge at the intersection of deep learning, interpretability, and causality. This work addresses this gap by introducing Causal Concept Graph Models (Causal CGMs), a class of interpretable models whose decision-making process is causally transparent by design. Our experiments show that Causal CGMs can: (i) match the generalisation performance of causally opaque models, (ii) enable human-in-the-loop corrections to mispredicted intermediate reasoning steps, boosting not just downstream accuracy after corrections but also the reliability of the explanations provided for specific instances, and (iii) support the analysis of interventional and counterfactual scenarios, thereby improving the model's causal interpretability and supporting the effective verification of its reliability and fairness.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
457,465
1705.10420
Discriminatively Learned Hierarchical Rank Pooling Networks
In this work, we present novel temporal encoding methods for action and activity classification by extending the unsupervised rank pooling temporal encoding method in two ways. First, we present "discriminative rank pooling", in which the shared weights of our video representation and the parameters of the action classifiers are estimated jointly for a given training dataset of labelled vector sequences using a bilevel optimization formulation of the learning problem. When the frame-level feature vectors are obtained from a convolutional neural network (CNN), we rank pool the network activations and jointly estimate all parameters of the model, including CNN filters and fully-connected weights, in an end-to-end manner, which we coin "end-to-end trainable rank pooled CNN". Importantly, this model can make use of any existing convolutional neural network architecture (e.g., AlexNet or VGG) without modification or introduction of additional parameters. Then, we extend rank pooling to a high-capacity video representation, called "hierarchical rank pooling". Hierarchical rank pooling consists of a network of rank pooling functions, which encode temporal semantics over arbitrarily long video clips based on rich frame-level features. By stacking non-linear feature functions and temporal sub-sequence encoders one on top of the other, we build a high-capacity encoding network of the dynamic behaviour of the video. The resulting video representation is a fixed-length feature vector describing the entire video clip that can be used as input to standard machine learning classifiers. We demonstrate our approach on the task of action and activity recognition. Obtained results are comparable to state-of-the-art methods on three important activity recognition benchmarks, with classification performance of 76.7% mAP on Hollywood2, 69.4% on HMDB51, and 93.6% on UCF101.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
74,388
2102.03982
Subjective and Objective Visual Quality Assessment of Textured 3D Meshes
Objective visual quality assessment of 3D models is a fundamental issue in computer graphics. Quality assessment metrics may allow a wide range of processes to be guided and evaluated, such as level of detail creation, compression, filtering, and so on. Most computer graphics assets are composed of geometric surfaces on which several texture images can be mapped to make the rendering more realistic. While some quality assessment metrics exist for geometric surfaces, almost no research has been conducted on the evaluation of texture-mapped 3D models. In this context, we present a new subjective study to evaluate the perceptual quality of textured meshes, based on a paired comparison protocol. We introduce both texture and geometry distortions on a set of 5 reference models to produce a database of 136 distorted models, evaluated using two rendering protocols. Based on analysis of the results, we propose two new metrics for visual quality assessment of textured mesh, as optimized linear combinations of accurate geometry and texture quality measurements. These proposed perceptual metrics outperform their counterparts in terms of correlation with human opinion. The database, along with the associated subjective scores, will be made publicly available online.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
218,943
2304.05290
Adapting to Disruptions: Flexibility as a Pillar of Supply Chain Resilience
Supply chain disruptions cause shortages of raw material and products. To increase resilience, i.e., the ability to cope with shocks, substituting goods in established supply chains can become an effective alternative to creating new distribution links. We demonstrate its impact on supply deficits through a detailed analysis of the US opioid distribution system. Reconstructing 40 billion empirical distribution paths, our data-driven model allows a unique inspection of policies that increase the substitution flexibility. Our approach enables policymakers to quantify the trade-off between increasing flexibility, i.e., reduced supply deficits, and increasing complexity of the supply chain, which could make it more expensive to operate.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
357,562
2206.10411
Audio-video fusion strategies for active speaker detection in meetings
Meetings are a common activity in professional contexts, and it remains challenging to endow vocal assistants with advanced functionalities to facilitate meeting management. In this context, a task like active speaker detection can provide useful insights to model interaction between meeting participants. Motivated by our application context related to an advanced meeting assistant, we want to combine audio and visual information to achieve the best possible performance. In this paper, we propose two different types of fusion for the detection of the active speaker, combining two visual modalities and an audio modality through neural networks. For comparison purposes, classical unsupervised approaches for audio feature extraction are also used. We expect visual data centered on the face of each participant to be very appropriate for detecting voice activity, based on the detection of lip and facial gestures. Thus, our baseline system uses visual data, and we chose a 3D Convolutional Neural Network architecture, which is effective for simultaneously encoding appearance and movement. To improve this system, we supplemented the visual information by processing the audio stream with a CNN or an unsupervised speaker diarization system. We further improved this system by adding visual motion information through optical flow. We evaluated our proposal on a public, state-of-the-art benchmark: the AMI corpus. We analysed the contribution of each system to the fusion carried out in order to determine whether a given participant is currently speaking, and we discuss the results obtained. Moreover, we show that, for our application context, adding motion information greatly improves performance. Finally, we show that attention-based fusion improves performance while reducing the standard deviation.
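For intuition about the attention-based fusion mentioned at the end, here is a minimal sketch that learns a scalar attention weight per modality embedding (e.g., appearance, optical flow, audio) and classifies the fused vector. The layer names and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# A minimal sketch of attention-weighted fusion of modality embeddings.
class AttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)       # one attention logit per modality
        self.classifier = nn.Linear(dim, 1)  # speaking / not speaking

    def forward(self, feats):                # feats: (batch, n_modalities, dim)
        attn = torch.softmax(self.score(feats), dim=1)  # weights over modalities
        fused = (attn * feats).sum(dim=1)               # (batch, dim)
        return torch.sigmoid(self.classifier(fused))

fusion = AttentionFusion(dim=64)
prob = fusion(torch.randn(8, 3, 64))  # 8 samples, 3 modalities each
```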
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
303,899
2112.00933
PartImageNet: A Large, High-Quality Dataset of Parts
It is natural to represent objects in terms of their parts. This has the potential to improve the performance of algorithms for object recognition and segmentation, and can also help with downstream tasks like activity recognition. Research on part-based models, however, is hindered by the lack of datasets with per-pixel part annotations. This is partly due to the difficulty and high cost of annotating object parts, so it has rarely been done except for humans (where there exists a large literature on part-based models). To help address this problem, we propose PartImageNet, a large, high-quality dataset with part segmentation annotations. It consists of $158$ classes from ImageNet with approximately $24,000$ images. PartImageNet is unique because it offers part-level annotations on a general set of classes, including non-rigid, articulated objects, while being an order of magnitude larger than existing part datasets (excluding datasets of humans). It can be utilized for many vision tasks, including Object Segmentation, Semantic Part Segmentation, Few-shot Learning and Part Discovery. We conduct comprehensive experiments that study these tasks and set up a set of baselines. The dataset and scripts are released at https://github.com/TACJu/PartImageNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,297
1903.05315
Optimality of Maximum Likelihood for Log-Concave Density Estimation and Bounded Convex Regression
In this paper, we study two problems: (1) estimation of a $d$-dimensional log-concave distribution and (2) bounded multivariate convex regression with random design with an underlying log-concave density or a compactly supported distribution with a continuous density. First, we show that for all $d \ge 4$ the maximum likelihood estimators of both problems achieve an optimal risk of $\Theta_d(n^{-2/(d+1)})$ (up to a logarithmic factor) in terms of squared Hellinger distance and $L_2$ squared distance, respectively. Previously, the optimality of both these estimators was known only for $d\le 3$. We also prove that the $\epsilon$-entropy numbers of the two aforementioned families are equal up to logarithmic factors. We complement these results by proving a sharp bound $\Theta_d(n^{-2/(d+4)})$ on the minimax rate (up to logarithmic factors) with respect to the total variation distance. Finally, we prove that estimating a log-concave density - even a uniform distribution on a convex set - up to a fixed accuracy requires the number of samples \emph{at least} exponential in the dimension. We do that by improving the dimensional constant in the best known lower bound for the minimax rate from $2^{-d}\cdot n^{-2/(d+1)}$ to $c\cdot n^{-2/(d+1)}$ (when $d\geq 2$).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
124,142
2408.02676
On Biases in a UK Biobank-based Retinal Image Classification Model
Recent work has uncovered alarming disparities in the performance of machine learning models in healthcare. In this study, we explore whether such disparities are present in the UK Biobank fundus retinal images by training and evaluating a disease classification model on these images. We assess possible disparities across various population groups and find substantial differences despite strong overall performance of the model. In particular, we discover unfair performance for certain assessment centres, which is surprising given the rigorous data standardisation protocol. We compare how these differences emerge and apply a range of existing bias mitigation methods to each one. A key insight is that each disparity has unique properties and responds differently to the mitigation methods. We also find that these methods are largely unable to enhance fairness, highlighting the need for better bias mitigation methods tailored to the specific type of bias.
false
false
false
false
true
false
true
false
false
false
false
true
false
true
false
false
false
false
478,713
1611.03204
Top-k Spatial-keyword Publish/Subscribe Over Sliding Window
With the prevalence of social media and GPS-enabled devices, a massive amount of geo-textual data has been generated in a stream fashion, leading to a variety of applications such as location-based recommendation and information dissemination. In this paper, we investigate a novel real-time top-k monitoring problem over a sliding window of streaming data; that is, we continuously maintain the top-k most relevant geo-textual messages (e.g., geo-tagged tweets) for a large number of spatial-keyword subscriptions (e.g., registered users interested in local events) simultaneously. To provide the most recent information under controllable memory cost, a sliding window model is employed on the streaming geo-textual data. To the best of our knowledge, this is the first work to study top-k spatial-keyword publish/subscribe over a sliding window. A novel centralized system, called Skype (Top-k Spatial-keyword Publish/Subscribe), is proposed in this paper. In Skype, to continuously maintain top-k results for massive subscriptions, we devise a novel indexing structure over subscriptions such that each incoming message can be delivered immediately on its arrival. To reduce the expensive top-k re-evaluation cost triggered by message expiration, we develop a novel cost-based k-skyband technique that limits the number of re-evaluations in a cost-effective way. Extensive experiments verify the efficiency and effectiveness of our proposed techniques. Furthermore, to support better scalability and higher throughput, we propose a distributed version of Skype, namely DSkype, on top of Storm, a popular distributed stream processing system. With the help of fine-tuned subscription/message distribution mechanisms, DSkype achieves orders-of-magnitude speed-up over its centralized version.
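To make the sliding-window top-k maintenance problem concrete, here is a toy sketch of a single subscription's state under a count-based window; the names and the brute-force expiry handling are illustrative only, since Skype's contribution is precisely the indexing and cost-based k-skyband machinery that avoids this naive re-evaluation.

```python
import heapq
import itertools

# A toy, per-subscription top-k buffer over a count-based sliding window.
class TopKSubscription:
    def __init__(self, k: int, window: int):
        self.k, self.window = k, window
        self.buffer = []  # (arrival_no, score, message) for live messages
        self.counter = itertools.count()

    def on_message(self, message, score: float):
        self.buffer.append((next(self.counter), score, message))
        # Drop messages that slid out of the window; the expensive top-k
        # re-evaluation this triggers is what the k-skyband technique limits.
        horizon = self.buffer[-1][0] - self.window
        self.buffer = [m for m in self.buffer if m[0] > horizon]
        return heapq.nlargest(self.k, self.buffer, key=lambda m: m[1])

sub = TopKSubscription(k=2, window=100)
for i, msg in enumerate(["tweet-a", "tweet-b", "tweet-c"]):
    top = sub.on_message(msg, score=float(i % 3))
```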
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
63,666
1607.05387
Generating Images Part by Part with Composite Generative Adversarial Networks
Image generation remains a fundamental problem in artificial intelligence in general and deep learning in particular. The generative adversarial network (GAN) has been successful in generating high-quality samples of natural images. We propose a model, called the composite generative adversarial network, that reveals the complex structure of images with multiple generators, each of which generates some part of the image. Those parts are combined by an alpha blending process to create a single new image. It can generate, for example, background and face sequentially with two generators, after training on a face dataset. Training was done in an unsupervised way, without any labels about what each generator should generate. Empirically, we found that this generative model shows promise for learning such compositional structure.
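The compositing step described above is plain alpha blending; a minimal sketch follows (array shapes are illustrative assumptions):

```python
import numpy as np

# Combine two generators' outputs into one image with a per-pixel mask.
def alpha_blend(background: np.ndarray, foreground: np.ndarray,
                alpha: np.ndarray) -> np.ndarray:
    """All arrays are (H, W, C); alpha in [0, 1] acts as a soft mask."""
    return alpha * foreground + (1.0 - alpha) * background

bg = np.random.rand(64, 64, 3)    # e.g., background generator output
fg = np.random.rand(64, 64, 3)    # e.g., face generator output
mask = np.random.rand(64, 64, 1)  # per-pixel blending weights
composite = alpha_blend(bg, fg, mask)
```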
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
58,744
2405.17034
FUGNN: Harmonizing Fairness and Utility in Graph Neural Networks
Fairness-aware Graph Neural Networks (GNNs) often face a challenging trade-off, where prioritizing fairness may require compromising utility. In this work, we re-examine fairness through the lens of spectral graph theory, aiming to reconcile fairness and utility within the framework of spectral graph learning. We explore the correlation between sensitive features and spectrum in GNNs, using theoretical analysis to delineate the similarity between original sensitive features and those after convolution under different spectra. Our analysis reveals a reduction in the impact of similarity when the eigenvectors associated with the largest-magnitude eigenvalue exhibit directional similarity. Based on these theoretical insights, we propose FUGNN, a novel spectral graph learning approach that harmonizes the conflict between fairness and utility. FUGNN ensures algorithmic fairness and utility by truncating the spectrum and optimizing eigenvector distribution during the encoding process. The fairness-aware eigenvector selection reduces the impact of convolution on sensitive features while concurrently minimizing the sacrifice of utility. FUGNN further optimizes the distribution of eigenvectors through a transformer architecture. By incorporating the optimized spectrum into the graph convolution network, FUGNN effectively learns node representations. Experiments on six real-world datasets demonstrate the superiority of FUGNN over baseline methods. The code is available at https://github.com/yushuowiki/FUGNN.
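As a rough illustration of what truncating the spectrum in graph convolution can look like, here is a generic sketch that keeps only the k smoothest eigenvectors of the normalized Laplacian and projects node features onto them. FUGNN's actual fairness-aware eigenvector selection and transformer-based re-weighting are more involved; this is only the generic spectral scaffolding, with all names assumed.

```python
import numpy as np

# Generic spectrum truncation: filter node features through the k
# lowest-frequency eigenvectors of the symmetric normalized Laplacian.
def truncated_spectral_filter(adj: np.ndarray, x: np.ndarray, k: int):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    basis = vecs[:, :k]               # k smoothest eigenvectors
    return basis @ (basis.T @ x)      # project features onto that subspace

adj = np.random.rand(10, 10); adj = (adj + adj.T) / 2.0
out = truncated_spectral_filter(adj, np.random.rand(10, 5), k=4)
```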
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
457,727
1109.5951
An Approximation of the Universal Intelligence Measure
The Universal Intelligence Measure is a recently proposed formal definition of intelligence. It is mathematically specified, extremely general, and captures the essence of many informal definitions of intelligence. It is based on Hutter's Universal Artificial Intelligence theory, an extension of Ray Solomonoff's pioneering work on universal induction. Since the Universal Intelligence Measure is only asymptotically computable, building a practical intelligence test from it is not straightforward. This paper studies the practical issues involved in developing a real-world UIM-based performance metric. Based on our investigation, we develop a prototype implementation which we use to evaluate a number of different artificial agents.
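For reference, the quantity being approximated is usually written, in Legg and Hutter's formulation, as

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu},
```

where $E$ is the class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward of agent $\pi$ in $\mu$. Since $K$ is uncomputable, any practical test can only approximate this sum, which is precisely the gap a UIM-based performance metric has to bridge.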
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
12,356
2105.03420
Compound Arbitrarily Varying Channels
We propose a communication model, that we call compound arbitrarily varying channels (CAVC), which unifies and generalizes compound channels and arbitrarily varying channels (AVC). A CAVC can be viewed as a noisy channel with a fixed, but unknown, compound-state and an AVC-state which may vary with every channel use. The AVC-state is controlled by an adversary who is aware of the compound-state. We study three problems in this setting: 'communication', 'communication and compound-state identification', and 'communication or compound-state identification'. For these problems, we study conditions for feasibility and capacity under deterministic coding and random coding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
234,147
2110.08678
Improving Transformers with Probabilistic Attention Keys
Multi-head attention is a driving force behind state-of-the-art transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that for many applications, those attention heads learn redundant embeddings, and most of them can be removed without degrading the performance of the model. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus on different parts of the input sequence efficiently. Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute while achieving comparable or better accuracy across tasks. Transformer-MGK can also be easily extended for use with linear attention. We empirically demonstrate the advantage of Transformer-MGK in a range of practical applications, including language modeling and tasks that involve very long sequences. On the Wikitext-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads attain comparable or better performance than the baseline transformers with 8 heads.
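A minimal sketch of the mixture-of-keys idea, under simplifying assumptions (isotropic Gaussians with shared variance, a single head, fixed mixture weights): each key position carries M Gaussian key means, and a query's attention score against that position is its log-likelihood under the mixture. This is illustrative, not the paper's exact parameterization.

```python
import torch

# Attention where each position j holds M Gaussian key means; the score of
# query i against j is the log-likelihood of q_i under that mixture.
def mgk_attention(q, keys, values, log_pi, sigma: float = 1.0):
    """q: (T, D); keys: (T, M, D); values: (T, D); log_pi: (M,)."""
    d2 = ((q[:, None, None, :] - keys[None, :, :, :]) ** 2).sum(-1)  # (T, T, M)
    scores = torch.logsumexp(log_pi - d2 / (2.0 * sigma ** 2), dim=-1)
    attn = torch.softmax(scores, dim=-1)   # (T, T)
    return attn @ values                   # (T, D)

T, M, D = 5, 2, 8
out = mgk_attention(torch.randn(T, D), torch.randn(T, M, D),
                    torch.randn(T, D), torch.log(torch.full((M,), 1.0 / M)))
```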
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
261,507
2401.02899
Model predictive altitude and velocity control in ergodic potential field directed multi-UAV search
This research addresses the challenge of executing multi-UAV survey missions over diverse terrains characterized by varying elevations. The approach integrates an advanced two-dimensional ergodic search technique with model predictive control of UAV altitude and velocity. Optimization of altitude and velocity is performed along anticipated UAV ground routes, considering multiple objectives and constraints. This yields a flight regimen tailored to the terrain as well as to the motion and sensing characteristics of the UAVs. The proposed UAV motion control strategy is assessed through simulations of realistic search missions over actual terrain models. Results demonstrate the successful integration of model predictive altitude and velocity control with a two-dimensional potential-field-guided ergodic search. Adjusting UAV altitudes to near-ideal levels facilitates full utilization of sensing ranges, thereby enhancing the effectiveness of the search. Furthermore, the control algorithm is capable of real-time computation, supporting its practical application in real-world scenarios.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
419,880
2005.12061
Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates
We propose a new methodology to design first-order methods for unconstrained strongly convex problems. Specifically, instead of tackling the original objective directly, we construct a shifted objective function that has the same minimizer as the original objective and encodes both the smoothness and strong convexity of the original objective in an interpolation condition. We then propose an algorithmic template for tackling the shifted objective, which can exploit such a condition. Following this template, we derive several new accelerated schemes for problems that are equipped with various first-order oracles and show that the interpolation condition allows us to vastly simplify and tighten the analysis of the derived methods. In particular, all the derived methods have faster worst-case convergence rates than their existing counterparts. Experiments on machine learning tasks are conducted to evaluate the new methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
178,630
2207.00213
Quick Relaxation in Collective Motion
We establish sufficient conditions for the quick relaxation to kinetic equilibrium in the classic Vicsek-Cucker-Smale model of bird flocking. The convergence time is polynomial in the number of birds as long as the number of flocks remains bounded. This new result relies on two key ingredients: exploiting the convex geometry of embedded averaging systems; and deriving new bounds on the s-energy of disconnected agreement systems. We also apply our techniques to bound the relaxation time of certain pattern-formation robotic systems investigated by Sugihara and Suzuki.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
305,684
1707.03237
Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations
Deep learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function, or the Dice loss function have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.
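The Generalized Dice loss referred to above weights each class by the inverse of its squared reference volume, so rare labels contribute as much as common ones. A sketch consistent with that definition (dense arrays, one-hot labels; the epsilon terms are a numerical-stability assumption):

```python
import numpy as np

# Generalized Dice loss: per-class weights are inverse squared label volumes.
def generalized_dice_loss(probs: np.ndarray, labels: np.ndarray,
                          eps: float = 1e-6) -> float:
    """probs, labels: (N, L) arrays over N voxels and L classes."""
    w = 1.0 / (labels.sum(axis=0) ** 2 + eps)
    intersect = (w * (probs * labels).sum(axis=0)).sum()
    union = (w * (probs + labels).sum(axis=0)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

probs = np.random.rand(1000, 3)                    # softmax outputs
labels = np.eye(3)[np.random.randint(0, 3, 1000)]  # one-hot ground truth
print(generalized_dice_loss(probs, labels))
```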
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
76,828
2304.12423
Composite Biomarker Image for Advanced Visualization in Histopathology
Immunohistochemistry (IHC) biomarkers are essential tools for reliable cancer diagnosis and subtyping. They require cross-staining comparison among Whole Slide Images (WSIs) of IHCs and hematoxylin and eosin (H&E) slides. Currently, pathologists examine the visually co-localized areas across IHC and H&E glass slides for a final diagnosis, which is a tedious and challenging task. Moreover, visually inspecting different IHC slides back and forth to analyze local co-expressions is inherently subjective and prone to error, even when carried out by experienced pathologists. Relying on digital pathology, we propose the Composite Biomarker Image (CBI) in this work. A CBI is a single image that can be composed from different filtered IHC biomarker images for better visualization. We present a CBI image produced in two steps by the proposed solution for better visualization and hence a more efficient clinical workflow. In the first step, IHC biomarker images are aligned with the H&E images using one coordinate system and orientation. In the second step, the positive or negative IHC regions from each biomarker image (based on the pathologists' recommendations) are filtered and combined into one image using a fuzzy inference system. For evaluation, the resulting CBI images from the proposed system were assessed qualitatively by expert pathologists. The CBI concept helps pathologists identify the suspected target tissues more easily, which can then be further assessed by examining the actual WSIs at the same suspected regions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
360,200
2404.02540
CSEPrompts: A Benchmark of Introductory Computer Science Prompts
Recent advances in AI, machine learning, and NLP have led to the development of a new generation of Large Language Models (LLMs) that are trained on massive amounts of data and often have trillions of parameters. Commercial applications (e.g., ChatGPT) have made this technology available to the general public, thus making it possible to use LLMs to produce high-quality texts for academic and professional purposes. Schools and universities are aware of the increasing use of AI-generated content by students and they have been researching the impact of this new technology and its potential misuse. Educational programs in Computer Science (CS) and related fields are particularly affected because LLMs are also capable of generating programming code in various programming languages. To help understand the potential impact of publicly available LLMs in CS education, we introduce CSEPrompts, a framework with hundreds of programming exercise prompts and multiple-choice questions retrieved from introductory CS and programming courses. We also provide experimental results on CSEPrompts to evaluate the performance of several LLMs with respect to generating Python code and answering basic computer science and programming questions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
443,903
2402.17804
Predicting machine failures from multivariate time series: an industrial case study
Non-neural Machine Learning (ML) and Deep Learning (DL) models are often used to predict system failures in the context of industrial maintenance. However, only a few studies jointly assess the effect of varying the amount of past data used to make a prediction and the extent of the forecast into the future. This study evaluates the impact of the size of the reading window and of the prediction window on the performance of models trained to forecast failures in three data sets concerning the operation of (1) an industrial wrapping machine working in discrete sessions, (2) an industrial blood refrigerator working continuously, and (3) a nitrogen generator working continuously. The problem is formulated as a binary classification task that assigns the positive label to the prediction window based on the probability of a failure occurring in that interval. Six algorithms (logistic regression, random forest, support vector machine, LSTM, ConvLSTM, and Transformers) are compared using multivariate telemetry time series. The results indicate that, in the considered scenarios, the dimension of the prediction window plays a crucial role, and they highlight the effectiveness of DL approaches at classifying data with diverse time-dependent patterns preceding a failure and the effectiveness of ML approaches at classifying similar and repetitive patterns preceding a failure.
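The reading-window / prediction-window framing above is easy to pin down in code: each training sample is a slice of past telemetry, and its label is positive if any failure occurs in the following prediction window. A minimal sketch (array layout and names are assumptions):

```python
import numpy as np

# Slice multivariate telemetry into (reading window, label) pairs.
def make_windows(series: np.ndarray, failures: np.ndarray,
                 read_w: int, pred_w: int):
    """series: (T, D) telemetry; failures: (T,) boolean failure flags."""
    X, y = [], []
    for t in range(read_w, len(series) - pred_w + 1):
        X.append(series[t - read_w:t])                # past readings
        y.append(int(failures[t:t + pred_w].any()))   # failure ahead?
    return np.stack(X), np.array(y)

X, y = make_windows(np.random.rand(500, 4),
                    np.random.rand(500) < 0.01, read_w=48, pred_w=24)
print(X.shape, y.mean())  # (429, 48, 4) and the positive-label rate
```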
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
433,158
2411.01408
HeightMapNet: Explicit Height Modeling for End-to-End HD Map Learning
Recent advances in high-definition (HD) map construction from surround-view images have highlighted their cost-effectiveness in deployment. However, prevailing techniques often fall short in accurately extracting and utilizing road features, as well as in the implementation of view transformation. In response, we introduce HeightMapNet, a novel framework that establishes a dynamic relationship between image features and road surface height distributions. By integrating height priors, our approach refines the accuracy of Bird's-Eye-View (BEV) features beyond conventional methods. HeightMapNet also introduces a foreground-background separation network that sharply distinguishes between critical road elements and extraneous background components, enabling precise focus on detailed road micro-features. Additionally, our method leverages multi-scale features within the BEV space, optimally utilizing spatial geometric information to boost model performance. HeightMapNet has shown exceptional results on the challenging nuScenes and Argoverse 2 datasets, outperforming several widely recognized approaches. The code will be available at \url{https://github.com/adasfag/HeightMapNet/}.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
505,055