Dataset schema:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
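Each flattened record below is an id, a title, an abstract, eighteen boolean category flags in the column order above, and an integer index. A minimal sketch (plain Python; the helper name and the dict-shaped row are illustrative assumptions, not part of the dataset's own API) of turning one record into a labeled example:

```python
# Column order of the eighteen boolean category flags, as listed in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_record(record: dict) -> dict:
    """Collect the category columns whose flag is True into a label list."""
    labels = [col for col in CATEGORY_COLUMNS if record.get(col)]
    return {
        "id": record["id"],
        "title": record["title"],
        "labels": labels,
        "index": record["__index_level_0__"],
    }

# Example values taken from the first record below (only True flags shown).
example = {
    "id": "2408.09646",
    "title": "Debiased Contrastive Representation Learning for Mitigating Dual Biases in Recommender Systems",
    "cs.AI": True, "cs.IR": True,
    "__index_level_0__": 481522,
}
print(decode_record(example)["labels"])  # → ['cs.AI', 'cs.IR']
```

Missing flags are treated as False via `record.get`, so a record only needs to carry its True columns.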
2408.09646
Debiased Contrastive Representation Learning for Mitigating Dual Biases in Recommender Systems
In recommender systems, popularity and conformity biases undermine recommender effectiveness by disproportionately favouring popular items, leading to their over-representation in recommendation lists and causing an unbalanced distribution of user-item historical data. We construct a causal graph to address both biases and describe the abstract data generation mechanism. Then, we use it as a guide to develop a novel Debiased Contrastive Learning framework for Mitigating Dual Biases, called DCLMDB. In DCLMDB, both popularity bias and conformity bias are handled in the model training process by contrastive learning to ensure that user choices and recommended items are not unduly influenced by conformity and popularity. Extensive experiments on two real-world datasets, Movielens-10M and Netflix, show that DCLMDB can effectively reduce the dual biases, as well as significantly enhance the accuracy and diversity of recommendations.
Categories: cs.AI, cs.IR (all other flags false); __index_level_0__: 481,522
1102.2837
Efficient Promotion Strategies in Hierarchical Organizations
The Peter principle has recently been investigated by means of an agent-based simulation, and its validity has been numerically corroborated. It has been confirmed that, under certain conditions, it can negatively affect the efficiency of a pyramidal organization adopting meritocratic promotions. It was also found that, in order to bypass these effects, alternative promotion strategies should be adopted, such as a random selection choice. In this paper, within the same line of research, we study promotion strategies in a more realistic hierarchical and modular organization, and we show the robustness of our previous results, extending their validity to a more general context. We also discuss why the adoption of these strategies could be useful for real organizations.
Categories: cs.SI (all other flags false); __index_level_0__: 9,179
2001.10888
Cross-Layer Scheduling and Beamforming in Smart-Grid Powered Cellular Networks With Heterogeneous Energy Coordination
User scheduling, beamforming and energy coordination are investigated in smart-grid powered cellular networks (SGPCNs), where the base stations are powered by a smart grid and natural renewable energy sources. Heterogeneous energy coordination is considered in SGPCNs, namely energy merchandizing with the smart grid and energy exchanging among the base stations. A long-term grid-energy expenditure minimization problem with proportional-rate constraints is formulated for SGPCNs. Since user scheduling is coupled with the beamforming vectors, the formulated problem is challenging to handle via standard convex optimization methods. In practice, the beamforming vectors need to be updated over each slot according to the channel variations. User scheduling needs to be updated over several slots (frame) since the frequent scheduling of user equipment can cause reliability issues. Therefore, the Lyapunov optimization method is used to decouple the problem. A practical two-scale algorithm is proposed to schedule users at each frame, and obtain the beamforming vectors and amount of exchanged natural renewable energy at each slot. We prove that the proposed two-scale algorithm can asymptotically achieve the optimal solutions via tuning a control parameter. Numerical results verify the performance of the proposed two-scale algorithm.
Categories: cs.IT (all other flags false); __index_level_0__: 161,928
1706.05374
Expected Policy Gradients
We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates across the action when estimating the gradient, instead of relying only on the action in the sampled trajectory. We establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. We also prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and, for the Gaussian case, with no computational overhead. Finally, we show that it is optimal in a certain sense to explore with a Gaussian policy such that the covariance is proportional to the exponential of the scaled Hessian of the critic with respect to the actions. We present empirical results confirming that this new form of exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic in four challenging MuJoCo domains.
Categories: cs.LG (all other flags false); __index_level_0__: 75,503
1103.5348
Precoding for Outage Probability Minimization on Block Fading Channels
The outage probability limit is a fundamental and achievable lower bound on the word error rate of coded communication systems affected by fading. This limit is mainly determined by two parameters: the diversity order and the coding gain. With linear precoding, full diversity on a block fading channel can be achieved without an error-correcting code. However, the effect of precoding on the coding gain is not well known, mainly due to the complicated expression of the outage probability. Using a geometric approach, this paper establishes simple upper bounds on the outage probability, the minimization of which yields precoding matrices that achieve very good performance. For discrete alphabets, it is shown that the combination of constellation expansion and precoding is sufficient to closely approach the minimum possible outage achieved by an i.i.d. Gaussian input distribution, thus essentially maximizing the coding gain.
Categories: cs.IT (all other flags false); __index_level_0__: 9,780
2202.00514
Analyzing Community-aware Centrality Measures Using The Linear Threshold Model
Targeting influential nodes in complex networks makes it possible to accelerate or hinder rumors, epidemics, and electric blackouts. Since communities are prevalent in real-world networks, community-aware centrality measures exploit this information to target influential nodes. Research shows that they compare favorably with classical measures that are agnostic about the community structure. Although the diffusion process is of prime importance, previous studies mainly consider the famous Susceptible-Infected-Recovered (SIR) epidemic propagation model. This work investigates the consistency of previous analyses using the popular Linear Threshold (LT) propagation model, which characterizes many spreading processes in real life. We perform a comparative analysis of seven influential community-aware centrality measures on thirteen real-world networks. Overall, results show that Community-based Mediator, Comm Centrality, and Modularity Vitality outperform the other measures. Moreover, Community-based Mediator is more effective on a tight budget (i.e., a small fraction of initially activated nodes), while Comm Centrality and Modularity Vitality perform better with a medium to high fraction of initially activated nodes.
Categories: cs.SI (all other flags false); __index_level_0__: 278,169
2205.09898
Let the Model Decide its Curriculum for Multitask Learning
Curriculum learning strategies in prior multi-task learning approaches arrange datasets in a difficulty hierarchy either based on human perception or by exhaustively searching for the optimal arrangement. However, human perception of difficulty may not always correlate well with machine interpretation, leading to poor performance, and exhaustive search is computationally expensive. Addressing these concerns, we propose two classes of techniques to arrange training instances into a learning curriculum based on difficulty scores computed via model-based approaches. The two classes, i.e., Dataset-level and Instance-level, differ in the granularity of arrangement. Through comprehensive experiments with 12 datasets, we show that instance-level and dataset-level techniques result in strong representations as they lead to an average performance improvement of 4.17% and 3.15% over their respective baselines. Furthermore, we find that most of this improvement comes from correctly answering the difficult instances, implying a greater efficacy of our techniques on difficult tasks.
Categories: cs.LG, cs.CL (all other flags false); __index_level_0__: 297,463
2011.07586
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has predominantly focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect information required for incorporating uncertainty into existing ML pipelines. This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
Categories: cs.HC, cs.LG, cs.CY (all other flags false); __index_level_0__: 206,604
2206.09010
LIMO: Latent Inceptionism for Targeted Molecule Generation
Generation of drug-like molecules with high binding affinity to target proteins remains a difficult and resource-intensive task in drug discovery. Existing approaches primarily employ reinforcement learning, Markov sampling, or deep generative models guided by Gaussian processes, which can be prohibitively slow when generating molecules with high binding affinity calculated by computationally-expensive physics-based methods. We present Latent Inceptionism on Molecules (LIMO), which significantly accelerates molecule generation with an inceptionism-like technique. LIMO employs a variational autoencoder-generated latent space and property prediction by two neural networks in sequence to enable faster gradient-based reverse-optimization of molecular properties. Comprehensive experiments show that LIMO performs competitively on benchmark tasks and markedly outperforms state-of-the-art techniques on the novel task of generating drug-like compounds with high binding affinity, reaching nanomolar range against two protein targets. We corroborate these docking-based results with more accurate molecular dynamics-based calculations of absolute binding free energy and show that one of our generated drug-like compounds has a predicted $K_D$ (a measure of binding affinity) of $6 \cdot 10^{-14}$ M against the human estrogen receptor, well beyond the affinities of typical early-stage drug candidates and most FDA-approved drugs to their respective targets. Code is available at https://github.com/Rose-STL-Lab/LIMO.
Categories: cs.LG (all other flags false); __index_level_0__: 303,391
2412.16925
Quantifying Public Response to COVID-19 Events: Introducing the Community Sentiment and Engagement Index
This study introduces the Community Sentiment and Engagement Index (CSEI), developed to capture nuanced public sentiment and engagement variations on social media, particularly in response to major events related to COVID-19. Constructed with diverse sentiment indicators, CSEI integrates features like engagement, daily post count, compound sentiment, fine-grain sentiments (fear, surprise, joy, sadness, anger, disgust, and neutral), readability, offensiveness, and domain diversity. Each component is systematically weighted through a multi-step Principal Component Analysis (PCA)-based framework, prioritizing features according to their variance contributions across temporal sentiment shifts. This approach dynamically adjusts component importance, enabling CSEI to precisely capture high-sensitivity shifts in public sentiment. The development of CSEI showed statistically significant correlations with its constituent features, underscoring internal consistency and sensitivity to specific sentiment dimensions. CSEI's responsiveness was validated using a dataset of 4,510,178 Reddit posts about COVID-19. The analysis focused on 15 major events, including the WHO's declaration of COVID-19 as a pandemic, the first reported cases of COVID-19 across different countries, national lockdowns, vaccine developments, and crucial public health measures. Cumulative changes in CSEI revealed prominent peaks and valleys aligned with these events, indicating significant patterns in public sentiment across different phases of the pandemic. Pearson correlation analysis further confirmed a statistically significant relationship between CSEI daily fluctuations and these events (p = 0.0428), highlighting the capacity of CSEI to infer and interpret shifts in public sentiment and engagement in response to major events related to COVID-19.
Categories: cs.SI, cs.AI, cs.LG, cs.CL, cs.CY (all other flags false); __index_level_0__: 519,751
2404.16548
Cross-Domain Spatial Matching for Camera and Radar Sensor Data Fusion in Autonomous Vehicle Perception System
In this paper, we propose a novel approach to address the problem of camera and radar sensor fusion for 3D object detection in autonomous vehicle perception systems. Our approach builds on recent advances in deep learning and leverages the strengths of both sensors to improve object detection performance. Precisely, we extract 2D features from camera images using a state-of-the-art deep learning architecture and then apply a novel Cross-Domain Spatial Matching (CDSM) transformation method to convert these features into 3D space. We then fuse them with extracted radar data using a complementary fusion strategy to produce a final 3D object representation. To demonstrate the effectiveness of our approach, we evaluate it on the NuScenes dataset. We compare our approach to both single-sensor performance and current state-of-the-art fusion methods. Our results show that the proposed approach achieves superior performance over single-sensor solutions and could directly compete with other top-level fusion methods.
Categories: cs.CV (all other flags false); __index_level_0__: 449,535
1401.5636
Causal Discovery in a Binary Exclusive-or Skew Acyclic Model: BExSAM
Discovering causal relations among observed variables in a given data set is a major objective in studies of statistics and artificial intelligence. Recently, some techniques to discover a unique causal model have been explored based on non-Gaussianity of the observed data distribution. However, most of these are limited to continuous data. In this paper, we present a novel causal model for binary data and propose an efficient new approach to deriving the unique causal model governing a given binary data set under skew distributions of external binary noises. Experimental evaluation shows excellent performance for both artificial and real world data sets.
Categories: cs.LG (all other flags false); __index_level_0__: 30,215
1806.02400
A Comparative Study on Unsupervised Domain Adaptation Approaches for Coffee Crop Mapping
In this work, we investigate the application of existing unsupervised domain adaptation (UDA) approaches to the task of transferring knowledge between crop regions having different coffee patterns. Given a geographical region with fully mapped coffee plantations, we observe that this knowledge can be used to train a classifier and to map a new county with no need for labeled samples from the target region. Experimental results show that transferring knowledge via some UDA strategies performs better than simply applying a classifier trained in one region to predict coffee crops in a new one. However, UDA methods may lead to negative transfer, which may indicate that the domains are so different that transferring knowledge is not appropriate. We also verify that normalization significantly affects some UDA methods; we observe a meaningful complementary contribution between coffee crop datasets; and visual inspection suggests the existence of a cluster of samples that are more likely to be drawn from a specific dataset.
Categories: cs.CV (all other flags false); __index_level_0__: 99,770
1909.08964
To Detect Irregular Trade Behaviors In Stock Market By Using Graph Based Ranking Methods
Detecting irregular trade behaviors in the stock market is an important problem in the machine learning field. These irregular trade behaviors are obviously illegal. To detect them, data scientists normally employ supervised learning techniques. In this paper, we employ three graph Laplacian based semi-supervised ranking methods to solve the irregular trade behavior detection problem. Experimental results show that the un-normalized and symmetric normalized graph Laplacian based semi-supervised ranking methods outperform the random walk Laplacian based semi-supervised ranking method.
Categories: cs.LG (all other flags false); __index_level_0__: 146,104
2405.16310
An Empirical Exploration of Trust Dynamics in LLM Supply Chains
With the widespread proliferation of AI systems, trust in AI is an important and timely topic to navigate. Researchers so far have largely employed a myopic view of this relationship. In particular, a limited number of relevant trustors (e.g., end-users) and trustees (i.e., AI systems) have been considered, and empirical explorations have remained in laboratory settings, potentially overlooking factors that impact human-AI relationships in the real world. In this paper, we argue for broadening the scope of studies addressing `trust in AI' by accounting for the complex and dynamic supply chains that AI systems result from. AI supply chains entail various technical artifacts that diverse individuals, organizations, and stakeholders interact with, in a variety of ways. We present insights from an in-situ, empirical study of LLM supply chains. Our work reveals additional types of trustors and trustees and new factors impacting their trust relationships. These relationships were found to be central to the development and adoption of LLMs, but they can also be the terrain for uncalibrated trust and reliance on untrustworthy LLMs. Based on these findings, we discuss the implications for research on `trust in AI'. We highlight new research opportunities and challenges concerning the appropriate study of inter-actor relationships across the supply chain and the development of calibrated trust and meaningful reliance behaviors. We also question the meaning of building trust in the LLM supply chain.
Categories: cs.HC, cs.AI (all other flags false); __index_level_0__: 457,357
2111.12489
Repeated-root Constacyclic Codes with Optimal Locality
A code is called a locally repairable code (LRC) if any code symbol is a function of a small fraction of the other code symbols. When a locally repairable code is employed in a distributed storage system, an erased symbol can be recovered by accessing only a small number of other symbols, hence alleviating the network resources required during the repair process. In this paper we consider repeated-root constacyclic codes, a generalization of cyclic codes, that are optimal with respect to a Singleton-like bound on minimum distance. An LRC with the structure of a constacyclic code can be encoded efficiently using any encoding algorithm for constacyclic codes in general. In this paper we obtain optimal LRCs among these repeated-root constacyclic codes. Several infinite classes of optimal LRCs over a fixed alphabet are found. Under the further assumption that the ambient space of the repeated-root constacyclic codes is a chain ring, we show that there is no other optimal LRC.
Categories: cs.IT (all other flags false); __index_level_0__: 267,981
1207.7144
Information and Estimation over Binomial and Negative Binomial Models
In recent years, a number of results have been developed which connect information measures and estimation measures under various models, including, predominantly, Gaussian and Poisson models. More recent results due to Taborda and Perez-Cruz relate the relative entropy to certain mismatched estimation errors in the context of binomial and negative binomial models, where, unlike in the case of Gaussian and Poisson models, the conditional mean estimates concern models with different parameters than those of the original model. In this note, a different set of results in simple forms are developed for binomial and negative binomial models, where the conditional mean estimates are produced through the original models. The new results are more consistent with existing results for Gaussian and Poisson models.
Categories: cs.IT (all other flags false); __index_level_0__: 17,829
1109.0687
Performance of distributed mechanisms for flow admission in wireless adhoc networks
Given a wireless network where some pairs of communication links interfere with each other, we study sufficient conditions for determining whether a given set of minimum bandwidth quality-of-service (QoS) requirements can be satisfied. We are especially interested in algorithms which have low communication overhead and low processing complexity. The interference in the network is modeled using a conflict graph whose vertices correspond to the communication links in the network. Two links are adjacent in this graph if and only if they interfere with each other due to being in the same vicinity and hence cannot be simultaneously active. The problem of scheduling the transmission of the various links is then essentially a fractional, weighted vertex coloring problem, for which upper bounds on the fractional chromatic number are sought using only localized information. We recall some distributed algorithms for this problem, and then assess their worst-case performance. Our results on this fundamental problem imply that for some well known classes of networks and interference models, the performance of these distributed algorithms is within a bounded factor away from that of an optimal, centralized algorithm. The performance bounds are simple expressions in terms of graph invariants. It is seen that the induced star number of a network plays an important role in the design and performance of such networks.
Categories: cs.IT, Other (all other flags false); __index_level_0__: 11,957
2102.10130
Image Classification using CNN for Traffic Signs in Pakistan
The autonomous automotive industry is one of the largest and most conventional projects worldwide, with many technology companies effectively designing and orienting their products towards automobile safety and accuracy. These products perform very well on the roads of developed countries but can fail within minutes in an underdeveloped country, because the two environments differ greatly. The following study proposes to train these artificial intelligence models on the environment of an underdeveloped country such as Pakistan. The proposed approach uses convolutional neural networks for image classification. The model was pre-trained on the German traffic signs dataset and then fine-tuned on Pakistan's dataset. The experimental setup showed the best results and accuracy compared with the previously conducted experiments. In this work, to increase accuracy, more data was collected to increase the number of images in every class of the dataset. In the future, the small number of classes needs to be further increased, and more images of traffic signs need to be collected, to achieve higher training accuracy on the traffic signs of Pakistan's most used and popular roads, the motorway and national highway, whose traffic signs differ in color, size, and shape from common traffic signs.
Categories: cs.AI, cs.CV (all other flags false); __index_level_0__: 220,979
2411.12181
Enhancing Low Dose Computed Tomography Images Using Consistency Training Techniques
Diffusion models have a significant impact on a wide range of generative tasks, especially image inpainting and restoration. Despite improvements aimed at decreasing the number of function evaluations (NFE), the iterative results are still computationally expensive. Consistency models, a new family of generative models, enable single-step sampling of high-quality data without the need for adversarial training. In this paper, we introduce the beta noise distribution, which provides flexibility in adjusting noise levels. This is combined with a sinusoidal curriculum that enhances the learning of the trajectory between the noise distribution and the posterior distribution of interest, allowing High Noise Improved Consistency Training (HN-iCT) to be trained in a supervised fashion. Additionally, the High Noise Improved Consistency Training with Image Condition (HN-iCT-CN) architecture is introduced, which enables Low Dose images to be taken as a condition for extracting significant features by Weighted Attention Gates (WAG). Our results indicate that unconditional image generation using HN-iCT significantly outperforms basic CT and iCT training techniques with NFE=1 on the CIFAR10 and CelebA datasets. Moreover, our image-conditioned model demonstrates exceptional performance in enhancing low-dose (LD) CT scans.
Categories: cs.AI, cs.CV (all other flags false); __index_level_0__: 509,325
2405.12701
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
In the medical domain, numerous scenarios necessitate the long-form generation ability of large language models (LLMs). Specifically, when addressing patients' questions, it is essential that the model's response conveys factual claims, highlighting the need for an automated method to evaluate those claims. Thus, we introduce MedLFQA, a benchmark dataset reconstructed using long-form question-answering datasets related to the biomedical domain. We use MedLFQA to facilitate cost-effective automatic evaluation of factuality. We also propose OLAPH, a simple and novel framework that utilizes cost-effective and multifaceted automatic evaluation to construct a synthetic preference set and answer questions in our preferred manner. Our framework leads us to train LLMs step-by-step to reduce hallucinations and include crucial medical claims. We highlight that, even on evaluation metrics not used during training, LLMs trained with our OLAPH framework demonstrate significant performance improvements in factuality. Our findings reveal that a 7B LLM trained with our OLAPH framework can provide long answers comparable to medical experts' answers in terms of factuality. We believe that our work could shed light on gauging the long-text generation ability of LLMs in the medical domain. Our code and datasets are available.
Categories: cs.AI, cs.CL (all other flags false); __index_level_0__: 455,618
2311.03551
Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models
The lack of contextual information in text data can make the annotation process of text-based emotion classification datasets challenging. As a result, such datasets often contain labels that fail to consider all the relevant emotions in the vocabulary. This misalignment between text inputs and labels can degrade the performance of machine learning models trained on top of them. As re-annotating entire datasets is a costly and time-consuming task that cannot be done at scale, we propose to use the expressive capabilities of large language models to synthesize additional context for input text to increase its alignment with the annotated emotional labels. In this work, we propose a formal definition of textual context to motivate a prompting strategy to enhance such contextual information. We provide both human and empirical evaluation to demonstrate the efficacy of the enhanced context. Our method improves alignment between inputs and their human-annotated labels from both an empirical and human-evaluated standpoint.
Categories: cs.AI, cs.CL (all other flags false); __index_level_0__: 405,890
2405.01538
Multi-Space Alignments Towards Universal LiDAR Segmentation
A unified and versatile LiDAR segmentation model with strong robustness and generalizability is desirable for safe autonomous driving perception. This work presents M3Net, a one-of-a-kind framework for fulfilling multi-task, multi-dataset, multi-modality LiDAR segmentation in a universal manner using just a single set of parameters. To better exploit data volume and diversity, we first combine large-scale driving datasets acquired by different types of sensors from diverse scenes and then conduct alignments in three spaces, namely data, feature, and label spaces, during the training. As a result, M3Net is capable of taming heterogeneous data for training state-of-the-art LiDAR segmentation models. Extensive experiments on twelve LiDAR segmentation datasets verify our effectiveness. Notably, using a shared set of parameters, M3Net achieves 75.1%, 83.1%, and 72.4% mIoU scores, respectively, on the official benchmarks of SemanticKITTI, nuScenes, and Waymo Open.
Categories: cs.LG, cs.RO, cs.CV (all other flags false); __index_level_0__: 451,392
1902.02629
SAPSAM - Sparsely Annotated Pathological Sign Activation Maps - A novel approach to train Convolutional Neural Networks on lung CT scans using binary labels only
Chronic Pulmonary Aspergillosis (CPA) is a complex lung disease caused by infection with Aspergillus. Computed tomography (CT) images are frequently requested in patients with suspected and established disease, but the radiological signs on CT are difficult to quantify, making accurate follow-up challenging. We propose a novel method to train Convolutional Neural Networks using only regional labels on the presence of pathological signs, to not only detect CPA, but also spatially localize pathological signs. We use average intensity projections within different ranges of Hounsfield-unit (HU) values, transforming input 3D CT scans into 2D RGB-like images. CNN architectures are trained for hierarchical tasks, leading to precise activation maps of pathological patterns. Results on a cohort of 352 subjects demonstrate high classification accuracy, localization precision, and predictive power for 2-year survival. Such a tool opens the way to CPA patient stratification and quantitative follow-up of CPA pathological signs for patients under drug therapy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,918
1912.11176
Unsupervised Learning of Graph Hierarchical Abstractions with Differentiable Coarsening and Optimal Transport
Hierarchical abstractions are a methodology for solving large-scale graph problems in various disciplines. Coarsening is one such approach: it generates a pyramid of graphs whereby the one in the next level is a structural summary of the prior one. With a long history in scientific computing, many coarsening strategies were developed based on mathematically driven heuristics. Recently, there has been resurgent interest in deep learning in designing hierarchical methods learnable through differentiable parameterization. These approaches are paired with downstream tasks for supervised learning. In practice, however, supervised signals (e.g., labels) are scarce and are often laborious to obtain. In this work, we propose an unsupervised approach, coined OTCoarsening, with the use of optimal transport. Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored for a given set of graphs. We demonstrate that the proposed approach produces meaningful coarse graphs and yields competitive performance compared with supervised methods for graph classification and regression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
158,497
2204.07980
Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in DocRED
DocRED is a widely used dataset for document-level relation extraction. In the large-scale annotation, a \textit{recommend-revise} scheme is adopted to reduce the workload. Within this scheme, annotators are provided with candidate relation instances from distant supervision, and they then manually supplement and remove relational facts based on the recommendations. However, when comparing DocRED with a subset relabeled from scratch, we find that this scheme results in a considerable amount of false negative samples and an obvious bias towards popular entities and relations. Furthermore, we observe that the models trained on DocRED have low recall on our relabeled dataset and inherit the same bias in the training data. Through the analysis of annotators' behaviors, we figure out the underlying reason for the problems above: the scheme actually discourages annotators from supplementing adequate instances in the revision phase. We appeal to future research to take into consideration the issues with the recommend-revise scheme when designing new models and annotation schemes. The relabeled dataset is released at \url{https://github.com/AndrewZhe/Revisit-DocRED}, to serve as a more reliable test set of document RE models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
291,922
2301.00057
A Mapping of Assurance Techniques for Learning Enabled Autonomous Systems to the Systems Engineering Lifecycle
Learning enabled autonomous systems provide increased capabilities compared to traditional systems. However, the complexity and probabilistic nature of the underlying methods enabling such capabilities present challenges for current systems engineering processes for assurance, and test, evaluation, verification, and validation (TEVV). This paper provides a preliminary attempt to map technical approaches recently developed in the literature on assurance and TEVV of learning enabled autonomous systems (LEAS) to a traditional systems engineering v-model. This mapping categorizes such techniques into three main approaches: development, acquisition, and sustainment. We review the latest techniques to develop safe, reliable, and resilient learning enabled autonomous systems, without recommending radical and impractical changes to existing systems engineering processes. By performing this mapping, we seek to assist acquisition professionals by (i) informing comprehensive test and evaluation planning, and (ii) objectively communicating risk to leaders.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
338,781
2404.00099
Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes
We study the evaluation of a policy under best- and worst-case perturbations to a Markov decision process (MDP), using transition observations from the original MDP, whether they are generated under the same or a different policy. This is an important problem when there is the possibility of a shift between historical and future environments, $\textit{e.g.}$ due to unmeasured confounding, distributional shift, or an adversarial environment. We propose a perturbation model that allows changes in the transition kernel densities up to a given multiplicative factor or its reciprocal, extending the classic marginal sensitivity model (MSM) for single time-step decision-making to infinite-horizon RL. We characterize the sharp bounds on policy value under this model $\unicode{x2013}$ $\textit{i.e.}$, the tightest possible bounds based on transition observations from the original MDP $\unicode{x2013}$ and we study the estimation of these bounds from such transition observations. We develop an estimator with several important guarantees: it is semiparametrically efficient, and remains so even when certain necessary nuisance functions, such as worst-case Q-functions, are estimated at slow, nonparametric rates. Our estimator is also asymptotically normal, enabling straightforward statistical inference using Wald confidence intervals. Moreover, when certain nuisances are estimated inconsistently, the estimator still provides valid, albeit possibly not sharp, bounds on the policy value. We validate these properties in numerical simulations. The combination of accounting for environment shifts from train to test (robustness), being insensitive to nuisance-function estimation (orthogonality), and addressing the challenge of learning from finite samples (inference) together leads to credible and reliable policy evaluation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
442,762
2305.09057
Self-Supervised Pretraining on Paired Sequences of fMRI Data for Transfer Learning to Brain Decoding Tasks
In this work we introduce a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
364,494
1611.08229
Fast Orthonormal Sparsifying Transforms Based on Householder Reflectors
Dictionary learning is the task of determining a data-dependent transform that yields a sparse representation of some observed data. The dictionary learning problem is non-convex, and usually solved via computationally complex iterative algorithms. Furthermore, the resulting transforms obtained generally lack structure that permits their fast application to data. To address this issue, this paper develops a framework for learning orthonormal dictionaries which are built from products of a few Householder reflectors. Two algorithms are proposed to learn the reflector coefficients: one that considers a sequential update of the reflectors and one with a simultaneous update of all reflectors that imposes an additional internal orthogonal constraint. The proposed methods have low computational complexity and are shown to converge to local minimum points which can be described in terms of the spectral properties of the matrices involved. The resulting dictionaries balance between the computational complexity and the quality of the sparse representations by controlling the number of Householder reflectors in their product. Simulations of the proposed algorithms are shown in the image processing setting where well-known fast transforms are available for comparisons. The proposed algorithms have favorable reconstruction error and the advantage of a fast implementation relative to the classical, unstructured, dictionaries.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
64,467
2211.01839
HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks
Implicit neural representations (INRs) are a rapidly growing research field, which provides alternative ways to represent multimedia signals. Recent applications of INRs include image super-resolution, compression of high-dimensional signals, or 3D rendering. However, these solutions usually focus on visual data, and adapting them to the audio domain is not trivial. Moreover, it requires a separately trained model for every data sample. To address this limitation, we propose HyperSound, a meta-learning method leveraging hypernetworks to produce INRs for audio signals unseen at training time. We show that our approach can reconstruct sound waves with quality comparable to other state-of-the-art models.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
328,380
2401.11491
BA-LINS: A Frame-to-Frame Bundle Adjustment for LiDAR-Inertial Navigation
Bundle Adjustment (BA) has been proven to improve the accuracy of the LiDAR mapping. However, the BA method has not yet been properly employed in a dead-reckoning navigation system. In this paper, we present a frame-to-frame (F2F) BA for LiDAR-inertial navigation, named BA-LINS. Based on the direct F2F point-cloud association, the same-plane points are associated among the LiDAR keyframes. Hence, the F2F plane-point BA measurement can be constructed using the same-plane points. The LiDAR BA and the inertial measurement unit (IMU)-preintegration measurements are tightly integrated under the framework of factor graph optimization. An effective adaptive covariance estimation algorithm for LiDAR BA measurements is proposed to further improve the accuracy. We conduct exhaustive real-world experiments on public and private datasets to examine the proposed BA-LINS. The results demonstrate that BA-LINS yields superior accuracy to state-of-the-art methods. Compared to the baseline system FF-LINS, the absolute translation accuracy and state-estimation efficiency of BA-LINS are improved by 29.5% and 28.7% on the private dataset, respectively. Besides, the ablation experiment results exhibit that the proposed adaptive covariance estimation algorithm can notably improve the accuracy and robustness of BA-LINS.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
423,026
2401.02274
ShapeAug: Occlusion Augmentation for Event Camera Data
Recently, Dynamic Vision Sensors (DVSs) sparked a lot of interest due to their inherent advantages over conventional RGB cameras. These advantages include a low latency, a high dynamic range and a low energy consumption. Nevertheless, the processing of DVS data using Deep Learning (DL) methods remains a challenge, particularly since the availability of event training data is still limited. This leads to a need for event data augmentation techniques in order to improve accuracy as well as to avoid over-fitting on the training data. Another challenge especially in real world automotive applications is occlusion, meaning one object is hindering the view onto the object behind it. In this paper, we present a novel event data augmentation approach, which addresses this problem by introducing synthetic events for randomly moving objects in a scene. We test our method on multiple DVS classification datasets, resulting in a relative improvement of up to 6.5 % in top1-accuracy. Moreover, we apply our augmentation technique on the real world Gen1 Automotive Event Dataset for object detection, where we especially improve the detection of pedestrians by up to 5 %.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
419,655
2010.01494
PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision
User modeling is critical for many personalized web services. Many existing methods model users based on their behaviors and the labeled data of target tasks. However, these methods cannot exploit useful information in unlabeled user behavior data, and their performance may not be optimal when labeled data is scarce. Motivated by pre-trained language models which are pre-trained on large-scale unlabeled corpus to empower many downstream tasks, in this paper we propose to pre-train user models from large-scale unlabeled user behaviors data. We propose two self-supervision tasks for user model pre-training. The first one is masked behavior prediction, which can model the relatedness between historical behaviors. The second one is next $K$ behavior prediction, which can model the relatedness between past and future behaviors. The pre-trained user models are finetuned in downstream tasks to learn task-specific user representations. Experimental results on two real-world datasets validate the effectiveness of our proposed user model pre-training method.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
198,675
2403.00189
The Road to Next-Generation Multiple Access: A 50-Year Tutorial Review
The evolution of wireless communications has been significantly influenced by remarkable advancements in multiple access (MA) technologies over the past five decades, shaping the landscape of modern connectivity. Within this context, a comprehensive tutorial review is presented, focusing on representative MA techniques developed over the past 50 years. The following areas are explored: i) The foundational principles and information-theoretic capacity limits of power-domain non-orthogonal multiple access (NOMA) are characterized, along with its extension to multiple-input multiple-output (MIMO)-NOMA. ii) Several MA transmission schemes exploiting the spatial domain are investigated, encompassing both conventional space-division multiple access (SDMA)/MIMO-NOMA systems and near-field MA systems utilizing spherical-wave propagation models. iii) The application of NOMA to integrated sensing and communications (ISAC) systems is studied. This includes an introduction to typical NOMA-based downlink/uplink ISAC frameworks, followed by an evaluation of their performance limits using a mutual information (MI)-based analytical framework. iv) Major issues and research opportunities associated with the integration of MA with other emerging technologies are identified to facilitate MA in next-generation networks, i.e., next-generation multiple access (NGMA). Throughout the paper, promising directions are highlighted to inspire future research endeavors in the realm of MA and NGMA.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
433,886
cs/0003043
Automatic Classification of Text Databases through Query Probing
Many text databases on the web are "hidden" behind search interfaces, and their documents are only accessible through querying. Search engines typically ignore the contents of such search-only databases. Recently, Yahoo-like directories have started to manually organize these databases into categories that users can browse to find these valuable resources. We propose a novel strategy to automate the classification of search-only text databases. Our technique starts by training a rule-based document classifier, and then uses the classifier's rules to generate probing queries. The queries are sent to the text databases, which are then classified based on the number of matches that they produce for each query. We report some initial exploratory experiments that show that our approach is promising to automatically characterize the contents of text databases accessible on the web.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
537,056
2012.11369
Flexible, Non-parametric Modeling Using Regularized Neural Networks
Non-parametric, additive models are able to capture complex data dependencies in a flexible, yet interpretable way. However, choosing the format of the additive components often requires non-trivial data exploration. Here, as an alternative, we propose PrAda-net, a one-hidden-layer neural network, trained with proximal gradient descent and adaptive lasso. PrAda-net automatically adjusts the size and architecture of the neural network to reflect the complexity and structure of the data. The compact network obtained by PrAda-net can be translated to additive model components, making it suitable for non-parametric statistical modelling with automatic model selection. We demonstrate PrAda-net on simulated data, where we compare the test error performance, variable importance and variable subset identification properties of PrAda-net to other lasso-based regularization approaches for neural networks. We also apply PrAda-net to the massive U.K. black smoke data set, to demonstrate how PrAda-net can be used to model complex and heterogeneous data with spatial and temporal components. In contrast to classical, statistical non-parametric approaches, PrAda-net requires no preliminary modeling to select the functional forms of the additive components, yet still results in an interpretable model representation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,620
2403.05564
Promoting Fair Vaccination Strategies Through Influence Maximization: A Case Study on COVID-19 Spread
The aftermath of the Covid-19 pandemic saw more severe outcomes for racial minority groups and economically-deprived communities. Such disparities can be explained by several factors, including unequal access to healthcare, as well as the inability of low income groups to reduce their mobility due to work or social obligations. Moreover, senior citizens were found to be more susceptible to severe symptoms, largely due to age-related health reasons. Adapting vaccine distribution strategies to consider a range of demographics is therefore essential to address these disparities. In this study, we propose a novel approach that utilizes influence maximization (IM) on mobility networks to develop vaccination strategies which incorporate demographic fairness. By considering factors such as race, social status, age, and associated risk factors, we aim to optimize vaccine distribution to achieve various fairness definitions for one or more protected attributes at a time. Through extensive experiments conducted on Covid-19 spread in three major metropolitan areas across the United States, we demonstrate the effectiveness of our proposed approach in reducing disease transmission and promoting fairness in vaccination distribution.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
436,060
2108.11513
Learning Effective and Efficient Embedding via an Adaptively-Masked Twins-based Layer
Embedding learning for categorical features is crucial for the deep learning-based recommendation models (DLRMs). Each feature value is mapped to an embedding vector via an embedding learning process. Conventional methods configure a fixed and uniform embedding size to all feature values from the same feature field. However, such a configuration is not only sub-optimal for embedding learning but also memory costly. Existing methods that attempt to resolve these problems, either rule-based or neural architecture search (NAS)-based, need extensive efforts on the human design or network training. They are also not flexible in embedding size selection or in warm-start-based applications. In this paper, we propose a novel and effective embedding size selection scheme. Specifically, we design an Adaptively-Masked Twins-based Layer (AMTL) behind the standard embedding layer. AMTL generates a mask vector to mask the undesired dimensions for each embedding vector. The mask vector brings flexibility in selecting the dimensions and the proposed layer can be easily added to either untrained or trained DLRMs. Extensive experimental evaluations show that the proposed scheme outperforms competitive baselines on all the benchmark tasks, and is also memory-efficient, saving 60\% memory usage without compromising any performance metrics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
252,196
2405.05363
LOC-ZSON: Language-driven Object-Centric Zero-Shot Object Retrieval and Navigation
In this paper, we present LOC-ZSON, a novel Language-driven Object-Centric image representation for object navigation task within complex scenes. We propose an object-centric image representation and corresponding losses for visual-language model (VLM) fine-tuning, which can handle complex object-level queries. In addition, we design a novel LLM-based augmentation and prompt templates for stability during training and zero-shot inference. We implement our method on Astro robot and deploy it in both simulated and real-world environments for zero-shot object navigation. We show that our proposed method can achieve an improvement of 1.38 - 13.38% in terms of text-to-image recall on different benchmark settings for the retrieval task. For object navigation, we show the benefit of our approach in simulation and real world, showing 5% and 16.67% improvement in terms of navigation success rate, respectively.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
452,889
2007.07591
Learning Invariances for Interpretability using Supervised VAE
We propose to learn model invariances as a means of interpreting a model. This is motivated by a reverse engineering principle. If we understand a problem, we may introduce inductive biases in our model in the form of invariances. Conversely, when interpreting a complex supervised model, we can study its invariances to understand how that model solves a problem. To this end we propose a supervised form of variational auto-encoders (VAEs). Crucially, only a subset of the dimensions in the latent space contributes to the supervised task, allowing the remaining dimensions to act as nuisance parameters. By sampling solely the nuisance dimensions, we are able to generate samples that have undergone transformations that leave the classification unchanged, revealing the invariances of the model. Our experimental results show the capability of our proposed model both in terms of classification, and generation of invariantly transformed samples. Finally, we show how, by combining our model with feature attribution methods, it is possible to reach a more fine-grained understanding of the model's decision process.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
187,381
2003.02306
Reduced Dilation-Erosion Perceptron for Binary Classification
Dilation and erosion are two elementary operations from mathematical morphology, a non-linear lattice computing methodology widely used for image processing and analysis. The dilation-erosion perceptron (DEP) is a morphological neural network obtained by a convex combination of a dilation and an erosion followed by the application of a hard-limiter function for binary classification tasks. A DEP classifier can be trained using a convex-concave procedure along with the minimization of the hinge loss function. As a lattice computing model, the DEP classifier assumes the feature and class spaces are partially ordered sets. In many practical situations, however, there is no natural ordering for the feature patterns. Using concepts from multi-valued mathematical morphology, this paper introduces the reduced dilation-erosion (r-DEP) classifier. An r-DEP classifier is obtained by endowing the feature space with an appropriate reduced ordering. Such reduced ordering can be determined using two approaches: One based on an ensemble of support vector classifiers (SVCs) with different kernels and the other based on a bagging of similar SVCs trained using different samples of the training set. Using several binary classification datasets from the OpenML repository, the ensemble and bagging r-DEP classifiers yielded, on average, higher balanced accuracy scores than the linear, polynomial, and radial basis function (RBF) SVCs as well as their ensemble and a bagging of RBF SVCs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
166,901
2206.06126
Robust Time Series Denoising with Learnable Wavelet Packet Transform
Signal denoising is a key preprocessing step for many applications, as the performance of a learning task is closely related to the quality of the input data. In this paper, we apply a signal processing based deep neural network architecture, a learnable extension of the wavelet packet transform. As main advantages, this model has few parameters, an intuitive initialization and strong learning capabilities. Moreover, we show that it is possible to easily modify the parameters of the model after the training step to tailor to different noise intensities. Two case studies are conducted to compare this model with the state of the art and commonly used denoising procedures. The first experiment uses standard signals to study denoising properties of the algorithms. The second experiment is a real application with the objective to remove audio background noises. We show that the learnable wavelet packet transform has the learning capabilities of deep learning methods while maintaining the robustness of standard signal processing approaches. More specifically, we demonstrate that our approach maintains excellent denoising performances on signal classes separate from those used during the training step. Moreover, the learnable wavelet packet transform was found to be robust when different noise intensities, noise varieties and artifacts are considered.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
302,262
2502.08106
PoGDiff: Product-of-Gaussians Diffusion Models for Imbalanced Text-to-Image Generation
Diffusion models have made significant advancements in recent years. However, their performance often deteriorates when trained or fine-tuned on imbalanced datasets. This degradation is largely due to the disproportionate representation of majority and minority data in image-text pairs. In this paper, we propose a general fine-tuning approach, dubbed PoGDiff, to address this challenge. Rather than directly minimizing the KL divergence between the predicted and ground-truth distributions, PoGDiff replaces the ground-truth distribution with a Product of Gaussians (PoG), which is constructed by combining the original ground-truth targets with the predicted distribution conditioned on a neighboring text embedding. Experiments on real-world datasets demonstrate that our method effectively addresses the imbalance problem in diffusion models, improving both generation accuracy and quality.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
532,892
2404.12292
Reducing Bias in Pre-trained Models by Tuning while Penalizing Change
Deep models trained on large amounts of data often incorporate implicit biases present during training time. If later such a bias is discovered during inference or deployment, it is often necessary to acquire new data and retrain the model. This behavior is especially problematic in critical areas such as autonomous driving or medical decision-making. In these scenarios, new data is often expensive and hard to come by. In this work, we present a method based on change penalization that takes a pre-trained model and adapts the weights to mitigate a previously detected bias. We achieve this by tuning a zero-initialized copy of a frozen pre-trained network. Our method needs very few examples that contradict the bias, in extreme cases only a single one, to increase performance. Additionally, we propose an early stopping criterion to modify baselines and reduce overfitting. We evaluate our approach on a well-known bias in skin lesion classification and three other datasets from the domain shift literature. We find that our approach works especially well with very few images. Simple fine-tuning combined with our early stopping also leads to performance benefits for a larger number of tuning samples.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
447,822
2003.04593
3D printed cable-driven continuum robots with generally routed cables: modeling and experiments
Continuum robots are becoming increasingly popular for applications which require the robots to deform and change shape, while also being compliant. A cable-driven continuum robot is one of the most commonly used types. Typical cable driven continuum robots consist of a flexible backbone with spacer disks attached to the backbone and cables passing through the holes in the spacer disks from the fixed base to a free end. In most such robots, the routing of the cables is straight or follows a smooth helical curve. In this paper, we analyze the experimental and theoretical deformations of a 3D printed continuum robot, for 6 different kinds of cable routings. The results are compared for discrete optimization based kinematic modelling as well as static modelling using Cosserat rod theory. It is shown that the experimental results match the theoretical results with an error margin of 2%. It is also shown that the optimization based approach is faster than the one based on Cosserat rod theory. We also present a three-fingered gripper prototype where each of the fingers is a 3D printed continuum robot with general cable routing. It is demonstrated that the prototype can be used for gripping objects and for its manipulation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
167,596
1710.05233
Learners that Use Little Information
We study learning algorithms that are restricted to using a small amount of information from their input sample. We introduce a category of learning algorithms we term $d$-bit information learners, which are algorithms whose output conveys at most $d$ bits of information of their input. A central theme in this work is that such algorithms generalize. We focus on the learning capacity of these algorithms, and prove sample complexity bounds with tight dependencies on the confidence and error parameters. We also observe connections with well studied notions such as sample compression schemes, Occam's razor, PAC-Bayes and differential privacy. We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information. On the other hand, we show that in the distribution-dependent setting every VC class has empirical risk minimizers that do not reveal a lot of information.
false
false
false
false
true
false
true
false
false
true
false
false
true
false
false
false
false
false
82,604
2401.08649
Deep Pulse-Coupled Neural Networks
Spiking Neural Networks (SNNs) capture the information processing mechanism of the brain by taking advantage of spiking neurons, such as the Leaky Integrate-and-Fire (LIF) model neuron, which incorporates temporal dynamics and transmits information via discrete and asynchronous spikes. However, the simplified biological properties of LIF ignore the neuronal coupling and dendritic structure of real neurons, which limits the spatio-temporal dynamics of neurons and thus reduces the expressive power of the resulting SNNs. In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN), to improve the expressiveness and recognition performance of SNNs for vision tasks. The PCNN is a type of cortical model capable of emulating the complex neuronal activities in the primary visual cortex. We construct deep pulse-coupled neural networks (DPCNNs) by replacing commonly used LIF neurons in SNNs with PCNN neurons. The intra-coupling in existing PCNN models limits the coupling between neurons only within channels. To address this limitation, we propose inter-channel coupling, which allows neurons in different feature maps to interact with each other. Experimental results show that inter-channel coupling can efficiently boost performance with fewer neurons, synapses, and less training time compared to widening the networks. For instance, compared to the LIF-based SNN with wide VGG9, DPCNN with VGG9 uses only 50%, 53%, and 73% of neurons, synapses, and training time, respectively. Furthermore, we propose receptive field and time dependent batch normalization (RFTD-BN) to speed up the convergence and performance of DPCNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
421,972
1707.01939
High-Performance FPGA Implementation of Equivariant Adaptive Separation via Independence Algorithm for Independent Component Analysis
Independent Component Analysis (ICA) is a dimensionality reduction technique that can boost efficiency of machine learning models that deal with probability density functions, e.g. Bayesian neural networks. Algorithms that implement adaptive ICA converge slower than their nonadaptive counterparts, however, they are capable of tracking changes in underlying distributions of input features. This intrinsically slow convergence of adaptive methods combined with existing hardware implementations that operate at very low clock frequencies necessitate fundamental improvements in both algorithm and hardware design. This paper presents an algorithm that allows efficient hardware implementation of ICA. Compared to previous work, our FPGA implementation of adaptive ICA improves clock frequency by at least one order of magnitude and throughput by at least two orders of magnitude. Our proposed algorithm is not limited to ICA and can be used in various machine learning problems that use stochastic gradient descent optimization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
76,620
2410.00054
Transferable Unsupervised Outlier Detection Framework for Human Semantic Trajectories
Semantic trajectories, which enrich spatial-temporal data with textual information such as trip purposes or location activities, are key for identifying outlier behaviors critical to healthcare, social security, and urban planning. Traditional outlier detection relies on heuristic rules, which requires domain knowledge and limits its ability to identify unseen outliers. Besides, there lacks a comprehensive approach that can jointly consider multi-modal data across spatial, temporal, and textual dimensions. Addressing the need for a domain-agnostic model, we propose the Transferable Outlier Detection for Human Semantic Trajectories (TOD4Traj) framework. TOD4Traj first introduces a modality feature unification module to align diverse data feature representations, enabling the integration of multi-modal information and enhancing transferability across different datasets. A contrastive learning module is further proposed for identifying regular mobility patterns both temporally and across populations, allowing for a joint detection of outliers based on individual consistency and group majority patterns. Our experimental results have shown TOD4Traj's superior performance over existing models, demonstrating its effectiveness and adaptability in detecting human trajectory outliers across various datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
493,213
1811.03519
Few-shot learning with attention-based sequence-to-sequence models
End-to-end approaches have recently become popular as a means of simplifying the training and deployment of speech recognition systems. However, they often require large amounts of data to perform well on large vocabulary tasks. With the aim of making end-to-end approaches usable by a broader range of researchers, we explore the potential to use end-to-end methods in small vocabulary contexts where smaller datasets may be used. A significant drawback of small-vocabulary systems is the difficulty of expanding the vocabulary beyond the original training samples -- therefore we also study strategies to extend the vocabulary with only few examples per new class (few-shot learning). Our results show that an attention-based encoder-decoder can be competitive against a strong baseline on a small vocabulary keyword classification task, reaching 97.5% accuracy on Tensorflow's Speech Commands dataset. It also shows promising results on the few-shot learning problem, where a simple strategy achieved 68.8% accuracy on new keywords with only 10 examples for each new class. This score goes up to 88.4% with a larger set of 100 examples.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
112,858
2202.09631
Confidence-rich Localization and Mapping based on Particle Filter for Robotic Exploration
This paper mainly studies the localization and mapping of range sensing robots in the confidence-rich map (CRM) and then extends it to provide a full state estimate for information-theoretic exploration. Most previous works about active simultaneous localization and mapping and exploration always assumed known robot poses or utilized inaccurate information metrics to approximate pose uncertainty, resulting in imbalanced exploration performance and efficiency in the unknown environment. This inspires us to extend the confidence-rich mutual information (CRMI) with measurable pose uncertainty. Specifically, we propose a Rao-Blackwellized particle filter-based localization and mapping scheme (RBPF-CLAM) for CRM, then we develop a new closed-form weighting method to improve the localization accuracy without scan matching. We further derive the uncertain CRMI (UCRMI) with the weighted particles by a more accurate approximation. Simulations and experimental evaluations show the localization accuracy and exploration performance of the proposed methods.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
281,260
2109.12969
Challenging the Semi-Supervised VAE Framework for Text Classification
Semi-Supervised Variational Autoencoders (SSVAEs) are widely used models for data efficient learning. In this paper, we question the adequacy of the standard design of sequence SSVAEs for the task of text classification as we exhibit two sources of overcomplexity for which we provide simplifications. These simplifications to SSVAEs preserve their theoretical soundness while providing a number of practical advantages in the semi-supervised setup where the result of training is a text classifier. These simplifications are the removal of (i) the Kullback-Leibler divergence from its objective and (ii) the fully unobserved latent variable from its probabilistic model. These changes relieve users from choosing a prior for their latent variables, make the model smaller and faster, and allow for a better flow of information into the latent variables. We compare the simplified versions to standard SSVAEs on 4 text classification tasks. On top of the above-mentioned simplification, experiments show a speed-up of 26%, while keeping equivalent classification scores. The code to reproduce our experiments is public.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
257,476
2106.10456
Humble Teachers Teach Better Students for Semi-Supervised Object Detection
We propose a semi-supervised approach for contemporary object detectors following the teacher-student dual model framework. Our method is featured with 1) the exponential moving averaging strategy to update the teacher from the student online, 2) using plenty of region proposals and soft pseudo-labels as the student's training targets, and 3) a lightweight detection-specific data ensemble for the teacher to generate more reliable pseudo-labels. Compared to the recent state-of-the-art -- STAC, which uses hard labels on sparsely selected hard pseudo samples, the teacher in our model exposes richer information to the student with soft-labels on many proposals. Our model achieves COCO-style AP of 53.04% on VOC07 val set, 8.4% better than STAC, when using VOC12 as unlabeled data. On MS-COCO, it outperforms prior work when only a small percentage of data is taken as labeled. It also reaches 53.8% AP on MS-COCO test-dev with 3.1% gain over the fully supervised ResNet-152 Cascaded R-CNN, by tapping into unlabeled data of a similar size to the labeled data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
242,024
2103.06467
Pavement Distress Detection and Segmentation using YOLOv4 and DeepLabv3 on Pavements in the Philippines
Road transport infrastructure is critical for safe, fast, economical, and reliable mobility within the whole country that is conducive to a productive society. However, roads tend to deteriorate over time due to natural causes in the environment and repeated traffic loads. Pavement Distress (PD) detection is essential in monitoring the current conditions of the public roads to enable targeted rehabilitation and preventive maintenance. Nonetheless, distress detection surveys are still done via manual inspection for developing countries such as the Philippines. This study proposed the use of deep learning for two ways of recording pavement distresses from 2D RGB images - detection and segmentation. YOLOv4 is used for pavement distress detection while DeepLabv3 is employed for pavement distress segmentation on a small dataset of pavement images in the Philippines. This study aims to provide a basis to potentially spark solutions in building a cheap, scalable, and automated end-to-end solution for PD detection in the country.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
224,310
2003.01916
Optimal Deep Learning for Robot Touch
This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focussing on optical tactile sensors, which help bridge from deep learning for vision to touch. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables such as motion-dependent shear. This involves including representative motions as unlabelled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of pose from touch will enable robots to safely and precisely control their physical interactions, underlying a wide range of object exploration and manipulation tasks.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
166,804
2308.00958
Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
Despite the broad application of Machine Learning models as a Service (MLaaS), they are vulnerable to model stealing attacks. These attacks can replicate the model functionality by using the black-box query process without any prior knowledge of the target victim model. Existing stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers. However, these defenses are now suffering problems of high inference computational overheads and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges the feasibility of deployed models in practice. To address the problems, this paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses. Instead of deploying auxiliary defense modules that introduce redundant inference time, InI directly trains a defensive model by isolating the adversary's training gradient from the expected gradient, which can effectively reduce the inference computational cost. In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries, which can induce the adversary to extract little useful knowledge from victim models with minimal impact on the benign performance. Extensive experiments on several visual classification datasets (e.g., MNIST and CIFAR10) demonstrate the superior robustness (up to 48% reduction on stealing accuracy) and speed (up to 25.4x faster) of our InI over other state-of-the-art methods. Our codes can be found in https://github.com/DIG-Beihang/InI-Model-Stealing-Defense.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
383,094
2410.11463
Advanced Persistent Threats (APT) Attribution Using Deep Reinforcement Learning
The development of the DRL model for malware attribution involved extensive research, iterative coding, and numerous adjustments based on the insights gathered from predecessor models and contemporary research papers. This preparatory work was essential to establish a robust foundation for the model, ensuring it could adapt and respond effectively to the dynamic nature of malware threats. Initially, the model struggled with low accuracy levels, but through persistent adjustments to its architecture and learning algorithms, accuracy improved dramatically from about 7 percent to over 73 percent in early iterations. By the end of the training, the model consistently reached accuracy levels near 98 percent, demonstrating its strong capability to accurately recognise and attribute malware activities. This upward trajectory in training accuracy is graphically represented in the Figure, which vividly illustrates the model's maturation and increasing proficiency over time.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
498,577
2106.09330
A Simple Generative Network
Generative neural networks are able to mimic intricate probability distributions such as those of handwritten text, natural images, etc. Since their inception several models were proposed. The most successful of these were based on relatively complex adversarial (GAN), auto-encoding (VAE) and maximum mean discrepancy (MMD) architectures and schemes. Surprisingly, a very simple architecture (a single feed-forward neural network) in conjunction with an obvious optimization goal (Kullback-Leibler divergence) was apparently overlooked. This paper demonstrates that such a model (denoted SGN for its simplicity) is able to generate samples that are visually and quantitatively competitive with the aforementioned state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
241,630
2408.10524
XCB: an effective contextual biasing approach to bias cross-lingual phrases in speech recognition
Contextualized ASR models have been demonstrated to effectively improve the recognition accuracy of uncommon phrases when a predefined phrase list is available. However, these models often struggle with bilingual settings, which are prevalent in code-switching speech recognition. In this study, we make the initial attempt to address this challenge by introducing a Cross-lingual Contextual Biasing (XCB) module. Specifically, we augment a pre-trained ASR model for the dominant language by integrating an auxiliary language biasing module and a supplementary language-specific loss, aimed at enhancing the recognition of phrases in the secondary language. Experimental results conducted on our in-house code-switching dataset have validated the efficacy of our approach, demonstrating significant improvements in the recognition of biasing phrases in the secondary language, even without any additional inference overhead. Additionally, our proposed system exhibits both efficiency and generalization when it is applied to the unseen ASRU-2019 test set.
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
481,891
2312.04282
Adaptive Recursive Query Optimization
Performance-critical industrial applications, including large-scale program, network, and distributed system analyses, are increasingly reliant on recursive queries for data analysis. Yet traditional relational algebra-based query optimization techniques do not scale well to recursive query processing due to the iterative nature of query evaluation, where relation cardinalities can change unpredictably during the course of a single query execution. To avoid error-prone cardinality estimation, adaptive query processing techniques use runtime information to inform query optimization, but these systems are not optimized for the specific needs of recursive query processing. In this paper, we introduce Adaptive Metaprogramming, an innovative technique that shifts recursive query optimization and code generation from compile-time to runtime using principled metaprogramming, enabling dynamic optimization and re-optimization before and after query execution has begun. We present a custom join-ordering optimization applicable at multiple stages during query compilation and execution. Through Carac, a custom Datalog engine, we evaluate the optimization potential of Adaptive Metaprogramming and show unoptimized recursive query execution time can be improved by three orders of magnitude and hand-optimized queries by 6x.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
413,621
2204.10648
Exposure Correction Model to Enhance Image Quality
Exposure errors in an image cause a degradation in the contrast and low visibility in the content. In this paper, we address this problem and propose an end-to-end exposure correction model in order to handle both under- and overexposure errors with a single model. Our model contains an image encoder, consecutive residual blocks, and image decoder to synthesize the corrected image. We utilize perceptual loss, feature matching loss, and multi-scale discriminator to increase the quality of the generated image as well as to make the training more stable. The experimental results indicate the effectiveness of the proposed model. We achieve the state-of-the-art result on a large-scale exposure dataset. Besides, we investigate the effect of exposure setting of the image on the portrait matting task. We find that under- and overexposed images cause severe degradation in the performance of the portrait matting models. We show that after applying exposure correction with the proposed model, the portrait matting quality increases significantly. https://github.com/yamand16/ExposureCorrection
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
292,867
1106.4232
Approximate controllability for linear degenerate parabolic problems with bilinear control
In this work we study the global approximate multiplicative controllability for the linear degenerate parabolic Cauchy-Neumann problem $$\begin{cases} v_t-(a(x) v_x)_x =\alpha (t,x)v & \text{in } Q_T = (0,T)\times(-1,1) \\ a(x)v_x(t,x)|_{x=\pm 1} = 0 & t\in (0,T) \\ v(0,x)=v_0 (x) & x\in (-1,1) \end{cases}$$ with the bilinear control $\alpha(t,x)\in L^\infty (Q_T)$. The problem is strongly degenerate in the sense that $a\in C^1([-1,1])$, positive on $(-1,1)$, is allowed to vanish at $\pm 1$ provided that a certain integrability condition is fulfilled. We will show that the above system can be steered in $L^2(\Omega)$ from any nonzero, nonnegative initial state into any neighborhood of any desirable nonnegative target-state by bilinear static controls. Moreover, we extend the above result relaxing the sign constraint on $v_0$.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
10,934
1612.02203
A Functional Regression approach to Facial Landmark Tracking
Linear regression is a fundamental building block in many face detection and tracking algorithms, typically used to predict shape displacements from image features through a linear mapping. This paper presents a Functional Regression solution to the least squares problem, which we coin Continuous Regression, resulting in the first real-time incremental face tracker. Contrary to prior work in Functional Regression, in which B-splines or Fourier series were used, we propose to approximate the input space by its first-order Taylor expansion, yielding a closed-form solution for the continuous domain of displacements. We then extend the continuous least squares problem to correlated variables, and demonstrate the generalisation of our approach. We incorporate Continuous Regression into the cascaded regression framework, and show its computational benefits for both training and testing. We then present a fast approach for incremental learning within Cascaded Continuous Regression, coined iCCR, and show that its complexity allows real-time face tracking, being 20 times faster than the state of the art. To the best of our knowledge, this is the first incremental face tracker that is shown to operate in real-time. We show that iCCR achieves state-of-the-art performance on the 300-VW dataset, the most recent, large-scale benchmark for face tracking.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
65,206
2401.14241
New Algorithms for Computing Sibson Capacity and Arimoto Capacity
The Sibson and Arimoto capacity, which are based on the Sibson and Arimoto mutual information (MI) of order $\alpha$, respectively, are well-known generalizations of the channel capacity C. In this study, we derive novel alternating optimization algorithms for computing these capacities by providing new variational characterizations of the Sibson and Arimoto MI. Moreover, we prove that all iterative algorithms for computing these capacities are equivalent under appropriate conditions imposed on their initial distributions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
424,020
1901.05657
Certainty Driven Consistency Loss on Multi-Teacher Networks for Semi-Supervised Learning
One of the successful approaches in semi-supervised learning is based on the consistency regularization. Typically, a student model is trained to be consistent with teacher prediction for the inputs under different perturbations. To be successful, the prediction targets given by teacher should have good quality, otherwise the student can be misled by teacher. Unfortunately, existing methods do not assess the quality of the teacher targets. In this paper, we propose a novel Certainty-driven Consistency Loss (CCL) that exploits the predictive uncertainty in the consistency loss to let the student dynamically learn from reliable targets. Specifically, we propose two approaches, i.e. Filtering CCL and Temperature CCL to either filter out uncertain predictions or pay less attention on them in the consistency regularization. We further introduce a novel decoupled framework to encourage model difference. Experimental results on SVHN, CIFAR-10, and CIFAR-100 demonstrate the advantages of our method over a few existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
118,833
2408.09845
Predicting Long-term Dynamics of Complex Networks via Identifying Skeleton in Hyperbolic Space
Learning complex network dynamics is fundamental for understanding, modeling, and controlling real-world complex systems. Though great efforts have been made to predict the future states of nodes on networks, the capability of capturing long-term dynamics remains largely limited. This is because they overlook the fact that long-term dynamics in complex network are predominantly governed by their inherent low-dimensional manifolds, i.e., skeletons. Therefore, we propose the Dynamics-Invariant Skeleton Neural Network (DiskNet), which identifies skeletons of complex networks based on the renormalization group structure in hyperbolic space to preserve both topological and dynamics properties. Specifically, we first condense complex networks with various dynamics into simple skeletons through physics-informed hyperbolic embeddings. Further, we design graph neural ordinary differential equations to capture the condensed dynamics on the skeletons. Finally, we recover the skeleton networks and dynamics to the original ones using a degree-based super-resolution module. Extensive experiments across three representative dynamics as well as five real-world and two synthetic networks demonstrate the superior performances of the proposed DiskNet, which outperforms the state-of-the-art baselines by an average of 10.18% in terms of long-term prediction accuracy. Code for reproduction is available at: https://github.com/tsinghua-fib-lab/DiskNet.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
481,615
2203.07774
An Empirical Study of Market Inefficiencies in Uniswap and SushiSwap
Decentralized exchanges are revolutionizing finance. With their ever-growing increase in popularity, a natural question that begs to be asked is: how efficient are these new markets? We find that nearly 30% of analyzed trades are executed at an unfavorable rate. Additionally, we observe that, especially during the DeFi summer in 2020, price inaccuracies across the market plagued DEXes. Uniswap and SushiSwap, however, quickly adapt to their increased volumes. We see an increase in market efficiency with time during the observation period. Nonetheless, the DEXes still struggle to track the reference market when cryptocurrency prices are highly volatile. During such periods of high volatility, we observe the market becoming less efficient - manifested by an increased prevalence in cyclic arbitrage opportunities.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
285,556
1809.08925
Constrained Exploration and Recovery from Experience Shaping
We consider the problem of reinforcement learning under safety requirements, in which an agent is trained to complete a given task, typically formalized as the maximization of a reward signal over time, while concurrently avoiding undesirable actions or states, associated to lower rewards, or penalties. The construction and balancing of different reward components can be difficult in the presence of multiple objectives, yet is crucial for producing a satisfying policy. For example, in reaching a target while avoiding obstacles, low collision penalties can lead to reckless movements while high penalties can discourage exploration. To circumvent this limitation, we examine the effect of past actions in terms of safety to estimate which are acceptable or should be avoided in the future. We then actively reshape the action space of the agent during reinforcement learning, so that reward-driven exploration is constrained within safety limits. We propose an algorithm enabling the learning of such safety constraints in parallel with reinforcement learning and demonstrate its effectiveness in terms of both task completion and training time.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
108,621
2204.09595
Exploring Continuous Integrate-and-Fire for Adaptive Simultaneous Speech Translation
Simultaneous speech translation (SimulST) is a challenging task aiming to translate streaming speech before the complete input is observed. A SimulST system generally includes two components: the pre-decision that aggregates the speech information and the policy that decides to read or write. While recent works have proposed various strategies to improve the pre-decision, they mainly adopt the fixed wait-k policy, leaving the adaptive policies rarely explored. This paper proposes to model the adaptive policy by adapting the Continuous Integrate-and-Fire (CIF). Compared with monotonic multihead attention (MMA), our method has the advantage of simpler computation, superior quality at low latency, and better generalization to long utterances. We conduct experiments on the MuST-C V2 dataset and show the effectiveness of our approach.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
292,490
2412.05976
Lightweight Spatial Embedding for Vision-based 3D Occupancy Prediction
Occupancy prediction has garnered increasing attention in recent years for its comprehensive fine-grained environmental representation and strong generalization to open-set objects. However, cumbersome voxel features and 3D convolution operations inevitably introduce large overheads in both memory and computation, obstructing the deployment of occupancy prediction approaches in real-time autonomous driving systems. Although some methods attempt to efficiently predict 3D occupancy from 2D Bird's-Eye-View (BEV) features through the Channel-to-Height mechanism, BEV features are insufficient to store all the height information of the scene, which limits performance. This paper proposes LightOcc, an innovative 3D occupancy prediction framework that leverages Lightweight Spatial Embedding to effectively supplement the height clues for the BEV-based representation while maintaining its deployability. Firstly, Global Spatial Sampling is used to obtain the Single-Channel Occupancy from multi-view depth distribution. Spatial-to-Channel mechanism then takes the arbitrary spatial dimension of Single-Channel Occupancy as the feature dimension and extracts Tri-Perspective Views (TPV) Embeddings by 2D convolution. Finally, TPV Embeddings will interact with each other by Lightweight TPV Interaction module to obtain the Spatial Embedding that is optimal supplementary to BEV features. Sufficient experimental results show that LightOcc significantly increases the prediction accuracy of the baseline and achieves state-of-the-art performance on the Occ3D-nuScenes benchmark.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
515,046
2110.01389
A Survey of Selected Algorithms Used in Military Applications from the Viewpoints of Dataflow and GaAs
This is a short survey of ten algorithms that are often used for military purposes, followed by analysis of their potential suitability for dataflow and GaAs, which are a specific architecture and technology for supercomputers on a chip, respectively. Whenever an algorithm or a device is used in military settings, it is natural to assume strict requirements related to speed, reliability, scale, energy, size, and accuracy. The two aforementioned paradigms seem to be promising in fulfilling most of these requirements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
258,750
2412.05711
On an Analytical Inversion Formula for the Modulo Radon Transform
This paper proves a novel analytical inversion formula for the so-called modulo Radon transform (MRT), which models a recently proposed approach to one-shot high dynamic range tomography. It is based on the solution of a Poisson problem linking the Laplacian of the Radon transform (RT) of a function to its MRT in combination with the classical filtered back projection formula for inverting the RT. Discretizing the inversion formula using Fourier techniques leads to our novel Laplacian Modulo Unfolding - Filtered Back Projection algorithm, in short LMU-FBP, to recover a function from fully discrete MRT data. Our theoretical findings are finally supported by numerical experiments.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
514,938
2112.02828
PP-MSVSR: Multi-Stage Video Super-Resolution
Different from the Single Image Super-Resolution (SISR) task, the key to the Video Super-Resolution (VSR) task is to make full use of complementary information across frames to reconstruct the high-resolution sequence. Since images from different frames exhibit diverse motion and scenes, accurately aligning multiple frames and effectively fusing them has always been the key research focus of VSR tasks. To utilize the rich complementary information of neighboring frames, in this paper we propose a multi-stage VSR deep architecture, dubbed PP-MSVSR, with a local fusion module, an auxiliary loss, and a re-align module to refine the enhanced result progressively. Specifically, in order to strengthen the fusion of features across frames in feature propagation, a local fusion module is designed in stage-1 to perform local feature fusion before feature propagation. Moreover, we introduce an auxiliary loss in stage-2 to make the features obtained by the propagation module preserve more correlated information connected to the HR space, and introduce a re-align module in stage-3 to make full use of the feature information of the previous stage. Extensive experiments substantiate that PP-MSVSR achieves promising performance on the Vid4 dataset, reaching a PSNR of 28.13 dB with only 1.45M parameters. Moreover, PP-MSVSR-L exceeds all state-of-the-art methods on the REDS4 dataset with considerable parameters. Code and models will be released in PaddleGAN\footnote{https://github.com/PaddlePaddle/PaddleGAN.}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,981
2107.06563
Multi-Label Generalized Zero Shot Learning for the Classification of Disease in Chest Radiographs
Despite the success of deep neural networks in chest X-ray (CXR) diagnosis, supervised learning only allows the prediction of disease classes that were seen during training. At inference, these networks cannot predict an unseen disease class. Incorporating a new class requires the collection of labeled data, which is not a trivial task, especially for less frequently-occurring diseases. As a result, it becomes inconceivable to build a model that can diagnose all possible disease classes. Here, we propose a multi-label generalized zero shot learning (CXR-ML-GZSL) network that can simultaneously predict multiple seen and unseen diseases in CXR images. Given an input image, CXR-ML-GZSL learns a visual representation guided by the input's corresponding semantics extracted from a rich medical text corpus. Towards this ambitious goal, we propose to map both visual and semantic modalities to a latent feature space using a novel learning objective. The objective ensures that (i) the most relevant labels for the query image are ranked higher than irrelevant labels, (ii) the network learns a visual representation that is aligned with its semantics in the latent feature space, and (iii) the mapped semantics preserve their original inter-class representation. The network is end-to-end trainable and requires no independent pre-training for the offline feature extractor. Experiments on the NIH Chest X-ray dataset show that our network outperforms two strong baselines in terms of recall, precision, f1 score, and area under the receiver operating characteristic curve. Our code is publicly available at: https://github.com/nyuad-cai/CXR-ML-GZSL.git
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
246,130
2112.07917
SPTS: Single-Point Text Spotting
Existing scene text spotting (i.e., end-to-end text detection and recognition) methods rely on costly bounding box annotations (e.g., text-line, word-level, or character-level bounding boxes). For the first time, we demonstrate that training scene text spotting models can be achieved with an extremely low-cost annotation of a single-point for each instance. We propose an end-to-end scene text spotting method that tackles scene text spotting as a sequence prediction task. Given an image as input, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict the sequence. The proposed method is simple yet effective, which can achieve state-of-the-art results on widely used benchmarks. Most significantly, we show that the performance is not very sensitive to the positions of the point annotation, meaning that it can be much easier to be annotated or even be automatically generated than the bounding box that requires precise positions. We believe that such a pioneer attempt indicates a significant opportunity for scene text spotting applications of a much larger scale than previously possible. The code is available at https://github.com/shannanyinxiang/SPTS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
271,639
1608.07897
Using k-nearest neighbors to construct cancelable minutiae templates
Fingerprint is widely used in a variety of applications. Security measures have to be taken to protect the privacy of fingerprint data. Cancelable biometrics is proposed as an effective mechanism of using and protecting biometrics. In this paper we propose a new method of constructing cancelable fingerprint template by combining real template with synthetic template. Specifically, each user is given one synthetic minutia template generated with random number generator. Every minutia point from the real template is individually thrown into the synthetic template, from which its k-nearest neighbors are found. The verification template is constructed by combining an arbitrary set of the k-nearest neighbors. To prove the validity of the scheme, testing is carried out on three databases. The results show that the constructed templates satisfy the requirements of cancelable biometrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
60,285
1905.05934
EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis
Reducing the test time resource requirements of a neural network while preserving test accuracy is crucial for running inference on resource-constrained devices. To achieve this goal, we introduce a novel network reparameterization based on the Kronecker-factored eigenbasis (KFE), and then apply Hessian-based structured pruning methods in this basis. As opposed to existing Hessian-based pruning algorithms which do pruning in parameter coordinates, our method works in the KFE where different weights are approximately independent, enabling accurate pruning and fast computation. We demonstrate empirically the effectiveness of the proposed method through extensive experiments. In particular, we highlight that the improvements are especially significant for more challenging datasets and networks. With negligible loss of accuracy, an iterative-pruning version gives a 10$\times$ reduction in model size and an 8$\times$ reduction in FLOPs on wide ResNet32.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
130,860
1809.07043
NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task
This paper presents the NICT's participation in the WMT18 shared parallel corpus filtering task. The organizers provided a 1-billion-word German-English corpus crawled from the web as part of the Paracrawl project. This corpus is too noisy to build an acceptable neural machine translation (NMT) system. Using the clean data of the WMT18 shared news translation task, we designed several features and trained a classifier to score each sentence pair in the noisy data. Finally, we sampled 100 million and 10 million words and built corresponding NMT systems. Empirical results show that our NMT systems trained on sampled data achieve promising performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
108,197
1307.0201
Simulating Ability: Representing Skills in Games
Throughout the history of games, representing the abilities of the various agents acting on behalf of the players has been a central concern. With increasingly sophisticated games emerging, these simulations have become more realistic, but the underlying mechanisms are still, to a large extent, of an ad hoc nature. This paper proposes using a logistic model from psychometrics as a unified mechanism for task resolution in simulation-oriented games.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
25,532
2501.01811
QuantumBind-RBFE: Accurate Relative Binding Free Energy Calculations Using Neural Network Potentials
Accurate prediction of protein-ligand binding affinities is crucial in drug discovery, particularly during hit-to-lead and lead optimization phases; however, limitations in ligand force fields continue to impact prediction accuracy. In this work, we validate relative binding free energy (RBFE) accuracy using neural network potentials (NNPs) for the ligands. We utilize a novel NNP model, AceForce 1.0, based on the TensorNet architecture for small molecules that broadens the applicability to diverse drug-like compounds, including all important chemical elements and supporting charged molecules. Using established benchmarks, we show overall improved accuracy and correlation in binding affinity predictions compared with GAFF2 for molecular mechanics and ANI2-x for NNPs, with slightly lower accuracy but comparable correlations relative to OPLS4. We also show that we can run the NNP simulations at a 2 fs timestep, at least two times larger than previous NNP models, providing significant speed gains. The results show promise for further evolutions of free energy calculations using NNPs while demonstrating their practical use already with the current generation. The code and NNP model are publicly available for research use.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
522,222
2110.07582
Network Representation Learning: From Preprocessing, Feature Extraction to Node Embedding
Network representation learning (NRL) advances the conventional graph mining of social networks, knowledge graphs, and complex biomedical and physics information networks. Dozens of network representation learning algorithms have been reported in the literature. Most of them focus on learning node embeddings for homogeneous networks, but they differ in the specific encoding schemes and specific types of node semantics captured and used for learning node embedding. This survey paper reviews the design principles and the different node embedding techniques for network representation learning over homogeneous networks. To facilitate the comparison of different node embedding algorithms, we introduce a unified reference framework to divide and generalize the node embedding learning process on a given network into preprocessing steps, node feature extraction steps, and node embedding model training for an NRL task such as link prediction and node clustering. With this unifying reference framework, we highlight the representative methods, models, and techniques used at different stages of the node embedding model learning process. This survey not only helps researchers and practitioners to gain an in-depth understanding of different network representation learning techniques but also provides practical guidelines for designing and developing the next generation of network representation learning algorithms and systems.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
261,046
1812.01874
Learning to Take Directions One Step at a Time
We present a method to generate a video sequence given a single image. Because items in an image can be animated in arbitrarily many different ways, we introduce as control signal a sequence of motion strokes. Such control signal can be automatically transferred from other videos, e.g., via bounding box tracking. Each motion stroke provides the direction to the moving object in the input image and we aim to train a network to generate an animation following a sequence of such directions. To address this task we design a novel recurrent architecture, which can be trained easily and effectively thanks to an explicit separation of past, future and current states. As we demonstrate in the experiments, our proposed architecture is capable of generating an arbitrary number of frames from a single image and a sequence of motion strokes. Key components of our architecture are an autoencoding constraint to ensure consistency with the past and a generative adversarial scheme to ensure that images look realistic and are temporally smooth. We demonstrate the effectiveness of our approach on the MNIST, KTH, Human3.6M, Push and Weizmann datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
115,637
2306.04928
Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance
This study presents a multi-modal mechanism for recognizing human intentions while diving underwater, aiming to achieve natural human-robot interactions through an underwater superlimb for diving assistance. The underwater environment severely limits the divers' capabilities in intention expression, which becomes more challenging when they intend to operate tools while keeping control of body postures in 3D with the various diving suits and gears. The current literature is limited in underwater intention recognition, impeding the development of intelligent wearable systems for human-robot interactions underwater. Here, we present a novel solution to simultaneously detect head motion and throat vibrations under the water in a compact, wearable design. Experiment results show that using machine learning algorithms, we achieved high performance in integrating these two modalities to translate human intentions to robot control commands for an underwater superlimb system. This study's results paved the way for future development in underwater intention recognition and underwater human-robot interactions with supernumerary support.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
371,969
2412.08289
k-HyperEdge Medoids for Clustering Ensemble
Clustering ensemble has been a popular research topic in data science due to its ability to improve the robustness of a single clustering method. Many clustering ensemble methods have been proposed, most of which can be categorized into clustering-view and sample-view methods. The clustering-view method is generally efficient, but it can be affected by unreliability existing in the base clustering results. The sample-view method shows good performance, while the construction of the pairwise sample relation is time-consuming. In this paper, the clustering ensemble is formulated as a k-HyperEdge Medoids discovery problem, and a clustering ensemble method based on k-HyperEdge Medoids that considers the characteristics of the above two types of clustering ensemble methods is proposed. In the method, a set of hyperedges is selected from the clustering view efficiently; the hyperedges are then diffused and adjusted from the sample view, guided by a hyperedge loss function, to construct an effective k-HyperEdge Medoid set. The loss function is mainly reduced by assigning samples to the hyperedge with the highest degree of belonging. Theoretical analyses show that the solution can approximate the optimal one, that the assignment method can gradually reduce the loss function, and that the estimation of the belonging degree is statistically reasonable. Experiments on artificial data show the working mechanism of the proposed method. The convergence of the method is verified by experimental analysis on twenty data sets. The effectiveness and efficiency of the proposed method are also verified on these data, with nine representative clustering ensemble algorithms as reference.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
516,030
2404.04654
Music Recommendation Based on Facial Emotion Recognition
Introduction: Music provides an incredible avenue for individuals to express their thoughts and emotions, while also serving as a delightful mode of entertainment for enthusiasts and music lovers. Objectives: This paper presents a comprehensive approach to enhancing the user experience through the integration of emotion recognition, music recommendation, and explainable AI using GRAD-CAM. Methods: The proposed methodology utilizes a ResNet50 model trained on the Facial Expression Recognition (FER) dataset, consisting of real images of individuals expressing various emotions. Results: The system achieves an accuracy of 82% in emotion classification. By leveraging GRAD-CAM, the model provides explanations for its predictions, allowing users to understand the reasoning behind the system's recommendations. The model is trained on both FER and real user datasets, which include labelled facial expressions and real images of individuals expressing various emotions. The training process involves pre-processing the input images, extracting features through convolutional layers, reasoning with dense layers, and generating emotion predictions through the output layer. Conclusion: The proposed methodology, leveraging the ResNet50 model with ROI-based analysis and explainable AI techniques, offers a robust and interpretable solution for facial emotion detection.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
444,740
2004.08773
Safe Screening Rules for $\ell_0$-Regression
We give safe screening rules to eliminate variables from regression with $\ell_0$ regularization or cardinality constraint. These rules are based on guarantees that a feature may or may not be selected in an optimal solution. The screening rules can be computed from a convex relaxation solution in linear time, without solving the $\ell_0$ optimization problem. Thus, they can be used in a preprocessing step to safely remove variables from consideration apriori. Numerical experiments on real and synthetic data indicate that, on average, 76\% of the variables can be fixed to their optimal values, hence, reducing the computational burden for optimization substantially. Therefore, the proposed fast and effective screening rules extend the scope of algorithms for $\ell_0$-regression to larger data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
173,167
2209.09240
Distributed Semi-supervised Fuzzy Regression with Interpolation Consistency Regularization
Recently, distributed semi-supervised learning (DSSL) algorithms have shown their effectiveness in leveraging unlabeled samples over interconnected networks, where agents cannot share their original data with each other and can only communicate non-sensitive information with their neighbors. However, existing DSSL algorithms cannot cope with data uncertainties and may suffer from high computation and communication overhead. To handle these issues, we propose a distributed semi-supervised fuzzy regression (DSFR) model with fuzzy if-then rules and interpolation consistency regularization (ICR). The ICR, which was proposed recently for semi-supervised problems, can force decision boundaries to pass through sparse data areas, thus increasing model robustness. However, its application in distributed scenarios has not yet been considered. In this work, we propose a distributed Fuzzy C-means (DFCM) method and a distributed interpolation consistency regularization (DICR) built on the well-known alternating direction method of multipliers to locate parameters in the antecedent and consequent components of DSFR, respectively. Notably, the DSFR model converges very fast since it does not involve a back-propagation procedure, and it is scalable to large-scale datasets benefiting from the utilization of DFCM and DICR. Experimental results on both artificial and real-world datasets show that the proposed DSFR model can achieve much better performance than the state-of-the-art DSSL algorithm in terms of both loss value and computational cost.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
318,442
2306.16643
Cautious explorers generate more future academic impact
Some scientists are more likely to explore unfamiliar research topics while others tend to exploit existing ones. In previous work, correlations have been found between scientists' topic choices and their career performances. However, literature has yet to untangle the intricate interplay between scientific impact and research topic choices, where scientific exploration and exploitation intertwine. Here we study two metrics that gauge how frequently scientists switch topic areas and how large those jumps are, and discover that 'cautious explorers' who switch topics frequently but do so to 'close' domains have notably better future performance and can be identified at a remarkably early career stage. Cautious explorers who balance exploration and exploitation in their first four career years have up to 19% more citations per future paper. Our results suggest that the proposed metrics depict the scholarly traits of scientists throughout their careers and provide fresh insight, especially for nurturing junior scientists.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
376,430
2411.01897
LE-PDE++: Mamba for accelerating PDEs Simulations
Partial Differential Equations are foundational in modeling science and natural systems such as fluid dynamics and weather forecasting. The Latent Evolution of PDEs method is designed to address the computational intensity of classical and deep learning-based PDE solvers by proposing a scalable and efficient alternative. To enhance the efficiency and accuracy of LE-PDE, we incorporate the Mamba model, an advanced machine learning model known for its predictive efficiency and robustness in handling complex dynamic systems with a progressive learning strategy. The LE-PDE was tested on several benchmark problems. The method demonstrated a marked reduction in computational time compared to traditional solvers and standalone deep learning models while maintaining high accuracy in predicting system behavior over time. Our method doubles the inference speed compared to the LE-PDE while retaining the same level of parameter efficiency, making it well-suited for scenarios requiring long-term predictions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
505,275
2004.09780
Strong Consistency, Graph Laplacians, and the Stochastic Block Model
Spectral clustering has become one of the most popular algorithms in data clustering and community detection. We study the performance of classical two-step spectral clustering via the graph Laplacian to learn the stochastic block model. Our aim is to answer the following question: when is spectral clustering via the graph Laplacian able to achieve strong consistency, i.e., the exact recovery of the underlying hidden communities? Our work provides an entrywise analysis (an $\ell_{\infty}$-norm perturbation bound) of the Fiedler eigenvector of both the unnormalized and the normalized Laplacian associated with the adjacency matrix sampled from the stochastic block model. We prove that spectral clustering is able to achieve exact recovery of the planted community structure under conditions that match the information-theoretic limits.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
173,453
2404.12832
COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
Deep learning is dramatically transforming the field of medical imaging and radiology, enabling the identification of pathologies in medical images, including computed tomography (CT) and X-ray scans. However, the performance of deep learning models, particularly in segmentation tasks, is often limited by the need for extensive annotated datasets. To address this challenge, the capabilities of weakly supervised semantic segmentation are explored through the lens of Explainable AI and the generation of counterfactual explanations. The scope of this research is the development of a novel counterfactual inpainting approach (COIN) that flips the predicted classification label from abnormal to normal by using a generative model. For instance, if the classifier deems an input medical image X as abnormal, indicating the presence of a pathology, the generative model aims to inpaint the abnormal region, thus reversing the classifier's original prediction label. The approach enables us to produce precise segmentations for pathologies without depending on pre-existing segmentation masks. Crucially, image-level labels are utilized, which are substantially easier to acquire than creating detailed segmentation masks. The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia. The findings indicate that COIN greatly surpasses established attribution methods, such as RISE, ScoreCAM, and LayerCAM, as well as an alternative counterfactual explanation method introduced by Singla et al. This evidence suggests that COIN is a promising approach for semantic segmentation of tumors in CT images, and presents a step forward in making deep learning applications more accessible and effective in healthcare, where annotated data is scarce.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
448,052
2211.00842
Delivery by Drones with Arbitrary Energy Consumption Models: A New Formulation Approach
This paper presents a new approach for formulating the delivery problem by drones with general energy consumption models where the drones visit a set of places to deliver parcels to customers. Drones can perform multiple trips that start and end at a central depot while visiting several customers along their paths. The problem determines the routing and scheduling decisions of the drones in order to minimize the total transportation cost of serving customers. For the first time, the new formulation approach enables us to use the best available energy consumption model without the need of any extra approximations. Though the approach works in a very general setting including non-convex energy consumption models, it is also computationally efficient as the resulting optimization model has a linear relaxation. A numerical study on 255 benchmark instances with up to 50 customers and a specific energy function indicate that all the instances can be solved 20 times faster on average using the new formulation when compared to the best existing branch-and-cut algorithm. All the 15 benchmark instances with 50 customers are solved exactly, whereas none of them has been solved optimally before. Moreover, new instances with up to 150 customers are solved with small error bounds within a few hours. The new approach can be simply applied to consider the extra energy required when a drone needs to continue hovering until opening the delivery time window. It can also be applied to the case where the flight time is dependent on the drone's payload weight. Owing to the flexibility of the new approach, these challenging extensions are formulated as linear optimization models for the first time.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
328,019
2108.02170
Curriculum learning for language modeling
Language Models like ELMo and BERT have provided robust representations of natural language, which serve as the language understanding component for a diverse range of downstream tasks. While language models have proven transformational for the natural language processing community, these models have also proven expensive, energy-intensive, and challenging to train. Curriculum learning is a method that instead employs a structured training regime, and it has been leveraged in computer vision and machine translation to improve model training speed and model performance. In this work, we explore the effect of curriculum learning on language model pretraining using various linguistically motivated curricula and evaluate transfer performance on the GLUE Benchmark. Despite a broad variety of training methodologies and experiments, we do not find compelling evidence that curriculum learning methods improve language model training.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
249,240
2501.10610
Automated Water Irrigation System
This paper presents the design and implementation of an automated water irrigation system aimed at optimizing plant care through precision moisture monitoring and controlled water delivery. The system uses a capacitive soil moisture sensor, an ADC (analog-to-digital converter), and a relay-driven water pump to ensure plants receive adequate hydration based on real-time data. In addition, this work aims to build on existing applications for Raspberry Pi (4B) and Arduino-based automatic irrigation systems by integrating advanced calibration methods, employing optimized algorithms, and introducing new technologies to further enhance overall system efficiency and reliability.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
525,587
2104.06512
Trust and Safety
Robotics in Australia have a long history of conforming with safety standards and risk managed practices. This chapter articulates the current state of trust and safety in robotics including society's expectations, safety management systems and system safety as well as emerging issues and methods for ensuring safety in increasingly autonomous robotics. The future of trust and safety will combine standards with iterative, adaptive and responsive regulatory and assurance methods for diverse applications of robotics, autonomous systems and artificial intelligence (RAS-AI). Robotics will need novel technical and social approaches to achieve assurance, particularly for game-changing innovations. The ability for users to easily update algorithms and software, which alters the performance of a system, implies that traditional machine assurance performed prior to deployment or sale, will no longer be viable. Moreover, the high frequency of updates implies that traditional certification that requires substantial time will no longer be practical. To alleviate these difficulties, automation of assurance will likely be needed; something like 'ASsurance-as-a-Service' (ASaaS), where APIs constantly ping RAS-AI to ensure abidance with various rules, frameworks and behavioural expectations. There are exceptions to this, such as in contested or communications denied environments, or in underground or undersea mining; and these systems need their own risk assessments and limitations imposed. Indeed, self-monitors are already operating within some systems. To ensure safe operations of future robotics systems, Australia needs to invest in RAS-AI assurance research, stakeholder engagement and continued development and refinement of robust frameworks, methods, guidelines and policy in order to educate and prepare its technology developers, certifiers, and general population.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
230,099
2411.02783
BrainBits: How Much of the Brain are Generative Reconstruction Methods Using?
When evaluating stimuli reconstruction results, it is tempting to assume that higher-fidelity text and image generation is due to an improved understanding of the brain or more powerful signal extraction from neural recordings. However, in practice, new reconstruction methods could improve performance for at least three other reasons: learning more about the distribution of stimuli, becoming better at reconstructing text or images in general, or exploiting weaknesses in current image and/or text evaluation metrics. Here we disentangle how much of the reconstruction is due to these other factors vs. productively using the neural recordings. We introduce BrainBits, a method that uses a bottleneck to quantify the amount of signal extracted from neural recordings that is actually necessary to reproduce a method's reconstruction fidelity. We find that it takes surprisingly little information from the brain to produce reconstructions with high fidelity. In these cases, it is clear that the priors of the methods' generative models are so powerful that the outputs they produce extrapolate far beyond the neural signal they decode. Given that reconstructing stimuli can be improved independently either by improving signal extraction from the brain or by building more powerful generative models, improving the latter may fool us into thinking we are improving the former. We propose that methods should report a method-specific random baseline, a reconstruction ceiling, and a curve of performance as a function of bottleneck size, with the ultimate goal of using more of the neural recordings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
505,655
1912.11652
Confounder Selection via Support Intersection
Confounding matters in almost all observational studies that focus on causality. In order to eliminate bias caused by confounders, a substantial number of features often need to be collected in the analysis. In this case, a large-p-small-n problem can arise and dimension reduction techniques are required. However, traditional variable selection methods that focus on prediction are problematic in this setting. Throughout this paper, we analyze this issue in detail and assume sparsity of the confounders, which differs from previous works. Under this assumption we propose several variable selection methods based on support intersection to pick out the confounders. We also discuss different approaches for estimating the causal effect and testing unconfoundedness. Finally, to aid our description, we provide numerical simulations to support our claims and compare against common heuristic methods, as well as applications to a real dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
158,621
2403.17994
Solution for Point Tracking Task of ICCV 1st Perception Test Challenge 2023
This report proposes an improved method for the Tracking Any Point (TAP) task, which tracks any physical surface through a video. Several existing approaches have explored TAP by considering temporal relationships to obtain smooth point motion trajectories; however, they still suffer from cumulative error caused by temporal prediction. To address this issue, we propose a simple yet effective approach called TAP with confident static points (TAPIR+), which focuses on rectifying the tracking of static points in videos shot by a static camera. Our approach contains two key components: (1) Multi-granularity Camera Motion Detection, which identifies video sequences shot by a static camera, and (2) CMR-based point trajectory prediction with a moving-object segmentation approach to isolate static points from moving objects. Our approach ranked first in the final test with a score of 0.46.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
441,719
2108.00422
An Effective and Robust Detector for Logo Detection
In recent years, intellectual property (IP), which covers literary works, inventions, artistic works, etc., has gradually attracted more and more attention. In particular, with the rise of e-commerce, IP represents not only product designs and brands, but also the images/videos displayed on e-commerce platforms. Unfortunately, some attackers adopt adversarial methods to fool well-trained logo detection models for infringement. To overcome this problem, a novel logo detector based on the mechanism of looking and thinking twice is proposed in this paper for robust logo detection. The proposed detector differs from other mainstream detectors in that it can effectively detect small objects and long-tail objects, and is robust to adversarial images. In detail, we extend the DetectoRS algorithm to a cascade schema with an equalization loss function, multi-scale transformations, and adversarial data augmentation. A series of experimental results show that the proposed method can effectively improve the robustness of the detection model. Moreover, we applied the proposed methods to the ACM MM2021 Robust Logo Detection competition, organized by Alibaba on the Tianchi platform, and placed in the top 2 among 36,489 teams. Code is available at https://github.com/jiaxiaojunQAQ/Robust-Logo-Detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
248,719