Dataset schema (one row per paper):

id — string, 9 to 16 characters (arXiv identifier)
title — string, 4 to 278 characters
abstract — string, 3 to 4.08k characters
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other — bool, 2 classes each (one multi-label category flag per column)
__index_level_0__ — int64, 0 to 541k

In the records below, each row's eighteen boolean columns are collapsed into a single "labels" field listing the categories whose flag is true.
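The eighteen per-category boolean columns encode a multi-label target. A minimal sketch of how a consumer of this dataset might collapse them into a label list — the function name `decode_labels` and the dict-shaped row are illustrative assumptions, not part of the dataset itself:

```python
# Illustrative sketch: decode one row's boolean category flags into a label list.
# Column names are taken from the schema above; the example row is hypothetical.

CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(row: dict) -> list:
    """Return the arXiv categories whose boolean flag is set in this row."""
    return [col for col in CATEGORY_COLUMNS if row.get(col)]

row = {
    "id": "1207.4404",
    "title": "Better Mixing via Deep Representations",
    "cs.LG": True,  # all other category flags are absent (treated as False)
}
print(decode_labels(row))  # → ['cs.LG']
```

Because the list comprehension iterates over `CATEGORY_COLUMNS`, the decoded labels always come out in the fixed schema order, regardless of the key order in the row.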
id: 1207.4404
title: Better Mixing via Deep Representations
abstract: It has previously been hypothesized, and supported with some experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. We study the following related conjecture: better representations, in the sense of better disentangling, can be exploited to produce faster-mixing Markov chains. Consequently, mixing would be more efficient at higher levels of representation. To better understand why and how this happens, we propose a secondary conjecture: the higher-level samples fill the space they occupy more uniformly, and the high-density manifolds tend to unfold when represented at higher levels. The paper discusses these hypotheses and tests them experimentally through visualization and measurements of mixing and interpolating between samples.
labels: cs.LG
__index_level_0__: 17,617

id: 2012.13801
title: Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device
abstract: 3D object detection is an important task, especially in the autonomous driving application domain. However, it is challenging to support real-time performance with the limited computation and memory resources of edge-computing devices in self-driving cars. To achieve this, we propose a compiler-aware unified framework incorporating network enhancement and pruning search with reinforcement learning techniques, to enable real-time inference of 3D object detection on resource-limited edge-computing devices. Specifically, a generator Recurrent Neural Network (RNN) is employed to provide a unified scheme for both network enhancement and pruning search automatically, without human expertise and assistance. The evaluated performance of the unified schemes can then be fed back to train the generator RNN. The experimental results demonstrate that the proposed framework is the first to achieve real-time 3D object detection on mobile devices (a Samsung Galaxy S20 phone) with competitive detection performance.
labels: cs.AI, cs.CV
__index_level_0__: 213,333

id: 2107.07831
title: Modeling User Behaviour in Research Paper Recommendation System
abstract: User intention, which often changes dynamically, is considered an important factor in modeling users for the design of recommendation systems. Recent studies are starting to focus on predicting user intention (what users want) beyond user preference (what users like). In this work, a user intention model is proposed based on deep sequential topic analysis. The model predicts a user's intention in terms of the topic of interest. A Hybrid Topic Model (HTM) comprising Latent Dirichlet Allocation (LDA) and Word2Vec is proposed to derive users' topics of interest and their history of preferences. HTM finds the true topics of papers by estimating a word-topic distribution that captures syntactic and semantic correlations among words. Next, to model user intention, a Long Short-Term Memory (LSTM) based sequential deep learning model is proposed. This model takes into account temporal context, namely the time difference between clicks on two consecutive papers seen by a user. Extensive experiments with a real-world research paper dataset indicate that the proposed approach significantly outperforms state-of-the-art methods. Further, the proposed approach introduces a new road map for modeling user activity suitable for the design of a research paper recommendation system.
labels: cs.IR, cs.LG
__index_level_0__: 246,542

id: 1901.10254
title: Learning for Multi-Model and Multi-Type Fitting
abstract: Multi-model fitting has been extensively studied from the random sampling and clustering perspectives. Most approaches assume that only a single type/class of model is present, and their generalizations to fitting multiple types of models/structures simultaneously are non-trivial. The inherent challenges include the choice of types and numbers of models, sampling imbalance, and parameter tuning, all of which render conventional approaches ineffective. In this work, we formulate the multi-model multi-type fitting problem as one of learning a deep feature embedding that is clustering-friendly; in other words, points of the same cluster are embedded closer together by the network. For inference, we apply K-means to cluster the data in the embedded feature space, and model selection is enabled by analyzing the K-means residuals. Experiments are carried out on both synthetic and real-world multi-type fitting datasets, producing state-of-the-art results. Comparisons are also made on single-type multi-model fitting tasks, with promising results as well.
labels: cs.CV
__index_level_0__: 119,973

id: 2311.13213
title: Artificial Intelligence in the Service of Entrepreneurial Finance: Knowledge Structure and the Foundational Algorithmic Paradigm
abstract: While the application of Artificial Intelligence in Finance has a long tradition, its potential in Entrepreneurship has been intensively explored only recently. In this context, Entrepreneurial Finance is particularly fertile ground for future Artificial Intelligence proliferation. To support the latter, the study provides a bibliometric review of Artificial Intelligence applications in (1) the entrepreneurial finance literature, and (2) the corporate finance literature with implications for Entrepreneurship. Rigorous search and screening procedures applied to the Web of Science Core Collection database identified 1,890 relevant journal articles for analysis. The bibliometric analysis gives rich insight into the knowledge field's conceptual, intellectual, and social structure, indicating nascent and underdeveloped research directions. As far as we were able to identify, this is the first study to map and bibliometrically analyze the academic field concerning the relationship between Artificial Intelligence, Entrepreneurship, and Finance, and the first review that deals with Artificial Intelligence methods in Entrepreneurship. According to the results, Artificial Neural Networks, Deep Neural Networks and Support Vector Machines are highly represented in almost all identified topic niches, while Topic Modeling, Fuzzy Neural Networks and Growing Hierarchical Self-organizing Maps are applied quite rarely. Before the final remarks, the article also discusses certain gaps in the relationship between Computer Science and Economics; these gaps represent problems for the application of Artificial Intelligence in Economic Science. As a way to at least partly remedy this situation, the foundational paradigm and a bespoke demonstration of the Monte Carlo randomized algorithm are presented.
labels: cs.AI
__index_level_0__: 409,668

id: 2310.13291
title: Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
abstract: Large language models have revolutionized the field of NLP by achieving state-of-the-art performance on various tasks. However, there is a concern that these models may disclose information in the training data. In this study, we focus on the summarization task and investigate the membership inference (MI) attack: given a sample and black-box access to a model's API, is it possible to determine whether the sample was part of the training data? We exploit text similarity and the model's resistance to document modifications as potential MI signals and evaluate their effectiveness on widely used datasets. Our results demonstrate that summarization models are at risk of exposing data membership, even in cases where the reference summary is not available. Furthermore, we discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 401,386

id: 2102.12258
title: Classification with abstention but without disparities
abstract: Classification with abstention has gained a lot of attention in recent years, as it allows human decision-makers to be incorporated in the process. Yet abstention can potentially amplify disparities and lead to discriminatory predictions. The goal of this work is to build a general-purpose classification algorithm that is able to abstain from prediction while avoiding disparate impact. We formalize this problem as risk minimization under fairness and abstention constraints, for which we derive the form of the optimal classifier. Building on this result, we propose a post-processing classification algorithm that can modify any off-the-shelf score-based classifier using only an unlabeled sample. We establish finite-sample risk, fairness, and abstention guarantees for the proposed algorithm. In particular, it is shown that the fairness and abstention constraints can be achieved independently of the initial classifier as long as sufficiently many unlabeled data are available. The risk guarantee is established in terms of the quality of the initial classifier. Our post-processing scheme reduces to a sparse linear program allowing for an efficient implementation, which we provide. Finally, we validate our method empirically, showing that moderate abstention rates allow the risk-fairness trade-off to be bypassed.
labels: cs.LG, cs.CY
__index_level_0__: 221,674

id: 2206.13628
title: Multi-scale Network with Attentional Multi-resolution Fusion for Point Cloud Semantic Segmentation
abstract: In this paper, we present a comprehensive point cloud semantic segmentation network that aggregates both local and global multi-scale information. First, we propose an Angle Correlation Point Convolution (ACPConv) module to effectively learn the local shapes of points. Second, based upon ACPConv, we introduce a local multi-scale split (MSS) block that hierarchically connects features within a single block and gradually enlarges the receptive field, which is beneficial for exploiting local context. Third, inspired by HRNet, which has excellent performance on 2D image vision tasks, we build an HRNet customized for point clouds to learn global multi-scale context. Lastly, we introduce a point-wise attention fusion approach that fuses multi-resolution predictions and further improves point cloud semantic segmentation performance. Our experimental results and ablations on several benchmark datasets show that our proposed method is effective and achieves state-of-the-art performance compared to existing methods.
labels: cs.CV
__index_level_0__: 305,030

id: 2411.03766
title: Number Cookbook: Number Understanding of Language Models and How to Improve It
abstract: Large language models (LLMs) can solve an increasing number of complex reasoning tasks while making surprising mistakes in basic numerical understanding and processing (such as 9.11 > 9.9). The latter ability is essential for tackling complex arithmetic and mathematical problems and serves as a foundation for most reasoning tasks, but previous work paid little attention to it or only discussed several restricted tasks (like integer addition). In this paper, we comprehensively investigate the numerical understanding and processing ability (NUPA) of LLMs. Firstly, we introduce a benchmark covering four common numerical representations and 17 distinct numerical tasks in four major categories, resulting in 41 meaningful combinations in total. These tasks are derived from primary and secondary education curricula, encompassing nearly all everyday numerical understanding and processing scenarios, and the rules of these tasks are very simple and clear. Through the benchmark, we find that current LLMs fail frequently in many of the tasks. To study the problem, we train small models with existing and potential techniques for enhancing NUPA (such as tokenizers, PEs, and number formats), comprehensively evaluating their effectiveness using our testbed. We also finetune practical-scale LLMs on our proposed NUPA tasks and find that 1) naive finetuning can improve NUPA a lot on many but not all tasks, and 2) surprisingly, techniques designed to enhance NUPA prove ineffective for finetuning pretrained models. We further explore the impact of chain-of-thought techniques on NUPA. Our work provides a more detailed and comprehensive understanding of NUPA in LLMs. Our benchmark and code are released at https://github.com/GraphPKU/number_cookbook.
labels: cs.AI, cs.CL
__index_level_0__: 506,031

id: 2502.12403
title: Sensing-based Robustness Challenges in Agricultural Robotic Harvesting
abstract: This paper presents the challenges agricultural robotic harvesters face in detecting and localising fruits under various environmental disturbances. In controlled laboratory settings, both the traditional HSV (Hue Saturation Value) transformation and the YOLOv8 (You Only Look Once) deep learning model were employed. However, only YOLOv8 was utilised in outdoor experiments, as the HSV transformation was not capable of accurately drawing fruit contours. Experiments include ten distinct fruit patterns with six apples and six oranges. A grid structure for homography (perspective) transformation was employed to convert detected midpoints into 3D world coordinates. The experiments evaluated detection and localisation under varying lighting and background disturbances, revealing accurate performance indoors, but significant challenges outdoors. Our results show that indoor experiments using YOLOv8 achieved 100% detection accuracy, while outdoor conditions decreased performance, with an average accuracy of 69.15% for YOLOv8 under direct sunlight. The study demonstrates that real-world applications reveal significant limitations due to changing lighting, background disturbances, and colour and shape variability. These findings underscore the need for further refinement of algorithms and sensors to enhance the robustness of robotic harvesters for agricultural use.
labels: cs.RO, cs.SY
__index_level_0__: 534,846

id: 2407.05192
title: Optimizing Bipolar Constellations for High-Rate Transmission in Short-Reach Fiber Links with Direct Detection
abstract: Bipolar modulation increases the achievable information rate of communication links with direct-detection receivers. This paper optimizes bipolar transmission with a modulator bias offset for short-reach fiber links. A neural network equalizer with successive interference cancellation is shown to gain over 100 Gbit/s compared to standard receivers.
labels: cs.IT
__index_level_0__: 470,861

id: 1905.01248
title: Asymmetric Dual-Arm Task Execution using an Extended Relative Jacobian
abstract: Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
labels: cs.RO, cs.SY
__index_level_0__: 129,669

id: 1705.09045
title: Cross-Domain Perceptual Reward Functions
abstract: In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on one's own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
labels: cs.AI
__index_level_0__: 74,135

id: 2411.18594
title: Biomolecular Analysis of Soil Samples and Rock Imagery for Tracing Evidence of Life Using a Mobile Robot
abstract: The search for evidence of past life on Mars presents a tremendous challenge that requires very advanced robotic technologies to overcome. Current digital microscopic imagers and spectrometers used for astrobiological examination suffer from limitations such as insufficient resolution, narrow detection range, and lack of portability. To overcome these challenges, this study presents modifications implemented on the Phoenix rover to enhance its capability to detect a broader spectrum of biosignatures on Mars. One of the notable improvements is the integration of advanced digital microscopic imagers and spectrometers, enabling high-resolution examination of soil samples. Additionally, the mechanical components of the device have been reinforced to enhance maneuverability and optimize subsurface sampling capabilities. Empirical investigations have demonstrated that Phoenix can navigate diverse geological environments and procure samples for biomolecular analysis. The biomolecular instrumentation and hybrid analytical methods showcased in this study demonstrate considerable potential for future astrobiology missions on Mars. The potential for enhancing the system lies in the possibility of broadening the range of detectable biomarkers and biosignatures.
labels: cs.LG, cs.RO, cs.CV
__index_level_0__: 511,922

id: 2412.11034
title: SAM-IF: Leveraging SAM for Incremental Few-Shot Instance Segmentation
abstract: We propose SAM-IF, a novel method for incremental few-shot instance segmentation leveraging the Segment Anything Model (SAM). SAM-IF addresses the challenges of class-agnostic instance segmentation by introducing a multi-class classifier and fine-tuning SAM to focus on specific target objects. To enhance few-shot learning capabilities, SAM-IF employs a cosine-similarity-based classifier, enabling efficient adaptation to novel classes with minimal data. Additionally, SAM-IF supports incremental learning by updating classifier weights without retraining the decoder. Our method achieves competitive but more reasonable results compared to existing approaches, particularly in scenarios requiring specific object segmentation with limited labeled data.
labels: cs.CV
__index_level_0__: 517,221

id: 1904.08613
title: Disentangled Representation Learning with Information Maximizing Autoencoder
abstract: Learning disentangled representations from unlabelled data is a non-trivial problem. In this paper we propose the Information Maximizing Autoencoder (InfoAE), in which the encoder learns a powerful disentangled representation by maximizing the mutual information between the representation and the given information in an unsupervised fashion. We evaluated our model on the MNIST dataset and achieved 98.9 ($\pm .1$) $\%$ test accuracy while using completely unsupervised training.
labels: cs.LG, cs.CV
__index_level_0__: 128,128

id: 2112.13254
title: On Dynamic Pricing with Covariates
abstract: We consider dynamic pricing with covariates under a generalized linear demand model: a seller can dynamically adjust the price of a product over a horizon of $T$ time periods, and at each time period $t$, the demand of the product is jointly determined by the price and an observable covariate vector $x_t\in\mathbb{R}^d$ through a generalized linear model with unknown coefficients. Most of the existing literature assumes the covariate vectors $x_t$'s are independently and identically distributed (i.i.d.); the few papers that relax this assumption either sacrifice model generality or yield sub-optimal regret bounds. In this paper, we show that UCB and Thompson sampling-based pricing algorithms can achieve an $O(d\sqrt{T}\log T)$ regret upper bound without assuming any statistical structure on the covariates $x_t$. Our upper bound on the regret matches the lower bound up to logarithmic factors. We thus show that (i) the i.i.d. assumption is not necessary for obtaining low regret, and (ii) the regret bound can be independent of the (inverse) minimum eigenvalue of the covariance matrix of the $x_t$'s, a quantity present in previous bounds. Moreover, we consider a constrained setting of the dynamic pricing problem where there is a limited and unreplenishable inventory, and we develop theoretical results that relate the best achievable algorithm performance to a variation measure with respect to the temporal distribution shift of the covariates. We also discuss conditions under which a better regret is achievable and demonstrate the proposed algorithms' performance with numerical experiments.
labels: cs.LG
__index_level_0__: 273,188

id: 1912.10824
title: Differentiable Reasoning on Large Knowledge Bases and Natural Language
abstract: Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at https://github.com/uclnlp/gntp.
labels: cs.LG, cs.CL, Other
__index_level_0__: 158,422

id: 1804.07344
title: Effects of sampling skewness of the importance-weighted risk estimator on model selection
abstract: Importance-weighting is a popular and well-researched technique for dealing with sample selection bias and covariate shift. It has desirable characteristics such as unbiasedness, consistency and low computational complexity. However, weighting can have a detrimental effect on an estimator as well. In this work, we empirically show that the sampling distribution of an importance-weighted estimator can be skewed. For sample selection bias settings, and for small sample sizes, the importance-weighted risk estimator produces overestimates for datasets in the body of the sampling distribution, i.e. the majority of cases, and large underestimates for datasets in the tail of the sampling distribution. These over- and underestimates of the risk lead to suboptimal regularization parameters when used for importance-weighted validation.
labels: cs.LG
__index_level_0__: 95,505

id: 1501.04686
title: Deep Convolutional Neural Networks for Action Recognition Using Depth Map Sequences
abstract: Recently, deep learning approaches have achieved promising results in various fields of computer vision. In this paper, a new framework called Hierarchical Depth Motion Maps (HDMM) + 3 Channel Deep Convolutional Neural Networks (3ConvNets) is proposed for human action recognition using depth map sequences. Firstly, we rotate the original depth data in 3D point clouds to mimic the rotation of cameras, so that our algorithms can handle view-variant cases. Secondly, in order to effectively extract body shape and motion information, we generate weighted depth motion maps (DMM) at several temporal scales, referred to as Hierarchical Depth Motion Maps (HDMM). Then, three channels of ConvNets are trained separately on the HDMMs from three projected orthogonal planes. The proposed algorithms are evaluated on the MSRAction3D, MSRAction3DExt, UTKinect-Action and MSRDailyActivity3D datasets. We also combine the last three datasets into a larger one (called the Combined Dataset) and test the proposed method on it. The results show that our approach achieves state-of-the-art results on the individual datasets without dramatic performance degradation on the Combined Dataset.
labels: cs.CV
__index_level_0__: 39,396

id: 1907.10982
title: Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation
abstract: Overfitting in deep learning has been the focus of a number of recent works, yet its exact impact on the behavior of neural networks is not well understood. This study analyzes overfitting by examining how the distribution of logits alters in relation to how much the model overfits. Specifically, we find that when training with few data samples, the distribution of logit activations when processing unseen test samples of an under-represented class tends to shift towards and even across the decision boundary, while the over-represented class seems unaffected. In image segmentation, foreground samples are often heavily under-represented. We observe that sensitivity of the model drops as a result of overfitting, while precision remains mostly stable. Based on our analysis, we derive asymmetric modifications of existing loss functions and regularizers including a large margin loss, focal loss, adversarial training and mixup, which specifically aim at reducing the shift observed when embedding unseen samples of the under-represented class. We study the case of binary segmentation of brain tumor core and show that our proposed simple modifications lead to significantly improved segmentation performance over the symmetric variants.
labels: cs.LG, cs.CV
__index_level_0__: 139,753

id: 2402.06886
title: Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF
abstract: Bilevel optimization has recently been applied to many machine learning tasks. However, its applications have been restricted to the supervised learning setting, where static objective functions with benign structures are considered. But bilevel problems such as incentive design, inverse reinforcement learning (RL), and RL from human feedback (RLHF) are often modeled with dynamic objective functions that go beyond simple static objective structures, which poses significant challenges for existing bilevel solutions. To tackle this new class of bilevel problems, we introduce the first principled algorithmic framework for solving bilevel RL problems through the lens of penalty formulation. We provide theoretical studies of the problem landscape and its penalty-based (policy) gradient algorithms. We demonstrate the effectiveness of our algorithms via simulations in the Stackelberg Markov game, RL from human feedback, and incentive design.
labels: cs.LG
__index_level_0__: 428,474

id: 2003.02357
title: Plasticity-Enhanced Domain-Wall MTJ Neural Networks for Energy-Efficient Online Learning
abstract: Machine learning implements backpropagation via abundant training samples. We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system consists of unsupervised (clustering) as well as supervised sub-systems, and generalizes quickly (with few samples). We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach, as the learning budget stays below 20 $\mu J$ even for large tasks used typically in machine learning.
labels: cs.LG, cs.NE
__index_level_0__: 166,915

id: 2302.01073
title: Learning in Multi-Memory Games Triggers Complex Dynamics Diverging from Nash Equilibrium
abstract: Repeated games consider a situation where multiple agents are motivated by their independent rewards throughout learning. In general, the dynamics of their learning become complex. Especially when their rewards compete with each other, as in zero-sum games, the dynamics often do not converge to their optimum, i.e., the Nash equilibrium. To tackle such complexity, many studies have understood various learning algorithms as dynamical systems and discovered qualitative insights among the algorithms. However, such studies have yet to handle multi-memory games (where agents can memorize actions they played in the past and choose their actions based on their memories), even though memorization plays a pivotal role in artificial intelligence and interpersonal relationships. This study extends two major learning algorithms in games, i.e., replicator dynamics and gradient ascent, to multi-memory games. Then, we prove their dynamics are identical. Furthermore, theoretically and experimentally, we clarify that the learning dynamics diverge from the Nash equilibrium in multi-memory zero-sum games and reach heteroclinic cycles (sojourning longer around the boundary of the strategy space), providing a fundamental advance in learning in games.
labels: cs.MA, Other
__index_level_0__: 343,472

2205.14716
Vision-Assisted User Clustering for Robust mmWave-NOMA Systems
When operated in the mmWave band, user channels become highly correlated, which can be exploited in mmWave-NOMA systems to cluster a set of "correlated" users together. Identifying the set of users to cluster greatly affects the viability of NOMA systems. Typically, only channel state information (CSI) is used to make these clustering decisions. When any problem arises in accessing up-to-date and accurate CSI, user clustering will not function properly due to its hard dependency on CSI, and this will obviously degrade the robustness of the NOMA systems. To improve the robustness of NOMA systems, we propose to utilize emerging trends such as location-aware and camera-equipped base stations (CBSs), which do not require any extra radio frequency resource consumption. Specifically, we explore three different dimensions of feedback that a CBS can benefit from to solve the user clustering problem, namely CSI-based feedback and non-CSI-based feedback, comprised of user equipment (UE) location and the CBS camera feed. We first investigate how the vision assistance of a CBS can be used in conjunction with other dimensions of feedback to make clustering decisions in various scenarios. Later, we provide a simple user case study to illustrate how to implement vision-assisted user clustering in mmWave-NOMA systems to improve robustness, in which a deep learning (DL) beam selection algorithm is trained on the images captured by the CBS to perform NOMA clustering. We demonstrate that user clustering without CSI can achieve comparable performance to accurate CSI-based solutions, and user clustering can continue to function without much performance loss even in scenarios where CSI is severely outdated or not available at all.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
299,466
2305.04603
Privacy-Preserving Representations are not Enough -- Recovering Scene Content from Camera Poses
Visual localization is the task of estimating the camera pose from which a given image was taken and is central to several 3D computer vision applications. With the rapid growth in the popularity of AR/VR/MR devices and cloud-based applications, privacy issues are becoming a very important aspect of the localization process. Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service. In this paper, we show that an attacker can learn about details of a scene without any access by simply querying a localization service. The attack is based on the observation that modern visual localization algorithms are robust to variations in appearance and geometry. While this is in general a desired property, it also leads to algorithms localizing objects that are similar enough to those present in a scene. An attacker can thus query a server with a large enough set of images of objects, e.g., obtained from the Internet, and some of them will be localized. The attacker can thus learn about object placements from the camera poses returned by the service (which is the minimal information returned by such a service). In this paper, we develop a proof-of-concept version of this attack and demonstrate its practical feasibility. The attack does not place any requirements on the localization algorithm used, and thus also applies to privacy-preserving representations. Current work on privacy-preserving representations alone is thus insufficient.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
362,835
2009.07436
Weakly-Supervised Online Hashing
With the rapid development of social websites, recent years have witnessed an explosive growth of social images with user-provided tags which continuously arrive in a streaming fashion. Due to the fast query speed and low storage cost, hashing-based methods for image search have attracted increasing attention. However, existing hashing methods for social image retrieval are based on batch mode, which violates the nature of social images, i.e., social images are usually generated periodically or collected in a streaming fashion. Although there exist many online image hashing methods, they either adopt unsupervised learning, which ignores the relevant tags, or are designed in a supervised manner, which needs high-quality labels. In this paper, to overcome the above limitations, we propose a new method named Weakly-supervised Online Hashing (WOH). In order to learn high-quality hash codes, WOH exploits the weak supervision by considering the semantics of tags and removing the noise. Besides, we develop a discrete online optimization algorithm for WOH, which is efficient and scalable. Extensive experiments conducted on two real-world datasets demonstrate the superiority of WOH compared with several state-of-the-art hashing baselines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
195,924
1506.04799
Power Systems Without Fuel
The finiteness of fossil fuels implies that future electric power systems may predominantly source energy from fuel-free renewable resources like wind and solar. Evidently, these power systems without fuel will be environmentally benign, sustainable, and subject to milder failure scenarios. Many of these advantages were projected decades ago with the definition of the soft energy path, which describes a future where all energy is provided by numerous small, simple, and diverse renewable sources. Here we provide a thorough investigation of power systems without fuel from technical and economic standpoints. The paper is organized by timescale and covers issues like the irrelevance of unit commitment in networks without large, fuel-based generators, the dubiousness of nodal pricing without fuel costs, and the need for new system-level models and control methods for semiconductor-based energy-conversion interfaces.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
44,210
2305.17355
Rethinking PRL: A Multiscale Progressively Residual Learning Network for Inverse Halftoning
Image inverse halftoning is a classic image restoration task, aiming to recover continuous-tone images from halftone images with only bilevel pixels. Because halftone images lose much of the original image content, inverse halftoning is a classic ill-posed problem. Although existing inverse halftoning algorithms achieve good performance, their results lose image details and features. Therefore, it is still a challenge to recover high-quality continuous-tone images. In this paper, we propose an end-to-end multiscale progressively residual learning network (MSPRL), which has a UNet architecture and takes multiscale input images. To make full use of the information in input images of different scales, we design a shallow feature extraction module to capture similar features between images of different scales. We systematically study the performance of different methods and compare them with our proposed method. In addition, we employ different training strategies to optimize the model, which is important for optimizing the training process and improving performance. Extensive experiments demonstrate that our MSPRL model obtains considerable performance gains in detail restoration.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
368,545
2406.16685
A locking-free isogeometric thin shell formulation based on higher order accurate local strain projection via approximate dual splines
We present a novel isogeometric discretization approach for the Kirchhoff-Love shell formulation based on the Hellinger-Reissner variational principle. For mitigating membrane locking, we discretize the independent strains with spline basis functions that are one degree lower than those used for the displacements. To enable computationally efficient condensation of the independent strains, we first discretize the variations of the independent strains with approximate dual splines to obtain a projection matrix that is close to a diagonal matrix. We then diagonalize this strain projection matrix via row-sum lumping. The combination of approximate dual test functions with row-sum lumping enables the direct condensation of the independent strain fields at the quadrature point level, while maintaining higher-order accuracy at optimal rates of convergence. We illustrate the numerical properties and the performance of our approach through numerical benchmarks, including a curved Euler-Bernoulli beam and the examples of the shell obstacle course.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
467,227
1409.3126
Performance Analysis of Cognitive Radio Systems with Imperfect Channel Sensing and Estimation
In cognitive radio systems, employing sensing-based spectrum access strategies, secondary users are required to perform channel sensing in order to detect the activities of primary users. In realistic scenarios, channel sensing occurs with possible errors due to miss-detections and false alarms. As another challenge, time-varying fading conditions in the channel between the secondary transmitter and the secondary receiver have to be learned via channel estimation. In this paper, performance of causal channel estimation methods in correlated cognitive radio channels under imperfect channel sensing results is analyzed, and achievable rates under both channel and sensing uncertainty are investigated. Initially, cognitive radio channel model with channel sensing error and channel estimation is described. Then, using pilot symbols, minimum mean square error (MMSE) and linear-MMSE (L-MMSE) estimation methods are employed at the secondary receiver to learn the channel fading coefficients. Expressions for the channel estimates and mean-squared errors (MSE) are determined, and their dependencies on channel sensing results, and pilot symbol period and energy are investigated. Since sensing uncertainty leads to uncertainty in the variance of the additive disturbance, channel estimation strategies and performance are interestingly shown to depend on the sensing reliability. It is further shown that the L-MMSE estimation method, which is in general suboptimal, performs very close to MMSE estimation. Furthermore, assuming the channel estimation errors and the interference introduced by the primary users as zero-mean and Gaussian distributed, achievable rate expressions of linear modulation schemes and Gaussian signaling are determined. Subsequently, the training period, and data and pilot symbol energy allocations are jointly optimized to maximize the achievable rates for both signaling schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,963
2002.11096
Causal Inference With Selectively Deconfounded Data
Given only data generated by a standard confounding graph with an unobserved confounder, the Average Treatment Effect (ATE) is not identifiable. To estimate the ATE, a practitioner must then either (a) collect deconfounded data; (b) run a clinical trial; or (c) elucidate further properties of the causal graph that might render the ATE identifiable. In this paper, we consider the benefit of incorporating a large confounded observational dataset (confounder unobserved) alongside a small deconfounded observational dataset (confounder revealed) when estimating the ATE. Our theoretical results suggest that the inclusion of confounded data can significantly reduce the quantity of deconfounded data required to estimate the ATE to within a desired accuracy level. Moreover, in some cases -- say, genetics -- we could imagine retrospectively selecting samples to deconfound. We demonstrate that by actively selecting these samples based upon the (already observed) treatment and outcome, we can reduce sample complexity further. Our theoretical and empirical results establish that the worst-case relative performance of our approach (vs. a natural benchmark) is bounded while our best-case gains are unbounded. Finally, we demonstrate the benefits of selective deconfounding using a large real-world dataset related to genetic mutation in cancer.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
165,607
2312.03179
CaloQVAE : Simulating high-energy particle-calorimeter interactions using hybrid quantum-classical generative models
The Large Hadron Collider's high luminosity era presents major computational challenges in the analysis of collision events. Large amounts of Monte Carlo (MC) simulation will be required to constrain the statistical uncertainties of the simulated datasets below those of the experimental data. Modelling of high-energy particles propagating through the calorimeter section of the detector is the most computationally intensive MC simulation task. We introduce a technique combining recent advancements in generative models and quantum annealing for fast and efficient simulation of high-energy particle-calorimeter interactions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
413,160
1302.1554
Object-Oriented Bayesian Networks
Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of inter-related objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language has clear declarative semantics: an OOBN can be interpreted as a stochastic functional program, so that it uniquely specifies a probabilistic model. We provide an inference algorithm for OOBNs, and show that much of the structural information encoded by an OOBN--particularly the encapsulation of variables within an object and the reuse of model fragments in different contexts--can also be used to speed up the inference process.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
21,855
2405.14883
Spectral Image Data Fusion for Multisource Data Augmentation
Multispectral and hyperspectral images are increasingly popular in different research fields, such as remote sensing, astronomical imaging, or precision agriculture. However, the amount of free data available to perform machine learning tasks is relatively small. Moreover, artificial intelligence models developed in the area of spectral imaging require input images with a fixed spectral signature, expecting the data to have the same number of spectral bands or the same spectral resolution. This requirement significantly reduces the number of usable sources that can be used for a given model. The scope of this study is to introduce a methodology for spectral image data fusion, in order to allow machine learning models to be trained and/or used on data from a larger number of sources, thus providing better generalization. For this purpose, we propose different interpolation techniques, in order to make multisource spectral data compatible with each other. The interpolation outcomes are evaluated through various approaches. This includes direct assessments using surface plots and metrics such as a Custom Mean Squared Error (CMSE) and the Normalized Difference Vegetation Index (NDVI). Additionally, indirect evaluation is done by estimating their impact on machine learning model training, particularly for semantic segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
456,652
2003.07339
Reinforcement Learning for Electricity Network Operation
This paper presents the background material required for the Learning to Run Power Networks Challenge. The challenge is focused on using Reinforcement Learning to train an agent to manage the real-time operations of a power grid, balancing power flows and making interventions to maintain stability. We present an introduction to power systems targeted at the machine learning community and an introduction to reinforcement learning targeted at the power systems community. This is to enable and encourage broader participation in the challenge and collaboration between these two communities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
168,398
1711.03711
Synchronization of Kuramoto Oscillators via Cutset Projections
Synchronization in coupled oscillators networks is a remarkable phenomenon of relevance in numerous fields. For Kuramoto oscillators the loss of synchronization is determined by a trade-off between coupling strength and oscillator heterogeneity. Despite extensive prior work, the existing sufficient conditions for synchronization are either very conservative or heuristic and approximate. Using a novel cutset projection operator, we propose a new family of sufficient synchronization conditions; these conditions rigorously identify the correct functional form of the trade-off between coupling strength and oscillator heterogeneity. To overcome the need to solve a nonconvex optimization problem, we then provide two explicit bounding methods, thereby obtaining (i) the best-known sufficient condition for unweighted graphs based on the 2-norm, and (ii) the first-known generally-applicable sufficient condition based on the $\infty$-norm. We conclude with a comparative study of our novel $\infty$-norm condition for specific topologies and IEEE test cases; for most IEEE test cases our new sufficient condition is one to two orders of magnitude more accurate than previous rigorous tests.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
84,267
1211.5723
The Survey of Data Mining Applications And Feature Scope
In this paper we survey a variety of techniques, approaches, and research areas that are helpful to, and mark out, the important fields of data mining technology. Many multinational companies and large organizations operate in different places across different countries. Each place of operation may generate large volumes of data. Corporate decision makers require access to all such sources in order to take strategic decisions. The data warehouse delivers significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognized; however, in today's business environment, efficiency or speed is not the only key to competitiveness. Huge amounts of data, now available on the order of tera- to peta-bytes, have drastically changed the areas of science and engineering. To analyze, manage, and make decisions over such huge amounts of data, we need the techniques of data mining, which are transforming many fields. This paper presents a number of applications of data mining and also discusses the scope of data mining for further research.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
19,914
2101.07292
Data Protection Impact Assessment for the Corona App
Since SARS-CoV-2 started spreading in Europe in early 2020, there has been a strong call for technical solutions to combat or contain the pandemic, with contact tracing apps at the heart of the debates. The EU's General Data Protection Regulation (GDPR) requires controllers to carry out a data protection impact assessment (DPIA) where their data processing is likely to result in a high risk to the rights and freedoms (Art. 35 GDPR). A DPIA is a structured risk analysis that identifies and evaluates possible consequences of data processing relevant to fundamental rights and describes the measures envisaged to address these risks or expresses the inability to do so. Based on the Standard Data Protection Model (SDM), we present a scientific DPIA which thoroughly examines three published contact tracing app designs that are considered to be the most "privacy-friendly": PEPP-PT, DP-3T and a concept summarized by Chaos Computer Club member Linus Neumann, all of which process personal health data. The DPIA starts with an analysis of the processing context and some expected use cases. Then, the processing activities are described by defining a realistic processing purpose. This is followed by the legal assessment and threshold analysis. Finally, we analyse the weak points, the risks and determine appropriate protective measures. We show that even decentralized implementations involve numerous serious weaknesses and risks. Legally, consent is unfit as a legal ground; hence the data must be processed based on a law. We also found that measures to realize the rights of data subjects and affected people are not sufficient. Last but not least, we show that anonymization must be understood as a continuous process, which aims at separating the personal reference and is based on a mix of legal, organizational and technical measures. All currently available proposals lack such an explicit separation process.
false
false
false
true
false
false
false
false
false
false
false
false
true
true
false
false
false
false
215,985
1007.4149
A rate-distortion scenario for the emergence and evolution of noisy molecular codes
We discuss, in terms of rate-distortion theory, the fitness of molecular codes as the problem of designing an optimal information channel. The fitness is governed by an interplay between the cost and quality of the channel, which induces smoothness in the code. By incorporating this code fitness into population dynamics models, we suggest that the emergence and evolution of molecular codes may be explained by simple channel design considerations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,110
2402.03327
Uni3D-LLM: Unifying Point Cloud Perception, Generation and Editing with Large Language Models
In this paper, we introduce Uni3D-LLM, a unified framework that leverages a Large Language Model (LLM) to integrate tasks of 3D perception, generation, and editing within point cloud scenes. This framework empowers users to effortlessly generate and modify objects at specified locations within a scene, guided by the versatility of natural language descriptions. Uni3D-LLM harnesses the expressive power of natural language to allow for precise command over the generation and editing of 3D objects, thereby significantly enhancing operational flexibility and controllability. By mapping point clouds into a unified representation space, Uni3D-LLM achieves cross-application functionality, enabling the seamless execution of a wide array of tasks, ranging from the accurate instantiation of 3D objects to the diverse requirements of interactive design. Through a comprehensive suite of rigorous experiments, the efficacy of Uni3D-LLM in the comprehension, generation, and editing of point clouds has been validated. Additionally, we have assessed the impact of integrating a point cloud perception module on the generation and editing processes, confirming the substantial potential of our approach for practical applications.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
426,963
0909.4995
Geometrical Interpretation of Shannon's Entropy Based on the Born Rule
In this paper we will analyze discrete probability distributions in which the probabilities of particular outcomes of some experiment (microstates) can be represented by ratios of natural numbers (in other words, probabilities are represented by digital numbers of finite representation length). We will introduce several results that are based on the recently proposed JoyStick Probability Selector, which represents a geometrical interpretation of probability based on the Born rule. The terms generic space and generic dimension of a discrete distribution, as well as effective dimension, will be introduced. It will be shown how this simple geometric representation can lead to optimal code-length coding of a sequence of signals. Then, we will give a new, geometrical, interpretation of the Shannon entropy of a discrete distribution. We will suggest that the Shannon entropy represents the logarithm of the effective dimension of the distribution. The proposed geometrical interpretation of the Shannon entropy can be used to prove some information inequalities in an elementary way.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
4,581
2206.07920
PInKS: Preconditioned Commonsense Inference with Minimal Supervision
Reasoning with preconditions such as "glass can be used for drinking water unless the glass is shattered" remains an open problem for language models. The main challenge lies in the scarcity of preconditions data and the model's lack of support for such reasoning. We present PInKS, Preconditioned Commonsense Inference with WeaK Supervision, an improved model for reasoning with preconditions through minimum supervision. We show, both empirically and theoretically, that PInKS improves the results on benchmarks focused on reasoning with the preconditions of commonsense knowledge (up to 40% Macro-F1 scores). We further investigate PInKS through PAC-Bayesian informativeness analysis, precision measures, and ablation study.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
302,937
2106.03974
A Nonlinear Observability Analysis of Ambient Wind Estimation with Uncalibrated Sensors, Inspired by Insect Neural Encoding
Estimating the direction of ambient fluid flow is key for many flying or swimming animals and robots, but can only be accomplished through indirect measurements and active control. Recent work with tethered flying insects indicates that their sensory representation of orientation, apparent flow, direction of movement, and control is represented by a 2-dimensional angular encoding in the central brain. This representation simplifies sensory integration by projecting the direction (but not scale) of measurements with different units onto a universal polar coordinate frame. Aligning these angular measurements with one another and with the motor system does, however, require a calibration of angular gain and offset for each sensor. This calibration could change with time due to changes in the environment or physical structure. The circumstances under which small robots and animals with angular sensors and changing calibrations could self-calibrate and estimate the direction of ambient fluid flow while moving remain an open question. Here, a methodical nonlinear observability analysis is presented to address this. The analysis shows that it is mathematically feasible to continuously estimate flow direction and perform regular self-calibrations by adopting frequent changes in course (or active prevention thereof) and orientation, and requires fusion and temporal differentiation of three sensory measurements: apparent flow, orientation (or its derivative), and direction of motion (or its derivative). These conclusions are consistent with the zigzagging trajectories exhibited by many plume tracking organisms, suggesting that perhaps flow estimation is a secondary driver of their trajectory structure.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
239,534
1806.10306
Unsupervised and Efficient Vocabulary Expansion for Recurrent Neural Network Language Models in ASR
In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLM) are used to rescore a word lattice or N-best hypotheses list. Due to the expensive training, the RNNLM's vocabulary set accommodates only a small shortlist of the most frequent words. This leads to suboptimal performance if input speech contains many out-of-shortlist (OOS) words. An effective solution is to increase the shortlist size and retrain the entire network, which is highly inefficient. Therefore, we propose an efficient method to expand the shortlist set of a pretrained RNNLM without incurring expensive retraining or using additional training data. Our method exploits the structure of the RNNLM, which can be decoupled into three parts: input projection layer, middle layers, and output projection layer. Specifically, our method expands the word embedding matrices in the projection layers and keeps the middle layers unchanged. In this approach, the functionality of the pretrained RNNLM will be correctly maintained as long as OOS words are properly modeled in the two embedding spaces. We propose to model the OOS words by borrowing linguistic knowledge from appropriate in-shortlist words. Additionally, we propose to generate the list of OOS words to expand the vocabulary in an unsupervised manner by automatically extracting them from ASR output.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
101,523
1805.00224
Multi-objective path planning of an autonomous mobile robot using hybrid PSO-MFB optimization algorithm
The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments. The problem is solved by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The proposed path planning algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed algorithm consists of three modules. The first module forms an optimized path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimizes distance and follows path smoothness criteria. The second module detects any infeasible points generated by the proposed hybrid PSO-MFB Algorithm by a novel Local Search (LS) algorithm integrated with the hybrid PSO-MFB algorithm to be converted into feasible solutions. The third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. The simulation results indicate that this method generates an optimal feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Moreover, compared to recent path planning techniques, simulation results show that the proposed hybrid PSO-MFB algorithm is highly competitive in terms of path optimality.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
96,386
2010.12687
Robust Correction of Sampling Bias Using Cumulative Distribution Functions
Varying domains and biased datasets can lead to differences between the training and the target distributions, known as covariate shift. Current approaches for alleviating this often rely on estimating the ratio of training and target probability density functions. These techniques require parameter tuning and can be unstable across different datasets. We present a new method for handling covariate shift using the empirical cumulative distribution function estimates of the target distribution by a rigorous generalization of a recent idea proposed by Vapnik and Izmailov. Further, we show experimentally that our method is more robust in its predictions, is not reliant on parameter tuning and shows similar classification performance compared to the current state-of-the-art techniques on synthetic and real datasets.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
202,800
2308.16725
Terrain Diffusion Network: Climatic-Aware Terrain Generation with Geological Sketch Guidance
Sketch-based terrain generation seeks to create realistic landscapes for virtual environments in various applications such as computer games, animation and virtual reality. Recently, deep learning based terrain generation has emerged, notably methods based on generative adversarial networks (GAN). However, these methods often struggle to fulfill the requirements of flexible user control and maintain generative diversity for realistic terrain. Therefore, we propose a novel diffusion-based method, namely terrain diffusion network (TDN), which actively incorporates user guidance for enhanced controllability, taking into account terrain features like rivers, ridges, basins, and peaks. Instead of adhering to a conventional monolithic denoising process, which often compromises the fidelity of terrain details or the alignment with user control, a multi-level denoising scheme is proposed to generate more realistic terrains by taking into account fine-grained details, particularly those related to climatic patterns influenced by erosion and tectonic activities. Specifically, three terrain synthesisers are designed for structural, intermediate, and fine-grained level denoising purposes, which allow each synthesiser to concentrate on a distinct terrain aspect. Moreover, to maximise the efficiency of our TDN, we further introduce terrain and sketch latent spaces for the synthesisers with pre-trained terrain autoencoders. Comprehensive experiments on a new dataset constructed from NASA Topology Images clearly demonstrate the effectiveness of our proposed method, achieving the state-of-the-art performance. Our code and dataset will be publicly available.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
389,089
2105.07532
Towards Synthetic Multivariate Time Series Generation for Flare Forecasting
One of the limiting factors in training data-driven, rare-event prediction algorithms is the scarcity of the events of interest resulting in an extreme imbalance in the data. There have been many methods introduced in the literature for overcoming this issue; simple data manipulation through undersampling and oversampling, utilizing cost-sensitive learning algorithms, or by generating synthetic data points following the distribution of the existing data. While synthetic data generation has recently received a great deal of attention, there are real challenges involved in doing so for high-dimensional data such as multivariate time series. In this study, we explore the usefulness of the conditional generative adversarial network (CGAN) as a means to perform data-informed oversampling in order to balance a large dataset of multivariate time series. We utilize a flare forecasting benchmark dataset, named SWAN-SF, and design two verification methods to both quantitatively and qualitatively evaluate the similarity between the generated minority and the ground-truth samples. We further assess the quality of the generated samples by training a classical, supervised machine learning algorithm on synthetic data, and testing the trained model on the unseen, real data. The results show that the classifier trained on the data augmented with the synthetic multivariate time series achieves a significant improvement compared with the case where no augmentation is used. The popular flare forecasting evaluation metrics, TSS and HSS, report 20-fold and 5-fold improvements, respectively, indicating the remarkable statistical similarities, and the usefulness of CGAN-based data generation for complicated tasks such as flare forecasting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
235,462
2004.05422
The Role of Stem Noise in Visual Perception and Image Quality Measurement
This paper considers reference-free quality assessment of distorted and noisy images. Specifically, it considers the first and second order statistics of stem noise that can be evaluated given any image. In the research field of Image Quality Assessment (IQA), the stem noise is defined as the input of an Auto Regressive (AR) process, from which a low-energy and de-correlated version of the image can be recovered. To estimate the AR model parameters and associated stem noise energy, the Yule-Walker equations are used such that the accompanying Auto Correlation Function (ACF) coefficients can be treated as model parameters for image reconstruction. To characterize systematic signal dependent and signal independent distortions, the mean and variance of stem noise can be evaluated over the image. Crucially, this paper shows that these statistics have a predictive validity in relation to human ratings of image quality. Furthermore, under certain kinds of image distortion, stem noise statistics show very significant correlations with established measures of image quality.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
172,181
2003.02089
Gradient Statistics Aware Power Control for Over-the-Air Federated Learning
Federated learning (FL) is a promising technique that enables many edge devices to train a machine learning model collaboratively in wireless networks. By exploiting the superposition nature of wireless waveforms, over-the-air computation (AirComp) can accelerate model aggregation and hence facilitate communication-efficient FL. Due to channel fading, power control is crucial in AirComp. Prior works assume that the signals to be aggregated from each device, i.e., local gradients have identical statistics. In FL, however, gradient statistics vary over both training iterations and feature dimensions, and are unknown in advance. This paper studies the power control problem for over-the-air FL by taking gradient statistics into account. The goal is to minimize the aggregation error by optimizing the transmit power at each device subject to peak power constraints. We obtain the optimal policy in closed form when gradient statistics are given. Notably, we show that the optimal transmit power is continuous and monotonically decreases with the squared multivariate coefficient of variation (SMCV) of gradient vectors. We then propose a method to estimate gradient statistics with negligible communication cost. Experimental results demonstrate that the proposed gradient-statistics-aware power control achieves higher test accuracy than the existing schemes for a wide range of scenarios.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
166,850
cs/0604040
Optimal Distortion-Power Tradeoffs in Sensor Networks: Gauss-Markov Random Processes
We investigate the optimal performance of dense sensor networks by studying the joint source-channel coding problem. The overall goal of the sensor network is to take measurements from an underlying random process, code and transmit those measurement samples to a collector node in a cooperative multiple access channel with feedback, and reconstruct the entire random process at the collector node. We provide lower and upper bounds for the minimum achievable expected distortion when the underlying random process is stationary and Gaussian. In the case where the random process is also Markovian, we evaluate the lower and upper bounds explicitly and show that they are of the same order for a wide range of sum power constraints. Thus, for a Gauss-Markov random process, under these sum power constraints, we determine the achievability scheme that is order-optimal, and express the minimum achievable expected distortion as a function of the sum power constraint.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,382
2112.10297
DXML: Distributed Extreme Multilabel Classification
As a big data application, extreme multilabel classification has emerged as an important research topic with applications in ranking and recommendation of products and items. A scalable hybrid distributed and shared memory implementation of extreme classification for large scale ranking and recommendation is proposed. In particular, the implementation is a mix of message passing using MPI across nodes and using multithreading on the nodes using OpenMP. The expression for communication latency and communication volume is derived. Parallelism using work-span model is derived for shared memory architecture. This throws light on the expected scalability of similar extreme classification methods. Experiments show that the implementation is relatively faster to train and test on some large datasets. In some cases, model size is relatively small.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
true
272,396
1709.10433
On the Capacity of Face Representation
In this paper we address the following question: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity problem in terms of packing bounds on a low-dimensional manifold embedded within a deep representation space. By explicitly accounting for the manifold structure of the representation as well as two different sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of a given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of two deep neural network based face representations, namely 128-dimensional FaceNet and 512-dimensional SphereFace. Numerical experiments on unconstrained faces (IJB-C) provide a capacity upper bound of $2.7\times10^4$ for FaceNet and $8.4\times10^4$ for SphereFace representation at a false acceptance rate (FAR) of 1%. As expected, capacity reduces drastically at lower FARs. The capacity at FAR of 0.1% and 0.001% is $2.2\times10^3$ and $1.6\times10^{1}$, respectively for FaceNet and $3.6\times10^3$ and $6.0\times10^0$, respectively for SphereFace.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
81,780
1904.00956
Guided Meta-Policy Search
Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch. Meta-RL aims to address this challenge by leveraging experience from previous tasks so as to more quickly solve new tasks. However, in practice, these algorithms generally also require large amounts of on-policy experience during the meta-training process, making them impractical for use in many problems. To this end, we propose to learn a reinforcement learning procedure in a federated way, where individual off-policy learners can solve the individual meta-training tasks, and then consolidate these solutions into a single meta-learner. Since the central meta-learner learns by imitating the solutions to the individual tasks, it can accommodate either the standard meta-RL problem setting or a hybrid setting where some or all tasks are provided with example demonstrations. The former results in an approach that can leverage policies learned for previous tasks without significant amounts of on-policy data during meta-training, whereas the latter is particularly useful in cases where demonstrations are easy for a person to provide. Across a number of continuous control meta-RL problems, we demonstrate significant improvements in meta-RL sample efficiency in comparison to prior work as well as the ability to scale to domains with visual observations.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
126,025
2211.01943
Quantized Precoding and RIS-Assisted Modulation for Integrated Sensing and Communications Systems
In this paper, we present a novel reconfigurable intelligent surface (RIS)-assisted integrated sensing and communication (ISAC) system with 1-bit quantization at the ISAC base station. An RIS is introduced in the ISAC system to mitigate the effects of coarse quantization and to enable the co-existence between sensing and communication functionalities. Specifically, we design transmit precoder to obtain 1-bit sensing waveforms having a desired radiation pattern. The RIS phase shifts are then designed to modulate the 1-bit sensing waveform to transmit M-ary phase shift keying symbols to users. Through numerical simulations, we show that the proposed method offers significantly improved symbol error probabilities when compared to MIMO communication systems having quantized linear precoders, while still offering comparable sensing performance as that of unquantized sensing systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
328,413
2306.06276
Contrastive Learning for Predicting Cancer Prognosis Using Gene Expression Values
Recent advancements in image classification have demonstrated that contrastive learning (CL) can aid in further learning tasks by acquiring good feature representation from a limited number of data samples. In this paper, we applied CL to tumor transcriptomes and clinical data to learn feature representations in a low-dimensional space. We then utilized these learned features to train a classifier to categorize tumors into a high- or low-risk group of recurrence. Using data from The Cancer Genome Atlas (TCGA), we demonstrated that CL can significantly improve classification accuracy. Specifically, our CL-based classifiers achieved an area under the receiver operating characteristic curve (AUC) greater than 0.8 for 14 types of cancer, and an AUC greater than 0.9 for 2 types of cancer. We also developed CL-based Cox (CLCox) models for predicting cancer prognosis. Our CLCox models trained with the TCGA data outperformed existing methods significantly in predicting the prognosis of 19 types of cancer under consideration. The performance of CLCox models and CL-based classifiers trained with TCGA lung and prostate cancer data were validated using the data from two independent cohorts. We also show that the CLCox model trained with the whole transcriptome significantly outperforms the Cox model trained with the 21 genes of Oncotype DX that is in clinical use for breast cancer patients. CL-based classifiers and CLCox models for 19 types of cancer are publicly available and can be used to predict cancer prognosis using the RNA-seq transcriptome of an individual tumor. Python codes for model training and testing are also publicly accessible, and can be applied to train new CL-based models using gene expression data of tumors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
372,541
1901.01660
Deeper and Wider Siamese Networks for Real-Time Visual Tracking
Siamese networks have drawn great attention in visual tracking because of their balanced accuracy and speed. However, the backbone networks used in Siamese trackers are relatively shallow, such as AlexNet [18], which does not fully take advantage of the capability of modern deep neural networks. In this paper, we investigate how to leverage deeper and wider convolutional neural networks to enhance tracking robustness and accuracy. We observe that direct replacement of backbones with existing powerful architectures, such as ResNet [14] and Inception [33], does not bring improvements. The main reasons are that 1) large increases in the receptive field of neurons lead to reduced feature discriminability and localization precision; and 2) the network padding for convolutions induces a positional bias in learning. To address these issues, we propose new residual modules to eliminate the negative impact of padding, and further design new architectures using these modules with controlled receptive field size and network stride. The designed architectures are lightweight and guarantee real-time tracking speed when applied to SiamFC [2] and SiamRPN [20]. Experiments show that solely due to the proposed network architectures, our SiamFC+ and SiamRPN+ obtain up to 9.8%/5.7% (AUC), 23.3%/8.8% (EAO) and 24.4%/25.0% (EAO) relative improvements over the original versions [2, 20] on the OTB-15, VOT-16 and VOT-17 datasets, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
118,030
2107.00778
On Bridging Generic and Personalized Federated Learning for Image Classification
Federated learning is promising for its capability to collaboratively train models with multiple clients without accessing their data, but vulnerable when clients' data distributions diverge from each other. This divergence further leads to a dilemma: "Should we prioritize the learned model's generic performance (for future use at the server) or its personalized performance (for each client)?" These two, seemingly competing goals have divided the community to focus on one or the other, yet in this paper we show that it is possible to approach both at the same time. Concretely, we propose a novel federated learning framework that explicitly decouples a model's dual duties with two prediction tasks. On the one hand, we introduce a family of losses that are robust to non-identical class distributions, enabling clients to train a generic predictor with a consistent objective across them. On the other hand, we formulate the personalized predictor as a lightweight adaptive module that is learned to minimize each client's empirical risk on top of the generic predictor. With this two-loss, two-predictor framework which we name Federated Robust Decoupling (Fed-RoD), the learned model can simultaneously achieve state-of-the-art generic and personalized performance, essentially bridging the two tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
244,266
1812.03201
Residual Reinforcement Learning for Robot Control
Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
115,947
1702.01894
CAAD: Computer Architecture for Autonomous Driving
We describe the computing tasks involved in autonomous driving and examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
67,892
1701.02273
Visual Multiple-Object Tracking for Unknown Clutter Rate
In multi-object tracking applications, model parameter tuning is a prerequisite for reliable performance. In particular, it is difficult to know the statistics of false measurements due to various sensing conditions and changes in the field of view. In this paper we are interested in designing a multi-object tracking algorithm that handles an unknown false measurement rate. The recently proposed robust multi-Bernoulli filter is employed for clutter estimation, while the generalized labeled multi-Bernoulli filter is considered for target tracking. Performance evaluation with real videos demonstrates the effectiveness of the tracking algorithm for real-world scenarios.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
66,528
2106.09343
Lost in Interpreting: Speech Translation from Source or Interpreter?
Interpreters facilitate multi-lingual meetings but the affordable set of languages is often smaller than what is needed. Automatic simultaneous speech translation can extend the set of provided languages. We investigate if such an automatic system should rather follow the original speaker, or an interpreter to achieve better translation quality at the cost of increased delay. To answer the question, we release Europarl Simultaneous Interpreting Corpus (ESIC), 10 hours of recordings and transcripts of European Parliament speeches in English, with simultaneous interpreting into Czech and German. We evaluate quality and latency of speaker-based and interpreter-based spoken translation systems from English to Czech. We study the differences in implicit simplification and summarization of the human interpreter compared to a machine translation system trained to shorten the output to some extent. Finally, we perform human evaluation to measure information loss of each of these approaches.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
241,633
2412.05321
Collaborative and parametric insurance on the Ethereum blockchain
This paper introduces a blockchain-based insurance scheme that integrates parametric and collaborative elements. A pool of investors, referred to as surplus providers, locks funds in a smart contract, enabling blockchain users to underwrite parametric insurance contracts. These contracts automatically trigger compensation when predefined conditions are met. The collaborative aspect is embodied in the generation of tokens, which are distributed to both surplus providers and policyholders. These tokens represent each participant's share of the surplus and grant voting rights for management decisions. The smart contract is developed in Solidity, a high-level programming language for the Ethereum blockchain, and deployed on the Sepolia testnet, with data processing and analysis conducted using Python. In addition, open-source code is provided and main research challenges are identified, so that further research can be carried out to overcome limitations of this first proof of concept.
false
true
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
true
514,785
2212.01995
Approximate Order-Preserving Pattern Mining for Time Series
The order-preserving pattern mining can be regarded as discovering frequent trends in time series, since the same order-preserving pattern has the same relative order which can represent a trend. However, in the case where data noise is present, the relative orders of many meaningful patterns are usually similar rather than the same. To mine similar relative orders in time series, this paper proposes an approximate order-preserving pattern (AOP) mining method based on (delta-gamma) distance to effectively measure the similarity, and presents an algorithm called AOP-Miner to mine AOPs according to global and local approximation parameters. AOP-Miner adopts a pattern fusion strategy to generate candidate patterns and employs a screening strategy to calculate the supports of candidate patterns. Experimental results validate that AOP-Miner outperforms other competitive methods and can find more similar trends in time series.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
334,640
1209.1318
Finding and Recommending Scholarly Articles
The rate at which scholarly literature is being produced has been increasing at approximately 3.5 percent per year for decades. This means that during a typical 40 year career the amount of new literature produced each year increases by a factor of four. The methods scholars use to discover relevant literature must change. Just like everybody else involved in information discovery, scholars are confronted with information overload. Two decades ago, this discovery process essentially consisted of paging through abstract books, talking to colleagues and librarians, and browsing journals. A time-consuming process, which could even be longer if material had to be shipped from elsewhere. Now much of this discovery process is mediated by online scholarly information systems. All these systems are relatively new, and all are still changing. They all share a common goal: to provide their users with access to the literature relevant to their specific needs. To achieve this each system responds to actions by the user by displaying articles which the system judges relevant to the user's current needs. Recently search systems which use particularly sophisticated methodologies to recommend a few specific papers to the user have been called "recommender systems". These methods are in line with the current use of the term "recommender system" in computer science. We do not adopt this definition, rather we view systems like these as components in a larger whole, which is presented by the scholarly information systems themselves. In what follows we view the recommender system as an aspect of the entire information system; one which combines the massive memory capacities of the machine with the cognitive abilities of the human user to achieve a human-machine synergy.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
18,433
1401.6380
Properties of spatial coupling in compressed sensing
In this paper we address a series of open questions about the construction of spatially coupled measurement matrices in compressed sensing. For hardware implementations one is forced to depart from the limiting regime of parameters in which the proofs of the so-called threshold saturation work. We investigate quantitatively the behavior under finite coupling range, the dependence on the shape of the coupling interaction, and optimization of the so-called seed to minimize distance from optimality. Our analysis explains some of the properties observed empirically in previous works and provides new insight on spatially coupled compressed sensing.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
30,341
2004.12232
Reconstruct, Rasterize and Backprop: Dense shape and pose estimation from a single image
This paper presents a new system to obtain dense object reconstructions along with 6-DoF poses from a single image. Geared towards high fidelity reconstruction, several recent approaches leverage implicit surface representations and deep neural networks to estimate a 3D mesh of an object, given a single image. However, all such approaches recover only the shape of an object; the reconstruction is often in a canonical frame, unsuitable for downstream robotics tasks. To this end, we leverage recent advances in differentiable rendering (in particular, rasterization) to close the loop with 3D reconstruction in camera frame. We demonstrate that our approach, dubbed reconstruct, rasterize and backprop (RRB), achieves significantly lower pose estimation errors compared to prior art, and is able to recover dense object shapes and poses from imagery. We further extend our results to an (offline) setup, where we demonstrate a dense monocular object-centric egomotion estimation system.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
174,175
2101.02559
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and Internet-of-Things (IoT) due to their powerful decision-making capabilities. However, they are vulnerable to various security and reliability threats, at both hardware and software levels, that compromise their accuracy. These threats get aggravated in emerging edge ML devices that have stringent constraints in terms of resources (e.g., compute, memory, power/energy), and that therefore cannot employ costly security and reliability measures. Security, reliability, and vulnerability mitigation techniques span from network security measures to hardware protection, with an increased interest towards formal verification of trained ML models. This paper summarizes the prominent vulnerabilities of modern ML systems, highlights successful defenses and mitigation techniques against these vulnerabilities, both at the cloud (i.e., during the ML training phase) and edge (i.e., during the ML inference stage), discusses the implications of a resource-constrained design on the reliability and security of the system, identifies verification methodologies to ensure correct system behavior, and describes open research challenges for building secure and reliable ML systems at both the edge and the cloud.
false
false
false
false
true
false
true
false
false
false
true
false
true
false
false
false
false
true
214,671
1106.0987
Nearest Prime Simplicial Complex for Object Recognition
The structure representation of data distribution plays an important role in understanding the underlying mechanism of generating data. In this paper, we propose nearest prime simplicial complex approaches (NSC) by utilizing persistent homology to capture such structures. Assuming that each class is represented with a prime simplicial complex, we classify unlabeled samples based on the nearest projection distances from the samples to the simplicial complexes. We also extend the extrapolation ability of these complexes with a projection constraint term. Experiments in simulated and practical datasets indicate that compared with several published algorithms, the proposed NSC approaches achieve promising performance without losing the structure representation.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
10,735
2307.06240
DSSE: a drone swarm search environment
The Drone Swarm Search project is an environment, based on PettingZoo, that is to be used in conjunction with multi-agent (or single-agent) reinforcement learning algorithms. It is an environment in which the agents (drones), have to find the targets (shipwrecked people). The agents do not know the position of the target and do not receive rewards related to their own distance to the target(s). However, the agents receive the probabilities of the target(s) being in a certain cell of the map. The aim of this project is to aid in the study of reinforcement learning algorithms that require dynamic probabilities as inputs.
false
false
false
false
true
false
true
true
false
false
true
false
false
false
false
false
false
false
379,004
2211.04370
NESTER: An Adaptive Neurosymbolic Method for Causal Effect Estimation
Causal effect estimation from observational data is a central problem in causal inference. Methods based on potential outcomes framework solve this problem by exploiting inductive biases and heuristics from causal inference. Each of these methods addresses a specific aspect of causal effect estimation, such as controlling propensity score, enforcing randomization, etc., by designing neural network (NN) architectures and regularizers. In this paper, we propose an adaptive method called Neurosymbolic Causal Effect Estimator (NESTER), a generalized method for causal effect estimation. NESTER integrates the ideas used in existing methods based on multi-head NNs for causal effect estimation into one framework. We design a Domain Specific Language (DSL) tailored for causal effect estimation based on causal inductive biases used in literature. We conduct a theoretical analysis to investigate NESTER's efficacy in estimating causal effects. Our comprehensive empirical results show that NESTER performs better than state-of-the-art methods on benchmark datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
329,228
1707.05776
Optimizing the Latent Space of Generative Networks
Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most successful applications, GAN models share two common aspects: solving a challenging saddle point optimization problem, interpreted as an adversarial game between generator and discriminator functions; and parameterizing the generator and the discriminator as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
77,293
1410.6641
Partial Optimality by Pruning for MAP-Inference with General Graphical Models
We consider the energy minimization problem for undirected graphical models, also known as MAP-inference problem for Markov random fields which is NP-hard in general. We propose a novel polynomial time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with variables taking integral values in the solution of a convex relaxation of the MAP-inference problem and iteratively prunes those, which do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Also empirically our method outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of an arbitrary order and can employ any solver for the considered relaxed problem. Our method's runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
37,000
2103.07220
Real-time Timbre Transfer and Sound Synthesis using DDSP
Neural audio synthesis is an actively researched topic, having yielded a wide range of techniques that leverages machine learning architectures. Google Magenta elaborated a novel approach called Differential Digital Signal Processing (DDSP) that incorporates deep neural networks with preconditioned digital signal processing techniques, reaching state-of-the-art results especially in timbre transfer applications. However, most of these techniques, including the DDSP, are generally not applicable in real-time constraints, making them ineligible in a musical workflow. In this paper, we present a real-time implementation of the DDSP library embedded in a virtual synthesizer as a plug-in that can be used in a Digital Audio Workstation. We focused on timbre transfer from learned representations of real instruments to arbitrary sound inputs as well as controlling these models by MIDI. Furthermore, we developed a GUI for intuitive high-level controls which can be used for post-processing and manipulating the parameters estimated by the neural network. We have conducted a user experience test with seven participants online. The results indicated that our users found the interface appealing, easy to understand, and worth exploring further. At the same time, we have identified issues in the timbre transfer quality, in some components we did not implement, and in installation and distribution of our plugin. The next iteration of our design will address these issues. Our real-time MATLAB and JUCE implementations are available at https://github.com/SMC704/juce-ddsp and https://github.com/SMC704/matlab-ddsp , respectively.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
224,531
2011.10822
Control and implementation of fluid-driven soft gripper with dynamic uncertainty of object
Soft grippers, with their high compliance for stable grasping of objects, could be considered a suitable candidate for replacing conventional rigid grippers, and in recent years they have been adopted rapidly in industry. Not only are these highly adaptable grippers capable of static grasping of an object, but they can also be utilized for performing object manipulation tasks. Plenty of contemporary studies have emphasized the static grasping ability of soft grippers. However, in this thesis, planar in-hand object manipulation in a soft gripper which comprises a pair of soft fingers is investigated.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
207,632
2303.03932
FFT-based Dynamic Token Mixer for Vision
Multi-head-self-attention (MHSA)-equipped models have achieved notable performance in computer vision. Their computational complexity is proportional to quadratic numbers of pixels in input feature maps, resulting in slow processing, especially when dealing with high-resolution images. New types of token-mixer are proposed as an alternative to MHSA to circumvent this problem: an FFT-based token-mixer involves global operations similar to MHSA but with lower computational complexity. However, despite its attractive properties, the FFT-based token-mixer has not been carefully examined in terms of its compatibility with the rapidly evolving MetaFormer architecture. Here, we propose a novel token-mixer called Dynamic Filter and novel image recognition models, DFFormer and CDFFormer, to close the gaps above. The results of image classification and downstream tasks, analysis, and visualization show that our models are helpful. Notably, their throughput and memory efficiency when dealing with high-resolution image recognition is remarkable. Our results indicate that Dynamic Filter is one of the token-mixer options that should be seriously considered. The code is available at https://github.com/okojoalg/dfformer
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
349,899
2404.14099
DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images
Continual learning, the ability to acquire knowledge from new data while retaining previously learned information, is a fundamental challenge in machine learning. Various approaches, including memory replay, knowledge distillation, model regularization, and dynamic network expansion, have been proposed to address this issue. Thus far, dynamic network expansion methods have achieved state-of-the-art performance at the cost of incurring significant computational overhead. This is due to the need for additional model buffers, which makes it less feasible in resource-constrained settings, particularly in the medical domain. To overcome this challenge, we propose Dynamic Model Merging, DynaMMo, a method that merges multiple networks at different stages of model training to achieve better computational efficiency. Specifically, we employ lightweight learnable modules for each task and combine them into a unified model to minimize computational overhead. DynaMMo achieves this without compromising performance, offering a cost-effective solution for continual learning in medical applications. We evaluate DynaMMo on three publicly available datasets, demonstrating its effectiveness compared to existing approaches. DynaMMo offers around 10-fold reduction in GFLOPS with a small drop of 2.76 in average accuracy when compared to state-of-the-art dynamic-based approaches. The code implementation of this work will be available upon the acceptance of this work at https://github.com/BioMedIA-MBZUAI/DynaMMo.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
448,572
quant-ph/0607111
`Plausibilities of plausibilities': an approach through circumstances
Probability-like parameters appearing in some statistical models, and their prior distributions, are reinterpreted through the notion of `circumstance', a term which stands for any piece of knowledge that is useful in assigning a probability and that satisfies some additional logical properties. The idea, which can be traced to Laplace and Jaynes, is that the usual inferential reasonings about the probability-like parameters of a statistical model can be conceived as reasonings about equivalence classes of `circumstances' - viz., real or hypothetical pieces of knowledge, like e.g. physical hypotheses, that are useful in assigning a probability and satisfy some additional logical properties - that are uniquely indexed by the probability distributions they lead to.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,901
2411.19626
GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding
Open-Vocabulary 3D object affordance grounding aims to anticipate ``action possibilities'' regions on 3D objects with arbitrary instructions, which is crucial for robots to generically perceive real scenarios and respond to operational changes. Existing methods focus on combining images or languages that depict interactions with 3D geometries to introduce external interaction priors. However, they are still vulnerable to a limited semantic space by failing to leverage implied invariant geometries and potential interaction intentions. Normally, humans address complex tasks through multi-step reasoning and respond to diverse situations by leveraging associative and analogical thinking. In light of this, we propose GREAT (GeometRy-intEntion collAboraTive inference) for Open-Vocabulary 3D Object Affordance Grounding, a novel framework that mines the object invariant geometry attributes and performs analogically reason in potential interaction scenarios to form affordance knowledge, fully combining the knowledge with both geometries and visual contents to ground 3D object affordance. Besides, we introduce the Point Image Affordance Dataset v2 (PIADv2), the largest 3D object affordance dataset at present to support the task. Extensive experiments demonstrate the effectiveness and superiority of GREAT. Code and dataset are available at project.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
512,326
2108.06682
ST3D++: Denoised Self-training for Unsupervised Domain Adaptation on 3D Object Detection
In this paper, we present a self-training method, named ST3D++, with a holistic pseudo label denoising pipeline for unsupervised domain adaptation on 3D object detection. ST3D++ aims at reducing noise in pseudo label generation as well as alleviating the negative impacts of noisy pseudo labels on model training. First, ST3D++ pre-trains the 3D object detector on the labeled source domain with random object scaling (ROS) which is designed to reduce target domain pseudo label noise arising from object scale bias of the source domain. Then, the detector is progressively improved through alternating between generating pseudo labels and training the object detector with pseudo-labeled target domain data. Here, we equip the pseudo label generation process with a hybrid quality-aware triplet memory to improve the quality and stability of generated pseudo labels. Meanwhile, in the model training stage, we propose a source data assisted training strategy and a curriculum data augmentation policy to effectively rectify noisy gradient directions and avoid model over-fitting to noisy pseudo labeled data. These specific designs enable the detector to be trained on meticulously refined pseudo labeled target data with denoised training signals, and thus effectively facilitate adapting an object detector to a target domain without requiring annotations. Finally, our method is assessed on four 3D benchmark datasets (i.e., Waymo, KITTI, Lyft, and nuScenes) for three common categories (i.e., car, pedestrian and bicycle). ST3D++ achieves state-of-the-art performance on all evaluated settings, outperforming the corresponding baseline by a large margin (e.g., 9.6% $\sim$ 38.16% on Waymo $\rightarrow$ KITTI in terms of AP$_{\text{3D}}$), and even surpasses the fully supervised oracle results on the KITTI 3D object detection benchmark with target prior. Code will be available.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
250,681
2002.05274
Solving Missing-Annotation Object Detection with Background Recalibration Loss
This paper focuses on a novel and challenging detection scenario: A majority of true objects/instances is unlabeled in the datasets, so these missing-labeled areas will be regarded as the background during training. Previous art on this problem has proposed to use soft sampling to re-weight the gradients of RoIs based on the overlaps with positive instances, while their method is mainly based on the two-stage detector (i.e. Faster RCNN) which is more robust and friendly for the missing label scenario. In this paper, we introduce a superior solution called Background Recalibration Loss (BRL) that can automatically re-calibrate the loss signals according to the pre-defined IoU threshold and input image. Our design is built on the one-stage detector which is faster and lighter. Inspired by the Focal Loss formulation, we make several significant modifications to fit on the missing-annotation circumstance. We conduct extensive experiments on the curated PASCAL VOC and MS COCO datasets. The results demonstrate that our proposed method outperforms the baseline and other state-of-the-arts by a large margin. Code available: https://github.com/Dwrety/mmdetection-selective-iou.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
163,844
2209.12241
Exploring Example Influence in Continual Learning
Continual Learning (CL) sequentially learns new tasks like human beings, with the goal to achieve better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks). Due to the fact that past training data is not available, it is valuable to explore the influence difference on S and P among training examples, which may improve the learning pattern towards better SP. Inspired by Influence Function (IF), we first study example influence via adding perturbation to example weight and computing the influence derivation. To avoid the storage and calculation burden of Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm to simulate the two key steps in the computation of IF and obtain the S- and P-aware example influence. Moreover, we propose to fuse two kinds of example influence by solving a dual-objective optimization problem, and obtain a fused influence towards SP Pareto optimality. The fused influence can be used to control the update of model and optimize the storage of rehearsal. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task- and class-incremental benchmark CL datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
319,463
2406.09038
CGP++ : A Modern C++ Implementation of Cartesian Genetic Programming
The reference implementation of Cartesian Genetic Programming (CGP) was written in the C programming language. C inherently follows a procedural programming paradigm, which entails challenges in providing a reusable and scalable implementation model for complex structures and methods. Moreover, due to the limiting factors of C, the reference implementation of CGP does not provide a generic framework and is therefore restricted to a set of predefined evaluation types. Besides the reference implementation, we also observe that other existing implementations are limited with respect to the features provided. In this work, we therefore propose the first version of a modern C++ implementation of CGP that pursues object-oriented design and generic programming paradigm to provide an efficient implementation model that can facilitate the discovery of new problem domains and the implementation of complex advanced methods that have been proposed for CGP over time. With the proposal of our new implementation, we aim to generally promote interpretability, accessibility and reproducibility in the field of CGP.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
463,734
1109.2215
Finding missing edges and communities in incomplete networks
Many algorithms have been proposed for predicting missing edges in networks, but they do not usually take account of which edges are missing. We focus on networks which have missing edges of the form that is likely to occur in real networks, and compare algorithms that find these missing edges. We also investigate the effect of this kind of missing data on community detection algorithms.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
12,101
1507.02973
Overcoming data scarcity of Twitter: using tweets as bootstrap with application to autism-related topic content analysis
Notwithstanding recent work which has demonstrated the potential of using Twitter messages for content-specific data mining and analysis, the depth of such analysis is inherently limited by the scarcity of data imposed by the 140 character tweet limit. In this paper we describe a novel approach for targeted knowledge exploration which uses tweet content analysis as a preliminary step. This step is used to bootstrap more sophisticated data collection from directly related but much richer content sources. In particular we demonstrate that valuable information can be collected by following URLs included in tweets. We automatically extract content from the corresponding web pages and treating each web page as a document linked to the original tweet show how a temporal topic model based on a hierarchical Dirichlet process can be used to track the evolution of a complex topic structure of a Twitter community. Using autism-related tweets we demonstrate that our method is capable of capturing a much more meaningful picture of information exchange than user-chosen hashtags.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
45,034
2411.02428
NMformer: A Transformer for Noisy Modulation Classification in Wireless Communication
Modulation classification is a very challenging task since the signals intertwine with various ambient noises. Methods are required that can classify them without adding extra steps like denoising, which introduces computational complexity. In this study, we propose a vision transformer (ViT) based model named NMformer to predict the channel modulation images with different noise levels in wireless communication. Since ViTs are most effective for RGB images, we generated constellation diagrams from the modulated signals. The diagrams provide the information from the signals in a 2-D representation form. We trained NMformer on 106,800 modulation images to build the base classifier and only used 3,000 images to fine-tune for specific tasks. Our proposed model has two different kinds of prediction setups: in-distribution and out-of-distribution. Our model achieves 4.67% higher accuracy than the base classifier when finetuned and tested on high signal-to-noise ratios (SNRs) in-distribution classes. Moreover, the fine-tuned low SNR task achieves a higher accuracy than the base classifier. The fine-tuned classifier becomes much more effective than the base classifier by achieving higher accuracy when predicted, even on unseen data from out-of-distribution classes. Extensive experiments show the effectiveness of NMformer for a wide range of SNRs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
505,489
1806.00811
Causal Inference with Noisy and Missing Covariates via Matrix Factorization
Valid causal inference in observational studies often requires controlling for confounders. However, in practice measurements of confounders may be noisy, and can lead to biased estimates of causal effects. We show that we can reduce the bias caused by measurement noise using a large number of noisy measurements of the underlying confounders. We propose the use of matrix factorization to infer the confounders from noisy covariates, a flexible and principled framework that adapts to missing values, accommodates a wide variety of data types, and can augment many causal inference methods. We bound the error for the induced average treatment effect estimator and show it is consistent in a linear regression setting, using Exponential Family Matrix Completion preprocessing. We demonstrate the effectiveness of the proposed procedure in numerical experiments with both synthetic data and real clinical data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
99,405
2312.10195
SoloPose: One-Shot Kinematic 3D Human Pose Estimation with Video Data Augmentation
While recent two-stage many-to-one deep learning models have demonstrated great success in 3D human pose estimation, such models are inefficient ways to detect 3D key points in a sequential video relative to one-shot and many-to-many models. Another key drawback of two-stage and many-to-one models is that errors in the first stage will be passed onto the second stage. In this paper, we introduce SoloPose, a novel one-shot, many-to-many spatio-temporal transformer model for kinematic 3D human pose estimation of video. SoloPose is further fortified by HeatPose, a 3D heatmap based on Gaussian Mixture Model distributions that factors target key points as well as kinematically adjacent key points. Finally, we address data diversity constraints with the 3D AugMotion Toolkit, a methodology to augment existing 3D human pose datasets, specifically by projecting four top public 3D human pose datasets (Humans3.6M, MADS, AIST Dance++, MPI INF 3DHP) into a novel dataset (Humans7.1M) with a universal coordinate system. Extensive experiments are conducted on Human3.6M as well as the augmented Humans7.1M dataset, and SoloPose demonstrates superior results relative to the state-of-the-art approaches.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
416,055
1907.07525
Low Barrier Nanomagnet Design for Binary Stochastic Neurons: Design Challenges for Real Nanomagnets with Fabrication Defects
Much attention has been focused on the design of low barrier nanomagnets (LBM), whose magnetizations vary randomly in time owing to thermal noise, for use in binary stochastic neurons (BSN) which are hardware accelerators for machine learning. The performance of BSNs depend on two important parameters: the correlation time associated with the random magnetization dynamics in a LBM, and the spin-polarized pinning current which stabilizes the magnetization of a LBM in a chosen direction within a chosen time. Here, we show that common fabrication defects in LBMs make these two parameters unpredictable since they are strongly sensitive to the defects. That makes the design of BSNs with real LBMs very challenging. Unless the LBMs are fabricated with extremely tight control, the BSNs which use them could be unreliable or suffer from poor yield.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
138,899
2112.07297
Parameterized codes over graphs
In this article we review known results on parameterized linear codes over graphs, introduced by Rentería, Simis and Villarreal in 2011. Very little is known about their basic parameters and invariants. We review in detail the parameters dimension, regularity and minimum distance. As regards the parameter dimension, we explore the connection to Eulerian ideals in the ternary case and we give new combinatorial formulas.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
271,437
2108.07007
Flying Guide Dog: Walkable Path Discovery for the Visually Impaired Utilizing Drones and Transformer-based Semantic Segmentation
Lacking the ability to sense ambient environments effectively, blind and visually impaired people (BVIP) face difficulty in walking outdoors, especially in urban areas. Therefore, tools for assisting BVIP are of great importance. In this paper, we propose a novel "flying guide dog" prototype for BVIP assistance using drone and street view semantic segmentation. Based on the walkable areas extracted from the segmentation prediction, the drone can adjust its movement automatically and thus lead the user to walk along the walkable path. By recognizing the color of pedestrian traffic lights, our prototype can help the user to cross a street safely. Furthermore, we introduce a new dataset named Pedestrian and Vehicle Traffic Lights (PVTL), which is dedicated to traffic light recognition. The result of our user study in real-world scenarios shows that our prototype is effective and easy to use, providing new insight into BVIP assistance.
true
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
250,806
2412.08948
Mojito: Motion Trajectory and Intensity Control for Video Generation
Recent advancements in diffusion models have shown great promise in producing high-quality video content. However, efficiently training video diffusion models capable of integrating directional guidance and controllable motion intensity remains a challenging and under-explored area. To tackle these challenges, this paper introduces Mojito, a diffusion model that incorporates both motion trajectory and intensity control for text-to-video generation. Specifically, Mojito features a Directional Motion Control (DMC) module that leverages cross-attention to efficiently direct the generated object's motion without training, alongside a Motion Intensity Modulator (MIM) that uses optical flow maps generated from videos to guide varying levels of motion intensity. Extensive experiments demonstrate Mojito's effectiveness in achieving precise trajectory and intensity control with high computational efficiency, generating motion patterns that closely match specified directions and intensities, providing realistic dynamics that align well with natural motion in real-world scenarios.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
516,292
2207.03348
Human-Robot Commensality: Bite Timing Prediction for Robot-Assisted Feeding in Groups
We develop data-driven models to predict when a robot should feed during social dining scenarios. Being able to eat independently with friends and family is considered one of the most memorable and important activities for people with mobility limitations. While existing robotic systems for feeding people with mobility limitations focus on solitary dining, commensality, the act of eating together, is often the practice of choice. Sharing meals with others introduces the problem of socially appropriate bite timing for a robot, i.e. the appropriate timing for the robot to feed without disrupting the social dynamics of a shared meal. Our key insight is that bite timing strategies that take into account the delicate balance of social cues can lead to seamless interactions during robot-assisted feeding in a social dining scenario. We approach this problem by collecting a Human-Human Commensality Dataset (HHCD) containing 30 groups of three people eating together. We use this dataset to analyze human-human commensality behaviors and develop bite timing prediction models in social dining scenarios. We also transfer these models to human-robot commensality scenarios. Our user studies show that prediction improves when our algorithm uses multimodal social signaling cues between diners to model bite timing. The HHCD dataset, videos of user studies, and code are available at https://emprise.cs.cornell.edu/hrcom/
true
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
306,811
1707.09112
Almost everywhere injectivity conditions for the matrix recovery problem
Matrix recovery is raised in many areas. In this paper, we build up a framework for almost everywhere matrix recovery which means to recover almost all the $P\in {\mathcal M}\subset {\mathbb H}^{p\times q}$ from $Tr(A_jP), j=1,\ldots,N$ where $A_j\in V_j\subset {\mathbb H}^{p\times q}$. We mainly focus on the following question: how many measurements are needed to recover almost all the matrices in ${\mathcal M}$? For the case where both ${\mathcal M}$ and $V_j$ are algebraic varieties, we use the tools from algebraic geometry to study the question and present some results to address it under many different settings.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
77,954
2312.03730
FakeWatch ElectionShield: A Benchmarking Framework to Detect Fake News for Credible US Elections
In today's technologically driven world, the spread of fake news, particularly during crucial events such as elections, presents an increasing challenge to the integrity of information. To address this challenge, we introduce FakeWatch ElectionShield, an innovative framework carefully designed to detect fake news. We have created a novel dataset of North American election-related news articles through a blend of advanced language models (LMs) and thorough human verification, for precision and relevance. We propose a model hub of LMs for identifying fake news. Our goal is to provide the research community with adaptable and accurate classification models in recognizing the dynamic nature of misinformation. Extensive evaluation of fake news classifiers on our dataset and a benchmark dataset shows that while state-of-the-art LMs slightly outperform the traditional ML models, classical models are still competitive with their balance of accuracy, explainability, and computational efficiency. This research sets the foundation for future studies to address misinformation related to elections.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
413,384
2405.19854
RTGen: Generating Region-Text Pairs for Open-Vocabulary Object Detection
Open-vocabulary object detection (OVD) requires solid modeling of the region-semantic relationship, which could be learned from massive region-text pairs. However, such data is limited in practice due to significant annotation costs. In this work, we propose RTGen to generate scalable open-vocabulary region-text pairs and demonstrate its capability to boost the performance of open-vocabulary object detection. RTGen includes both text-to-region and region-to-text generation processes on scalable image-caption data. The text-to-region generation is powered by image inpainting, directed by our proposed scene-aware inpainting guider for overall layout harmony. For region-to-text generation, we perform multiple region-level image captioning with various prompts and select the best matching text according to CLIP similarity. To facilitate detection training on region-text pairs, we also introduce a localization-aware region-text contrastive loss that learns object proposals tailored with different localization qualities. Extensive experiments demonstrate that our RTGen can serve as a scalable, semantically rich, and effective source for open-vocabulary object detection and continue to improve the model performance when more data is utilized, delivering superior performance compared to the existing state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
459,096
2108.04289
ACE: A Novel Approach for the Statistical Analysis of Pairwise Connectivity
Analysing correlations between streams of events is an important problem. It arises, for example, in Neurosciences, when the connectivity of neurons should be inferred from spike trains that record neurons' individual spiking activity. While some approaches for inferring delayed synaptic connections have recently been proposed, they are limited in the types of connectivities and delays they are able to handle, or require computation-intensive procedures. This paper proposes a faster and more flexible approach for analysing such delayed correlated activity: a statistical approach for the Analysis of Connectivity in spiking Events (ACE), based on the idea of hypothesis testing. It first computes, for any pair of a source and a target neuron, the inter-spike delays between subsequent source- and target-spikes. Then, it derives a null model for the distribution of inter-spike delays for \emph{uncorrelated}~neurons. Finally, it compares the observed distribution of inter-spike delays to this null model and infers pairwise connectivity based on the Pearson's Chi-squared test statistic. Thus, ACE is capable of detecting connections with a priori unknown, non-discrete (and potentially large) inter-spike delays, which might vary between pairs of neurons. Since ACE works incrementally, it has potential for being used in online processing. In our experiments, we visualise the advantages of ACE in varying experimental scenarios (except for one special case) and in a state-of-the-art dataset which has been generated for neuro-scientific research under most realistic conditions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
249,954
2309.15676
Joint Sampling and Optimisation for Inverse Rendering
When dealing with difficult inverse problems such as inverse rendering, using Monte Carlo estimated gradients to optimise parameters can slow down convergence due to variance. Averaging many gradient samples in each iteration reduces this variance trivially. However, for problems that require thousands of optimisation iterations, the computational cost of this approach rises quickly. We derive a theoretical framework for interleaving sampling and optimisation. We update and reuse past samples with low-variance finite-difference estimators that describe the change in the estimated gradients between each iteration. By combining proportional and finite-difference samples, we continuously reduce the variance of our novel gradient meta-estimators throughout the optimisation process. We investigate how our estimator interlinks with Adam and derive a stable combination. We implement our method for inverse path tracing and demonstrate how our estimator speeds up convergence on difficult optimisation tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
395,068
1608.00426
The $\epsilon$-capacity of a gain matrix and tolerable disturbances: Discrete-time perturbed linear systems
Discrete-time linear systems with perturbed initial state are considered. A disturbance that affects the initial state is said to be $\epsilon$-tolerable if the corresponding output signal is relatively insensitive to its effects. In this paper, we define a new set that characterizes each gain matrix $K$ and the associated feedback control law $u_i=Kx_i$; this set will be called the $\epsilon$-capacity of the gain matrix $K$. The set of all possible gain matrices that make the system insensitive to all disturbances is denoted $\Phi\cdot$. The characterization of $\Phi\cdot$ is investigated, and we propose an algorithmic approach that allows one to determine whether a control law belongs to $\Phi\cdot$ or not. Numerical simulations are given.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
59,282