Dataset schema:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (range 0 to 541k)
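The per-row label flags can be collapsed into a list of active category names. A minimal sketch of that decoding — the column order is taken from the schema above, but the function name and the example record are hypothetical, not part of the dataset:

```python
# Order of the 18 boolean label columns, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(flags):
    """Collapse the 18 boolean flags of one row into its active category names."""
    if len(flags) != len(LABEL_COLUMNS):
        raise ValueError(f"expected {len(LABEL_COLUMNS)} flags, got {len(flags)}")
    return [name for name, flag in zip(LABEL_COLUMNS, flags) if flag]

# Hypothetical row shaped like the records below: only the cs.IT flag is set.
flags = [False] * 18
flags[LABEL_COLUMNS.index("cs.IT")] = True
print(decode_labels(flags))  # ['cs.IT']
```

Since the rows are multi-label (several flags can be true at once), the decoder returns a list rather than a single class.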
id: cs/0509079
title: The WSSUS Pulse Design Problem in Multicarrier Transmission
abstract: Optimal link adaptation to the scattering function of wide-sense stationary uncorrelated scattering (WSSUS) mobile communication channels is still an unsolved problem despite its importance for next-generation system design. In multicarrier transmission such link adaptation is performed by pulse shaping, i.e. by properly adjusting the transmit and receive filters. For example, pulse-shaped Offset-QAM systems have recently been shown to have superior performance over standard cyclic prefix OFDM (while operating at higher spectral efficiency). In this paper we establish a general mathematical framework for joint transmitter and receiver pulse shape optimization for so-called Weyl-Heisenberg or Gabor signaling with respect to the scattering function of the WSSUS channel. In our framework the pulse shape optimization problem is translated to an optimization problem over trace class operators, which in turn is related to fidelity optimization in quantum information processing. By convexity relaxation the problem is shown to be equivalent to a \emph{convex-constrained quasi-convex maximization problem}, thereby revealing the non-convex nature of the overall WSSUS pulse design problem. We present several iterative algorithms for optimization, providing applicable results even for large-scale problem constellations. We show that with transmitter-side knowledge of the channel statistics a gain of $3$--$6$ dB in $\mathrm{SINR}$ can be expected.
labels: cs.IT
__index_level_0__: 538,978
id: 2306.13584
title: Revisiting the Optimal PMU Placement Problem in Multi-Machine Power Networks
abstract: To provide real-time visibility of physics-based states, phasor measurement units (PMUs) are deployed throughout power networks. PMU data enable real-time grid monitoring and control -- and are essential in transitioning to smarter grids. Various considerations are taken into account when determining the geographic, optimal PMU placements (OPP). This paper focuses on the control-theoretic, observability aspect of OPP. A myriad of studies have investigated observability-based formulations to determine the OPP within a transmission network. However, they have mostly adopted a simplified representation of system dynamics, ignored basic algebraic equations that model power flows, disregarded including renewables such as solar and wind, and did not model their uncertainty. Consequently, this paper revisits the observability-based OPP problem by addressing the literature's limitations. A nonlinear differential algebraic representation (NDAE) of the power system is considered. The system is discretized using various discretization approaches while explicitly accounting for uncertainty. A moving horizon estimation approach is explored to reconstruct the joint differential and algebraic initial states of the system, as a gateway to the OPP problem which is then formulated as a computationally tractable integer program (IP). Comprehensive numerical simulations on standard power networks are conducted to validate the different aspects of this approach and test its robustness to various dynamical conditions.
labels: cs.SY
__index_level_0__: 375,326
id: 2202.13220
title: How Much Depth Information can Radar Contribute to a Depth Estimation Model?
abstract: Recently, several works have proposed fusing radar data as an additional perceptual signal into monocular depth estimation models because radar data is robust against varying light and weather conditions. Although improved performances were reported in prior works, it is still hard to tell how much depth information radar can contribute to a depth estimation model. In this paper, we propose radar inference and supervision experiments to investigate the intrinsic depth potential of radar data using state-of-the-art depth estimation models on the nuScenes dataset. In the inference experiment, the model predicts depth by taking only radar as input to demonstrate the inference capability using radar data. In the supervision experiment, a monocular depth estimation model is trained under radar supervision to show the intrinsic depth information that radar can contribute. Our experiments demonstrate that the model using only sparse radar as input can detect the shape of surroundings to a certain extent in the predicted depth. Furthermore, the monocular depth estimation model supervised by preprocessed radar achieves a good performance compared to the baseline model trained with sparse lidar supervision.
labels: cs.CV
__index_level_0__: 282,529
id: 2012.06188
title: Recent Theoretical Advances in Non-Convex Optimization
abstract: Motivated by recent increased interest in optimization algorithms for non-convex optimization in application to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems could not be solved efficiently in a reasonable time. Then we give a list of problems that can be solved efficiently to find the global minimizer by exploiting the structure of the problem as much as it is possible. Another way to deal with non-convexity is to relax the goal from finding the global minimum to finding a stationary point or a local minimum. For this setting, we first present known results for the convergence rates of deterministic first-order methods, which are then followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes, and an overview of the stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as minimization of $\alpha$-weakly-quasi-convex functions and functions that satisfy Polyak--Lojasiewicz condition, which still allow obtaining theoretical convergence guarantees of first-order methods. Then we consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
labels: cs.LG
__index_level_0__: 211,032
id: 2407.18914
title: Floating No More: Object-Ground Reconstruction from a Single Image
abstract: Recent advancements in 3D object reconstruction from single images have primarily focused on improving the accuracy of object shapes. Yet, these techniques often fail to accurately capture the inter-relation between the object, ground, and camera. As a result, the reconstructed objects often appear floating or tilted when placed on flat surfaces. This limitation significantly affects 3D-aware image editing applications like shadow rendering and object pose manipulation. To address this issue, we introduce ORG (Object Reconstruction with Ground), a novel task aimed at reconstructing 3D object geometry in conjunction with the ground surface. Our method uses two compact pixel-level representations to depict the relationship between camera, object, and ground. Experiments show that the proposed ORG model can effectively reconstruct object-ground geometry on unseen data, significantly enhancing the quality of shadow generation and pose manipulation compared to conventional single-image 3D reconstruction techniques.
labels: cs.CV
__index_level_0__: 476,566
id: 2112.14349
title: Fast Subspace Identification Method Based on Containerised Cloud Workflow Processing System
abstract: Subspace identification (SID) has been widely used in system identification and control fields since it can estimate system models relying only on the input and output data, via reliable numerical operations such as singular value decomposition (SVD). However, high-dimension Hankel matrices are involved to store these data and to obtain the system models, which increases the computational burden of SID and makes it unsuitable for large-scale or real-time identification tasks. In this paper, a novel fast SID method based on cloud workflow processing and container technology is proposed to accelerate the traditional algorithm. First, a workflow-based structure of SID is designed to match the distributed cloud environment, based on the computational features of each calculation stage. Second, a containerised cloud workflow processing system is established to execute the logic- and data-dependent SID workflow mission based on the Kubernetes system. Finally, the experiments show that the computation time is reduced by at most $91.6\%$ for the large-scale SID mission and decreased to within 20 ms for the real-time mission.
labels: cs.SY, Other
__index_level_0__: 273,504
id: 2111.07061
title: Geometric PID Controller for Stabilization of Nonholonomic Mechanical Systems on Lie Groups
abstract: The PID controller is an elegant and versatile controller for set point tracking in double integrator systems of which mechanical systems evolving on Euclidean space constitute a large class. But since mechanical systems are typically constrained interconnections of rigid bodies whose configuration space is $SE(3)$, which is not even topologically Euclidean, a geometric PID controller has been developed for mechanical systems evolving on Lie groups. In this work, we extend the framework to such systems which have nonholonomic constraints. It encompasses many practically applicable mechanical systems encountered in robotics as robots are constrained interconnections of rigid bodies where the constraints could either be holonomic or nonholonomic.
labels: cs.SY
__index_level_0__: 266,266
id: 2304.03459
title: Integrated motion control and energy management of series hybrid electric vehicles: A multi-objective MPC approach
abstract: This paper considers the integrated motion control and energy management problems of series hybrid electric vehicles (SHEVs) with constraints. We propose a multi-objective model predictive control (MOMPC)-based energy management approach, which is embedded with the motion control to guarantee driving comfort. In addition, due to the slow response of the engine, excessive battery power may be drawn when HEVs work in demanding conditions (e.g., uphill driving or sudden acceleration) with a certain requested power; this implies that the discharge current is too large. A battery current constraint is designed and incorporated into the MOMPC optimization problem, which avoids extra high charge-discharge currents. This prevents potential safety hazards and extends the battery's life. Finally, numerical experiments are performed to verify the proposed approach.
labels: cs.SY
__index_level_0__: 356,818
id: 2111.14014
title: Unsupervised Domain Adaptive Person Re-Identification via Human Learning Imitation
abstract: Unsupervised domain adaptive person re-identification has received significant attention due to its high practical value. In past years, following the clustering and finetuning paradigm, researchers have proposed utilizing the teacher-student framework in their methods to decrease the domain gap between different person re-identification datasets. Inspired by recent teacher-student framework based methods, which try to mimic the human learning process either by making the student directly copy behavior from the teacher or by selecting reliable learning materials, we propose to conduct further exploration to imitate the human learning process from different aspects, \textit{i.e.}, adaptively updating learning materials, selectively imitating teacher behaviors, and analyzing the structure of learning materials. The three explored components collaborate to constitute a new method for unsupervised domain adaptive person re-identification, called the Human Learning Imitation framework. Experimental results on three benchmark datasets demonstrate the efficacy of our proposed method.
labels: cs.CV
__index_level_0__: 268,463
id: 2306.06118
title: Estimation of River Water Surface Elevation Using UAV Photogrammetry and Machine Learning
abstract: Unmanned aerial vehicle (UAV) photogrammetry allows for the creation of orthophotos and digital surface models (DSMs) of a terrain. However, DSMs of water bodies mapped with this technique reveal water surface distortions, preventing the use of photogrammetric data for accurate determination of water surface elevation (WSE). First, we propose a new solution in which a convolutional neural network (CNN) is used as a WSE estimator from photogrammetric DSMs and orthophotos. Second, we improved the previously known "water-edge" method by filtering the outliers using a forward-backward exponentially weighted moving average. Further improvement in these two methods was achieved by performing a linear regression of the WSE values against chainage. The solutions estimate the uncertainty of their predictions. This is the first approach in which deep learning (DL) was used for this task. A brand new machine learning data set has been created. It was collected on a small lowland river in winter and summer conditions. It consists of 322 samples, each corresponding to a 10 by 10 meter area of the river channel and adjacent land. Each data set sample contains orthophoto and DSM arrays as input, along with a single ground-truth WSE value as output. The data set was supplemented with data collected by other researchers who compared the state-of-the-art methods for determining WSE using a UAV. The results of the DL solution were verified using the k-fold cross-validation method. This provided an in-depth examination of the model's ability to perform on previously unseen data. The WSE RMSEs differ for each k-fold cross-validation subset and range from 1.7 cm up to 17.2 cm. The RMSE results of the improved "water-edge" method are at least six times lower than the RMSE results achieved by the conventional "water-edge" method. The results obtained by the new methods predominantly outperform existing ones.
labels: cs.LG, cs.CV
__index_level_0__: 372,463
id: 2203.15275
title: A Multi-size Kernel based Adaptive Convolutional Neural Network for Bearing Fault Diagnosis
abstract: Bearing fault identification and analysis is an important research area in the field of machinery fault diagnosis. Aiming at the common faults of rolling bearings, we propose a data-driven diagnostic algorithm based on the characteristics of bearing vibrations called multi-size kernel based adaptive convolutional neural network (MSKACNN). Using raw bearing vibration signals as the inputs, MSKACNN provides vibration feature learning and signal classification capabilities to identify and analyze bearing faults. Ball mixing is a ball bearing production quality problem that is difficult to identify using traditional frequency domain analysis methods since it requires high frequency resolutions of the measurement signals and results in a long analysis time. The proposed MSKACNN is shown to improve the efficiency and accuracy of ball mixing diagnosis. To further demonstrate the effectiveness of MSKACNN in bearing fault identification, a bearing vibration data acquisition system was developed, and vibration signal acquisition was performed on rolling bearings under five different fault conditions including ball mixing. The resulting datasets were used to analyze the performance of our proposed model. To validate the adaptive ability of MSKACNN, fault test data from the Case Western Reserve University Bearing Data Center were also used. Test results show that MSKACNN can identify the different bearing conditions with high accuracy and strong generalization ability. We present an implementation of MSKACNN as a lightweight module for a real-time bearing fault diagnosis system that is suitable for production.
labels: cs.AI, cs.LG
__index_level_0__: 288,323
id: 2102.12304
title: Two Problems about Monomial Bent Functions
abstract: In 2008, Langevin and Leander determined the dual function of three classes of monomial bent functions with the help of Stickelberger's theorem: Dillon, Gold and Kasami. In their paper, they proposed one very strong condition such that their method works, and showed that both Gold exponent and Kasami exponent satisfy this condition. In 2018, Pott {\em et al.} investigated the issue of vectorial functions with maximal number of bent components. They found one class of binomial functions which attains the upper bound. They also proposed an open problem regarding monomial function with maximal number of bent components. In this paper, we obtain an interesting result about the condition of Langevin and Leander, and solve the open problem of Pott {\em et al.}. Specifically, we show that: 1) for a monomial bent function over $\mathbb{F}_{2^{2k}}$, if the exponent satisfies the first part of the condition of Langevin and Leander, then it satisfies the entire condition; 2) $x^{2^k+1}$ is the only monomial function over $\mathbb{F}_{2^{2k}}$ which has maximal number of bent components. Fortunately, as a consequence, we also solve an open problem of Ness and Helleseth in 2006.
labels: cs.IT
__index_level_0__: 221,688
id: 2405.10213
title: Words as Trigger Points in Social Media Discussions
abstract: Trigger points, introduced by Mau et al. [30], are rooted in theories of affective political identity and relate to deeply held beliefs about moral expectations and social dispositions. Examining trigger points in online discussions helps understand why and when social media users engage in disagreements or affective political deliberations. This opens the door to modelling social media user engagement more effectively and to studying the conditions and causal mechanisms that lead to adverse reactions, hate speech, and abusive language in online debates.
labels: cs.SI, cs.CL, cs.CY
__index_level_0__: 454,677
id: 1806.08294
title: Layouts from Panoramic Images with Geometry and Deep Learning
abstract: In this paper, we propose a novel procedure for 3D layout recovery of indoor scenes from single 360-degree panoramic images. With such images, the whole scene is seen at once, allowing closed geometries to be recovered. Our method strategically combines the accuracy provided by geometric reasoning (lines and vanishing points) with the higher level of data abstraction and pattern recognition achieved by deep learning techniques (edge and normal maps). Thus, we extract structural corners from which we generate layout hypotheses of the room assuming a Manhattan world. The best layout model is selected, achieving good performance on both simple rooms (box-type) and complex shaped rooms (with more than four walls). Experiments of the proposed approach are conducted on two public datasets, SUN360 and Stanford 2D-3D-S, demonstrating the advantages of estimating layouts by combining geometry and deep learning and the effectiveness of our proposal with respect to the state of the art.
labels: cs.CV
__index_level_0__: 101,134
id: 2309.13055
title: Originality and the Future of Copyright in an Age of Generative AI
abstract: This paper explores the question of human authorship when works are created with generative AI tools.
labels: cs.AI, cs.CY
__index_level_0__: 394,032
id: 2401.06610
title: The Hand-object Kinematic Model for Bimanual Manipulation
abstract: This paper addresses planar finger kinematics for seeking optimized manipulation strategies. The first step is to build a model based on geometric features of linear and rotational motion so that the robot can select finger configurations. This kinematic model considers the motion between the hands and the object. Based on 2-finger manipulation cases, the model can output strategies for bimanual manipulation. For executing these strategies, the second step is to seek appropriate values of the finger joints according to the final orientation of the fingers. The simulation shows that the computed solutions can complete the relative rotation and linear motion of unknown objects.
labels: cs.RO, cs.SY
__index_level_0__: 421,216
id: 2406.00920
title: Demystifying SGD with Doubly Stochastic Gradients
abstract: Optimization objectives in the form of a sum of intractable expectations are rising in importance (e.g., diffusion models, variational autoencoders, and many more), a setting also known as "finite sum with infinite data." For these problems, a popular strategy is to employ SGD with doubly stochastic gradients (doubly SGD): the expectations are estimated using the gradient estimator of each component, while the sum is estimated by subsampling over these estimators. Despite its popularity, little is known about the convergence properties of doubly SGD, except under strong assumptions such as bounded variance. In this work, we establish the convergence of doubly SGD with independent minibatching and random reshuffling under general conditions, which encompasses dependent component gradient estimators. In particular, for dependent estimators, our analysis allows a fine-grained analysis of the effect of correlations. As a result, under a per-iteration computational budget of $b \times m$, where $b$ is the minibatch size and $m$ is the number of Monte Carlo samples, our analysis suggests where one should invest most of the budget in general. Furthermore, we prove that random reshuffling (RR) improves the complexity dependence on the subsampling noise.
labels: cs.LG
__index_level_0__: 460,083
id: 1509.02223
title: Diffusion tensor imaging with deterministic error bounds
abstract: Errors in the data and the forward operator of an inverse problem can be handily modelled using partial order in Banach lattices. We present some existing results of the theory of regularisation in this novel framework, where errors are represented as bounds by means of the appropriate partial order. We apply the theory to Diffusion Tensor Imaging, where correct noise modelling is challenging: it involves the Rician distribution and the nonlinear Stejskal-Tanner equation. Linearisation of the latter in the statistical framework would complicate the noise model even further. We avoid this using the error bounds approach, which preserves simple error structure under monotone transformations.
labels: cs.CV, Other
__index_level_0__: 46,705
id: 1808.08098
title: Measuring LDA Topic Stability from Clusters of Replicated Runs
abstract: Background: Unstructured and textual data is increasing rapidly and Latent Dirichlet Allocation (LDA) topic modeling is a popular data analysis method for it. Past work suggests that instability of LDA topics may lead to systematic errors. Aim: We propose a method that relies on replicated LDA runs, clustering, and providing a stability metric for the topics. Method: We generate k LDA topics and replicate this process n times, resulting in n*k topics. Then we use K-medoids to cluster the n*k topics into k clusters. The k clusters now represent the original LDA topics and we present them like normal LDA topics, showing the ten most probable words. For the clusters, we try multiple stability metrics, out of which we recommend Rank-Biased Overlap, showing the stability of the topics inside the clusters. Results: We provide an initial validation where our method is used for 270,000 Mozilla Firefox commit messages with k=20 and n=20. We show how our topic stability metrics are related to the contents of the topics. Conclusions: Advances in text mining enable us to analyze large masses of text in software engineering, but non-deterministic algorithms, such as LDA, may lead to unreplicable conclusions. Our approach makes LDA stability transparent and is also complementary rather than alternative to many prior works that focus on LDA parameter tuning.
labels: cs.CL
__index_level_0__: 105,872
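The stability metric recommended in the abstract above, Rank-Biased Overlap, compares two ranked word lists with top-weighted emphasis. A minimal sketch of the standard truncated form; the function name, the persistence parameter value, and the example word lists are assumptions for illustration, not taken from the paper:

```python
def rank_biased_overlap(s, t, p=0.9):
    """Truncated Rank-Biased Overlap between two ranked lists s and t.

    Accumulates (1 - p) * p^(d-1) * |prefix overlap at depth d| / d for
    depths d = 1..k, where k is the shorter list's length. Higher scores
    mean the rankings agree more strongly near the top.
    """
    k = min(len(s), len(t))
    seen_s, seen_t = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen_s.add(s[d - 1])
        seen_t.add(t[d - 1])
        agreement = len(seen_s & seen_t) / d  # fraction shared at depth d
        score += (1 - p) * p ** (d - 1) * agreement
    return score

# Hypothetical top words of two replicated topics.
topic_a = ["error", "fix", "bug", "test", "merge"]
topic_b = ["error", "bug", "fix", "merge", "release"]
print(rank_biased_overlap(topic_a, topic_b))
```

Note that under this truncated form two identical length-k lists score 1 - p^k rather than exactly 1, while fully disjoint lists score 0.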
id: 2408.08682
title: LLM-PCGC: Large Language Model-based Point Cloud Geometry Compression
abstract: The key to effective point cloud compression is to obtain a robust context model consistent with complex 3D data structures. Recently, the advancement of large language models (LLMs) has highlighted their capabilities not only as powerful generators for in-context learning and generation but also as effective compressors. These dual attributes of LLMs make them particularly well-suited to meet the demands of data compression. Therefore, this paper explores the potential of using LLM for compression tasks, focusing on lossless point cloud geometry compression (PCGC) experiments. However, applying LLM directly to PCGC tasks presents some significant challenges, i.e., LLM does not understand the structure of the point cloud well, and it is a difficult task to fill the gap between text and point cloud through text description, especially for large complicated and small shapeless point clouds. To address these problems, we introduce a novel architecture, namely the Large Language Model-based Point Cloud Geometry Compression (LLM-PCGC) method, using LLM to compress point cloud geometry information without any text description or aligning operation. By utilizing different adaptation techniques for cross-modality representation alignment and semantic consistency, including clustering, K-tree, token mapping invariance, and Low Rank Adaptation (LoRA), the proposed method can translate LLM to a compressor/generator for point cloud. To the best of our knowledge, this is the first structure to employ LLM as a compressor for point cloud data. Experiments demonstrate that the LLM-PCGC outperforms the other existing methods significantly, by achieving -40.213% bit rate reduction compared to the reference software of MPEG Geometry-based Point Cloud Compression (G-PCC) standard, and by achieving -2.267% bit rate reduction compared to the state-of-the-art learning-based method.
labels: cs.AI, cs.CL, cs.CV
__index_level_0__: 481,104
id: 2306.17794
title: Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification
abstract: The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions, a practice often associated with significant privacy concerns. This concern intensifies in medical image analysis, where privacy-preserving mechanisms are paramount due to the data being sensitive in nature. Federated learning, which enables cooperative model training without direct data exchange, presents a promising solution. Nevertheless, the inherent vulnerabilities of federated learning necessitate further privacy safeguards. This study addresses this need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification. We introduce a novel differentially private federated learning model and meticulously examine its impacts on privacy preservation and model performance. Our research confirms the existence of a trade-off between model accuracy and privacy settings. However, we demonstrate that strategic calibration of the privacy budget in differential privacy can uphold robust image classification performance while providing substantial privacy protection.
labels: cs.LG, cs.CR
__index_level_0__: 376,809
id: 1712.01665
title: Differentially Private Dropout
abstract: Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique that provides an elegant Bayesian interpretation to dropout, and show that the intrinsic noise added, with the primary goal of regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates on the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.
labels: cs.LG
__index_level_0__: 86,149
id: 1211.3010
title: Time-series Scenario Forecasting
abstract: Many applications require the ability to judge uncertainty of time-series forecasts. Uncertainty is often specified as point-wise error bars around a mean or median forecast. Due to temporal dependencies, such a method obscures some information. We would ideally have a way to query the posterior probability of the entire time-series given the predictive variables, or at a minimum, be able to draw samples from this distribution. We use a Bayesian dictionary learning algorithm to statistically generate an ensemble of forecasts. We show that the algorithm performs as well as a physics-based ensemble method for temperature forecasts for Houston. We conclude that the method shows promise for scenario forecasting where physics-based methods are absent.
labels: cs.LG
__index_level_0__: 19,718
id: 2102.03782
title: Using Gaussian Processes to Design Dynamic Experiments for Black-Box Model Discrimination under Uncertainty
abstract: Diverse domains of science and engineering use parameterised mechanistic models. Engineers and scientists can often hypothesise several rival models to explain a specific process or phenomenon. Consider a model discrimination setting where we wish to find the best mechanistic, dynamic model candidate and the best model parameter estimates. Typically, several rival mechanistic models can explain the available data, so design of dynamic experiments for model discrimination helps optimally collect additional data by finding experimental settings that maximise model prediction divergence. We argue there are two main approaches in the literature for solving the optimal design problem: (i) the analytical approach, using linear and Gaussian approximations to find closed-form expressions for the design objective, and (ii) the data-driven approach, which often relies on computationally intensive Monte Carlo techniques. Olofsson et al. (ICML 35, 2018) introduced Gaussian process (GP) surrogate models to hybridise the analytical and data-driven approaches, which allowed for computationally efficient design of experiments for discriminating between black-box models. In this study, we demonstrate that we can extend existing methods for optimal design of dynamic experiments to incorporate a wider range of problem uncertainty. We also extend the Olofsson et al. (2018) method of using GP surrogate models for discriminating between dynamic black-box models. We evaluate our approach on a well-known case study from literature, and explore the consequences of using GP surrogates to approximate gradient-based methods.
labels: cs.LG
__index_level_0__: 218,870
1706.02390
CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks
Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. This often relies critically on high fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described by, with high statistical confidence, the same summary statistics as the fully simulated maps.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
74,966
2311.14900
Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise
Recently, research on denoising diffusion models has expanded its application to the field of image restoration. Traditional diffusion-based image restoration methods utilize degraded images as conditional input to effectively guide the reverse generation process, without modifying the original denoising diffusion process. However, since the degraded images already include low-frequency information, starting from Gaussian white noise will result in increased sampling steps. We propose Resfusion, a general framework that incorporates the residual term into the diffusion forward process, starting the reverse process directly from the noisy degraded images. The form of our inference process is consistent with the DDPM. We introduce a weighted residual noise, named resnoise, as the prediction target and explicitly provide the quantitative relationship between the residual term and the noise term in resnoise. By leveraging a smooth equivalence transformation, Resfusion determines the optimal acceleration step and maintains the integrity of existing noise schedules, unifying the training and inference processes. The experimental results demonstrate that Resfusion exhibits competitive performance on the ISTD, LOL, and Raindrop datasets with only five sampling steps. Furthermore, Resfusion can be easily applied to image generation and exhibits strong versatility. Our code and model are available at https://github.com/nkicsl/Resfusion.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
410,296
2010.02407
Adversarial Grammatical Error Correction
Recent works in Grammatical Error Correction (GEC) have leveraged the progress in Neural Machine Translation (NMT), to learn rewrites from parallel corpora of grammatically incorrect and corrected sentences, achieving state-of-the-art results. At the same time, Generative Adversarial Networks (GANs) have been successful in generating realistic texts across many different tasks by learning to directly minimize the difference between human-generated and synthetic text. In this work, we present an adversarial learning approach to GEC, using the generator-discriminator framework. The generator is a Transformer model, trained to produce grammatically correct sentences given grammatically incorrect ones. The discriminator is a sentence-pair classification model, trained to judge a given pair of grammatically incorrect-correct sentences on the quality of grammatical correction. We pre-train both the discriminator and the generator on parallel texts and then fine-tune them further using a policy gradient method that assigns high rewards to sentences which could be true corrections of the grammatically incorrect text. Experimental results on FCE, CoNLL-14, and BEA-19 datasets show that Adversarial-GEC can achieve competitive GEC quality compared to NMT-based baselines.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
199,005
1909.06563
Multi-view and Multi-source Transfers in Neural Topic Modeling with Pretrained Topic and Word Embeddings
Though word embeddings and topics are complementary representations, several past works have only used pre-trained word embeddings in (neural) topic modeling to address the data sparsity problem in short texts or small collections of documents. However, no prior work has employed (pre-trained latent) topics in a transfer learning paradigm. In this paper, we propose an approach to (1) perform knowledge transfer using latent topics obtained from a large source corpus, and (2) jointly transfer knowledge via the two representations (or views) in neural topic modeling to improve topic quality and better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build a pool of topics and word vectors. Then, we identify one or multiple relevant source domain(s) and take advantage of the corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. We demonstrate state-of-the-art results on topic modeling with the proposed framework.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
145,411
1411.1490
Efficient Representations for Life-Long Learning and Autoencoding
It has been a long-standing goal in machine learning, as well as in AI more generally, to develop life-long learning systems that learn many different tasks over time, and reuse insights from tasks learned, "learning to learn" as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm. Our aim is to learn new internal representations as the algorithm learns new target functions, that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional subspaces and unions of low-dimensional subspaces, and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural "anchor-set" assumption.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
37,345
1008.3667
Pattern Classification In Symbolic Streams via Semantic Annihilation of Information
We propose a technique for pattern classification in symbolic streams via selective erasure of observed symbols, in cases where the patterns of interest are represented as Probabilistic Finite State Automata (PFSA). We define an additive abelian group for a slightly restricted subset of probabilistic finite state automata (PFSA), and the group sum is used to formulate pattern-specific semantic annihilators. The annihilators attempt to identify pre-specified patterns via removal of essentially all inter-symbol correlations from observed sequences, thereby turning them into symbolic white noise. Thus a perfect annihilation corresponds to a perfect pattern match. This approach of classification via information annihilation is shown to be strictly advantageous, with theoretical guarantees, for a large class of PFSA models. The results are supported by simulation experiments.
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
true
7,330
1904.01693
Multigrid Predictive Filter Flow for Unsupervised Learning on Videos
We introduce multigrid Predictive Filter Flow (mgPFF), a framework for unsupervised learning on videos. The mgPFF takes as input a pair of frames and outputs per-pixel filters to warp one frame to the other. Compared to optical flow used for warping frames, mgPFF is more powerful in modeling sub-pixel movement and dealing with corruption (e.g., motion blur). We develop a multigrid coarse-to-fine modeling strategy that avoids the requirement of learning large filters to capture large displacement. This allows us to train an extremely compact model (4.6MB) which operates in a progressive way over multiple resolutions with shared weights. We train mgPFF on unsupervised, free-form videos and show that mgPFF is able to not only estimate long-range flow for frame reconstruction and detect video shot transitions, but is also readily amenable to video object segmentation and pose tracking, where it substantially outperforms the published state-of-the-art without bells and whistles. Moreover, owing to mgPFF's nature of per-pixel filter prediction, we have the unique opportunity to visualize how each pixel is evolving during solving these tasks, thus gaining better interpretability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,212
2010.14950
Predicting Engagement with the Internet Research Agency's Facebook and Instagram Campaigns around the 2016 U.S. Presidential Election
The Russian Internet Research Agency's (IRA) online interference campaign in the 2016 U.S. presidential election represents a turning point in the trajectory of democratic elections in the digital age. What can we learn about how the IRA engages U.S. audiences, ahead of the 2020 U.S. presidential election? We provide the first in-depth analysis of the relationships between IRA content characteristics and user engagement on Facebook and Instagram around the 2016 election. We find that content targeting right-wing and non-Black marginalised groups had the strongest positive association with engagement on both Facebook and Instagram, in contrast to findings from the IRA campaign on Twitter and to some previous commentary in the media. Higher engagement was associated with posting later in the 2015-2017 period and using less text on both platforms, using negative wording and not including links on Facebook, and using fewer hashtags on Instagram. The sub-audiences and sub-issues associated with most engagement differed across the platforms.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
203,628
2203.02013
DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations
The ability for a human to understand an Artificial Intelligence (AI) model's decision-making process is critical in enabling stakeholders to visualize model behavior, perform model debugging, promote trust in AI models, and assist in collaborative human-AI decision-making. As a result, the research fields of interpretable and explainable AI have gained traction within AI communities as well as interdisciplinary scientists seeking to apply AI in their subject areas. In this paper, we focus on advancing the state-of-the-art in interpreting multimodal models - a class of machine learning methods that tackle core challenges in representing and capturing interactions between heterogeneous data sources such as images, text, audio, and time-series data. Multimodal models have proliferated numerous real-world applications across healthcare, robotics, multimedia, affective computing, and human-computer interaction. By performing model disentanglement into unimodal contributions (UC) and multimodal interactions (MI), our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models while maintaining generality across arbitrary modalities, model architectures, and tasks. Through a comprehensive suite of experiments on both synthetic and real-world multimodal tasks, we show that DIME generates accurate disentangled explanations, helps users of multimodal models gain a deeper understanding of model behavior, and presents a step towards debugging and improving these models for real-world deployment. Code for our experiments can be found at https://github.com/lvyiwei1/DIME.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
283,596
2401.10472
Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences
Named entity recognition is a key component of Information Extraction (IE), particularly in scientific domains such as biomedicine and chemistry, where large language models (LLMs), e.g., ChatGPT, fall short. We investigate the applicability of transfer learning for enhancing a named entity recognition model trained in the biomedical domain (the source domain) to be used in the chemical domain (the target domain). A common practice for training such a model in a few-shot learning setting is to pretrain the model on the labeled source data, and then, to finetune it on a handful of labeled target examples. In our experiments, we observed that such a model is prone to mislabeling the source entities, which can often appear in the text, as the target entities. To alleviate this problem, we propose a model to transfer the knowledge from the source domain to the target domain, but, at the same time, to project the source entities and target entities into separate regions of the feature space. This diminishes the risk of mislabeling the source entities as the target entities. Our model consists of two stages: 1) entity grouping in the source domain, which incorporates knowledge from annotated events to establish relations between entities, and 2) entity discrimination in the target domain, which relies on pseudo labeling and contrastive learning to enhance discrimination between the entities in the two domains. We conduct extensive experiments across three source and three target datasets, demonstrating that our method outperforms the baselines by up to 5% absolute value.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
422,651
2110.09156
Enhancing exploration algorithms for navigation with visual SLAM
Exploration is an important step in the autonomous navigation of robotic systems. In this paper we introduce a series of enhancements for exploration algorithms in order to use them with vision-based simultaneous localization and mapping (vSLAM) methods. We evaluate the developed approaches in a photo-realistic simulator in two modes: with ground-truth depths and with neural-network-reconstructed depth maps as vSLAM input. We evaluate standard metrics in order to estimate exploration coverage.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
261,703
2308.13076
Exploring Gender-Based Toxic Speech on Twitter in Context of the #MeToo movement: A Mixed Methods Approach
The #MeToo movement has catalyzed widespread public discourse surrounding sexual harassment and assault, empowering survivors to share their stories and holding perpetrators accountable. While the movement has had a substantial and largely positive influence, this study aims to examine the potential negative consequences in the form of increased hostility against women and men on the social media platform Twitter. By analyzing tweets shared between October 2017 and January 2020 by more than 47.1k individuals who had either disclosed their own sexual abuse experiences on Twitter or engaged in discussions about the movement, we identify the overall increase in gender-based hostility towards both women and men since the start of the movement. We also monitor 16 pivotal real-life events that shaped the #MeToo movement to identify how these events may have amplified negative discussions targeting the opposite gender on Twitter. Furthermore, we conduct a thematic content analysis of a subset of gender-based hostile tweets, which helps us identify recurring themes and underlying motivations driving the expressions of anger and resentment from both men and women concerning the #MeToo movement. This study highlights the need for a nuanced understanding of the impact of social movements on online discourse and underscores the importance of addressing gender-based hostility in the digital sphere.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
387,772
1911.09661
Paraphrasing with Large Language Models
Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation and have also been able to achieve high-quality results in many downstream NLP tasks such as text classification, sentiment analysis and question answering with the aid of fine-tuning. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at a sentence level but also for longer spans of text such as paragraphs without needing to break the text into smaller chunks.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
154,579
1705.10513
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
74,417
2502.11466
GiFT: Gibbs Fine-Tuning for Code Generation
Training Large Language Models (LLMs) with synthetic data is a prevalent practice in code generation. A key approach is self-training, where LLMs are iteratively trained on self-generated correct code snippets. In this case, the self-generated codes are drawn from a conditional distribution, conditioned on a specific seed description. However, the seed description is not the only valid representation that aligns with its intended meaning. With all valid descriptions and codes forming a joint space, codes drawn from the conditional distribution would lead to an underrepresentation of the full description-code space. As such, we propose Gibbs Fine-Tuning (GiFT), a novel self-training method inspired by Gibbs sampling. GiFT allows self-generated data to be drawn from the marginal distribution of the joint space, thereby mitigating the biases inherent in conditional sampling. We provide a theoretical analysis demonstrating the potential benefits of fine-tuning LLMs with code derived from the marginal distribution. Furthermore, we propose a perplexity-based code selection method to mitigate the imbalanced long-tail distribution of the self-generated codes. Empirical evaluation of two LLMs across four datasets demonstrates that GiFT achieves superior performance, particularly on more challenging benchmarks.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
534,388
2305.06985
On the Advantages of Asynchrony in the Unsourced MAC
In this work we demonstrate how a lack of synchronization can in fact be advantageous in the problem of random access. Specifically, we consider a multiple-access problem over a frame-asynchronous 2-user binary-input adder channel in the unsourced setup (2-UBAC). Previous work has shown that under perfect synchronization the per-user rates achievable with linear codes over the 2-UBAC are limited by 0.5 bit per channel use (compared to the capacity of 0.75). In this paper, we first demonstrate that arbitrary small (even single-bit) shift between the user's frames enables (random) linear codes to attain full capacity of 0.75 bit/user. Furthermore, we derive density evolution equations for irregular LDPC codes, and prove (via concentration arguments) that they correctly track the asymptotic bit-error rate of a BP decoder. Optimizing the degree distributions we construct LDPC codes achieving per-user rates of 0.73 bit per channel use.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
363,730
cmp-lg/9408017
Reaping the Benefits of Interactive Syntax and Semantics
Semantic feedback is an important source of information that a parser could use to deal with local ambiguities in syntax. However, it is difficult to devise a systematic communication mechanism for interactive syntax and semantics. In this article, I propose a variant of left-corner parsing to define the points at which syntax and semantics should interact, an account of grammatical relations and thematic roles to define the content of the communication, and a conflict resolution strategy based on independent preferences from syntax and semantics. The resulting interactive model has been implemented in a program called COMPERE and shown to account for a wide variety of psycholinguistic data on structural and lexical ambiguities.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,166
2208.07588
Human-to-Robot Manipulability Domain Adaptation with Parallel Transport and Manifold-Aware ICP
Manipulability ellipsoids efficiently capture the human pose and reveal information about the task at hand. Their use in task-dependent robot teaching - particularly their transfer from a teacher to a learner - can advance emulation of human-like motion. Although recent literature has shifted focus towards manipulability transfer between two robots, adaptation to the capabilities of the other kinematic system has to date not been addressed, and research on transfer from human to robot is still in its infancy. This work presents a novel manipulability domain adaptation method for the transfer of manipulability information to the domain of another kinematic system. As manipulability matrices/ellipsoids are symmetric positive-definite (SPD) they can be viewed as points on the Riemannian manifold of SPD matrices. We are the first to address the problem of manipulability transfer from the perspective of point cloud registration. We propose a manifold-aware Iterative Closest Point algorithm (ICP) with parallel transport initialization. Furthermore, we introduce a correspondence matching heuristic for manipulability ellipsoids based on inherent geometric features. We confirm our method in simulation experiments with 2-DoF manipulators as well as 7-DoF models representing the human-arm kinematics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
313,089
1509.04037
Measuring Partial Balance in Signed Networks
Is the enemy of an enemy necessarily a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks, and necessary and sufficient conditions for a network to be "balanced" were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice. We formalize the concept of a measure of partial balance, discuss various measures, compare the measures on synthetic datasets, and investigate their axiomatic properties. The synthetic data involves Erd\H{o}s-R\'enyi and specially structured random graphs. We show that some measures behave better than others in terms of axioms and ability to differentiate between graphs. We also use well-known datasets from the sociology and biology literature, such as Read's New Guinean tribes, gene regulatory networks related to two organisms, and a network involving senate bill co-sponsorship. Our results show that substantially different levels of partial balance are observed under cycle-based, eigenvalue-based, and frustration-based measures. We make some recommendations for measures to be used in future work.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
46,889
2406.17774
Fast and Uncertainty-Aware SVBRDF Recovery from Multi-View Capture using Frequency Domain Analysis
Relightable object acquisition is a key challenge in simplifying digital asset creation. Complete reconstruction of an object typically requires capturing hundreds to thousands of photographs under controlled illumination, with specialized equipment. The recent progress in differentiable rendering improved the quality and accessibility of inverse rendering optimization. Nevertheless, under uncontrolled illumination and unstructured viewpoints, there is no guarantee that the observations contain enough information to reconstruct the appearance properties of the captured object. We thus propose to consider the acquisition process from a signal-processing perspective. Given an object's geometry and a lighting environment, we estimate the properties of the materials on the object's surface in seconds. We do so by leveraging frequency domain analysis, considering the recovery of material properties as a deconvolution, enabling fast error estimation. We then quantify the uncertainty of the estimation, based on the available data, highlighting the areas for which priors or additional samples would be required for improved acquisition quality. We compare our approach to previous work and quantitatively evaluate our results, showing similar quality as previous work in a fraction of the time, and providing key information about the certainty of the results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
467,718
1009.3602
Construction of Frequency Hopping Sequence Set Based upon Generalized Cyclotomy
Frequency hopping (FH) sequences play a key role in frequency hopping spread spectrum communication systems. It is important to find FH sequences which have simultaneously good Hamming correlation, large family size and large period. In this paper, a new set of FH sequences with large period is proposed, and the Hamming correlation distribution of the new set is investigated. The construction of new FH sequences is based upon Whiteman's generalized cyclotomy. It is shown that the proposed FH sequence set is optimal with respect to the average Hamming correlation bound.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,584
1606.09140
Algebraic foundations for qualitative calculi and networks
A qualitative representation $\phi$ is like an ordinary representation of a relation algebra, but instead of requiring $(a; b)^\phi = a^\phi | b^\phi$, as we do for ordinary representations, we only require that $c^\phi\supseteq a^\phi | b^\phi \iff c\geq a ; b$, for each $c$ in the algebra. A constraint network is qualitatively satisfiable if its nodes can be mapped to elements of a qualitative representation, preserving the constraints. If a constraint network is satisfiable then it is clearly qualitatively satisfiable, but the converse can fail. However, for a wide range of relation algebras including the point algebra, the Allen Interval Algebra, RCC8 and many others, a network is satisfiable if and only if it is qualitatively satisfiable. Unlike ordinary composition, the weak composition arising from qualitative representations need not be associative, so we can generalise by considering network satisfaction problems over non-associative algebras. We prove that computationally, qualitative representations have many advantages over ordinary representations: whereas many finite relation algebras have only infinite representations, every finite qualitatively representable algebra has a finite qualitative representation; the representability problem for (the atom structures of) finite non-associative algebras is NP-complete; the network satisfaction problem over a finite qualitatively representable algebra is always in NP; the validity of equations over qualitative representations is co-NP-complete. On the other hand we prove that there is no finite axiomatisation of the class of qualitatively representable algebras.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
57,952
2205.07708
Exploring Diversity-based Active Learning for 3D Object Detection in Autonomous Driving
3D object detection has recently received much attention due to its great potential in autonomous vehicles (AVs). The success of deep learning based object detectors relies on the availability of large-scale annotated datasets, which are time-consuming and expensive to compile, especially for 3D bounding box annotation. In this work, we investigate diversity-based active learning (AL) as a potential solution to alleviate the annotation burden. Given a limited annotation budget, only the most informative frames and objects are automatically selected for humans to annotate. Technically, we take advantage of the multimodal information provided in an AV dataset, and propose a novel acquisition function that enforces spatial and temporal diversity in the selected samples. We benchmark the proposed method against other AL strategies under realistic annotation cost measurement, where the realistic costs for annotating a frame and a 3D bounding box are both taken into consideration. We demonstrate the effectiveness of the proposed method on the nuScenes dataset and show that it outperforms existing AL strategies significantly. Code is available at https://github.com/Linkon87/Exploring-Diversity-based-Active-Learning-for-3D-Object-Detection-in-Autonomous-Driving
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,686
2111.05267
Community detection using low-dimensional network embedding algorithms
With the increasing relevance of large networks in important areas such as the study of contact networks for spread of disease, or social networks for their impact on geopolitics, it has become necessary to study machine learning tools that are scalable to very large networks, often containing millions of nodes. One major class of such scalable algorithms is known as network representation learning or network embedding. These algorithms try to learn representations of network functionals (e.g.~nodes) by first running multiple random walks and then using the number of co-occurrences of each pair of nodes in observed random walk segments to obtain a low-dimensional representation of nodes on some Euclidean space. The aim of this paper is to rigorously understand the performance of two major algorithms, DeepWalk and node2vec, in recovering communities for canonical network models with ground truth communities. Depending on the sparsity of the graph, we find the length of the random walk segments required such that the corresponding observed co-occurrence window is able to perform almost exact recovery of the underlying community assignments. We prove that, given some fixed co-occurrence window, node2vec using random walks with a low non-backtracking probability can succeed for much sparser networks compared to DeepWalk using simple random walks. Moreover, if the sparsity parameter is low, we provide evidence that these algorithms might not succeed in almost exact recovery. The analysis requires developing general tools for path counting on random networks having an underlying low-rank structure, which are of independent interest.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
265,740
2412.19145
Impact of color and mixing proportion of synthetic point clouds on semantic segmentation
Deep learning (DL)-based point cloud segmentation is essential for understanding the built environment. Despite synthetic point clouds (SPC) having the potential to compensate for data shortages, how synthetic color and mixing proportion impact DL-based segmentation remains a long-standing question. Therefore, this paper addresses this question with extensive experiments by introducing: 1) a method to generate SPC with real colors and uniform colors from BIM, and 2) enhanced benchmarks for better performance evaluation. Experiments on DL models including PointNet, PointNet++, and DGCNN show that model performance on SPC with real colors outperforms that on SPC with uniform colors by more than 8.2% on both OA and mIoU. Furthermore, a mixing proportion of SPC higher than 70% usually leads to better performance, and SPC can replace real point clouds to train a DL model for detecting large and flat building elements. Overall, this paper unveils the performance-improving mechanism of SPC and brings new insights to boost SPC's value (for building large models for point clouds).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
520,734
2003.08793
Deep Active Learning for Remote Sensing Object Detection
Recently, CNN object detectors have achieved high accuracy on remote sensing images but require huge labor and time costs for annotation. In this paper, we propose a new uncertainty-based active learning method which can select images with more information for annotation, so that the detector can still reach high performance with a fraction of the training images. Our method not only analyzes objects' classification uncertainty to find the least confident objects but also considers their regression uncertainty to declare outliers. Besides, we introduce two extra weights to overcome two difficulties in remote sensing datasets: class imbalance and differences in the number of objects per image. We evaluate our active learning algorithm on the DOTA dataset with CenterNet as the object detector. We achieve the same level of performance as full supervision with only half the images. We even surpass full supervision with 55% of the images and augmented weights on the least confident images.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
168,865
2005.09025
FootTile: a Rugged Foot Sensor for Force and Center of Pressure Sensing in Soft Terrain
In this paper we present FootTile, a foot sensor for reaction force and center of pressure sensing in challenging terrain. We compare our sensor design to standard biomechanical devices, force plates and pressure plates. We show that FootTile can accurately estimate force and pressure distribution during legged locomotion. FootTile weighs 0.9g, has a sampling rate of 330Hz, a footprint of 10 by 10mm and can easily be adapted in sensor range to the required load case. In three experiments we validate: first the performance of the individual sensor, second an array of FootTiles for center of pressure sensing and third the ground reaction force estimation during locomotion in granular substrate. We then go on to show the accurate sensing capabilities of the waterproof sensor in liquid mud, as a showcase for real world rough terrain use.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
177,786
2404.19276
C2FDrone: Coarse-to-Fine Drone-to-Drone Detection using Vision Transformer Networks
A vision-based drone-to-drone detection system is crucial for various applications like collision avoidance, countering hostile drones, and search-and-rescue operations. However, detecting drones presents unique challenges, including small object sizes, distortion, occlusion, and real-time processing requirements. Current methods integrating multi-scale feature fusion and temporal information have limitations in handling extreme blur and minuscule objects. To address this, we propose a novel coarse-to-fine detection strategy based on vision transformers. We evaluate our approach on three challenging drone-to-drone detection datasets, achieving F1 score enhancements of 7%, 3%, and 1% on the FL-Drones, AOT, and NPS-Drones datasets, respectively. Additionally, we demonstrate real-time processing capabilities by deploying our model on an edge-computing device. Our code will be made publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
450,575
1712.09888
Improved Inception-Residual Convolutional Neural Network for Object Recognition
Machine learning and computer vision have driven many of the greatest advances in the modeling of Deep Convolutional Neural Networks (DCNNs). Nowadays, most of the research has been focused on improving recognition accuracy with better DCNN models and learning approaches. The recurrent convolutional approach is not applied very much, other than in a few DCNN architectures. On the other hand, Inception-v4 and Residual networks have promptly become popular among the computer vision community. In this paper, we introduce a new DCNN model called the Inception Recurrent Residual Convolutional Neural Network (IRRCNN), which utilizes the power of the Recurrent Convolutional Neural Network (RCNN), the Inception network, and the Residual network. This approach improves the recognition accuracy of the Inception-residual network with the same number of network parameters. In addition, this proposed architecture generalizes the Inception network, the RCNN, and the Residual network with significantly improved training accuracy. We have empirically evaluated the performance of the IRRCNN model on different benchmarks including CIFAR-10, CIFAR-100, TinyImageNet-200, and CU3D-100. The experimental results show higher recognition accuracy against most of the popular DCNN models including the RCNN. We have also investigated the performance of the IRRCNN approach against the Equivalent Inception Network (EIN) and the Equivalent Inception Residual Network (EIRN) counterparts on the CIFAR-100 dataset. We report around 4.53%, 4.49% and 3.56% improvement in classification accuracy compared with the RCNN, EIN, and EIRN on the CIFAR-100 dataset respectively. Furthermore, experiments have been conducted on the TinyImageNet-200 and CU3D-100 datasets, where the IRRCNN provides better testing accuracy compared to the Inception Recurrent CNN (IRCNN), the EIN, and the EIRN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
87,422
2208.02946
Learning to Generate 3D Shapes from a Single Example
Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate the deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid large memory and computational cost induced by operating on the 3D volume, we build our generator atop the tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need of any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape. Through extensive evaluation, both qualitative and quantitative, we demonstrate that our model can generate 3D shapes of various types.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
311,623
2411.11575
Analysis of Generalized Hebbian Learning Algorithm for Neuromorphic Hardware Using Spinnaker
Neuromorphic computing, inspired by biological neural networks, has emerged as a promising approach for solving complex machine learning tasks with greater efficiency and lower power consumption. The integration of biologically plausible learning algorithms, such as the Generalized Hebbian Algorithm (GHA), is key to enhancing the performance of neuromorphic systems. In this paper, we explore the application of GHA in large-scale neuromorphic platforms, specifically SpiNNaker, a hardware designed to simulate large neural networks. Our results demonstrate significant improvements in classification accuracy, showcasing the potential of biologically inspired learning algorithms in advancing the field of neuromorphic computing.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
509,104
2401.09011
Inductive Models for Artificial Intelligence Systems are Insufficient without Good Explanations
This paper discusses the limitations of machine learning (ML), particularly deep artificial neural networks (ANNs), which are effective at approximating complex functions but often lack transparency and explanatory power. It highlights the 'problem of induction': the philosophical issue that past observations may not necessarily predict future events, a challenge that ML models face when encountering new, unseen data. The paper argues for the importance of not just making predictions but also providing good explanations, a feature that current models often fail to deliver. It suggests that for AI to progress, we must seek models that offer insights and explanations, not just predictions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
422,115
2301.11538
Goal-Image Conditioned Dynamic Cable Manipulation through Bayesian Inference and Multi-Objective Black-Box Optimization
To perform dynamic cable manipulation to realize the configuration specified by a target image, we formulate dynamic cable manipulation as a stochastic forward model. Then, we propose a method to handle uncertainty by maximizing the expectation, which also considers estimation errors of the trained model. To avoid issues like multiple local minima and the requirement of differentiability imposed by gradient-based methods, we propose using black-box optimization (BBO) to optimize joint angles to realize a goal image. Among BBO methods, we use the Tree-structured Parzen Estimator (TPE), a type of Bayesian optimization. By incorporating constraints into the TPE, the optimized joint angles are constrained within the range of motion. Since TPE is population-based, it is better able to detect multiple feasible configurations using the estimated inverse model. We evaluated the similarity between the target image and the cable images captured by executing the robot, using the optimal transport distance. The results show that the proposed method improves accuracy compared to conventional gradient-based approaches and methods that use deterministic models that do not consider uncertainty.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
342,186
2101.02051
Transformer-based approach towards music emotion recognition from lyrics
The task of identifying emotions from a given music track has been an active pursuit in the Music Information Retrieval (MIR) community for years. Music emotion recognition has typically relied on acoustic features, social tags, and other metadata to identify and classify music emotions. The role of lyrics in music emotion recognition remains under-appreciated in spite of several studies reporting superior performance of music emotion classifiers based on features extracted from lyrics. In this study, we use a transformer-based model with XLNet as the base architecture, which, to date, has not been used to identify emotional connotations of music based on lyrics. Our proposed approach outperforms existing methods on multiple datasets. We used a robust methodology to enhance web crawlers' accuracy for extracting lyrics. This study has important implications for improving applications involved in emotion-based playlist generation, in addition to improving music recommendation systems.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
214,516
2408.08812
CAT: Caution Aware Transfer in Reinforcement Learning via Distributional Risk
Transfer learning in reinforcement learning (RL) has become a pivotal strategy for improving data efficiency in new, unseen tasks by utilizing knowledge from previously learned tasks. This approach is especially beneficial in real-world deployment scenarios where computational resources are constrained and agents must adapt rapidly to novel environments. However, current state-of-the-art methods often fall short in ensuring safety during the transfer process, particularly when unforeseen risks emerge in the deployment phase. In this work, we address these limitations by introducing a novel Caution-Aware Transfer Learning (CAT) framework. Unlike traditional approaches that limit risk considerations to mean-variance, we define "caution" as a more generalized and comprehensive notion of risk. Our core innovation lies in optimizing a weighted sum of reward return and caution-based on state-action occupancy measures-during the transfer process, allowing for a rich representation of diverse risk factors. To the best of our knowledge, this is the first work to explore the optimization of such a generalized risk notion within the context of transfer RL. Our contributions are threefold: (1) We propose a Caution-Aware Transfer (CAT) framework that evaluates source policies within the test environment and constructs a new policy that balances reward maximization and caution. (2) We derive theoretical sub-optimality bounds for our method, providing rigorous guarantees of its efficacy. (3) We empirically validate CAT, demonstrating that it consistently outperforms existing methods by delivering safer policies under varying risk conditions in the test tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
481,162
1909.12605
Towards Real-Time Multi-Object Tracking
Modern multiple object tracking (MOT) systems usually follow the \emph{tracking-by-detection} paradigm. It has 1) a detection model for target localization and 2) an appearance embedding model for data association. Having the two models separately executed might lead to efficiency problems, as the running time is simply a sum of the two steps without investigating potential structures that can be shared between them. Existing research efforts on real-time MOT usually focus on the association step, so they are essentially real-time association methods but not real-time MOT system. In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model. Specifically, we incorporate the appearance embedding model into a single-shot detector, such that the model can simultaneously output detections and the corresponding embeddings. We further propose a simple and fast association method that works in conjunction with the joint model. In both components the computation cost is significantly reduced compared with former MOT systems, resulting in a neat and fast baseline for future follow-ups on real-time MOT algorithm design. To our knowledge, this work reports the first (near) real-time MOT system, with a running speed of 22 to 40 FPS depending on the input resolution. Meanwhile, its tracking accuracy is comparable to the state-of-the-art trackers embodying separate detection and embedding (SDE) learning ($64.4\%$ MOTA \vs $66.1\%$ MOTA on MOT-16 challenge). Code and models are available at \url{https://github.com/Zhongdao/Towards-Realtime-MOT}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
147,173
2210.08763
ReasonChainQA: Text-based Complex Question Answering with Explainable Evidence Chains
The ability to reason over evidence has received increasing attention in question answering (QA). Recently, natural language databases (NLDB) have conducted complex QA over knowledge bases with textual evidence rather than structured representations; this task attracts a lot of attention because of the flexibility and richness of textual evidence. However, existing text-based complex question answering datasets fail to provide an explicit reasoning process, which is important for retrieval effectiveness and reasoning interpretability. Therefore, we present a benchmark \textbf{ReasonChainQA} with explanatory and explicit evidence chains. ReasonChainQA consists of two subtasks, answer generation and evidence chain extraction; it also offers higher diversity for multi-hop questions with varying depths, 12 reasoning types, and 78 relations, in order to obtain high-quality textual evidence for answering complex questions. Additional experiments on supervised and unsupervised retrieval fully indicate the significance of ReasonChainQA. The dataset and code will be made publicly available upon acceptance.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
324,283
2402.14720
A Transformer Model for Boundary Detection in Continuous Sign Language
Sign Language Recognition (SLR) has garnered significant attention from researchers in recent years, particularly the intricate domain of Continuous Sign Language Recognition (CSLR), which presents heightened complexity compared to Isolated Sign Language Recognition (ISLR). One of the prominent challenges in CSLR pertains to accurately detecting the boundaries of isolated signs within a continuous video stream. Additionally, the reliance on handcrafted features in existing models poses a challenge to achieving optimal accuracy. To surmount these challenges, we propose a novel approach utilizing a Transformer-based model. Unlike traditional models, our approach focuses on enhancing accuracy while eliminating the need for handcrafted features. The Transformer model is employed for both ISLR and CSLR. The training process involves using isolated sign videos, where hand keypoint features extracted from the input video are enriched using the Transformer model. Subsequently, these enriched features are forwarded to the final classification layer. The trained model, coupled with a post-processing method, is then applied to detect isolated sign boundaries within continuous sign videos. The evaluation of our model, conducted on two distinct datasets including both continuous signs and their corresponding isolated signs, demonstrates promising results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
431,805
2502.05586
A Cost-Benefit Analysis of Additive Manufacturing as a Service
The global manufacturing landscape is undergoing a fundamental shift from resource-intensive mass production to sustainable, localised manufacturing. This paper presents a comprehensive analysis of a Cloud Crafting Platform that enables Manufacturing as a Service (MaaS) through additive manufacturing technologies. The platform connects web shops with local three-dimensional (3D) printing facilities, allowing customers to purchase products that are manufactured on-demand in their vicinity. We present the platform's Service-Oriented Architecture (SOA), deployment on the Microsoft Azure cloud, and integration with three different 3D printer models in a testbed environment. A detailed cost-benefit analysis demonstrates the economic viability of the approach, which generates significant profit margins. The platform implements a weighted profit-sharing model that fairly compensates all stakeholders based on their investment and operational responsibilities. Our results show that on-demand, localised manufacturing through MaaS is not only technically feasible but also economically viable, while reducing environmental impact through shortened supply chains and elimination of inventory waste. The platform's extensible architecture allows for future integration of additional manufacturing technologies beyond 3D printing.
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
531,678
2005.00887
wisardpkg -- A library for WiSARD-based models
In order to facilitate the production of code using WiSARD-based models, LabZero developed a C++/Python ML library called wisardpkg. This library is an MIT-licensed open-source package hosted on GitHub.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
175,419
2207.00868
The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial
The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems; or, more loftily, designing robustly beneficial or ethical artificial agents.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
305,937
1808.01543
Designing molecular circuits for approximate maximum a posteriori demodulation of concentration modulated signals
Motivated by the fact that living cells use molecular circuits (i.e. a set of chemical reactions) for information processing, this paper investigates the problem of designing molecular circuits for demodulation. In our earlier work, we use a Markovian approach to derive a demodulator for diffusion-based molecular communication. The demodulation filters take the form of an ordinary differential equation which computes the log-posteriori probability of a transmission symbol being sent. This work considers the realisation of these demodulation filters using molecular circuits assuming the transmission symbols are rectangular pulses of the same duration but different amplitudes, i.e. concentration modulation. This paper makes a number of contributions. First, we use time-scale separation and renewal theory to analytically derive an approximation of the demodulation filter from our earlier work. Second, we present a method to turn this approximation into a molecular circuit. By using simulation, we show that the output of the derived molecular circuit is approximately equal to the log-posteriori probability calculated by the exact demodulation filter if the log-posteriori probability is positive. Third, we demonstrate that a biochemical circuit in yeast behaves similarly to the derived molecular demodulation filter and is therefore a candidate for implementing the derived filter.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
104,586
1905.13205
Near-Term Quantum-Classical Associative Adversarial Networks
We introduce a new hybrid quantum-classical adversarial machine learning architecture called a quantum-classical associative adversarial network (QAAN). This architecture consists of a classical generative adversarial network with a small auxiliary quantum Boltzmann machine that is simultaneously trained on an intermediate layer of the discriminator of the generative network. We numerically study the performance of QAANs compared to their classical counterparts on the MNIST and CIFAR-10 data sets, and show that QAANs attain a higher quality of learning when evaluated using the Inception score and the Fr\'{e}chet Inception distance. As the QAAN architecture only relies on sampling simple local observables of a small quantum Boltzmann machine, this model is particularly amenable for implementation on the current and next generations of quantum devices.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
133,032
2105.06005
Data-Driven Strategies for Hierarchical Predictive Control in Unknown Environments
This article proposes a hierarchical learning architecture for safe data-driven control in unknown environments. We consider a constrained nonlinear dynamical system and assume the availability of state-input trajectories solving control tasks in different environments. In addition to task-invariant system state and input constraints, a parameterized environment model generates task-specific state constraints, which are satisfied by the stored trajectories. Our goal is to use these trajectories to find a safe and high-performing policy for a new task in a new, unknown environment. We propose using the stored data to learn generalizable control strategies. At each time step, based on a local forecast of the new task environment, the learned strategy consists of a target region in the state space and input constraints to guide the system evolution to the target region. These target regions are used as terminal sets by a low-level model predictive controller. We show how to i) design the target sets from past data and then ii) incorporate them into a model predictive control scheme with shifting horizon that ensures safety of the closed-loop system when performing the new task. We prove the feasibility of the resulting control policy, and apply the proposed method to robotic path planning, racing, and computer game applications.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
234,988
2311.09536
Correlation networks: Interdisciplinary approaches beyond thresholding
Many empirical networks originate from correlational data, arising in domains as diverse as psychology, neuroscience, genomics, microbiology, finance, and climate science. Specialized algorithms and theory have been developed in different application domains for working with such networks, as well as in statistics, network science, and computer science, often with limited communication between practitioners in different fields. This leaves significant room for cross-pollination across disciplines. A central challenge is that it is not always clear how to best transform correlation matrix data into networks for the application at hand, and probably the most widespread method, i.e., thresholding on the correlation value to create either unweighted or weighted networks, suffers from multiple problems. In this article, we review various methods of constructing and analyzing correlation networks, ranging from thresholding and its improvements to weighted networks, regularization, dynamic correlation networks, threshold-free approaches, comparison with null models, and more. Finally, we propose and discuss recommended practices and a variety of key open questions currently confronting this field.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
408,169
2310.04367
A Marketplace Price Anomaly Detection System at Scale
Online marketplaces execute a large volume of price updates that are initiated by individual marketplace sellers each day on the platform. This price democratization comes with increasing data quality challenges. The lack of centralized guardrails that are available to a traditional online retailer leads to a higher likelihood of inaccurate prices getting published on the website, resulting in poor customer experience and potential revenue loss. We present MoatPlus (Masked Optimal Anchors using Trees, Proximity-based Labeling and Unsupervised Statistical-features), a scalable price anomaly detection framework for a growing marketplace platform. The goal is to leverage proximity and historical price trends from unsupervised statistical features to generate an upper price bound. We build an ensemble of models to detect irregularities in price-based features, exclude irregular features, and use an optimized weighting scheme to build a reliable price bound in a real-time pricing pipeline. We observed that our approach improves precise anchor coverage by up to 46.6% in high-vulnerability item subsets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
397,621
2302.02561
Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Previous studies have shown that leveraging domain index can significantly boost domain adaptation performance (arXiv:2007.01807, arXiv:2202.03628). However, such domain indices are not always available. To address this challenge, we first provide a formal definition of domain index from the probabilistic perspective, and then propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data, thereby providing additional insight on domain relations and improving domain adaptation performance. Our theoretical analysis shows that our adversarial variational Bayesian framework finds the optimal domain index at equilibrium. Empirical results on both synthetic and real data verify that our model can produce interpretable domain indices which enable us to achieve superior performance compared to state-of-the-art domain adaptation methods. Code is available at https://github.com/Wang-ML-Lab/VDI.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
344,043
1703.09570
A Tidy Data Model for Natural Language Processing using cleanNLP
The package cleanNLP provides a set of fast tools for converting a textual corpus into a set of normalized tables. The underlying natural language processing pipeline utilizes Stanford's CoreNLP library, exposing a number of annotation tasks for text written in English, French, German, and Spanish. Annotators include tokenization, part of speech tagging, named entity recognition, entity linking, sentiment analysis, dependency parsing, coreference resolution, and information extraction.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
70,770
1912.08473
Conversational Agents for Insurance Companies: From Theory to Practice
Advances in artificial intelligence have renewed interest in conversational agents. In addition to software developers, today all kinds of employees show interest in new technologies and their possible applications for customers. German insurance companies are generally interested in improving their customer service and digitizing their business processes. In this work we investigate the potential use of conversational agents in insurance companies theoretically, by determining which classes of agents exist that are of interest to insurance companies and by finding relevant use cases and requirements. We add two practical parts: first, we develop a showcase prototype for an exemplary insurance scenario in claim management; second, we create a prototype focusing on customer service in a chatbot hackathon, fostering innovation in interdisciplinary teams. We describe the results of both prototypes in detail. We evaluate both chatbots against criteria defined in detail for each setting, compare the results, and draw conclusions about the maturity of chatbot technology for practical use, describing the opportunities and challenges that companies, especially small and medium enterprises, face.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
true
157,846
2006.02322
Deep Learning Methods for Real-time Detection and Analysis of Wagner Ulcer Classification System
At present, the prevailing method for diagnosing the severity of diabetic feet (DF) depends on professional podiatrists. However, in most cases professional podiatrists have a heavy workload, especially in underdeveloped and developing countries and regions, and there are often insufficient podiatrists to meet the rapidly growing treatment needs of DF patients. It is necessary to develop a medical system that assists in diagnosing DF in order to reduce part of the workload for podiatrists and to provide timely relevant information to patients with DF. In this paper, we develop a system that can classify and locate Wagner ulcers of the diabetic foot in real time. First, we propose a dataset of 2688 diabetic feet with annotations. Then, in order to enable the system to detect diabetic foot ulcers accurately and in real time, we build on the YOLOv3 algorithm coupled with image fusion, label smoothing, and variant learning-rate techniques to improve the robustness and predictive accuracy of the original algorithm. Finally, the refined YOLOv3 is deployed on an Android smartphone to predict the class and localization of diabetic foot ulcers in real time. The experimental results validate that the improved YOLOv3 algorithm achieves a mAP of 91.95% and meets the needs of real-time detection and analysis of diabetic foot Wagner ulcers on mobile devices such as smartphones. This work has the potential to lead to a paradigm shift in the clinical treatment of DF, providing an effective healthcare solution for DF tissue analysis and healing status.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
180,008
2201.09075
Dynamic Channel Access via Meta-Reinforcement Learning
In this paper, we address the channel access problem in a dynamic wireless environment via meta-reinforcement learning. Spectrum is a scarce resource in wireless communications, especially with the dramatic increase in the number of devices in networks. Recently, inspired by the success of deep reinforcement learning (DRL), extensive studies have been conducted in addressing wireless resource allocation problems via DRL. However, training DRL algorithms usually requires a massive amount of data collected from the environment for each specific task and the well-trained model may fail if there is a small variation in the environment. In this work, in order to address these challenges, we propose a meta-DRL framework that incorporates the method of Model-Agnostic Meta-Learning (MAML). In the proposed framework, we train a common initialization for similar channel selection tasks. From the initialization, we show that only a few gradient descents are required for adapting to different tasks drawn from the same distribution. We demonstrate the performance improvements via simulation results.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
true
276,545
1708.06652
Build Your Own Visual-Inertial Drone: A Cost-Effective and Open-Source Autonomous Drone
This paper describes an approach to building a cost-effective, research-grade visual-inertial odometry aided vertical take-off and landing (VTOL) platform. We utilize an off-the-shelf visual-inertial sensor, an onboard computer, and a quadrotor platform that are factory-calibrated and mass-produced, thereby sharing similar hardware and sensor specifications (e.g., mass, dimensions, intrinsics and extrinsics of camera-IMU systems, and signal-to-noise ratio). We then perform a system calibration and identification enabling the use of our visual-inertial odometry, multi-sensor fusion, and model predictive control frameworks with the off-the-shelf products. This means we can partially avoid tedious parameter-tuning procedures when building a full system. The complete system is extensively evaluated both indoors using a motion capture system and outdoors using a laser tracker while performing hover and step responses and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (RMS) pose errors of 0.036 m between reference and actual trajectories while hovering. We also conduct relatively long-distance flight experiments (~180 m) on a farm site and achieve a 0.82% drift error over the total flight distance. This paper conveys the insights we acquired about the platform and sensor module and returns them to the community as open-source code with tutorial documentation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
79,356
2210.08856
TIVE: A Toolbox for Identifying Video Instance Segmentation Errors
Since it was first proposed, the Video Instance Segmentation (VIS) task has attracted many researchers' focus on architecture modeling to boost performance. Though great advances have been achieved in both online and offline paradigms, there are still insufficient means to identify model errors and distinguish discrepancies between methods, and approaches that correctly reflect a model's performance in recognizing object instances of various temporal lengths remain barely available. More importantly, as the fundamental model abilities demanded by the task, spatial segmentation and temporal association are still understudied in both evaluation and interaction mechanisms. In this paper, we introduce TIVE, a Toolbox for Identifying Video instance segmentation Errors. By directly operating on output prediction files, TIVE defines isolated error types and weights each type's damage to mAP, for the purpose of distinguishing model characteristics. By decomposing localization quality along the spatial and temporal dimensions, a model's potential drawbacks in spatial segmentation and temporal association can be revealed. TIVE can also report mAP over instance temporal length for real applications. We conduct extensive experiments with the toolbox to further illustrate how spatial segmentation and temporal association affect each other. We expect the analysis of TIVE to give researchers more insights, guiding the community to promote more meaningful explorations for video instance segmentation. The proposed toolbox is available at https://github.com/wenhe-jia/TIVE.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,322
2305.02472
Model-based and Data-based Dynamic Output Feedback for Externally Positive Systems
In this work, we derive dynamic output-feedback controllers that render the closed-loop system externally positive. We begin by expressing the class of discrete-time, linear, time-invariant systems and the class of dynamic controllers in the space of input-output behaviors, where a dynamic controller can be expressed as a static behavioral feedback gain. We leverage the static form of the controller to derive output-feedback controllers that achieve monotonic output tracking of a constant non-negative reference output. Further, we provide a direct data-driven approach to derive monotonic tracking output-feedback controllers for single-input-single-output (SISO) systems. Our approaches, model-based and data-based, allow us to obtain output-feedback controllers that render the closed-loop system externally positive. Finally, we validate our results numerically in a drone landing control problem.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
362,048
2409.19924
On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability
Recent advancements in Large Language Models (LLMs) have showcased their ability to perform complex reasoning tasks, but their effectiveness in planning remains underexplored. In this study, we evaluate the planning capabilities of OpenAI's o1 models across a variety of benchmark tasks, focusing on three key aspects: feasibility, optimality, and generalizability. Through empirical evaluations on constraint-heavy tasks (e.g., $\textit{Barman}$, $\textit{Tyreworld}$) and spatially complex environments (e.g., $\textit{Termes}$, $\textit{Floortile}$), we highlight o1-preview's strengths in self-evaluation and constraint-following, while also identifying bottlenecks in decision-making and memory management, particularly in tasks requiring robust spatial reasoning. Our results reveal that o1-preview outperforms GPT-4 in adhering to task constraints and managing state transitions in structured environments. However, the model often generates suboptimal solutions with redundant actions and struggles to generalize effectively in spatially complex tasks. This pilot study provides foundational insights into the planning limitations of LLMs, offering key directions for future research on improving memory management, decision-making, and generalization in LLM-based planning. Code available at https://github.com/VITA-Group/o1-planning.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
492,912
2303.06284
Prospecting Community Development Strength based on Economic Graph: From Categorization to Scoring
Recent years have witnessed a growing body of research on community characterization. In contrast to the large body of research on categorical measures (rise or decline) for evaluating community development, we propose to estimate the community development strength (the degree to which a community rises or declines). More specifically, given already-known categorical information about community development, we attempt to quantify the community development strength, which is of great interest. Motivated by the increasing availability of large-scale data on the network between entities among communities, we investigate how to score a community's development strength. We formally define our task as prospecting community development strength from categorization based on multi-relational network information and identify two challenges: (1) limited guidance for integrating the entity multi-relational network in quantifying community development strength; (2) the existence of a selection effect that the community development strength has on network formation. Aiming at these challenges, we start with a hybrid of discriminative and generative approaches for multi-relational network-based community development strength quantification. Then a network generation process is exploited to debias the selection process. In the end, we empirically evaluate the proposed model by applying it to quantify enterprise business development strength. Experimental results demonstrate the effectiveness of the proposed method.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
350,774
2303.05119
Entropic Wasserstein Component Analysis
Dimension reduction (DR) methods provide systematic approaches for analyzing high-dimensional data. A key requirement for DR is to incorporate global dependencies among original and embedded samples while preserving clusters in the embedding space. To achieve this, we combine the principles of optimal transport (OT) and principal component analysis (PCA). Our method seeks the best linear subspace that minimizes reconstruction error using entropic OT, which naturally encodes the neighborhood information of the samples. From an algorithmic standpoint, we propose an efficient block-majorization-minimization solver over the Stiefel manifold. Our experimental results demonstrate that our approach can effectively preserve high-dimensional clusters, leading to more interpretable and effective embeddings. Python code of the algorithms and experiments is available online.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
350,351
2205.09947
PGDP5K: A Diagram Parsing Dataset for Plane Geometry Problems
Diagram parsing is an important foundation for geometry problem solving, attracting increasing attention in the fields of intelligent education and document image understanding. Due to the complex layout and between-primitive relationships, plane geometry diagram parsing (PGDP) is still a challenging task deserving further research and exploration. An appropriate dataset is critical for research on PGDP. Although some datasets with rough annotations have been proposed for solving geometric problems, they are either small in scale or not publicly available, and the rough annotations also limit their usefulness. Thus, we propose a new large-scale geometry diagram dataset named PGDP5K and a novel annotation method. Our dataset consists of 5000 diagram samples composed of 16 shapes, covering 5 positional relations, 22 symbol types and 6 text types. Different from previous datasets, our PGDP5K dataset is labeled with more fine-grained annotations at the primitive level, including primitive classes, locations and relationships. What is more, combined with the above annotations and geometric prior knowledge, it can generate intelligible geometric propositions automatically and uniquely. Experiments performed on the PGDP5K and IMP-Geometry3K datasets reveal that the state-of-the-art (SOTA) method achieves an F1 value of only 66.07%. This shows that PGDP5K presents a challenge for future research. Our dataset is available at http://www.nlpr.ia.ac.cn/databases/CASIA-PGDP5K/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
297,485
2308.15020
Massively Parallel Continuous Local Search for Hybrid SAT Solving on GPUs
Although state-of-the-art (SOTA) SAT solvers based on conflict-driven clause learning (CDCL) have achieved remarkable engineering success, their sequential nature limits the parallelism that may be extracted for acceleration on platforms such as the graphics processing unit (GPU). In this work, we propose FastFourierSAT, a highly parallel hybrid SAT solver based on gradient-driven continuous local search (CLS). This is realized by a novel parallel algorithm inspired by the Fast Fourier Transform (FFT)-based convolution for computing the elementary symmetric polynomials (ESPs), which is the major computational task in previous CLS methods. The complexity of our algorithm matches the best previous result. Furthermore, the substantial parallelism inherent in our algorithm can leverage the GPU for acceleration, demonstrating significant improvement over the previous CLS approaches. We also propose to incorporate the restart heuristics in CLS to improve search efficiency. We compare our approach with the SOTA parallel SAT solvers on several benchmarks. Our results show that FastFourierSAT computes the gradient 100+ times faster than previous prototypes implemented on CPU. Moreover, FastFourierSAT solves most instances and demonstrates promising performance on larger-size instances.
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
true
388,537
1910.03177
Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach
Traditional sequence-to-sequence (seq2seq) models and other variations of the attention mechanism, such as hierarchical attention, have been applied to the text summarization problem. Though there is a hierarchy in the way humans use language, forming paragraphs from sentences and sentences from words, hierarchical models have usually not worked much better than their traditional seq2seq counterparts. This is mainly because hierarchical attention mechanisms are either too sparse when using hard attention or too noisy when using soft attention. In this paper, we propose a method based on extracting the highlights of a document: a key concept that is conveyed in a few sentences. In a typical text summarization dataset consisting of documents that are 800 tokens in length on average, capturing long-term dependencies is very important; e.g., the last sentence can be grouped with the first sentence of a document to form a summary. LSTMs (Long Short-Term Memory networks) have proved useful for machine translation. However, they often fail to capture long-term dependencies while modeling long sequences. To address these issues, we have adapted Neural Semantic Encoders (NSE), a class of memory-augmented neural networks, to text summarization by improving their functionality, and we propose a novel hierarchical NSE that significantly outperforms similar previous models. The quality of summarization was improved by augmenting linguistic factors, namely lemma and Part-of-Speech (PoS) tags, to each word in the dataset for improved vocabulary coverage and generalization. The hierarchical NSE model on the factored dataset outperformed the state-of-the-art by nearly 4 ROUGE points. We further designed and used the first GPU-based self-critical Reinforcement Learning model.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
148,435
2011.13772
Gradient Descent for Deep Matrix Factorization: Dynamics and Implicit Bias towards Low Rank
In deep learning, it is common to use more network parameters than training points. In such a scenario of over-parameterization, there are usually multiple networks that achieve zero training error, so that the training algorithm induces an implicit bias on the computed solution. In practice, (stochastic) gradient descent tends to prefer solutions which generalize well, which provides a possible explanation of the success of deep learning. In this paper we analyze the dynamics of gradient descent in the simplified setting of linear networks and of an estimation problem. Although we are not in an overparameterized scenario, our analysis nevertheless provides insights into the phenomenon of implicit bias. In fact, we derive a rigorous analysis of the dynamics of vanilla gradient descent, and characterize the dynamical convergence of the spectrum. We are able to accurately locate time intervals where the effective rank of the iterates is close to the effective rank of a low-rank projection of the ground-truth matrix. In practice, those intervals can be used as criteria for early stopping if a certain regularity is desired. We also provide empirical evidence for implicit bias in more general scenarios, such as matrix sensing and random initialization. This suggests that deep learning prefers trajectories whose complexity (measured in terms of effective rank) is monotonically increasing, which we believe is a fundamental concept for the theoretical understanding of deep learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
208,583
1908.01420
Automatic Game Design via Mechanic Generation
Game designs often center on the game mechanics---rules governing the logical evolution of the game. We seek to develop an intelligent system that generates computer games. As first steps towards this goal we present a composable and cross-domain representation for game mechanics that draws from AI planning action representations. We use a constraint solver to generate mechanics subject to design requirements on the form of those mechanics---what they do in the game. A planner takes a set of generated mechanics and tests whether those mechanics meet playability requirements---controlling how mechanics function in a game to affect player behavior. We demonstrate our system by modeling and generating mechanics in a role-playing game, platformer game, and combined role-playing-platformer game.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
140,757
1905.06147
Embeddings and Representation Learning for Structured Data
Performing machine learning on structured data is complicated by the fact that such data does not have vectorial form. Therefore, multiple approaches have emerged to construct vectorial representations of structured data, from kernel and distance approaches to recurrent, recursive, and convolutional neural networks. Recent years have seen heightened attention in this demanding field of research and several new approaches have emerged, such as metric learning on structured data, graph convolutional neural networks, and recurrent decoder networks for structured data. In this contribution, we provide a high-level overview of the state-of-the-art in representation learning and embeddings for structured data across a wide range of machine learning fields.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
130,910
2004.07923
Deep Neural Network (DNN) for Water/Fat Separation: Supervised Training, Unsupervised Training, and No Training
Purpose: To use a deep neural network (DNN) for solving the optimization problem of water/fat separation and to compare supervised and unsupervised training. Methods: The current T2*-IDEAL algorithm for solving fat/water separation is dependent on initialization. Recently, deep neural networks (DNN) have been proposed to solve fat/water separation without the need for suitable initialization. However, this approach requires supervised training of the DNN (STD) using reference fat/water separation images. Here we propose two novel DNN water/fat separation methods: 1) unsupervised training of DNN (UTD) using the physical forward problem as the cost function during training, and 2) no-training of DNN (NTD) using the physical cost and backpropagation to directly reconstruct a single dataset. The STD, UTD and NTD methods were compared with the reference T2*-IDEAL. Results: All DNN methods generated consistent water/fat separation results that agreed well with T2*-IDEAL under proper initialization. Conclusion: The water/fat separation problem can be solved using unsupervised deep neural networks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
172,904
2407.11398
Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
Recent advances in 4D generation mainly focus on generating 4D content by distilling pre-trained text or single-view image-conditioned models. It is inconvenient for them to take advantage of various off-the-shelf 3D assets with multi-view attributes, and their results suffer from spatiotemporal inconsistency owing to the inherent ambiguity in the supervision signals. In this work, we present Animate3D, a novel framework for animating any static 3D model. The core idea is two-fold: 1) We propose a novel multi-view video diffusion model (MV-VDM) conditioned on multi-view renderings of the static 3D object, which is trained on our presented large-scale multi-view video dataset (MV-Video). 2) Based on MV-VDM, we introduce a framework combining reconstruction and 4D Score Distillation Sampling (4D-SDS) to leverage the multi-view video diffusion priors for animating 3D objects. Specifically, for MV-VDM, we design a new spatiotemporal attention module to enhance spatial and temporal consistency by integrating 3D and video diffusion models. Additionally, we leverage the static 3D model's multi-view renderings as conditions to preserve its identity. For animating 3D models, an effective two-stage pipeline is proposed: we first reconstruct motions directly from generated multi-view videos, followed by the introduced 4D-SDS to refine both appearance and motion. Benefiting from accurate motion learning, we could achieve straightforward mesh animation. Qualitative and quantitative experiments demonstrate that Animate3D significantly outperforms previous approaches. Data, code, and models will be open-released.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,433
2208.03486
HaloAE: An HaloNet based Local Transformer Auto-Encoder for Anomaly Detection and Localization
Unsupervised anomaly detection and localization is a crucial task as it is impossible to collect and label all possible anomalies. Many studies have emphasized the importance of integrating local and global information to achieve accurate segmentation of anomalies. To this end, there has been a growing interest in Transformer, which allows modeling long-range content interactions. However, global interactions through self attention are generally too expensive for most image scales. In this study, we introduce HaloAE, the first auto-encoder based on a local 2D version of Transformer with HaloNet. With HaloAE, we have created a hybrid model that combines convolution and local 2D block-wise self-attention layers and jointly performs anomaly detection and segmentation through a single model. We achieved competitive results on the MVTec dataset, suggesting that vision models incorporating Transformer could benefit from a local computation of the self-attention operation, and pave the way for other applications.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
311,799
1407.8463
Gaussian Multiple Access via Compute-and-Forward
Lattice codes used under the Compute-and-Forward paradigm suggest an alternative strategy for the standard Gaussian multiple-access channel (MAC): The receiver successively decodes integer linear combinations of the messages until it can invert and recover all messages. In this paper, a multiple-access technique called CFMA (Compute-Forward Multiple Access) is proposed and analyzed. For the two-user MAC, it is shown that without time-sharing, the entire capacity region can be attained using CFMA with a single-user decoder as soon as the signal-to-noise ratios are above $1+\sqrt{2}$. A partial analysis is given for more than two users. Lastly the strategy is extended to the so-called dirty MAC where two interfering signals are known non-causally to the two transmitters in a distributed fashion. Our scheme extends the previously known results and gives new achievable rate regions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,035
2501.10100
Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics
Learning robust and generalizable world models is crucial for enabling efficient and scalable robotic control in real-world environments. In this work, we introduce a novel framework for learning world models that accurately capture complex, partially observable, and stochastic dynamics. The proposed method employs a dual-autoregressive mechanism and self-supervised training to achieve reliable long-horizon predictions without relying on domain-specific inductive biases, ensuring adaptability across diverse robotic tasks. We further propose a policy optimization framework that leverages world models for efficient training in imagined environments and seamless deployment in real-world systems. Through extensive experiments, our approach consistently outperforms state-of-the-art methods, demonstrating superior autoregressive prediction accuracy, robustness to noise, and generalization across manipulation and locomotion tasks. Notably, policies trained with our method are successfully deployed on ANYmal D hardware in a zero-shot transfer, achieving robust performance with minimal sim-to-real performance loss. This work advances model-based reinforcement learning by addressing the challenges of long-horizon prediction, error accumulation, and sim-to-real transfer. By providing a scalable and robust framework, the introduced methods pave the way for adaptive and efficient robotic systems in real-world applications.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
525,391
2111.02309
Pull or Wait: How to Optimize Query Age of Information
We study a pull-based status update communication model where a source node submits update packets to a channel with random transmission delay, at times requested by a remote destination node. The objective is to minimize the average query-age-of-information (QAoI), defined as the average age-of-information (AoI) measured at query instants that occur at the destination side according to a stochastic arrival process. In reference to a push-based problem formulation defined in the literature where the source decides to \textit{update or wait} at will, with the objective of minimizing the time average AoI at the destination, we name this problem the \textit{Pull-or-Wait} (PoW) problem. We provide a comparison of the two formulations: (i) Under Poisson query arrivals, an optimal policy that minimizes the time average AoI also minimizes the average QAoI, and these minimum values are equal; and (ii) the optimal average QAoI under periodic query arrivals is always less than or equal to the optimal time average AoI. We identify the PoW problem in the case of a single query as a stochastic shortest path (SSP) problem with uncountable state and action spaces, which has not been solved in previous literature. We derive an optimal solution for this SSP problem and use it as a building block for the solution of the PoW problem under periodic query arrivals.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
264,830
1303.3987
$l_{2,p}$ Matrix Norm and Its Application in Feature Selection
Recently, the $l_{2,1}$ matrix norm has been widely applied to many areas such as computer vision, pattern recognition, and biological studies. As an extension of the $l_1$ vector norm, the mixed $l_{2,1}$ matrix norm is often used to find jointly sparse solutions. Moreover, an efficient iterative algorithm has been designed to solve minimizations involving the $l_{2,1}$-norm. Computational studies have shown that $l_p$-regularization ($0<p<1$) is sparser than $l_1$-regularization, but its extension to matrix norms has seldom been considered. This paper presents a definition of the mixed $l_{2,p}$ $(p\in (0, 1])$ matrix pseudo-norm, which can be viewed both as a generalization of the $l_p$ vector norm to matrices and as a generalization of the $l_{2,1}$-norm to the nonconvex cases $(0<p<1)$. An efficient unified algorithm is proposed to solve the induced $l_{2,p}$-norm $(p\in (0, 1])$ optimization problems, and its convergence is demonstrated uniformly for all $p\in (0, 1]$. Typical values of $p\in (0,1]$ are applied to select features in computational biology, and the experimental results show that some choices of $0<p<1$ do improve the sparsity pattern over $p=1$.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
22,972
2203.04371
Deep Learning for Sleep Stages Classification: Modified Rectified Linear Unit Activation Function and Modified Orthogonal Weight Initialisation
Background and Aim: Each stage of sleep can affect human health, and not getting enough sleep at any stage may lead to sleep disorders such as parasomnia, apnea, and insomnia. Sleep-related diseases could be diagnosed using a Convolutional Neural Network classifier. However, this classifier has not been successfully implemented in sleep stage classification systems due to its high complexity and low classification accuracy. The aim of this research is to increase the accuracy and reduce the learning time of the Convolutional Neural Network classifier. Methodology: The proposed system uses a modified Orthogonal Convolutional Neural Network and a modified Adam optimisation technique to improve sleep stage classification accuracy and reduce the gradient saturation problem caused by the sigmoid activation function. The proposed system uses the Leaky Rectified Linear Unit (ReLU) instead of the sigmoid as its activation function. Results: The proposed system, called the Enhanced Sleep Stage Classification system (ESSC), used six different databases for training and testing the proposed model on the different sleep stages. These databases are the University College Dublin database (UCD), the Beth Israel Deaconess Medical Center MIT database (MIT-BIH), Sleep European Data Format (EDF), Sleep EDF Extended, the Montreal Archive of Sleep Studies (MASS), and the Sleep Heart Health Study (SHHS). Our results show that the gradient saturation problem no longer occurs. The modified Adam optimiser helps to reduce noise, which in turn results in faster convergence. Conclusion: The convergence speed of ESSC is increased, along with better classification accuracy, compared to the state-of-the-art solution.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
284,440
2008.13128
Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond
Quantized Neural Networks (QNNs) use low bit-width fixed-point numbers for representing weight parameters and activations, and are often used in real-world applications because they save computation resources and yield reproducible results. Batch Normalization (BN) poses a challenge for QNNs because it requires floating-point reciprocal operations, and previous QNNs either compute BN at high precision or revise BN into variants in heuristic ways. In this work, we propose a novel method to quantize BN by converting an affine transformation of two floating-point numbers into a fixed-point operation with a shared quantized scale, which is friendly to hardware acceleration and model deployment. We confirm that our method maintains the same outputs through rigorous theoretical and numerical analysis. The accuracy and efficiency of our quantization method are verified by layer-level experiments on the CIFAR and ImageNet datasets. We also believe that our method is potentially useful in other problems involving quantization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
193,775
2207.09145
GAFX: A General Audio Feature eXtractor
Most machine learning models for audio tasks rely on a handcrafted feature, the spectrogram. However, it is still unknown whether the spectrogram can be replaced with deep-learning-based features. In this paper, we address this question by comparing different learnable neural feature extractors against a successful spectrogram-based model, and propose a General Audio Feature eXtractor (GAFX) built from dual U-Net (GAFX-U), ResNet (GAFX-R), and Attention (GAFX-A) modules. We design experiments to evaluate this model on the music genre classification task on the GTZAN dataset and perform a detailed ablation study of different configurations of our framework; our model GAFX-U, followed by an Audio Spectrogram Transformer (AST) classifier, achieves competitive performance.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
308,814
2307.08411
Neurosymbolic AI for Reasoning on Biomedical Knowledge Graphs
Biomedical datasets are often modeled as knowledge graphs (KGs) because they capture the multi-relational, heterogeneous, and dynamic nature of biomedical systems. KG completion (KGC) can therefore help researchers make predictions to inform tasks like drug repositioning. While previous approaches to KGC were either rule-based or embedding-based, hybrid approaches based on neurosymbolic artificial intelligence are becoming more popular. Many of these methods possess unique characteristics that make them even better suited to biomedical challenges. Here, we survey such approaches with an emphasis on their utility and prospective benefits for biomedicine.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
379,792
2106.15350
LB-CNN: An Open Source Framework for Fast Training of Light Binary Convolutional Neural Networks using Chainer and Cupy
Light binary convolutional neural networks (LB-CNNs) are particularly useful when implemented on low-energy computing platforms, as required in many industrial applications. Herein, a framework for optimizing compact LB-CNNs is introduced and its effectiveness is evaluated. The framework is freely available and may run on free-access cloud platforms, thus requiring no major investment. The optimized model is saved in the standardized .h5 format and can be used as input to specialized tools for further deployment into specific technologies, thus enabling the rapid development of various intelligent image sensors. The main ingredient in accelerating the optimization of our model, particularly the selection of binary convolution kernels, is the Chainer/Cupy machine learning library, which offers significant speed-ups for training the output layer as an extreme learning machine. Additional training of the output layer using Keras/TensorFlow is included, as it allows an increase in accuracy. Results on widely used datasets, including MNIST, GTSRB, ORL, and VGG, show a very good compromise between accuracy and complexity. In particular, for face recognition problems a carefully optimized LB-CNN model provides up to 100% accuracy. Such TinyML solutions are well suited for industrial applications requiring image recognition with low energy consumption.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
243,731
2406.06977
Cross-domain-aware Worker Selection with Training for Crowdsourced Annotation
Annotation through crowdsourcing is attracting increasing attention, and it relies on an effective selection scheme given a pool of workers. Existing methods select workers based on their performance on tasks with ground truth, but they miss two important points. 1) The historical performance of workers on other tasks. In real-world scenarios, workers must solve a new task whose correlation with previous tasks is not well known before training, which we call the cross-domain setting. 2) Worker performance is dynamic, as workers learn from the ground truth. In this paper, we consider both factors in designing an allocation scheme, named the cross-domain-aware worker selection with training approach. Our approach uses two estimation modules to statistically analyze the cross-domain correlation and to dynamically simulate the learning gain of workers. We provide a framework with a theoretical analysis of the worker elimination process. To validate the effectiveness of our methods, we collect two novel real-world datasets and generate synthetic datasets. The experimental results show that our method outperforms the baselines on both real-world and synthetic datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
462,847