id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1301.3516 | Learnable Pooling Regions for Image Classification | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping local codes, equips these methods with a certain degree of robustness to translation and deformation while preserving important spatial information. Despite the predominance of this approach in current recognition systems, there has been little progress in fully adapting the pooling strategy to the task at hand. This paper proposes a model for learning a task-dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as particular instantiations. In our work, we investigate the role of different regularization terms, showing that the smooth regularization term is crucial for achieving strong performance with the presented architecture. Finally, we propose an efficient and parallel method to train the model. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular, improving the state of the art to 56.29% on the latter. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 21,094 |
2205.05907 | Machine Learning Workflow to Explain Black-box Models for Early Alzheimer's Disease Classification Evaluated for Multiple Datasets | Purpose: Hard-to-interpret black-box Machine Learning (ML) models are often used for early Alzheimer's Disease (AD) detection. Methods: To interpret the eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and Support Vector Machine (SVM) black-box models, a workflow based on Shapley values was developed. All models were trained on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and evaluated on an independent ADNI test set, as well as on the external Australian Imaging and Lifestyle flagship study of Ageing (AIBL) and Open Access Series of Imaging Studies (OASIS) datasets. Shapley values were compared to intuitively interpretable Decision Trees (DTs) and Logistic Regression (LR), as well as to natural and permutation feature importances. To avoid the reduction of explanation validity caused by correlated features, forward selection and aspect consolidation were implemented. Results: Some black-box models outperformed DTs and LR. The forward-selected features correspond to brain areas previously associated with AD. Shapley values identified biologically plausible associations with moderate to strong correlations with feature importances. The most important RF features for predicting AD conversion were the volume of the amygdalae and a cognitive test score. Good cognitive test performance and large brain volumes decreased the AD risk. The models trained using cognitive test scores significantly outperformed brain volumetric models ($p<0.05$). Cognitive Normal (CN) vs. AD models were successfully transferred to external datasets. Conclusion: In comparison to previous work, improved performances for ADNI and AIBL were achieved for CN vs. Mild Cognitive Impairment (MCI) classification using brain volumes. The Shapley values and the feature importances showed moderate to strong correlations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,081 |
2407.10978 | Building Artificial Intelligence with Creative Agency and Self-hood | This paper is an invited layperson summary for The Academic of the paper referenced on the last page. We summarize how the formal framework of autocatalytic networks offers a means of modeling the origins of self-organizing, self-sustaining structures that are sufficiently complex to reproduce and evolve, be they organisms undergoing biological evolution, novelty-generating minds driving cultural evolution, or artificial intelligence networks such as large language models. The approach can be used to analyze and detect phase transitions in vastly complex networks that have proven intractable with other approaches, and suggests a promising avenue to building an autonomous, agentic AI self. It seems reasonable to expect that such an autocatalytic AI would possess creative agency akin to that of humans, and undergo psychologically healing -- i.e., therapeutic -- internal transformation through engagement in creative tasks. Moreover, creative tasks would be expected to help such an AI solidify its self-identity. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 473,212 |
2405.13832 | Federated Learning in Healthcare: Model Misconducts, Security, Challenges, Applications, and Future Research Directions -- A Systematic Review | Data privacy has become a major concern in healthcare due to the increasing digitization of medical records and data-driven medical research. Protecting sensitive patient information from breaches and unauthorized access is critical, as such incidents can have severe legal and ethical complications. Federated Learning (FL) addresses this concern by enabling multiple healthcare institutions to collaboratively learn from decentralized data without sharing it. FL's scope in healthcare covers areas such as disease prediction, treatment customization, and clinical trial research. However, implementing FL poses challenges, including model convergence in non-IID (independent and identically distributed) data environments, communication overhead, and managing multi-institutional collaborations. A systematic review of FL in healthcare is necessary to evaluate how effectively FL can provide privacy while maintaining the integrity and usability of medical data analysis. In this study, we analyze existing literature on FL applications in healthcare. We explore the current state of model security practices, identify prevalent challenges, and discuss practical applications and their implications. Additionally, the review highlights promising future research directions to refine FL implementations, enhance data security protocols, and expand FL's use to broader healthcare applications, which will benefit future researchers and practitioners. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 456,102 |
2407.04551 | An AI Architecture with the Capability to Classify and Explain Hardware Trojans | Hardware trojan detection methods based on machine learning (ML) techniques mainly identify suspected circuits but lack the ability to explain how the decision was arrived at. An explainable methodology and architecture are introduced based on existing hardware trojan detection features. Results are provided for explaining digital hardware trojans within a netlist using trust-hub trojan benchmarks. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 470,607 |
2211.16855 | ATASI-Net: An Efficient Sparse Reconstruction Network for Tomographic SAR Imaging with Adaptive Threshold | The tomographic SAR (TomoSAR) technique has attracted remarkable interest for its ability to resolve in three dimensions along the elevation direction via a stack of SAR images collected from different cross-track angles. Compressed sensing (CS)-based algorithms have been introduced into TomoSAR for their super-resolution ability with limited samples. However, conventional CS-based methods suffer from several drawbacks, including weak noise resistance, high computational complexity, and complex parameter fine-tuning. Aiming at efficient TomoSAR imaging, this paper proposes a novel efficient sparse unfolding network based on the analytic learned iterative shrinkage thresholding algorithm (ALISTA) architecture with an adaptive threshold, named the Adaptive Threshold ALISTA-based Sparse Imaging Network (ATASI-Net). The weight matrix in each layer of ATASI-Net is pre-computed as the solution of an off-line optimization problem, leaving only two scalar parameters to be learned from data, which significantly simplifies the training stage. In addition, an adaptive threshold is introduced for each azimuth-range pixel, enabling the threshold shrinkage to be not only layer-varied but also element-wise. Moreover, the final learned thresholds can be visualized and combined with the SAR image semantics for mutual feedback. Finally, extensive experiments on simulated and real data are carried out to demonstrate the effectiveness and efficiency of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 333,776 |
2205.07680 | BBDM: Image-to-image Translation with Brownian Bridge Diffusion Models | Image-to-image translation is an important and challenging problem in computer vision and image processing. Diffusion models (DMs) have shown great potential for high-quality image synthesis and have achieved competitive performance on the task of image-to-image translation. However, most existing diffusion models treat image-to-image translation as a conditional generation process and suffer heavily from the gap between distinct domains. In this paper, a novel image-to-image translation method based on the Brownian Bridge Diffusion Model (BBDM) is proposed, which models image-to-image translation as a stochastic Brownian bridge process and learns the translation between two domains directly through the bidirectional diffusion process rather than a conditional generation process. To the best of our knowledge, this is the first work that proposes a Brownian bridge diffusion process for image-to-image translation. Experimental results on various benchmarks demonstrate that the proposed BBDM model achieves competitive performance through both visual inspection and measurable metrics. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 296,680 |
2303.07194 | Neural Partial Differential Equations with Functional Convolution | We present a lightweight neural PDE representation to discover the hidden structure and predict the solutions of different nonlinear PDEs. Our key idea is to leverage the prior of ``translational similarity'' of numerical PDE differential operators to drastically reduce the scale of the learning model and the training data. We implement three central network components, including a neural functional convolution operator, a Picard forward iterative procedure, and an adjoint backward gradient calculator. Our novel paradigm fully leverages the multifaceted priors that stem from the sparse and smooth nature of the physical PDE solution manifold and from various mature numerical techniques, such as adjoint solvers, linearization, and iterative procedures, to accelerate the computation. We demonstrate the efficacy of our method by robustly discovering the model and accurately predicting the solutions of various types of PDEs with small-scale networks and training sets. We highlight that all the PDE examples shown were trained with at most 8 data samples and within 325 network parameters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 351,161 |
2310.17303 | Demonstration-Regularized RL | Incorporating expert demonstrations has empirically helped to improve the sample efficiency of reinforcement learning (RL). This paper quantifies theoretically to what extent this extra information reduces RL's sample complexity. In particular, we study demonstration-regularized reinforcement learning, which leverages the expert demonstrations via KL-regularization toward a policy learned by behavior cloning. Our findings reveal that using $N^{\mathrm{E}}$ expert demonstrations enables the identification of an optimal policy at a sample complexity of order $\widetilde{O}(\mathrm{Poly}(S,A,H)/(\varepsilon^2 N^{\mathrm{E}}))$ in finite and $\widetilde{O}(\mathrm{Poly}(d,H)/(\varepsilon^2 N^{\mathrm{E}}))$ in linear Markov decision processes, where $\varepsilon$ is the target precision, $H$ the horizon, $A$ the number of actions, $S$ the number of states in the finite case, and $d$ the dimension of the feature space in the linear case. As a by-product, we provide tight convergence guarantees for the behavior cloning procedure under general assumptions on the policy classes. Additionally, we establish that demonstration-regularized methods are provably efficient for reinforcement learning from human feedback (RLHF). In this respect, we provide theoretical evidence showing the benefits of KL-regularization for RLHF in tabular and linear MDPs. Interestingly, we avoid pessimism injection by employing computationally feasible regularization to handle reward estimation uncertainty, thus setting our approach apart from prior works. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 403,078 |
2202.08386 | Laplacian operator on statistical manifold | In this paper, we define a Laplacian operator on a statistical manifold, called the vector Laplacian. This vector Laplacian incorporates information from the Amari-Chentsov tensor. We derive a formula for the vector Laplacian. We also give two applications using the heat kernel associated with the vector Laplacian. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 280,850 |
2211.04086 | Does an ensemble of GANs lead to better performance when training segmentation networks with synthetic images? | Large annotated datasets are required to train segmentation networks. In medical imaging, it is often difficult, time-consuming, and expensive to create such datasets, and it may also be difficult to share them with other researchers. Different AI models can today generate very realistic synthetic images, which can potentially be openly shared as they do not belong to specific persons. However, recent work has shown that using synthetic images for training deep networks often leads to worse performance compared to using real images. Here we demonstrate that using synthetic images and annotations from an ensemble of 20 GANs, instead of from a single GAN, increases the Dice score on real test images by 4.7% to 14.0% on specific classes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 329,132 |
0902.2316 | On weak isometries of Preparata codes | Let C1 and C2 be codes with code distance d. Codes C1 and C2 are called weakly isometric, if there exists a mapping J:C1->C2, such that for any x,y from C1 the equality d(x,y)=d holds if and only if d(J(x),J(y))=d. Obviously two codes are weakly isometric if and only if the minimal distance graphs of these codes are isomorphic. In this paper we prove that Preparata codes of length n>=2^12 are weakly isometric if and only if these codes are equivalent. The analogous result is obtained for punctured Preparata codes of length not less than 2^10-1. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,159 |
2008.06148 | Complexity aspects of local minima and related notions | We consider the notions of (i) critical points, (ii) second-order points, (iii) local minima, and (iv) strict local minima for multivariate polynomials. For each type of point, and as a function of the degree of the polynomial, we study the complexity of deciding (1) if a given point is of that type, and (2) if a polynomial has a point of that type. Our results characterize the complexity of these two questions for all degrees left open by prior literature. Our main contributions reveal that many of these questions turn out to be tractable for cubic polynomials. In particular, we present an efficiently-checkable necessary and sufficient condition for local minimality of a point for a cubic polynomial. We also show that a local minimum of a cubic polynomial can be efficiently found by solving semidefinite programs of size linear in the number of variables. By contrast, we show that it is strongly NP-hard to decide if a cubic polynomial has a critical point. We also prove that the set of second-order points of any cubic polynomial is a spectrahedron, and conversely that any spectrahedron is the projection of the set of second-order points of a cubic polynomial. In our final section, we briefly present a potential application of finding local minima of cubic polynomials to the design of a third-order Newton method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 191,711 |
2007.00454 | Pricing cyber insurance for a large-scale network | Facing the lack of cyber insurance loss data, we propose an innovative approach for pricing cyber insurance for a large-scale network based on synthetic data. The synthetic data is generated by the proposed risk spreading and recovering algorithm that allows infection and recovery events to occur sequentially, and allows dependence of random waiting time to infection for different nodes. The scale-free network framework is adopted to account for the topology uncertainty of the random large-scale network. Extensive simulation studies are conducted to understand the risk spreading and recovering mechanism, and to uncover the most important underwriting risk factors. A case study is also presented to demonstrate that the proposed approach and algorithm can be adapted accordingly to provide reference for cyber insurance pricing. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 185,123 |
2403.05231 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | Motivated by Parameter-Efficient Fine-Tuning (PEFT) in large language models, we propose LoRAT, a method that unveils the power of large ViT models for tracking within laboratory-level resources. The essence of our work lies in adapting LoRA, a technique that fine-tunes a small subset of model parameters without adding inference latency, to the domain of visual tracking. However, unique challenges and potential domain gaps make this transfer not as easy as the first intuition suggests. Firstly, a transformer-based tracker constructs unshared position embeddings for the template and search image. This poses a challenge for the transfer of LoRA, which usually requires design consistency with the pre-trained backbone when applied to downstream tasks. Secondly, the inductive bias inherent in convolutional heads diminishes the effectiveness of parameter-efficient fine-tuning in tracking models. To overcome these limitations, we first decouple the position embeddings in transformer-based trackers into shared spatial ones and independent type ones. The shared embeddings, which describe the absolute coordinates of multi-resolution images (namely, the template and search images), are inherited from the pre-trained backbones. In contrast, the independent embeddings indicate the sources of each token and are learned from scratch. Furthermore, we design an anchor-free head solely based on MLP to adapt PETR, enabling better performance with less computational overhead. With our design, 1) it becomes practical to train trackers with the ViT-g backbone on GPUs with only 25.8 GB of memory (batch size of 16); 2) we reduce the training time of the L-224 variant from 35.0 to 10.8 GPU hours; 3) we improve the LaSOT SUC score from 0.703 to 0.742 with the L-224 variant; and 4) we increase the inference speed of the L-224 variant from 52 to 119 FPS. Code and models are available at https://github.com/LitingLin/LoRAT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 435,926 |
1704.04932 | Deep Relaxation: partial differential equations for optimizing deep neural networks | In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs). Relaxation techniques arising in statistical physics, which have already been used successfully in this context, are reinterpreted as solutions of a viscous Hamilton-Jacobi PDE. Using a stochastic control interpretation allows us to prove that the modified algorithm performs better in expectation than stochastic gradient descent. Well-known PDE regularity results allow us to analyze the geometry of the relaxed energy landscape, confirming empirical evidence. The PDE is derived from a stochastic homogenization problem, which arises in the implementation of the algorithm. The algorithms scale well in practice and can effectively tackle the high dimensionality of modern neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 71,918 |
2312.00025 | Secure Transformer Inference Protocol | Security of model parameters and user data is critical for Transformer-based services, such as ChatGPT. While recent strides in secure two-party protocols have successfully addressed security concerns in serving Transformer models, their adoption is practically infeasible due to the prohibitive cryptographic overheads involved. Drawing insights from our hands-on experience in developing two real-world Transformer-based services, we identify the inherent efficiency bottleneck in the two-party assumption. To overcome this limitation, we propose a novel three-party threat model. Within this framework, we design a semi-symmetric permutation-based protection scheme and present STIP, the first secure Transformer inference protocol without any inference accuracy loss. Experiments on representative Transformer models in real systems show that STIP has practical security and outperforms state-of-the-art secure two-party protocols in efficiency by millions of times. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 411,855 |
2402.07340 | Random Geometric Graph Alignment with Graph Neural Networks | We characterize the performance of graph neural networks for graph alignment problems in the presence of vertex feature information. More specifically, given two graphs that are independent perturbations of a single random geometric graph with noisy sparse features, the task is to recover an unknown one-to-one mapping between the vertices of the two graphs. We show that, under certain conditions on the sparsity and noise level of the feature vectors, a carefully designed one-layer graph neural network can, with high probability, recover the correct alignment between the vertices with the help of the graph structure. We also prove that our conditions on the noise level are tight up to logarithmic factors. Finally, we compare the performance of the graph neural network to directly solving an assignment problem on the noisy vertex features. We demonstrate that when the noise level is at least constant, this direct matching fails to achieve perfect recovery, while the graph neural network can tolerate a noise level growing as fast as a power of the size of the graph. | false | false | false | true | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 428,665 |
2304.07242 | Covidia: COVID-19 Interdisciplinary Academic Knowledge Graph | The COVID-19 pandemic has inspired extensive work across different research fields. Existing literature and knowledge platforms on COVID-19 focus only on collecting papers in biology and medicine, neglecting interdisciplinary efforts, which hinders knowledge sharing and research collaborations between fields in addressing the problem. Studying interdisciplinary research requires effective paper category classification and efficient cross-domain knowledge extraction and integration. In this work, we propose Covidia, a COVID-19 interdisciplinary academic knowledge graph, to bridge the gap between knowledge of COVID-19 across different domains. We design frameworks based on contrastive learning for disciplinary classification, and propose a new academic knowledge graph scheme for entity extraction, relation classification, and ontology management in accordance with interdisciplinary research. Based on Covidia, we also establish knowledge discovery benchmarks for finding COVID-19 research communities and predicting potential links. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 358,278 |
2106.11176 | Abstract Geometrical Computation 11: Slanted Firing Squad Synchronisation on Signal Machines | Firing Squad Synchronisation on Cellular Automata is the dynamical synchronisation of finitely many cells without any prior knowledge of their range. This can be conceived as a signal with an infinite speed. Most of the proposed constructions naturally translate to the continuous setting of signal machines and generate fractal figures with an accumulation on a horizontal line, i.e. synchronously, in the space-time diagram. Signal machines are studied in a series of articles named Abstract Geometrical Computation. In the present article, we design a signal machine that is able to synchronise/accumulate on any non-infinite slope. The slope is encoded in the initial configuration. This is done by constructing an infinite tree such that each node computes the way the tree expands. The interest of Abstract Geometrical Computation is to do away with the constraint of discrete space, while tackling new difficulties arising from continuous space. The interest of this paper in particular is to provide basic tools for the further study of computable accumulation lines in the signal machine model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 242,303 |
2009.04647 | COVID-19 Pandemic Cyclic Lockdown Optimization Using Reinforcement Learning | This work examines the use of reinforcement learning (RL) to optimize cyclic lockdowns, which are one of the methods available for control of the COVID-19 pandemic. The problem is structured as an optimal control system for tracking a reference value, corresponding to the maximum usage level of a critical resource, such as ICU beds. However, instead of using conventional optimal control methods, RL is used to find optimal control policies. A framework was developed to calculate optimal cyclic lockdown timings using an RL-based on-off controller. The RL-based controller is implemented as an RL agent that interacts with an epidemic simulator, implemented as an extended SEIR epidemic model. The RL agent learns a policy function that produces an optimal sequence of open/lockdown decisions such that the goals specified in the RL reward function are optimized. Two concurrent goals were used: the first is a public health goal that minimizes overshoots of ICU bed usage above an ICU bed threshold, and the second is a socio-economic goal that minimizes the time spent under lockdowns. It is assumed that cyclic lockdowns are considered a temporary alternative to extended lockdowns when a region faces imminent danger of exceeding resource capacity limits and when imposing an extended lockdown would cause severe social and economic consequences due to a lack of the economic resources necessary to support its affected population during an extended lockdown. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 195,110 |
2501.18229 | GPD: Guided Polynomial Diffusion for Motion Planning | Diffusion-based motion planners are becoming popular due to their well-established performance improvements, stemming from sample diversity and the ease of incorporating new constraints directly during inference. However, a primary limitation of the diffusion process is the requirement for a substantial number of denoising steps, especially when the denoising process is coupled with gradient-based guidance. In this paper, we introduce, diffusion in the parametric space of trajectories, where the parameters are represented as Bernstein coefficients. We show that this representation greatly improves the effectiveness of the cost function guidance and the inference speed. We also introduce a novel stitching algorithm that leverages the diversity in diffusion-generated trajectories to produce collision-free trajectories with just a single cost function-guided model. We demonstrate that our approaches outperform current SOTA diffusion-based motion planners for manipulators and provide an ablation study on key components. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 528,624 |
2101.05209 | Image Steganography based on Iteratively Adversarial Samples of A Synchronized-directions Sub-image | Nowadays, steganography has to face challenges from both feature-based steganalysis and convolutional neural network (CNN)-based steganalysis. In this paper, we present a novel steganography scheme denoted as ITE-SYN (based on ITEratively adversarial perturbations onto a SYNchronized-directions sub-image), in which secret data is embedded with synchronized modification directions to enhance security, and iteratively increased perturbations are then added onto a sub-image to reduce the loss with respect to the cover class label of the target CNN classifier. Firstly, an existing steganographic function is employed to compute initial costs. Then the cover image is decomposed into non-overlapping sub-images. After each sub-image is embedded, costs are adjusted following the clustering modification directions profile, and the next sub-image is embedded with the adjusted costs until all secret data has been embedded. If the target CNN classifier does not discriminate the stego image as a cover image, we change the adjusted costs in an adversarial manner according to the signs of gradients back-propagated from the CNN classifier, and a sub-image is chosen to be re-embedded with the changed costs. The adversarial intensity is iteratively increased until the adversarial stego image can fool the target CNN classifier. Experiments demonstrate that the proposed method effectively enhances security against both conventional feature-based classifiers and CNN classifiers, even non-target CNN classifiers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 215,363 |
2108.03601 | Using Biological Variables and Social Determinants to Predict Malaria
and Anemia among Children in Senegal | Integrating machine learning techniques in healthcare is becoming very common nowadays, and it contributes positively to improving clinical care and health decision planning. Anemia and malaria are two life-threatening diseases in Africa that affect the red blood cells and reduce hemoglobin production. This paper focuses on analyzing child health data in Senegal using four machine learning algorithms in Python: KNN, Random Forests, SVM, and Naïve Bayes. Our task aims to investigate large-scale data from The Demographic and Health Survey (DHS) and to uncover hidden information about anemia and malaria. We present two classification models for the two blood disorders using biological variables and social determinants. The findings of this research will contribute to improving child healthcare in Senegal by helping to eradicate anemia and malaria and decrease the child mortality rate. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 249,720
1909.01603 | Multi-DoF Time Domain Passivity Approach Based Drift Compensation for
Telemanipulation | When, in addition to stability, position synchronization is also desired in bilateral teleoperation, Time Domain Passivity Approach (TDPA) alone might not be able to fulfill the desired objective. This is due to an undesired effect caused by admittance type passivity controllers, namely position drift. Previous works focused on developing TDPA-based drift compensation methods to solve this issue. It was shown that, in addition to reducing drift, one of the proposed methods was able to keep the force signals within their normal range, guaranteeing the safety of the task. However, no multi-DoF treatment of those approaches has been addressed. In that scope, this paper focuses on providing an extension of previous TDPA-based approaches to multi-DoF Cartesian-space teleoperation. An analysis of the convergence properties of the presented method is also provided. In addition, its applicability to multi-DoF devices is shown through hardware experiments and numerical simulation with round-trip time delays up to 700 ms. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 143,954 |
2010.13499 | Optimization for Medical Image Segmentation: Theory and Practice when
evaluating with Dice Score or Jaccard Index | In many medical imaging and classical computer vision tasks, the Dice score and Jaccard index are used to evaluate the segmentation performance. Despite the existence and great empirical success of metric-sensitive losses, i.e. relaxations of these metrics such as soft Dice, soft Jaccard and Lovasz-Softmax, many researchers still use per-pixel losses, such as (weighted) cross-entropy to train CNNs for segmentation. Therefore, the target metric is in many cases not directly optimized. We investigate from a theoretical perspective, the relation within the group of metric-sensitive loss functions and question the existence of an optimal weighting scheme for weighted cross-entropy to optimize the Dice score and Jaccard index at test time. We find that the Dice score and Jaccard index approximate each other relatively and absolutely, but we find no such approximation for a weighted Hamming similarity. For the Tversky loss, the approximation gets monotonically worse when deviating from the trivial weight setting where soft Tversky equals soft Dice. We verify these results empirically in an extensive validation on six medical segmentation tasks and can confirm that metric-sensitive losses are superior to cross-entropy based loss functions in case of evaluation with Dice Score or Jaccard Index. This further holds in a multi-class setting, and across different object sizes and foreground/background ratios. These results encourage a wider adoption of metric-sensitive loss functions for medical segmentation tasks where the performance measure of interest is the Dice score or Jaccard index. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 203,154 |
1904.03953 | Feature Learning Viewpoint of AdaBoost and a New Algorithm | The AdaBoost algorithm has the advantage of resisting overfitting. Understanding the mysteries of this phenomenon is a fascinating fundamental theoretical problem. Many studies are devoted to explaining it from a statistical view and from margin theory. In this paper, we illustrate it from the feature learning viewpoint, and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting directly and is easy to understand. First, we adopt the AdaBoost algorithm to learn the base classifiers. Then, instead of directly forming a weighted combination of the base classifiers, we regard them as features and input them to an SVM classifier. With this, new coefficients and a bias can be obtained, which can be used to construct the final classifier. We explain the rationality of this approach and present a theorem showing that when the dimension of these features increases, the performance of the SVM does not get worse, which explains AdaBoost's resistance to overfitting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 126,892
1708.08998 | Deep Structure for end-to-end inverse rendering | Inverse rendering refers to recovering the 3D properties of a scene given 2D input image(s), and is typically done using 3D Morphable Model (3DMM) based methods from single-view images. These models formulate each face as a weighted combination of some basis vectors extracted from the training data. In this paper, a deep framework is proposed in which the coefficients and basis vectors are computed by training an autoencoder network and a Convolutional Neural Network (CNN) simultaneously. The idea is to find a common cause which can be mapped to both the 3D structure and the corresponding 2D image using deep networks. The empirical results verify the power of the deep framework in finding accurate 3D shapes of human faces from their corresponding 2D images on synthetic datasets of human faces. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 79,714
1212.4906 | SMML estimators for 1-dimensional continuous data | A method is given for calculating the strict minimum message length (SMML) estimator for 1-dimensional exponential families with continuous sufficient statistics. A set of $n$ equations are found that the $n$ cut-points of the SMML estimator must satisfy. These equations can be solved using Newton's method and this approach is used to produce new results and to replicate results that C. S. Wallace obtained using his boundary rules for the SMML estimator. A rigorous proof is also given that, despite being composed of step functions, the posterior probability corresponding to the SMML estimator is a continuous function of the data. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,497 |
1906.05741 | Distributed High-dimensional Regression Under a Quantile Loss Function | This paper studies distributed estimation and support recovery for high-dimensional linear regression model with heavy-tailed noise. To deal with heavy-tailed noise whose variance can be infinite, we adopt the quantile regression loss function instead of the commonly used squared loss. However, the non-smooth quantile loss poses new challenges to high-dimensional distributed estimation in both computation and theoretical development. To address the challenge, we transform the response variable and establish a new connection between quantile regression and ordinary linear regression. Then, we provide a distributed estimator that is both computationally and communicationally efficient, where only the gradient information is communicated at each iteration. Theoretically, we show that, after a constant number of iterations, the proposed estimator achieves a near-oracle convergence rate without any restriction on the number of machines. Moreover, we establish the theoretical guarantee for the support recovery. The simulation analysis is provided to demonstrate the effectiveness of our method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 135,105 |
1808.00414 | Variational dynamic interpolation for kinematic systems on trivial
principal bundles | This article presents the dynamic interpolation problem for locomotion systems evolving on a trivial principal bundle $Q$. Given an ordered set of points in $Q$, we wish to generate a trajectory which passes through these points by synthesizing suitable controls. The global product structure of the trivial bundle is used to obtain an induced Riemannian product metric on $Q$. The squared $L^2-$norm of the covariant acceleration is considered as the cost function, and its first order variations are taken for generating the trajectories. The nonholonomic constraint is enforced through the local form of the principal connection and the group symmetry is employed for reduction. The explicit form of the Riemannian connection for the trivial bundle is employed to arrive at the extremal of the cost function. The result is applied to generate a trajectory for the generalized Purcell's swimmer - a low Reynolds number microswimming mechanism. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 104,381 |
2408.13005 | EasyControl: Transfer ControlNet to Video Diffusion for Controllable
Generation and Interpolation | Following the advancements in text-guided image generation technology exemplified by Stable Diffusion, video generation is gaining increased attention in the academic community. However, relying solely on text guidance for video generation has serious limitations, as videos contain much richer content than images, especially in terms of motion. This information can hardly be adequately described with plain text. Fortunately, in computer vision, various visual representations can serve as additional control signals to guide generation. With the help of these signals, video generation can be controlled in finer detail, allowing for greater flexibility for different applications. Integrating various controls, however, is nontrivial. In this paper, we propose a universal framework called EasyControl. By propagating and injecting condition features through condition adapters, our method enables users to control video generation with a single condition map. With our framework, various conditions including raw pixels, depth, HED, etc., can be integrated into different Unet-based pre-trained video diffusion models at a low practical cost. We conduct comprehensive experiments on public datasets, and both quantitative and qualitative results indicate that our method outperforms state-of-the-art methods. EasyControl significantly improves various evaluation metrics across multiple validation datasets compared to previous works. Specifically, for the sketch-to-video generation task, EasyControl achieves an improvement of 152.0 on FVD and 19.9 on IS, respectively, in UCF101 compared with VideoComposer. For fidelity, our model demonstrates powerful image retention ability, resulting in high FVD and IS in UCF101 and MSR-VTT compared to other image-to-video models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,974 |
1507.06833 | Comparison between GFDM and VOFDM | This document provides a comparison of the transmission techniques used in Generalized Frequency Division Multiplexing (GFDM) and Vector-OFDM (VOFDM). Within the document both systems are coarsely described and common and distinct properties are highlighted. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 45,420 |
2409.14607 | Patch Ranking: Efficient CLIP by Learning to Rank Local Patches | Contrastive image-text pre-trained models such as CLIP have shown remarkable adaptability to downstream tasks. However, they face challenges due to the high computational requirements of the Vision Transformer (ViT) backbone. Current strategies to boost ViT efficiency focus on pruning patch tokens but fall short in addressing the multimodal nature of CLIP and identifying the optimal subset of tokens for maximum performance. To address this, we propose greedy search methods to establish a "Golden Ranking" and introduce a lightweight predictor specifically trained to approximate this Ranking. To compensate for any performance degradation resulting from token pruning, we incorporate learnable visual tokens that aid in restoring and potentially enhancing the model's performance. Our work presents a comprehensive and systematic investigation of pruning tokens within the ViT backbone of CLIP models. Through our framework, we successfully reduced 40% of patch tokens in CLIP's ViT while only suffering a minimal average accuracy loss of 0.3 across seven datasets. Our study lays the groundwork for building more computationally efficient multimodal models without sacrificing their performance, addressing a key challenge in the application of advanced vision-language models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 490,548 |
1507.00500 | Non-convex Regularizations for Feature Selection in Ranking With Sparse
SVM | Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have been focused on integrating the feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations such as $\ell_1$ or weighted $\ell_1$ and non-convex regularization terms such as log penalty, Minimax Concave Penalty (MCP) or $\ell_p$ pseudo-norm with $p<1$. Two algorithms are proposed, first an accelerated proximal approach for solving the convex problems, second a reweighted $\ell_1$ scheme to address the non-convex regularizations. We conduct intensive experiments on nine datasets from Letor 3.0 and Letor 4.0 corpora. Numerical results show that the use of non-convex regularizations we propose leads to more sparsity in the resulting models while prediction performance is preserved. The number of features is decreased by up to a factor of six compared to the $\ell_1$ regularization. In addition, the software is publicly available on the web. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 44,764
1305.0218 | Video Segmentation via Diffusion Bases | Identifying moving objects in a video sequence, which is produced by a static camera, is a fundamental and critical task in many computer-vision applications. A common approach performs background subtraction, which identifies moving objects as the portion of a video frame that differs significantly from a background model. A good background subtraction algorithm has to be robust to changes in the illumination and it should avoid detecting non-stationary background objects such as moving leaves, rain, snow, and shadows. In addition, the internal background model should quickly respond to changes in background such as objects that start to move or stop. We present a new algorithm for video segmentation that processes the input video sequence as a 3D matrix where the third axis is the time domain. Our approach identifies the background by reducing the input dimension using the \emph{diffusion bases} methodology. Furthermore, we describe an iterative method for extracting and deleting the background. The algorithm has two versions and thus covers the complete range of backgrounds: one for scenes with static backgrounds and the other for scenes with dynamic (moving) backgrounds. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 24,336 |
2103.13283 | Information-based Disentangled Representation Learning for Unsupervised
MR Harmonization | Accuracy and consistency are two key factors in computer-assisted magnetic resonance (MR) image analysis. However, contrast variation from site to site caused by lack of standardization in MR acquisition impedes consistent measurements. In recent years, image harmonization approaches have been proposed to compensate for contrast variation in MR images. Current harmonization approaches either require cross-site traveling subjects for supervised training or heavily rely on site-specific harmonization models to encourage harmonization accuracy. These requirements potentially limit the application of current harmonization methods in large-scale multi-site studies. In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory. CALAMITI learns a disentangled latent space using a unified structure for multi-site harmonization without the need for traveling subjects. Our model is also able to adapt itself to harmonize MR images from a new site with fine tuning solely on images from the new site. Both qualitative and quantitative results show that the proposed method achieves superior performance compared with other unsupervised harmonization approaches. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 226,445 |
2103.07281 | Empirical Mode Modeling: A data-driven approach to recover and forecast
nonlinear dynamics from noisy data | Data-driven, model-free analytics are natural choices for discovery and forecasting of complex, nonlinear systems. Methods that operate in the system state-space require either an explicit multidimensional state-space, or one approximated from available observations. Since observational data are frequently sampled with noise, it is possible that noise can corrupt the state-space representation, degrading analytical performance. Here, we evaluate the synthesis of empirical mode decomposition with empirical dynamic modeling, which we term empirical mode modeling, to increase the information content of state-space representations in the presence of noise. Evaluation of a mathematical application and an ecologically important geophysical application across three different state-space representations suggests that empirical mode modeling may be a useful technique for data-driven, model-free, state-space analysis in the presence of noise. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 224,552
1805.07078 | Flexible IR-HARQ Scheme for Polar-Coded Modulation | A flexible incremental redundancy hybrid automated repeat request (IR-HARQ) scheme for polar codes is proposed based on dynamically frozen bits and the quasi-uniform puncturing (QUP) algorithm. The length of each transmission is not restricted to a power of two. It is applicable for the binary input additive white Gaussian noise (biAWGN) channel as well as higher-order modulation. Simulation results show that this scheme has similar performance as directly designed polar codes with QUP and outperforms LTE-turbo and 5G-LDPC codes with IR-HARQ. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 97,739
1906.10881 | Automatic Hierarchical Classification of Kelps using Deep Residual
Features | Across the globe, remote image data is rapidly being collected for the assessment of benthic communities from shallow to extremely deep waters on continental slopes to the abyssal seas. Exploiting this data is presently limited by the time it takes for experts to identify organisms found in these images. With this limitation in mind, a large effort has been made globally to introduce automation and machine learning algorithms to accelerate both classification and assessment of marine benthic biota. One major issue lies with organisms that move with swell and currents, like kelps. This paper presents an automatic hierarchical classification method (local binary classification as opposed to the conventional flat classification) to classify kelps in images collected by autonomous underwater vehicles. The proposed kelp classification approach exploits learned feature representations extracted from deep residual networks. We show that these generic features outperform the traditional off-the-shelf CNN features and the conventional hand-crafted features. Experiments also demonstrate that the hierarchical classification method outperforms the traditional parallel multi-class classifications by a significant margin (90.0% vs 57.6% and 77.2% vs 59.0%) on Benthoz15 and Rottnest datasets respectively. Furthermore, we compare different hierarchical classification approaches and experimentally show that the sibling hierarchical training approach outperforms the inclusive hierarchical approach by a significant margin. We also report an application of our proposed method to study the change in kelp cover over time for annually repeated AUV surveys. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 136,537 |
1908.03687 | Color-Coded Fiber-Optic Tactile Sensor for an Elastomeric Robot Skin | The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6~N and the spatial resolution of 8~mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 141,296 |
2001.11062 | Safe Predictors for Enforcing Input-Output Specifications | We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after algorithm training. Our method involves designing a constrained predictor for each set of compatible constraints, and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 161,961 |
2203.16377 | A barrier function approach to constrained Pontryagin-based Nonlinear
Model Predictive Control | A Pontryagin-based approach to solve a class of constrained Nonlinear Model Predictive Control problems is proposed which employs the method of barrier functions for dealing with the state constraints. Unlike the existing works in literature the proposed method is able to cope with nonlinear input and state constraints without any significant modification of the optimization algorithm. A stability analysis of the closed-loop system is carried out by using the L-2 norm of the predicted state tracking error as a Lyapunov function. Theoretical results are tested and confirmed by numerical simulations on the Lotka-Volterra prey/predator system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 288,766 |
1903.08912 | PPGnet: Deep Network for Device Independent Heart Rate Estimation from
Photoplethysmogram | Photoplethysmogram (PPG) is increasingly used to provide monitoring of the cardiovascular system under ambulatory conditions. Wearable devices like smartwatches use PPG to allow long term unobtrusive monitoring of heart rate in free living conditions. PPG based heart rate measurement is unfortunately highly susceptible to motion artifacts, particularly when measured from the wrist. Traditional machine learning and deep learning approaches rely on tri-axial accelerometer data along with PPG to perform heart rate estimation. The conventional learning based approaches have not addressed the need for device-specific modeling due to differences in hardware design among PPG devices. In this paper, we propose a novel end to end deep learning model to perform heart rate estimation using 8 second length input PPG signal. We evaluate the proposed model on the IEEE SPC 2015 dataset, achieving a mean absolute error of 3.36 ± 4.1 BPM for HR estimation on 12 subjects without requiring patient specific training. We also studied the feasibility of applying transfer learning along with sparse retraining from a comprehensive in house PPG dataset for heart rate estimation across PPG devices with different hardware design. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,939
2407.18632 | Robust VAEs via Generating Process of Noise Augmented Data | Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning. Our study focuses on a specific type of generative models - Variational Auto-Encoders (VAEs). Contrary to common beliefs and existing literature which suggest that noise injection towards training data can make models more robust, our preliminary experiments revealed that naive usage of noise augmentation technique did not substantially improve VAE robustness. In fact, it even degraded the quality of learned representations, making VAEs more susceptible to adversarial perturbations. This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data. Through incorporating a paired probabilistic prior into the standard variational lower bound, our method significantly boosts defense against adversarial attacks. Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs on widely-recognized benchmark datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 476,463 |
2006.11486 | Unsupervised Vehicle Re-identification with Progressive Adaptation | Vehicle re-identification (reID) aims at identifying vehicles across different non-overlapping camera views. Existing methods rely heavily on well-labeled datasets for ideal performance, which inevitably causes a severe performance drop due to the domain bias between the training domain and real-world scenes; worse still, these approaches require full annotations, which is labor-consuming. To tackle these challenges, we propose a novel progressive adaptation learning method for vehicle reID, named PAL, which infers from abundant data without annotations. For PAL, a data adaptation module is employed for the source domain, which generates images with a data distribution similar to the unlabeled target domain as ``pseudo target samples''. These pseudo samples are combined with the unlabeled samples that are selected by a dynamic sampling strategy to make training faster. We further propose a weighted label smoothing (WLS) loss, which considers the similarity between samples in different clusters to balance the confidence of pseudo labels. Comprehensive experimental results validate the advantages of PAL on both the VehicleID and VeRi-776 datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 183,256
2306.10270 | Old and New Minimalism: a Hopf algebra comparison | In this paper we compare some old formulations of Minimalism, in particular Stabler's computational minimalism, and Chomsky's new formulation of Merge and Minimalism, from the point of view of their mathematical description in terms of Hopf algebras. We show that the newer formulation has a clear advantage purely in terms of the underlying mathematical structure. More precisely, in the case of Stabler's computational minimalism, External Merge can be described in terms of a partially defined operated algebra with binary operation, while Internal Merge determines a system of right-ideal coideals of the Loday-Ronco Hopf algebra and corresponding right-module coalgebra quotients. This mathematical structure shows that Internal and External Merge have significantly different roles in the old formulations of Minimalism, and they are more difficult to reconcile as facets of a single algebraic operation, as desirable linguistically. On the other hand, we show that the newer formulation of Minimalism naturally carries a Hopf algebra structure where Internal and External Merge directly arise from the same operation. We also compare, at the level of algebraic properties, the externalization model of the new Minimalism with proposals for assignments of planar embeddings based on heads of trees. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 374,160 |
1509.08215 | Adaptive Agent-Based SCADA System | Modern supervisory control and data acquisition (SCADA) systems comprise a variety of industrial equipment such as physical control processes, logical control systems, communication networks, computers, and communication protocols. They are concerned with the control and supervision of production control processes. Modern SCADA networks contain highly distributed information, control, and locations. Moreover, they contain a large number of heterogeneous components situated in highly changing and uncertain environments. As a result, engineering modern SCADA systems is a challenging issue, and conventional engineering approaches are no longer suitable for them because of their increasing complexity and high distribution. In this research, Multi-Agent Systems (MAS) are used to enable building an adaptive agent-based SCADA system by modeling system components as agents at the micro level and as organizations or societies of agents at the macro level. A prototype has been implemented and evaluated within a simulation environment to demonstrate the adaptive behavior of the system-to-be, which results in continuous improvement of system performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 47,342
2108.12151 | A Matching Algorithm based on Image Attribute Transfer and Local
Features for Underwater Acoustic and Optical Images | In the field of underwater vision research, image matching between sonar sensors and optical cameras has always been a challenging problem. Because of the difference in the imaging mechanisms between them, the gray values, texture, and contrast of acoustic and optical images also vary at local locations, which renders traditional matching methods based on optical images invalid. Coupled with the difficulty and high cost of underwater data acquisition, this further hinders research on acousto-optic data fusion technology. In order to maximize the use of underwater sensor data and promote the development of multi-sensor information fusion (MSIF), this study applies an image attribute transfer method based on a deep learning approach to solve the problem of acousto-optic image matching, the core of which is to eliminate the imaging differences between them as much as possible. At the same time, an advanced local feature descriptor is introduced to solve the challenging acousto-optic matching problem. Experimental results show that our proposed method can preprocess acousto-optic images effectively and obtain accurate matching results. Additionally, the method is based on the combination of image depth semantic layers, and it can indirectly display the local feature matching relationship between the original image pair, which provides a new solution to the underwater multi-sensor image matching problem. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 252,409
1608.02904 | TweeTime: A Minimally Supervised Method for Recognizing and Normalizing
Time Expressions in Twitter | We describe TweeTIME, a temporal tagger for recognizing and normalizing time expressions in Twitter. Most previous work in social media analysis has to rely on temporal resolvers that are designed for well-edited text, and therefore suffers from reduced performance due to domain mismatch. We present a minimally supervised method that learns from large quantities of unlabeled data and requires no hand-engineered rules or hand-annotated training corpora. TweeTIME achieves a 0.68 F1 score on the end-to-end task of resolving date expressions, outperforming a broad range of state-of-the-art systems. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 59,615
2206.14362 | Lower Bounds on the Error Probability for Invariant Causal Prediction | It is common practice to collect observations of feature and response pairs from different environments. A natural question is how to identify features that have consistent prediction power across environments. The invariant causal prediction framework proposes to approach this problem through invariance, assuming a linear model that is invariant under different environments. In this work, we make an attempt to shed light on this framework by connecting it to the Gaussian multiple access channel problem. Specifically, we incorporate optimal code constructions and decoding methods to provide lower bounds on the error probability. We illustrate our findings by various simulation settings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 305,262 |
2404.01503 | Some Orders Are Important: Partially Preserving Orders in Top-Quality
Planning | The ability to generate multiple plans is central to using planning in real-life applications. Top-quality planners generate sets of such top-cost plans, allowing flexibility in determining equivalent ones. In terms of the order between actions in a plan, the literature only considers two extremes -- either all orders are important, making each plan unique, or all orders are unimportant, treating two plans differing only in the order of actions as equivalent. To allow flexibility in selecting important orders, we propose specifying a subset of actions the orders between which are important, interpolating between the top-quality and unordered top-quality planning problems. We explore the ways of adapting partial order reduction search pruning techniques to address this new computational problem and present experimental evaluations demonstrating the benefits of exploiting such techniques in this setting. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 443,445 |
2303.03986 | Multiplexed gradient descent: Fast online training of modern datasets on
hardware neural networks without backpropagation | We present multiplexed gradient descent (MGD), a gradient descent framework designed to easily train analog or digital neural networks in hardware. MGD utilizes zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster than the wall-clock time of training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how it can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 349,921 |
1110.3649 | Algorithms to automatically quantify the geometric similarity of
anatomical surfaces | We describe new approaches for distances between pairs of 2-dimensional surfaces (embedded in 3-dimensional space) that use local structures and global information contained in inter-structure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This renders these studies inaccessible to non-morphologists, and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences our approach does not require any preliminary marking of special features or landmarks by the user. It also differs from other seminal work in computational geometry in that our algorithms are polynomial in nature and thus faster, making pairwise comparisons feasible for significantly larger numbers of digitized surfaces. We illustrate our approach using three datasets representing teeth and different bones of primates and humans, and show that it leads to highly accurate results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 12,685 |
1107.4429 | High Accuracy Human Activity Monitoring using Neural network | This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer sensor, housed in a chest-worn sensor unit, has been used for capturing the acceleration of the movements associated. All three axes of acceleration data were collected at a base station PC via a CC2420 2.4GHz ISM band radio (zigbee wireless compliant), then processed and classified using MATLAB. A neural network approach for classification was used with an eye on theoretical and empirical facts. The work shows a detailed description of the design steps for the classification of human body acceleration data. A 4-layer back propagation neural network, with the Levenberg-Marquardt algorithm for training, showed the best performance among the neural network training algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 11,399
2209.02595 | A neuromorphic approach to image processing and machine vision | Neuromorphic engineering is essentially the development of artificial systems, such as electronic analog circuits, that employ information representations found in biological nervous systems. Despite being faster and more accurate than the human brain, computers lag behind in recognition capability. However, it is envisioned that the advancement in neuromorphics, pertaining to the fields of computer vision and image processing, will provide a considerable improvement in the way computers can interpret and analyze information. In this paper, we explore the implementation of visual tasks such as image segmentation, visual attention and object recognition. Moreover, the concept of anisotropic diffusion has been examined, followed by a novel approach employing memristors to execute image segmentation. Additionally, we have discussed the role of neuromorphic vision sensors in artificial visual systems and the protocol involved in order to enable asynchronous transmission of signals. Moreover, two widely accepted algorithms that are used to emulate the process of object recognition and visual attention have also been discussed. Throughout the span of this paper, we have emphasized the employment of non-volatile memory devices such as memristors to realize artificial visual systems. Finally, we discuss hardware accelerators and wish to represent a case in point for arguing that progress in computer vision may benefit directly from progress in non-volatile memory technology. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | false | false | 316,256
2108.03004 | MmWave Radar and Vision Fusion for Object Detection in Autonomous
Driving: A Review | With autonomous driving in a booming stage of development, accurate object detection in complex scenarios attracts wide attention to ensure the safety of autonomous driving. Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection. This article presents a detailed survey on mmWave radar and vision fusion based obstacle detection methods. First, we introduce the tasks, evaluation criteria, and datasets of object detection for autonomous driving. The process of mmWave radar and vision fusion is then divided into three parts: sensor deployment, sensor calibration, and sensor fusion, which are reviewed comprehensively. Specifically, we classify the fusion methods into data level, decision level, and feature level fusion methods. In addition, we introduce three-dimensional (3D) object detection, the fusion of lidar and vision in autonomous driving, and multimodal information fusion, which are promising for the future. Finally, we summarize this article. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,526
2312.07423 | Holoported Characters: Real-time Free-viewpoint Rendering of Humans from
Sparse RGB Cameras | We present the first approach to render highly realistic free-viewpoint videos of a human actor in general apparel, from sparse multi-view recording to display, in real-time at an unprecedented 4K resolution. At inference, our method only requires four camera views of the moving actor and the respective 3D skeletal pose. It handles actors in wide clothing, and reproduces even fine-scale dynamic detail, e.g. clothing wrinkles, face expressions, and hand gestures. At training time, our learning-based approach expects dense multi-view video and a rigged static surface scan of the actor. Our method comprises three main stages. Stage 1 is a skeleton-driven neural approach for high-quality capture of the detailed dynamic mesh geometry. Stage 2 is a novel solution to create a view-dependent texture using four test-time camera views as input. Finally, stage 3 comprises a new image-based refinement network rendering the final 4K image given the output from the previous stages. Our approach establishes a new benchmark for real-time rendering resolution and quality using sparse input camera views, unlocking possibilities for immersive telepresence. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 414,919 |
2412.15983 | Never Reset Again: A Mathematical Framework for Continual Inference in
Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are widely used for sequential processing but face fundamental limitations with continual inference due to state saturation, requiring disruptive hidden state resets. However, reset-based methods impose synchronization requirements with input boundaries and increase computational costs at inference. To address this, we propose an adaptive loss function that eliminates the need for resets during inference while preserving high accuracy over extended sequences. By combining cross-entropy and Kullback-Leibler divergence, the loss dynamically modulates the gradient based on input informativeness, allowing the network to differentiate meaningful data from noise and maintain stable representations over time. Experimental results demonstrate that our reset-free approach outperforms traditional reset-based methods when applied to a variety of RNNs, particularly in continual tasks, enhancing both the theoretical and practical capabilities of RNNs for streaming applications. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 519,328 |
2404.04113 | BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal
and Masked Language Models | Knowledge probing assesses to which degree a language model (LM) has successfully learned relational knowledge during pre-training. Probing is an inexpensive way to compare LMs of different sizes and training configurations. However, previous approaches rely on the objective function used in pre-training LMs and are thus applicable only to masked or causal LMs. As a result, comparing different types of LMs becomes impossible. To address this, we propose an approach that uses an LM's inherent ability to estimate the log-likelihood of any given textual statement. We carefully design an evaluation dataset of 7,731 instances (40,916 in a larger variant) from which we produce alternative statements for each relational fact, one of which is correct. We then evaluate whether an LM correctly assigns the highest log-likelihood to the correct statement. Our experimental evaluation of 22 common LMs shows that our proposed framework, BEAR, can effectively probe for knowledge across different LM types. We release the BEAR datasets and an open-source framework that implements the probing approach to the research community to facilitate the evaluation and development of LMs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 444,518 |
2303.12074 | CC3D: Layout-Conditioned Generation of Compositional 3D Scenes | In this work, we introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained using single-view images. Different from most existing 3D GANs that limit their applicability to aligned single objects, we focus on generating complex scenes with multiple objects, by modeling the compositional nature of 3D scenes. By devising a 2D layout-based approach for 3D synthesis and implementing a new 3D field representation with a stronger geometric inductive bias, we have created a 3D GAN that is both efficient and of high quality, while allowing for a more controllable generation process. Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality in comparison to previous works. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,126 |
2003.07596 | Construe: a software solution for the explanation-based interpretation
of time series | This paper presents a software implementation of a general framework for time series interpretation based on abductive reasoning. The software provides a data model and a set of algorithms to make inference to the best explanation of a time series, resulting in a description in multiple abstraction levels of the processes underlying the time series. As a proof of concept, a comprehensive knowledge base for the electrocardiogram (ECG) domain is provided, so it can be used directly as a tool for ECG analysis. This tool has been successfully validated in several noteworthy problems, such as heartbeat classification or atrial fibrillation detection. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 168,482 |
2308.05123 | Towards Automatic Scoring of Spinal X-ray for Ankylosing Spondylitis | Manually grading structural changes with the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS) on spinal X-ray imaging is costly and time-consuming due to bone shape complexity and image quality variations. In this study, we address this challenge by prototyping a 2-step auto-grading pipeline, called VertXGradeNet, to automatically predict mSASSS scores for the cervical and lumbar vertebral units (VUs) in X-ray spinal imaging. The VertXGradeNet utilizes VUs generated by our previously developed VU extraction pipeline (VertXNet) as input and predicts mSASSS based on those VUs. VertXGradeNet was evaluated on an in-house dataset of lateral cervical and lumbar X-ray images for axial spondylarthritis patients. Our results show that VertXGradeNet can predict the mSASSS score for each VU when the data is limited in quantity and imbalanced. Overall, it can achieve a balanced accuracy of 0.56 and 0.51 for 4 different mSASSS scores (i.e., a score of 0, 1, 2, 3) on two test datasets. The accuracy of the presented method shows the potential to streamline the spinal radiograph readings and therefore reduce the cost of future clinical trials. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 384,691 |
2407.19164 | Addressing Topic Leakage in Cross-Topic Evaluation for Authorship
Verification | Authorship verification (AV) aims to identify whether a pair of texts has the same author. We address the challenge of evaluating AV models' robustness against topic shifts. The conventional evaluation assumes minimal topic overlap between training and test data. However, we argue that there can still be topic leakage in test data, causing misleading model performance and unstable rankings. To address this, we propose an evaluation method called Heterogeneity-Informed Topic Sampling (HITS), which creates a smaller dataset with a heterogeneously distributed topic set. Our experimental results demonstrate that HITS-sampled datasets yield a more stable ranking of models across random seeds and evaluation splits. Our contributions include: 1. An analysis of the causes and effects of topic leakage. 2. A demonstration of HITS in reducing the effects of topic leakage, and 3. The Robust Authorship Verification bENchmark (RAVEN) that allows a topic shortcut test to uncover AV models' reliance on topic-specific features. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 476,669
2410.18270 | Multilingual Hallucination Gaps in Large Language Models | Large language models (LLMs) are increasingly used as alternatives to traditional search engines given their capacity to generate text that resembles human language. However, this shift is concerning, as LLMs often generate hallucinations, misleading or false information that appears highly credible. In this study, we explore the phenomenon of hallucinations across multiple languages in freeform text generation, focusing on what we call multilingual hallucination gaps. These gaps reflect differences in the frequency of hallucinated answers depending on the prompt and language used. To quantify such hallucinations, we used the FactScore metric and extended its framework to a multilingual setting. We conducted experiments using LLMs from the LLaMA, Qwen, and Aya families, generating biographies in 19 languages and comparing the results to Wikipedia pages. Our results reveal variations in hallucination rates, especially between high and low resource languages, raising important questions about LLM multilingual performance and the challenges in evaluating hallucinations in multilingual freeform text generation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 501,817 |
2311.01813 | FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain
Text-to-Video Generation | Recently, open-domain text-to-video (T2V) generation models have made remarkable progress. However, the promising results are mainly shown by the qualitative cases of generated videos, while the quantitative evaluation of T2V models still faces two critical problems. Firstly, existing studies lack fine-grained evaluation of T2V models on different categories of text prompts. Although some benchmarks have categorized the prompts, their categorization either only focuses on a single aspect or fails to consider the temporal information in video generation. Secondly, it is unclear whether the automatic evaluation metrics are consistent with human standards. To address these problems, we propose FETV, a benchmark for Fine-grained Evaluation of Text-to-Video generation. FETV is multi-aspect, categorizing the prompts based on three orthogonal aspects: the major content, the attributes to control and the prompt complexity. FETV is also temporal-aware, which introduces several temporal categories tailored for video generation. Based on FETV, we conduct comprehensive manual evaluations of four representative T2V models, revealing their pros and cons on different categories of prompts from different aspects. We also extend FETV as a testbed to evaluate the reliability of automatic T2V metrics. The multi-aspect categorization of FETV enables fine-grained analysis of the metrics' reliability in different scenarios. We find that existing automatic metrics (e.g., CLIPScore and FVD) correlate poorly with human evaluation. To address this problem, we explore several solutions to improve CLIPScore and FVD, and develop two automatic metrics that exhibit significantly higher correlation with humans than existing metrics. Benchmark page: https://github.com/llyx97/FETV. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 405,187
2201.09457 | Homotopic Policy Mirror Descent: Policy Convergence, Implicit
Regularization, and Improved Sample Complexity | We propose a new policy gradient method, named homotopic policy mirror descent (HPMD), for solving discounted, infinite horizon MDPs with finite state and action spaces. HPMD performs a mirror descent type policy update with an additional diminishing regularization term, and possesses several computational properties that seem to be new in the literature. We first establish the global linear convergence of HPMD instantiated with Kullback-Leibler divergence, for both the optimality gap, and a weighted distance to the set of optimal policies. Then local superlinear convergence is obtained for both quantities without any assumption. With local acceleration and diminishing regularization, we establish the first result among policy gradient methods on certifying and characterizing the limiting policy, by showing, with a non-asymptotic characterization, that the last-iterate policy converges to the unique optimal policy with the maximal entropy. We then extend all the aforementioned results to HPMD instantiated with a broad class of decomposable Bregman divergences, demonstrating the generality of these computational properties. As a by-product, we discover the finite-time exact convergence for some commonly used Bregman divergences, implying the continuing convergence of HPMD to the limiting policy even if the current policy is already optimal. Finally, we develop a stochastic version of HPMD and establish similar convergence properties. By exploiting the local acceleration, we show that for small optimality gap, a better than $\tilde{\mathcal{O}}(\left|\mathcal{S}\right| \left|\mathcal{A}\right| / \epsilon^2)$ sample complexity holds with high probability, when assuming a generative model for policy evaluation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 276,682
1904.09374 | Two-Timescale Voltage Control in Distribution Grids Using Deep
Reinforcement Learning | Modern distribution grids are currently being challenged by frequent and sizable voltage fluctuations, due mainly to the increasing deployment of electric vehicles and renewable generators. Existing approaches to maintaining bus voltage magnitudes within the desired region can cope with either traditional utility-owned devices (e.g., shunt capacitors), or contemporary smart inverters that come with distributed generation units (e.g., photovoltaic plants). The discrete on-off commitment of capacitor units is often configured on an hourly or daily basis, yet smart inverters can be controlled within milliseconds, thus challenging joint control of these two types of assets. In this context, a novel two-timescale voltage regulation scheme is developed for distribution grids by judiciously coupling data-driven with physics-based optimization. On a faster timescale, say every second, the optimal setpoints of smart inverters are obtained by minimizing instantaneous bus voltage deviations from their nominal values, based on either the exact alternating current power flow model or a linear approximant of it; whereas, on the slower timescale (e.g., every hour), shunt capacitors are configured to minimize the long-term discounted voltage deviations using a deep reinforcement learning algorithm. Extensive numerical tests on a real-world 47-bus distribution network as well as the IEEE 123-bus test feeder using real data corroborate the effectiveness of the novel scheme. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 128,357
2110.14013 | Deep Integrated Pipeline of Segmentation Guided Classification of Breast
Cancer from Ultrasound Images | Breast cancer has become a symbol of tremendous concern in the modern world, as it is one of the major causes of cancer mortality worldwide. In this regard, breast ultrasonography images are frequently utilized by doctors to diagnose breast cancer at an early stage. However, the complex artifacts and heavily noised breast ultrasonography images make diagnosis a great challenge. Furthermore, the ever-increasing number of patients being screened for breast cancer necessitates the use of automated end-to-end technology for highly accurate diagnosis at a low cost and in a short time. In this concern, to develop an end-to-end integrated pipeline for breast ultrasonography image classification, we conducted an exhaustive analysis of image preprocessing methods such as K-Means++ and SLIC, as well as four transfer learning models such as VGG16, VGG19, DenseNet121, and ResNet50. With a Dice-coefficient score of 63.4 in the segmentation stage, and an accuracy and F1-Score (Benign) of 73.72 percent and 78.92 percent, respectively, in the classification stage, the combination of SLIC, UNET, and VGG16 outperformed all other integrated combinations. Finally, we have proposed an end-to-end integrated automated pipelining framework which includes preprocessing with SLIC to capture super-pixel features from the complex artifacts of ultrasonography images, complementing semantic segmentation with modified U-Net, leading to breast tumor classification using a transfer learning approach with a pre-trained VGG16 and a densely connected neural network. The proposed automated pipeline can be effectively implemented to assist medical practitioners in making more accurate and timely diagnoses of breast cancer. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 263,380
2004.06091 | Selective Encoding Policies for Maximizing Information Freshness | An information source generates independent and identically distributed status update messages from an observed random phenomenon which takes $n$ distinct values based on a given pmf. These update packets are encoded at the transmitter node to be sent to a receiver node which wants to track the observed random variable with as little age as possible. The transmitter node implements a selective $k$ encoding policy such that rather than encoding all possible $n$ realizations, the transmitter node encodes the most probable $k$ realizations. We consider three different policies regarding the remaining $n-k$ less probable realizations: highest $k$ selective encoding, which disregards whenever a realization from the remaining $n-k$ values occurs; randomized selective encoding, which encodes and sends the remaining $n-k$ realizations with a certain probability to further inform the receiver node at the expense of longer codewords for the selected $k$ realizations; and highest $k$ selective encoding with an empty symbol, which sends a designated empty symbol when one of the remaining $n-k$ realizations occurs. For all of these three encoding schemes, we find the average age and determine the age-optimal real codeword lengths, including the codeword length for the empty symbol in the case of the latter scheme, such that the average age at the receiver node is minimized. Through numerical evaluations for arbitrary pmfs, we show that these selective encoding policies result in a lower average age than encoding every realization, and find the corresponding age-optimal $k$ values. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 172,413
1904.05216 | Dungeons for Science: Mapping Belief Places and Spaces | Tabletop fantasy role-playing games (TFRPGs) have existed in offline and online contexts for many decades, yet are rarely featured in scientific literature. This paper presents a case study where TFRPGs were used to generate and collect data for maps of belief environments using fiction co-created by multiple small groups of online tabletop gamers. The affordances of TFRPGs allowed us to collect repeatable, targeted data in online field conditions. These data not only included terms that allowed us to build our maps, but also to explore nuanced ethical problems from a situated, collaborative perspective. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 127,240 |
2501.18592 | Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models | In real-world scenarios, achieving domain adaptation and generalization poses significant challenges, as models must adapt to or generalize across unknown target distributions. Extending these capabilities to unseen multimodal distributions, i.e., multimodal domain adaptation and generalization, is even more challenging due to the distinct characteristics of different modalities. Significant progress has been made over the years, with applications ranging from action recognition to semantic segmentation. Besides, the recent advent of large-scale pre-trained multimodal foundation models, such as CLIP, has inspired works leveraging these models to enhance adaptation and generalization performances or adapting them to downstream tasks. This survey provides the first comprehensive review of recent advances from traditional approaches to foundation models, covering: (1) Multimodal domain adaptation; (2) Multimodal test-time adaptation; (3) Multimodal domain generalization; (4) Domain adaptation and generalization with the help of multimodal foundation models; and (5) Adaptation of multimodal foundation models. For each topic, we formally define the problem and thoroughly review existing methods. Additionally, we analyze relevant datasets and applications, highlighting open challenges and potential future research directions. We maintain an active repository that contains up-to-date literature at https://github.com/donghao51/Awesome-Multimodal-Adaptation. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 528,767
2008.06957 | Improving Services Offered by Internet Providers by Analyzing Online Reviews using Text Analytics | With the proliferation of digital infrastructure, there is a plethora of demand for internet services, which makes the wireless communications industry highly competitive. Thus internet service providers (ISPs) must ensure that their efforts are targeted towards attracting and retaining customers to ensure continued growth. As Web 2.0 has gained traction and more tools have become available, customers in recent times are equipped to make well-informed decisions, specifically due to the colossal information available in online reviews. ISPs can use this information to better understand the views of the customers about their products and services. The goal of this paper is to identify the current strengths, weaknesses, opportunities, and threats (SWOT) of each ISP by exploring consumer reviews using text analytics. The proposed approach consists of four different stages: bigram and trigram analyses, topic identification, SWOT analysis and Root Cause Analysis (RCA). For each ISP, we first categorize online reviews into positive and negative based on customer ratings and then leverage text analytic tools to determine the most frequently used and co-occurring words in each categorization of reviews. Subsequently, looking at the positive and negative topics in each ISP, we conduct the SWOT analysis as well as the RCA to help companies identify the internal and external factors impacting customer satisfaction. We use a case study to illustrate the proposed approach. The proposed managerial insights that are derived from the results can act as a decision support tool for ISPs to offer better products and services for their customers. | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | 191,945
2403.17011 | SUDO: a framework for evaluating clinical artificial intelligence systems without ground-truth annotations | A clinical artificial intelligence (AI) system is often validated on a held-out set of data which it has not been exposed to before (e.g., data from a different hospital with a distinct electronic health record system). This evaluation process is meant to mimic the deployment of an AI system on data in the wild; those which are currently unseen by the system yet are expected to be encountered in a clinical setting. However, when data in the wild differ from the held-out set of data, a phenomenon referred to as distribution shift, and lack ground-truth annotations, it becomes unclear the extent to which AI-based findings can be trusted on data in the wild. Here, we introduce SUDO, a framework for evaluating AI systems without ground-truth annotations. SUDO assigns temporary labels to data points in the wild and directly uses them to train distinct models, with the highest performing model indicative of the most likely label. Through experiments with AI systems developed for dermatology images, histopathology patches, and clinical reports, we show that SUDO can be a reliable proxy for model performance and thus identify unreliable predictions. We also demonstrate that SUDO informs the selection of models and allows for the previously out-of-reach assessment of algorithmic bias for data in the wild without ground-truth annotations. The ability to triage unreliable predictions for further inspection and assess the algorithmic bias of AI systems can improve the integrity of research findings and contribute to the deployment of ethical AI systems in medicine. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 441,289
2403.11211 | RCdpia: A Renal Carcinoma Digital Pathology Image Annotation dataset based on pathologists | The annotation of digital pathological slide data for renal cell carcinoma is of paramount importance for correct diagnosis of artificial intelligence models due to the heterogeneous nature of the tumor. This process not only facilitates a deeper understanding of renal cell cancer heterogeneity but also aims to minimize noise in the data for more accurate studies. To enhance the applicability of the data, two pathologists were enlisted to meticulously curate, screen, and label a kidney cancer pathology image dataset from The Cancer Genome Atlas Program (TCGA) database. Subsequently, a Resnet model was developed to validate the annotated dataset against an additional dataset from the First Affiliated Hospital of Zhejiang University. Based on these results, we have meticulously compiled the TCGA digital pathological dataset with independent labeling of tumor regions and adjacent areas (RCdpia), which includes 109 cases of kidney chromophobe cell carcinoma, 486 cases of kidney clear cell carcinoma, and 292 cases of kidney papillary cell carcinoma. This dataset is now publicly accessible at http://39.171.241.18:8888/RCdpia/. Furthermore, model analysis has revealed significant discrepancies in predictive outcomes when applying the same model to datasets from different centers. Leveraging the RCdpia, we can now develop more precise digital pathology artificial intelligence models for tasks such as normalization, classification, and segmentation. These advancements underscore the potential for more nuanced and accurate AI applications in the field of digital pathology. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,596
2204.04980 | A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition | Pre-trained language models (PLM) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data. However, their performance in low-resource scenarios, where such data is not available, remains an open question. We introduce an encoder evaluation framework, and use it to systematically compare the performance of state-of-the-art pre-trained representations on the task of low-resource NER. We analyze a wide range of encoders pre-trained with different strategies, model architectures, intermediate-task fine-tuning, and contrastive learning. Our experimental results across ten benchmark NER datasets in English and German show that encoder performance varies significantly, suggesting that the choice of encoder for a specific low-resource scenario needs to be carefully evaluated. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 290,869
2002.07246 | Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness | Recently smoothing deep neural network based classifiers via isotropic Gaussian perturbation is shown to be an effective and scalable way to provide state-of-the-art probabilistic robustness guarantee against $\ell_2$ norm bounded adversarial perturbations. However, how to train a good base classifier that is accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both standard (non-adversarial) and adversarial training scheme. At the same time, we also design a new certification algorithm, which can leverage the regularization effect to provide tighter robustness lower bound that holds with high probability. Our extensive experimentation demonstrates the effectiveness of the proposed training and certification approaches on CIFAR-10 and ImageNet datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 164,411
1801.08099 | Logically-Constrained Reinforcement Learning | We present the first model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP), such that a linear time property is satisfied. The given temporal property is converted into a Limit Deterministic Buchi Automaton (LDBA) and a robust reward function is defined over the state-action pairs of the MDP according to the resulting LDBA. With this reward function, the policy synthesis procedure is "constrained" by the given specification. These constraints guide the MDP exploration so as to minimize the solution time by only considering the portion of the MDP that is relevant to satisfaction of the LTL property. This improves performance and scalability of the proposed method by avoiding an exhaustive update over the whole state space while the efficiency of standard methods such as dynamic programming is hindered by excessive memory requirements, caused by the need to store a full-model in memory. Additionally, we show that the RL procedure sets up a local value iteration method to efficiently calculate the maximum probability of satisfying the given property, at any given state of the MDP. We prove that our algorithm is guaranteed to find a policy whose traces probabilistically satisfy the LTL property if such a policy exists, and additionally we show that our method produces reasonable control policies even when the LTL property cannot be satisfied. The performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 88,898 |
2210.11475 | On the economic viability of solar energy when upgrading cellular networks | The massive increase of data traffic, the widespread proliferation of wireless applications and the full-scale deployment of 5G and the IoT, imply a steep increase in cellular networks energy use, resulting in a significant carbon footprint. This paper presents a comprehensive model to show the interaction between the networking and energy features of the problem and study the economical and technical viability of green networking. Solar equipment, cell zooming, energy management and dynamic user allocation are considered in the upgrading network planning process. We propose a mixed-integer optimization model to minimize long-term capital costs and operational energy expenditures in a heterogeneous on-grid cellular network with different types of base station, including solar. Based on eight scenarios where realistic costs of solar panels, batteries, and inverters were considered, we first found that solar base stations are currently not economically interesting for cellular operators. We next studied the impact of a significant and progressive carbon tax on reducing greenhouse gas emissions (GHG). We found that, at current energy and equipment prices, a carbon tax ten-fold the current value is the only element that could make green base stations economically viable. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 325,328
2311.05006 | Familiarity-Based Open-Set Recognition Under Adversarial Attacks | Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 406,433 |
2102.08818 | SciDr at SDU-2020: IDEAS -- Identifying and Disambiguating Everyday Acronyms for Scientific Domain | We present our systems submitted for the shared tasks of Acronym Identification (AI) and Acronym Disambiguation (AD) held under Workshop on SDU. We mainly experiment with BERT and SciBERT. In addition, we assess the effectiveness of "BIOless" tagging and blending along with the prowess of ensembling in AI. For AD, we formulate the problem as a span prediction task, experiment with different training techniques and also leverage the use of external data. Our systems rank 11th and 3rd in AI and AD tasks respectively. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 220,581
2312.14525 | An Approach to Reduce Computational Load: Precalculating Gain Matrices for an LQR Controller of a Four-Axis Manipulator Using State Space Kinematics | When designing a power or CPU constrained device where a four-axis robotic arm is required and access to the Robot Operating System (ROS) is not an option, finding an efficient state space controller for a four-axis arm can be an obstacle. In this paper, I explore a method to optimize the computing power required for a computer algebra system (CAS) to compute linear quadratic regulator (LQR) matrices by precomputing the gain matrix for different states. Example C++ code is provided on Github, along with ideas for further exploration. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 417,669
1911.10119 | GANkyoku: a Generative Adversarial Network for Shakuhachi Music | A common approach to generating symbolic music using neural networks involves repeated sampling of an autoregressive model until the full output sequence is obtained. While such approaches have shown some promise in generating short sequences of music, this typically has not extended to cases where the final target sequence is significantly longer, for example an entire piece of music. In this work we propose a network trained in an adversarial process to generate entire pieces of solo shakuhachi music, in the form of symbolic notation. The pieces are intended to refer clearly to traditional shakuhachi music, maintaining idiomaticity and key aesthetic qualities, while also adding novel features, ultimately creating worthy additions to the contemporary shakuhachi repertoire. A key subproblem is also addressed, namely the lack of relevant training data readily available, in two steps: firstly, we introduce the PH_Shaku dataset for symbolic traditional shakuhachi music; secondly, we build on previous work using conditioning in generative adversarial networks to introduce a technique for data augmentation. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 154,736 |
1912.01553 | Learning Spatially Structured Image Transformations Using Planar Neural Networks | Learning image transformations is essential to the idea of mental simulation as a method of cognitive inference. We take a connectionist modeling approach, using planar neural networks to learn fundamental imagery transformations, like translation, rotation, and scaling, from perceptual experiences in the form of image sequences. We investigate how variations in network topology, training data, and image shape, among other factors, affect the efficiency and effectiveness of learning visual imagery transformations, including effectiveness of transfer to operating on new types of data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 156,118
2306.02679 | Joint Pre-training and Local Re-training: Transferable Representation Learning on Multi-source Knowledge Graphs | In this paper, we present the ``joint pre-training and local re-training'' framework for learning and applying multi-source knowledge graph (KG) embeddings. We are motivated by the fact that different KGs contain complementary information to improve KG embeddings and downstream tasks. We pre-train a large teacher KG embedding model over linked multi-source KGs and distill knowledge to train a student model for a task-specific KG. To enable knowledge transfer across different KGs, we use entity alignment to build a linked subgraph for connecting the pre-trained KGs and the target KG. The linked subgraph is re-trained for three-level knowledge distillation from the teacher to the student, i.e., feature knowledge distillation, network knowledge distillation, and prediction knowledge distillation, to generate more expressive embeddings. The teacher model can be reused for different target KGs and tasks without having to train from scratch. We conduct extensive experiments to demonstrate the effectiveness and efficiency of our framework. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 371,007
2412.04649 | Generating Whole-Body Avoidance Motion through Localized Proximity Sensing | This paper presents a novel control algorithm for robotic manipulators in unstructured environments using proximity sensors partially distributed on the platform. The proposed approach exploits arrays of multi zone Time-of-Flight (ToF) sensors to generate a sparse point cloud representation of the robot surroundings. By employing computational geometry techniques, we fuse the knowledge of robot geometric model with ToFs sensory feedback to generate whole-body motion tasks, allowing to move both sensorized and non-sensorized links in response to unpredictable events such as human motion. In particular, the proposed algorithm computes the pair of closest points between the environment cloud and the robot links, generating a dynamic avoidance motion that is implemented as the highest priority task in a two-level hierarchical architecture. Such a design choice allows the robot to work safely alongside humans even without a complete sensorization over the whole surface. Experimental validation demonstrates the algorithm effectiveness both in static and dynamic scenarios, achieving comparable performances with respect to well established control techniques that aim to move the sensors mounting positions on the robot body. The presented algorithm exploits any arbitrary point on the robot surface to perform avoidance motion, showing improvements in the distance margin up to 100 mm, due to the rendering of virtual avoidance tasks on non-sensorized links. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 514,497
2103.07248 | Knowledge- and Data-driven Services for Energy Systems using Graph Neural Networks | The transition away from carbon-based energy sources poses several challenges for the operation of electricity distribution systems. Increasing shares of distributed energy resources (e.g. renewable energy generators, electric vehicles) and internet-connected sensing and control devices (e.g. smart heating and cooling) require new tools to support accurate, data-driven decision making. Modelling the effect of such growing complexity in the electrical grid is possible in principle using state-of-the-art power-flow models. In practice, the detailed information needed for these physical simulations may be unknown or prohibitively expensive to obtain. Hence, data-driven approaches to power systems modelling, including feedforward neural networks and auto-encoders, have been studied to leverage the increasing availability of sensor data, but have seen limited practical adoption due to lack of transparency and inefficiencies on large-scale problems. Our work addresses this gap by proposing a data- and knowledge-driven probabilistic graphical model for energy systems based on the framework of graph neural networks (GNNs). The model can explicitly factor in domain knowledge, in the form of grid topology or physics constraints, thus resulting in sparser architectures and much smaller parameter dimensionality when compared with traditional machine-learning models with similar accuracy. Results obtained from a real-world smart-grid demonstration project show how the GNN was used to inform grid congestion predictions and market bidding services for a distribution system operator participating in an energy flexibility market. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 224,540
2308.10120 | Deep Generative Modeling-based Data Augmentation with Demonstration using the BFBT Benchmark Void Fraction Datasets | Deep learning (DL) has achieved remarkable successes in many disciplines such as computer vision and natural language processing due to the availability of ``big data''. However, such success cannot be easily replicated in many nuclear engineering problems because of the limited amount of training data, especially when the data comes from high-cost experiments. To overcome such a data scarcity issue, this paper explores the applications of deep generative models (DGMs) that have been widely used for image data generation to scientific data augmentation. DGMs, such as generative adversarial networks (GANs), normalizing flows (NFs), variational autoencoders (VAEs), and conditional VAEs (CVAEs), can be trained to learn the underlying probabilistic distribution of the training dataset. Once trained, they can be used to generate synthetic data that are similar to the training data and significantly expand the dataset size. By employing DGMs to augment TRACE simulated data of the steady-state void fractions based on the NUPEC Boiling Water Reactor Full-size Fine-mesh Bundle Test (BFBT) benchmark, this study demonstrates that VAEs, CVAEs, and GANs have comparable generative performance with similar errors in the synthetic data, with CVAEs achieving the smallest errors. The findings show that DGMs have a great potential to augment scientific data in nuclear engineering, which proves effective for expanding the training dataset and enabling other DL models to be trained more accurately. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 386,582
1910.12388 | A memory enhanced LSTM for modeling complex temporal dependencies | In this paper, we present Gamma-LSTM, an enhanced long short term memory (LSTM) unit, to enable learning of hierarchical representations through multiple stages of temporal abstractions. Gamma memory, a hierarchical memory unit, forms the central memory of Gamma-LSTM with gates to regulate the information flow into various levels of hierarchy, thus providing the unit with a control to pick the appropriate level of hierarchy to process the input at a given instant of time. We demonstrate better performance of the Gamma-LSTM model over regular and stacked LSTMs in two settings (pixel-by-pixel MNIST digit classification and natural language inference) placing emphasis on the ability to generalize over long sequences. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,074
2203.02833 | Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference | Multiparty computation approaches to secure neural network inference commonly rely on garbled circuits for securely executing nonlinear activation functions. However, garbled circuits require excessive communication between server and client, impose significant storage overheads, and incur large runtime penalties. To reduce these costs, we propose an alternative to garbled circuits: Tabula, an algorithm based on secure lookup tables. Our approach precomputes lookup tables during an offline phase that contains the result of all possible nonlinear function calls. Because these tables incur exponential storage costs in the number of operands and the precision of the input values, we use quantization to reduce these storage costs to make this approach practical. This enables an online phase where securely computing the result of a nonlinear function requires just a single round of communication, with communication cost equal to twice the number of bits of the input to the nonlinear function. In practice our approach costs 2 bytes of communication per nonlinear function call in the online phase. Compared to garbled circuits with 8-bit quantized inputs, when computing individual nonlinear functions during the online phase, experiments show Tabula with 8-bit activations uses between $280$-$560 \times$ less communication, is over $100\times$ faster, and uses a comparable (within a factor of 2) amount of storage; compared against other state-of-the-art protocols Tabula achieves greater than $40\times$ communication reduction. This leads to significant performance gains over garbled circuits with quantized inputs during the online phase of secure inference of neural networks: Tabula reduces end-to-end inference communication by up to $9 \times$ and achieves an end-to-end inference speedup of up to $50 \times$, while imposing comparable storage and offline preprocessing costs. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 283,878
1601.05650 | Exponent Function for Source Coding with Side Information at the Decoder at Rates below the Rate Distortion Function | We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. The rate distortion function indicating the trade-off between the rate on the data compression and the quality of data obtained at the decoder was determined by Wyner and Ziv. In this paper, we study the error probability of decoding at rates below the rate distortion function. We evaluate the probability of decoding such that the estimation of source outputs by the decoder has a distortion not exceeding a prescribed distortion level. We prove that when the rate of the data compression is below the rate distortion function this probability goes to zero exponentially and derive an explicit lower bound of this exponent function. On the Wyner-Ziv source coding problem the strong converse coding theorem has not been established yet. We prove this as a simple corollary of our result. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 51,150
2102.09600 | Within-Document Event Coreference with BERT-Based Contextualized Representations | Event coreference continues to be a challenging problem in information extraction. With the absence of any external knowledge bases for events, coreference becomes a clustering task that relies on effective representations of the context in which event mentions appear. Recent advances in contextualized language representations have proven successful in many tasks, however, their use in event linking has been limited. Here we present a three part approach that (1) uses representations derived from a pretrained BERT model to (2) train a neural classifier to (3) drive a simple clustering algorithm to create coreference chains. We achieve state of the art results with this model on two standard datasets for the within-document event coreference task and establish a new standard on a third newer dataset. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 220,826
2002.09849 | Multi-Antenna UAV Data Harvesting: Joint Trajectory and Communication Optimization | Unmanned aerial vehicle (UAV)-enabled communication is a promising technology to extend coverage and enhance throughput for traditional terrestrial wireless communication systems. In this paper, we consider a UAV-enabled wireless sensor network (WSN), where a multi-antenna UAV is dispatched to collect data from a group of sensor nodes (SNs). The objective is to maximize the minimum data collection rate from all SNs via jointly optimizing their transmission scheduling and power allocations as well as the trajectory of the UAV, subject to the practical constraints on the maximum transmit power of the SNs and the maximum speed of the UAV. The formulated optimization problem is challenging to solve as it involves non-convex constraints and discrete-value variables. To draw useful insight, we first consider the special case of the formulated problem by ignoring the UAV speed constraint and optimally solve it based on the Lagrange duality method. It is shown that for this relaxed problem, the UAV should hover above a finite number of optimal locations with different durations in general. Next, we address the general case of the formulated problem where the UAV speed constraint is considered and propose a traveling salesman problem (TSP)-based trajectory initialization, where the UAV sequentially visits the locations obtained in the relaxed problem with minimum flying time. Given this initial trajectory, we then find the corresponding transmission scheduling and power allocations of the SNs and further optimize the UAV trajectory by applying the block coordinate descent (BCD) and successive convex approximation (SCA) techniques. Finally, numerical results are provided to illustrate the spectrum and energy efficiency gains of the proposed scheme for multi-antenna UAV data harvesting, as compared to benchmark schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 165,205
2303.08046 | Ultra-High-Resolution Detector Simulation with Intra-Event Aware GAN and Self-Supervised Relational Reasoning | Simulating high-resolution detector responses is a computationally intensive process that has long been challenging in Particle Physics. Despite the ability of generative models to streamline it, full ultra-high-granularity detector simulation still proves to be difficult as it contains correlated and fine-grained information. To overcome these limitations, we propose Intra-Event Aware Generative Adversarial Network (IEA-GAN). IEA-GAN presents a Relational Reasoning Module that approximates an event in detector simulation, generating contextualized high-resolution full detector responses with a proper relational inductive bias. IEA-GAN also introduces a Self-Supervised intra-event aware loss and Uniformity loss, significantly enhancing sample fidelity and diversity. We demonstrate IEA-GAN's application in generating sensor-dependent images for the ultra-high-granularity Pixel Vertex Detector (PXD), with more than 7.5 M information channels at the Belle II Experiment. Applications of this work span from Foundation Models for high-granularity detector simulation, such as at the HL-LHC (High Luminosity LHC), to simulation-based inference and fine-grained density estimation. To our knowledge, IEA-GAN is the first algorithm for faithful ultra-high-granularity full detector simulation with event-based reasoning. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,487 |
2411.09020 | Predictive Visuo-Tactile Interactive Perception Framework for Object Properties Inference | Interactive exploration of the unknown physical properties of objects such as stiffness, mass, center of mass, friction coefficient, and shape is crucial for autonomous robotic systems operating continuously in unstructured environments. Precise identification of these properties is essential to manipulate objects in a stable and controlled way, and is also required to anticipate the outcomes of (prehensile or non-prehensile) manipulation actions such as pushing, pulling, lifting, etc. Our study focuses on autonomously inferring the physical properties of a diverse set of homogeneous, heterogeneous, and articulated objects utilizing a robotic system equipped with vision and tactile sensors. We propose a novel predictive perception framework for identifying the properties of diverse objects by leveraging versatile exploratory actions: non-prehensile pushing and prehensile pulling. As part of the framework, we propose a novel active shape perception method to seamlessly initiate exploration. Our innovative dual differentiable filtering with Graph Neural Networks learns the object-robot interaction and performs consistent inference of indirectly observable time-invariant object properties. In addition, we formulate a $N$-step information gain approach to actively select the most informative actions for efficient learning and inference. Extensive real-robot experiments with planar objects show that our predictive perception framework results in better performance than the state-of-the-art baseline and demonstrate our framework in three major applications: i) object tracking, ii) goal-driven tasks, and iii) environment-change detection. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 508,102 |
2202.03173 | Towards Loosely-Coupling Knowledge Graph Embeddings and Ontology-based Reasoning | Knowledge graph completion (a.k.a. link prediction), i.e., the task of inferring missing information from knowledge graphs, is a widely used task in many applications, such as product recommendation and question answering. The state-of-the-art approaches of knowledge graph embeddings and/or rule mining and reasoning are data-driven and, thus, solely based on the information the input knowledge graph contains. This leads to unsatisfactory prediction results, which makes such solutions inapplicable to crucial domains such as healthcare. To further enhance the accuracy of knowledge graph completion, we propose to loosely-couple the data-driven power of knowledge graph embeddings with domain-specific reasoning stemming from experts or entailment regimes (e.g., OWL2). In this way, we not only enhance the prediction accuracy with domain knowledge that may not be included in the input knowledge graph but also allow users to plug in their own knowledge graph embedding and reasoning method. Our initial results show that we enhance the MRR accuracy of vanilla knowledge graph embeddings by up to 3x and outperform hybrid solutions that combine knowledge graph embeddings with rule mining and reasoning by up to 3.5x in MRR. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 279,103 |
2403.12432 | Prototipo de video juego activo basado en una cámara 3D para motivar la actividad física en niños y adultos mayores | This document describes the development of a video game prototype designed to encourage physical activity among children and older adults. The prototype consists of a laptop, a camera with 3D sensors, and optionally requires an LCD screen or a projector. The programming component of this prototype was developed in Scratch, a programming language geared towards children, which greatly facilitates the creation of a game tailored to the users' preferences. The idea to create such a prototype originated from the desire to offer an option that promotes physical activity among children and adults, given that a lack of physical exercise is a predominant factor in the development of chronic degenerative diseases such as diabetes and hypertension, to name the most common. As a result of this initiative, an active video game prototype was successfully developed, based on a ping-pong game, which allows both children and adults to interact in a fun way while encouraging the performance of physical activities that can positively impact the users' health. | true | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | 439,179 |
1706.01330 | Neuroevolution on the Edge of Chaos | Echo state networks represent a special type of recurrent neural network. Recent papers have stated that echo state networks maximize their computational performance at the transition between order and chaos, the so-called edge of chaos. This work confirms this statement in a comprehensive set of experiments. Furthermore, the echo state networks are compared to networks evolved via neuroevolution. The evolved networks outperform the echo state networks; however, the evolution consumes significant computational resources. It is demonstrated that echo state networks with local connections combine the best of both worlds: the simplicity of random echo state networks and the performance of evolved networks. Finally, it is shown that evolution tends to stay close to the ordered side of the edge of chaos. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 74,783 |
2305.10055 | Optimized Joint Beamforming for Wireless Powered Over-the-Air Computation | This correspondence studies the wireless powered over-the-air computation (AirComp) for achieving sustainable wireless data aggregation (WDA) by integrating AirComp and wireless power transfer (WPT) into a joint design. In particular, we consider that a multi-antenna hybrid access point (HAP) employs the transmit energy beamforming to charge multiple single-antenna low-power wireless devices (WDs) in the downlink, and the WDs use the harvested energy to simultaneously send their messages to the HAP for AirComp in the uplink. Under this setup, we minimize the computation mean square error (MSE), by jointly optimizing the transmit energy beamforming and the receive AirComp beamforming at the HAP, as well as the transmit power at the WDs, subject to the maximum transmit power constraint at the HAP and the wireless energy harvesting constraints at individual WDs. To tackle the non-convex computation MSE minimization problem, we present an efficient algorithm to find a converged high-quality solution by using the alternating optimization technique. Numerical results show that the proposed joint WPT-AirComp approach significantly reduces the computation MSE, as compared to other benchmark schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 364,890 |
2106.14623 | Polyconvex anisotropic hyperelasticity with neural networks | In the present work, two machine learning based constitutive models for finite deformations are proposed. Using input convex neural networks, the models are hyperelastic, anisotropic and fulfill the polyconvexity condition, which implies ellipticity and thus ensures material stability. The first constitutive model is based on a set of polyconvex, anisotropic and objective invariants. The second approach is formulated in terms of the deformation gradient, its cofactor and determinant, uses group symmetrization to fulfill the material symmetry condition, and data augmentation to fulfill objectivity approximately. The extension of the dataset for the data augmentation approach is based on mechanical considerations and does not require additional experimental or simulation data. The models are calibrated with highly challenging simulation data of cubic lattice metamaterials, including finite deformations and lattice instabilities. A moderate amount of calibration data is used, based on deformations which are commonly applied in experimental investigations. While the invariant-based model shows drawbacks for several deformation modes, the model based on the deformation gradient alone is able to reproduce and predict the effective material behavior very well and exhibits excellent generalization capabilities. In addition, the models are calibrated with transversely isotropic data, generated with an analytical polyconvex potential. For this case, both models show excellent results, demonstrating the straightforward applicability of the polyconvex neural network constitutive models to other symmetry groups. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 243,470 |
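Each row above is a multi-label record: an arXiv id, a title, an abstract, one boolean flag per `cs.*` category, and a trailing index column. A minimal sketch of filtering such records by category label, using two abridged rows from the table (the flag set is reduced for brevity; real rows carry all 18 category columns):

```python
# Two abridged records from the table above. Only the arXiv id, title, and
# two of the 18 boolean category flags are kept for illustration.
rows = [
    {"id": "1706.01330",
     "title": "Neuroevolution on the Edge of Chaos",
     "cs.NE": True, "cs.LG": False},
    {"id": "2106.14623",
     "title": "Polyconvex anisotropic hyperelasticity with neural networks",
     "cs.NE": False, "cs.LG": True},
]

def ids_with_label(records, label):
    """Multi-label selection: ids of all records flagged with `label`."""
    return [r["id"] for r in records if r.get(label, False)]

print(ids_with_label(rows, "cs.LG"))  # ['2106.14623']
```

Because the labels are independent booleans rather than a single class column, one record can match several categories at once (e.g. row 2202.03173 above is flagged for cs.AI, cs.LG, and cs.DB simultaneously).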