id string (9–16 chars) | title string (4–278 chars) | abstract string (3–4.08k chars) | cs.HC bool | cs.CE bool | cs.SD bool | cs.SI bool | cs.AI bool | cs.IR bool | cs.LG bool | cs.RO bool | cs.CL bool | cs.IT bool | cs.SY bool | cs.CV bool | cs.CR bool | cs.CY bool | cs.MA bool | cs.NE bool | cs.DB bool | Other bool | __index_level_0__ int64 (0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.06708 | A Self-Paced Regularization Framework for Multi-Label Learning | In this paper, we propose a novel multi-label learning framework, called Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the self-paced learning strategy into the multi-label learning regime. In light of the benefits of adopting the easy-to-hard strategy proposed by self-paced learning, the devised MLSPL aims to learn multiple labels jointly by gradually including label learning tasks and instances into model training, from easy to hard. We first introduce a self-paced function as a regularizer in the multi-label learning formulation, so as to simultaneously rank priorities of the label learning tasks and the instances in each learning iteration. Considering that different multi-label learning scenarios often need different self-paced schemes during optimization, we propose a general way to find the desired self-paced functions. Experimental results on three benchmark datasets suggest the state-of-the-art performance of our approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 53,531 |
1908.06401 | On the Robustness of Human Pose Estimation | This paper provides a comprehensive and exhaustive study of adversarial attacks on human pose estimation models and the evaluation of their robustness. Besides highlighting the important differences between well-studied classification and human pose-estimation systems w.r.t. adversarial attacks, we also provide deep insights into the design choices of pose-estimation systems to shape future work. We benchmark the robustness of several 2D single person pose-estimation architectures trained on multiple datasets, MPII and COCO. In doing so, we also explore the problem of attacking non-classification networks including regression based networks, which has been virtually unexplored in the past. We find that compared to classification and semantic segmentation, human pose estimation architectures are relatively robust to adversarial attacks, with the single-step attacks being surprisingly ineffective. Our study shows that the heatmap-based pose-estimation models are notably more robust than their direct regression-based counterparts and that the systems which explicitly model anthropomorphic semantics of the human body fare better than their other counterparts. Besides, targeted attacks are more difficult to obtain than un-targeted ones and some body-joints are easier to fool than others. We present visualizations of universal perturbations to facilitate unprecedented insights into their workings on pose-estimation. Additionally, we show them to generalize well across different networks. Finally, we perform a user study about perceptibility of these examples. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 142,011 |
1010.0624 | Eigenvalue Results for Large Scale Random Vandermonde Matrices with Unit Complex Entries | This paper centers on the limit eigenvalue distribution for random Vandermonde matrices with unit magnitude complex entries. The phases of the entries are chosen independently and identically distributed from the interval $[-\pi,\pi]$. Various types of distribution for the phase are considered and we establish the existence of the empirical eigenvalue distribution in the large matrix limit on a wide range of cases. The rate of growth of the maximum eigenvalue is examined and shown to be no greater than $O(\log N)$ and no slower than $O(\log N/\log\log N)$ where $N$ is the dimension of the matrix. Additional results include the existence of the capacity of the Vandermonde channel (limit integral for the expected log determinant). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,776 |
2204.05529 | Forecasting SQL Query Cost at Twitter | With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usages of a SQL query with traditional DBMS approaches. Can we estimate the cost of each query more efficiently without any computation in a SQL engine kernel? Can machine learning techniques help to estimate SQL query resource utilization? The answers are yes. We propose a SQL query cost predictor service, which employs machine learning techniques to train models from historical query request logs and rapidly forecasts the CPU and memory resource usages of online queries without any computation in a SQL engine. At Twitter, infrastructure engineers are maintaining a large-scale SQL federation system across on-premises and cloud data centers for serving ad-hoc queries. The proposed service can help to improve query scheduling by relieving the issue of imbalanced online analytical processing (OLAP) workloads in the SQL engine clusters. It can also assist in enabling preemptive scaling. Additionally, the proposed approach uses plain SQL statements for the model training and online prediction, indicating it is both hardware and software-agnostic. The method can be generalized to broader SQL systems and heterogeneous environments. The models can achieve 97.9\% accuracy for CPU usage prediction and 97\% accuracy for memory usage prediction. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 291,059 |
2409.17485 | Revisiting Deep Ensemble Uncertainty for Enhanced Medical Anomaly Detection | Medical anomaly detection (AD) is crucial in pathological identification and localization. Current methods typically rely on uncertainty estimation in deep ensembles to detect anomalies, assuming that ensemble learners should agree on normal samples while exhibiting disagreement on unseen anomalies in the output space. However, these methods may suffer from inadequate disagreement on anomalies or diminished agreement on normal samples. To tackle these issues, we propose D2UE, a Diversified Dual-space Uncertainty Estimation framework for medical anomaly detection. To effectively balance agreement and disagreement for anomaly detection, we propose Redundancy-Aware Repulsion (RAR), which uses a similarity kernel that remains invariant to both isotropic scaling and orthogonal transformations, explicitly promoting diversity in learners' feature space. Moreover, to accentuate anomalous regions, we develop Dual-Space Uncertainty (DSU), which utilizes the ensemble's uncertainty in input and output spaces. In input space, we first calculate gradients of reconstruction error with respect to input images. The gradients are then integrated with reconstruction outputs to estimate uncertainty for inputs, enabling effective anomaly discrimination even when output space disagreement is minimal. We conduct a comprehensive evaluation of five medical benchmarks with different backbones. Experimental results demonstrate the superiority of our method to state-of-the-art methods and the effectiveness of each component in our framework. Our code is available at https://github.com/Rubiscol/D2UE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 491,824 |
2011.12652 | Evaluation of quality measures for color quantization | Visual quality evaluation is one of the challenging basic problems in image processing. It also plays a central role in the shaping, implementation, optimization, and testing of many methods. Existing image quality assessment methods have focused on images corrupted by common degradation types, while little attention has been paid to color quantization, despite the wide range of applications that use color quantization as a preprocessing step because color-based tasks are more efficiently accomplished on a reduced number of colors. In this paper, we propose and carry out a quantitative performance evaluation of nine well-known and commonly used full-reference image quality assessment measures. The evaluation is done by using two publicly available and subjectively rated image quality databases for color quantization degradation and by considering suitable combinations or subparts of them. The results indicate the quality measures that have closer performances in terms of their correlation to the subjective human rating and show that the evaluation of the statistical performance of the quality measures for color quantization is significantly impacted by the selected image quality database while maintaining a similar trend on each database. The detected strong similarity both on individual databases and on databases obtained by integration provides the ability to validate the integration process and to consider the quantitative performance evaluation on each database as an indicator for performance on the other databases. The experimental results are useful to address the choice of suitable quality measures for color quantization and to improve their future employment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 208,230 |
1402.7011 | Saving Human Lives: What Complexity Science and Information Systems can Contribute | We discuss models and data of crowd disasters, crime, terrorism, war and disease spreading to show that conventional recipes, such as deterrence strategies, are often not effective and sufficient to contain them. Many common approaches do not provide a good picture of the actual system behavior, because they neglect feedback loops, instabilities and cascade effects. The complex and often counter-intuitive behavior of social systems and their macro-level collective dynamics can be better understood by means of complexity science. We highlight that a suitable system design and management can help to stop undesirable cascade effects and to enable favorable kinds of self-organization in the system. In such a way, complexity science can help to save human lives. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,223 |
1108.4618 | Artificial Neural Network and Rough Set for HV Bushings Condition Monitoring | Most transformer failures are attributed to bushing failures. Hence it is necessary to monitor the condition of bushings. In this paper, three methods are developed to monitor the condition of oil-filled bushings. Multi-layer perceptron (MLP), Radial basis function (RBF) and Rough Set (RS) models are developed and combined through majority voting to form a committee. The MLP performs better than the RBF and the RS in terms of classification accuracy. The RBF is the fastest to train. The committee performs better than the individual models. The diversity of the models is measured to evaluate their similarity when used in the committee. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 11,785 |
2112.06430 | Predicting Airbnb Rental Prices Using Multiple Feature Modalities | Figuring out the price of a listed Airbnb rental is an important and difficult task for both the host and the customer. For the former, it can enable them to set a reasonable price without compromising on their profits. For the customer, it helps understand the key drivers for price and also provides them with similarly priced places. This price prediction regression task can also have multiple downstream uses, such as in recommendation of similar rentals based on price. We propose to use geolocation, temporal, visual and natural language features to create a reliable and accurate price prediction algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 271,180 |
2010.04791 | ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization | Cherokee is a highly endangered Native American language spoken by the Cherokee people. The Cherokee culture is deeply embedded in its language. However, there are approximately only 2,000 fluent first language Cherokee speakers remaining in the world, and the number is declining every year. To help save this endangered language, we introduce ChrEn, a Cherokee-English parallel dataset, to facilitate machine translation research between Cherokee and English. Compared to some popular machine translation language pairs, ChrEn is extremely low-resource, only containing 14k sentence pairs in total. We split our parallel data in ways that facilitate both in-domain and out-of-domain evaluation. We also collect 5k Cherokee monolingual data to enable semi-supervised learning. Besides these datasets, we propose several Cherokee-English and English-Cherokee machine translation systems. We compare SMT (phrase-based) versus NMT (RNN-based and Transformer-based) systems; supervised versus semi-supervised (via language model, back-translation, and BERT/Multilingual-BERT) methods; as well as transfer learning versus multilingual joint training with 4 other languages. Our best results are 15.8/12.7 BLEU for in-domain and 6.5/5.0 BLEU for out-of-domain Chr-En/EnChr translations, respectively, and we hope that our dataset and systems will encourage future work by the community for Cherokee language revitalization. Our data, code, and demo will be publicly available at https://github.com/ZhangShiyue/ChrEn | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 199,863 |
2212.01471 | Conditions for Estimation of Sensitivities of Voltage Magnitudes to Complex Power Injections | Voltage phase angle measurements are often unavailable from sensors in distribution networks and transmission network boundaries. Therefore, this paper addresses the conditions for estimating sensitivities of voltage magnitudes with respect to complex (active and reactive) electric power injections based on sensor measurements. These sensitivities represent submatrices of the inverse power flow Jacobian. We extend previous results to show that the sensitivities of a bus voltage magnitude with respect to active power injections are unique and different from those with respect to reactive power. The classical Newton-Raphson power flow model is used to derive a novel representation of bus voltage magnitudes as an underdetermined linear operator of the active and reactive power injections; parameterized by the bus power factors. Two conditions that ensure the existence of unique complex power injections given voltage magnitudes are established for this underdetermined linear system, thereby compressing the solution space. The first is a sufficient condition based on the bus power factors. The second is a necessary and sufficient condition based on the system eigenvalues. We use matrix completion theory to develop estimation methods for recovering sensitivity matrices with varying levels of sensor availability. Simulations verify the results and demonstrate engineering use of the proposed methods. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 334,444 |
1503.09137 | Formalising Hypothesis Virtues in Knowledge Graphs: A General Theoretical Framework and its Validation in Literature-Based Discovery Experiments | We introduce an approach to discovery informatics that uses so called knowledge graphs as the essential representation structure. Knowledge graph is an umbrella term that subsumes various approaches to tractable representation of large volumes of loosely structured knowledge in a graph form. It has been used primarily in the Web and Linked Open Data contexts, but is applicable to any other area dealing with knowledge representation. In the perspective of our approach motivated by the challenges of discovery informatics, knowledge graphs correspond to hypotheses. We present a framework for formalising so called hypothesis virtues within knowledge graphs. The framework is based on a classic work in philosophy of science, and naturally progresses from mostly informative foundational notions to actionable specifications of measures corresponding to particular virtues. These measures can consequently be used to determine refined sub-sets of knowledge graphs that have large relative potential for making discoveries. We validate the proposed framework by experiments in literature-based discovery. The experiments have demonstrated the utility of our work and its superiority w.r.t. related approaches. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 41,654 |
1709.06868 | Learning quadrangulated patches for 3D shape parameterization and completion | We propose a novel 3D shape parameterization by surface patches, that are oriented by 3D mesh quadrangulation of the shape. By encoding 3D surface detail on local patches, we learn a patch dictionary that identifies principal surface features of the shape. Unlike previous methods, we are able to encode surface patches of variable size as determined by the user. We propose novel methods for dictionary learning and patch reconstruction based on the query of a noisy input patch with holes. We evaluate the patch dictionary towards various applications in 3D shape inpainting, denoising and compression. Our method is able to predict missing vertices and inpaint moderately sized holes. We demonstrate a complete pipeline for reconstructing the 3D mesh from the patch encoding. We validate our shape parameterization and reconstruction methods on both synthetic shapes and real world scans. We show that our patch dictionary performs successful shape completion of complicated surface textures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 81,185 |
1809.08523 | Evolution of Threats in the Global Risk Network | With a steadily growing population and rapid advancements in technology, the global economy is increasing in size and complexity. This growth exacerbates global vulnerabilities and may lead to unforeseen consequences such as global pandemics fueled by air travel, cyberspace attacks, and cascading failures caused by the weakest link in a supply chain. Hence, a quantitative understanding of the mechanisms driving global network vulnerabilities is urgently needed. Developing methods for efficiently monitoring evolution of the global economy is essential to such understanding. Each year the World Economic Forum publishes an authoritative report on the state of the global economy and identifies risks that are likely to be active, impactful or contagious. Using a Cascading Alternating Renewal Process approach to model the dynamics of the global risk network, we are able to answer critical questions regarding the evolution of this network. To fully trace the evolution of the network we analyze the asymptotic state of risks (risk levels which would be reached in the long term if the risks were left unabated) given a snapshot in time, this elucidates the various challenges faced by the world community at each point in time. We also investigate the influence exerted by each risk on others. Results presented here are obtained through either quantitative analysis or computational simulations. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 108,524 |
1711.05253 | Learning Image-Conditioned Dynamics Models for Control of Under-actuated Legged Millirobots | Millirobots are a promising robotic platform for many applications due to their small size and low manufacturing costs. Legged millirobots, in particular, can provide increased mobility in complex environments and improved scaling of obstacles. However, controlling these small, highly dynamic, and underactuated legged systems is difficult. Hand-engineered controllers can sometimes control these legged millirobots, but they have difficulties with dynamic maneuvers and complex terrains. We present an approach for controlling a real-world legged millirobot that is based on learned neural network models. Using less than 17 minutes of data, our method can learn a predictive model of the robot's dynamics that can enable effective gaits to be synthesized on the fly for following user-specified waypoints on a given terrain. Furthermore, by leveraging expressive, high-capacity neural network models, our approach allows for these predictions to be directly conditioned on camera images, endowing the robot with the ability to predict how different terrains might affect its dynamics. This enables sample-efficient and effective learning for locomotion of a dynamic legged millirobot on various terrains, including gravel, turf, carpet, and styrofoam. Experiment videos can be found at https://sites.google.com/view/imageconddyn | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 84,531 |
1601.06733 | Long Short-Term Memory-Networks for Machine Reading | In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 51,331 |
2001.08773 | Data Inference from Encrypted Databases: A Multi-dimensional Order-Preserving Matching Approach | Due to increasing concerns of data privacy, databases are being encrypted before they are stored on an untrusted server. To enable search operations on the encrypted data, searchable encryption techniques have been proposed. Representative schemes use order-preserving encryption (OPE) for supporting efficient Boolean queries on encrypted databases. Yet, recent works showed the possibility of inferring plaintext data from OPE-encrypted databases, merely using the order-preserving constraints, or combined with an auxiliary plaintext dataset with similar frequency distribution. So far, the effectiveness of such attacks is limited to single-dimensional dense data (most values from the domain are encrypted), but it remains challenging to achieve it on high-dimensional datasets (e.g., spatial data) which are often sparse in nature. In this paper, for the first time, we study data inference attacks on multi-dimensional encrypted databases (with 2-D as a special case). We formulate it as a 2-D order-preserving matching problem and explore both unweighted and weighted cases, where the former maximizes the number of points matched using only order information and the latter further considers points with similar frequencies. We prove that the problem is NP-hard, and then propose a greedy algorithm, along with a polynomial-time algorithm with approximation guarantees. Experimental results on synthetic and real-world datasets show that the data recovery rate is significantly enhanced compared with the previous 1-D matching algorithm. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | true | 161,381 |
1410.5028 | Diffraction Patterns of Layered Close-packed Structures from Hidden Markov Models | We recently derived analytical expressions for the pairwise (auto)correlation functions (CFs) between modular layers (MLs) in close-packed structures (CPSs) for the wide class of stacking processes describable as hidden Markov models (HMMs) [Riechers et al. (2014), Acta Crystallogr. A, XX 000-000]. We now use these results to calculate diffraction patterns (DPs) directly from HMMs, discovering that the relationship between the HMMs and DPs is both simple and fundamental in nature. We show that in the limit of large crystals, the DP is a function of parameters that specify the HMM. We give three elementary but important examples that demonstrate this result, deriving expressions for the DP of CPSs stacked (i) independently, (ii) as infinite-Markov-order randomly faulted 2H and 3C stacking structures over the entire range of growth and deformation faulting probabilities, and (iii) as a HMM that models Shockley-Frank stacking faults in 6H-SiC. While applied here to planar faulting in CPSs, extending the methods and results to planar disorder in other layered materials is straightforward. In this way, we effectively solve the broad problem of calculating a DP---either analytically or numerically---for any stacking structure---ordered or disordered---where the stacking process can be expressed as a HMM. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 36,860 |
2111.12174 | Using Distributional Principles for the Semantic Study of Contextual Language Models | Many studies were recently done for investigating the properties of contextual language models but surprisingly, only a few of them consider the properties of these models in terms of semantic similarity. In this article, we first focus on these properties for English by exploiting the distributional principle of substitution as a probing mechanism in the controlled context of SemCor and WordNet paradigmatic relations. Then, we propose to adapt the same method to a more open setting for characterizing the differences between static and contextual language models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 267,890 |
1807.01081 | Solving Atari Games Using Fractals And Entropy | In this paper, we introduce a novel MCTS-based approach derived from the laws of thermodynamics. The algorithm, coined Fractal Monte Carlo (FMC), allows us to create an agent that takes intelligent actions in both continuous and discrete environments while providing control over every aspect of the agent's behavior. Results show that FMC is several orders of magnitude more efficient than similar techniques, such as MCTS, in the Atari games tested. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 101,984 |
2412.09805 | Universal Inceptive GNNs by Eliminating the Smoothness-generalization Dilemma | Graph Neural Networks (GNNs) have demonstrated remarkable success in various domains, such as transaction and social networks. However, their application is often hindered by the varying homophily levels across different orders of neighboring nodes, necessitating separate model designs for homophilic and heterophilic graphs. In this paper, we aim to develop a unified framework capable of handling neighborhoods of various orders and homophily levels. Through theoretical exploration, we identify a previously overlooked architectural aspect in multi-hop learning: the cascade dependency, which leads to a smoothness-generalization dilemma. This dilemma significantly affects the learning process, especially in the context of high-order neighborhoods and heterophilic graphs. To resolve this issue, we propose an Inceptive Graph Neural Network (IGNN), a universal message-passing framework that replaces the cascade dependency with an inceptive architecture. IGNN provides independent representations for each hop, allowing personalized generalization capabilities, and captures neighborhood-wise relationships to select appropriate receptive fields. Extensive experiments show that our IGNN outperforms 23 baseline methods, demonstrating superior performance on both homophilic and heterophilic graphs, while also scaling efficiently to large graphs. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,649 |
2112.02194 | ALX: Large Scale Matrix Factorization on TPUs | We present ALX, an open-source library for distributed matrix factorization using Alternating Least Squares, written in JAX. Our design allows for efficient use of the TPU architecture and scales well to matrix factorization problems of O(B) rows/columns by scaling the number of available TPU cores. In order to spur future research on large scale matrix factorization methods and to illustrate the scalability properties of our own implementation, we also built a real world web link prediction dataset called WebGraph. This dataset can be easily modeled as a matrix factorization problem. We created several variants of this dataset based on locality and sparsity properties of sub-graphs. The largest variant of WebGraph has around 365M nodes and training a single epoch finishes in about 20 minutes with 256 TPU cores. We include speed and performance numbers of ALX on all variants of WebGraph. Both the framework code and the dataset are open-sourced. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 269,753 |
2212.09146 | Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model | Augmenting pretrained language models with retrievers has shown promise in effectively solving common NLP problems, such as language modeling and question answering. In this paper, we evaluate the strengths and weaknesses of popular retriever-augmented language models, namely kNN-LM, REALM, DPR + FiD, Contriever + ATLAS, and Contriever + Flan-T5, in reasoning over retrieved statements across different tasks. Our findings indicate that the simple similarity metric employed by retrievers is insufficient for retrieving all the necessary statements for reasoning. Additionally, the language models do not exhibit strong reasoning even when provided with only the required statements. Furthermore, when combined with imperfect retrievers, the performance of the language models becomes even worse, e.g., Flan-T5's performance drops by 28.6% when retrieving 5 statements using Contriever. While larger language models improve performance, there is still a substantial room for enhancement. Our further analysis indicates that multihop retrieve-and-read is promising for large language models like GPT-3.5, but does not generalize to other language models like Flan-T5-xxl. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 337,017 |
2103.08151 | Fast Antenna and Beam Switching Method for mmWave Handsets with Hand
Blockage | Many operators have been bullish on the role of millimeter-wave (mmWave) communications in fifth-generation (5G) mobile broadband because of its capability of delivering extreme data speeds and capacity. However, mmWave comes with challenges related to significantly high path loss and susceptibility to blockage. Particularly, when mmWave communication is applied to a mobile terminal device, communication can be frequently broken because of rampant hand blockage. Although a number of mobile phone companies have suggested configuring multiple sets of antenna modules at different locations on a mobile phone to circumvent this problem, identifying an optimal antenna module and a beam pair by simultaneously opening multiple sets of antenna modules causes the problem of excessive power consumption and device costs. In this study, a fast antenna and beam switching method termed Fast-ABS is proposed. In this method, only one antenna module is used for the reception to predict the best beam of other antenna modules. As such, unmasked antenna modules and their corresponding beam pairs can be rapidly selected for switching to avoid the problem of poor quality or disconnection of communications caused by hand blockage. Thorough analysis and extensive simulations, which include the derivation of relevant Cram\'{e}r-Rao lower bounds, show that the performance of Fast-ABS is close to that of an oracle solution that can instantaneously identify the best beam of other antenna modules even in complex multipath scenarios. Furthermore, Fast-ABS is implemented on a software defined radio and integrated into a 5G New Radio physical layer. Over-the-air experiments reveal that Fast-ABS can achieve efficient and seamless connectivity despite hand blockage. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 224,813 |
2105.12828 | Pouring Dynamics Estimation Using Gated Recurrent Units | One of the most commonly performed manipulations in a human's daily life is pouring. Many factors affect target accuracy, including pouring velocity, rotation angle, and the geometry of the source and receiving containers. This paper presents an approach to increase the repeatability and accuracy of a robotic manipulator by estimating the change in the amount of water in the pouring cup in response to a sequence of pouring actions, using multiple layers of a deep recurrent neural network, specifically gated recurrent units (GRU). The proposed GRU model achieved a validation mean squared error as low as 1e-4 (lbf) for the predicted value of weight f(t). This paper contains a comprehensive evaluation and analysis of numerous experiments with various designs of recurrent neural networks and hyperparameter fine-tuning. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 237,111
2407.10596 | An evaluation of CNN models and data augmentation techniques in
hierarchical localization of mobile robots | This work presents an evaluation of CNN models and data augmentation to carry out the hierarchical localization of a mobile robot by using omnidirectional images. In this sense, an ablation study of different state-of-the-art CNN models used as backbone is presented and a variety of data augmentation visual effects are proposed for addressing the visual localization of the robot. The proposed method is based on the adaptation and re-training of a CNN with a dual purpose: (1) to perform a rough localization step in which the model is used to predict the room from which an image was captured, and (2) to address the fine localization step, which consists in retrieving the most similar image of the visual map among those contained in the previously predicted room by means of a pairwise comparison between descriptors obtained from an intermediate layer of the CNN. In this sense, we evaluate the impact of different state-of-the-art CNN models such as ConvNeXt for addressing the proposed localization. Finally, a variety of data augmentation visual effects are separately employed for training the model and their impact is assessed. The performance of the resulting CNNs is evaluated under real operation conditions, including changes in the lighting conditions. Our code is publicly available on the project website https://github.com/juanjo-cabrera/IndoorLocalizationSingleCNN.git | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,045
2109.13580 | On the sensitivity of linear resource sharing problems to the arrival of
new agents | We consider a multi-agent optimal resource sharing problem that is represented by a linear program. The amount of resource to be shared is fixed, and agents belong to a population that is characterized probabilistically so as to allow heterogeneity among the agents. In this paper, we provide a characterization of the probability that the arrival of a new agent affects the resource share of other agents, which means that accommodating the new agent's request to the detriment of the other agents' allocation provides some payoff. This probability represents a sensitivity index for the optimal solution of a linear programming resource sharing problem when a new agent shows up, and it is of fundamental importance for a correct and profitable operation of the multi-agent system. Our developments build on the equivalence between the resource sharing problem and certain dual reformulations which can be interpreted as scenario programs with the number of scenarios corresponding to the number of agents in the primal problem. The recent "wait-and-judge" scenario approach is then used to obtain the sought sensitivity index. Our theoretical findings are demonstrated through a numerical example on optimal cargo aircraft loading. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 257,678
2205.00215 | An attention model for the formation of collectives in real-world
domains | We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals (e.g., shared mobility, cooperative learning). We propose a general approach for the formation of collectives based on a novel combination of an attention model and an integer linear program (ILP). In more detail, we propose an attention encoder-decoder model that transforms a collective formation instance to a weighted set packing problem, which is then solved by an ILP. Results on two real-world domains (i.e., ridesharing and team formation for cooperative learning) show that our approach provides solutions that are comparable (in terms of quality) to the ones produced by state-of-the-art approaches specific to each domain. Moreover, our solution outperforms the most recent general approach for forming collectives based on Monte Carlo tree search. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 294,170 |
2012.03677 | G-RCN: Optimizing the Gap between Classification and Localization Tasks
for Object Detection | Multi-task learning is widely used in computer vision. Currently, object detection models utilize a shared feature map to complete classification and localization tasks simultaneously. By comparing the performance of the original Faster R-CNN with that of a variant with partially separated feature maps, we show that: (1) sharing high-level features for the classification and localization tasks is sub-optimal; (2) a large stride is beneficial for classification but harmful for localization; (3) global context information could improve the performance of classification. Based on these findings, we propose a paradigm called Gap-optimized region-based convolutional network (G-RCN), which aims to separate these two tasks and optimize the gap between them. The paradigm was first applied to correct the current ResNet protocol by simply reducing the stride and moving the Conv5 block from the head to the feature extraction network, which brings a 3.6-point improvement in AP70 on the PASCAL VOC dataset and a 1.5-point improvement in AP on the COCO dataset for ResNet50. Next, the new method is applied to Faster R-CNN with VGG16, ResNet50, and ResNet101 backbones, which brings above a 2.0-point improvement in AP70 on the PASCAL VOC dataset and above a 1.9-point improvement in AP on the COCO dataset. Noticeably, the implementation of G-RCN only involves a few structural modifications, with no extra module added. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 210,208
2408.09416 | Challenges and Responses in the Practice of Large Language Models | This paper carefully summarizes extensive and profound questions from all walks of life, focusing on the current high-profile AI field, covering multiple dimensions such as industry trends, academic research, technological innovation and business applications. This paper meticulously curates questions that are both thought-provoking and practically relevant, providing nuanced and insightful answers to each. To facilitate readers' understanding and reference, this paper specifically classifies and organizes these questions systematically and meticulously from the five core dimensions of computing power infrastructure, software architecture, data resources, application scenarios, and brain science. This work aims to provide readers with a comprehensive, in-depth and cutting-edge AI knowledge framework to help people from all walks of life grasp the pulse of AI development, stimulate innovative thinking, and promote industrial progress. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 481,427 |
2311.15297 | Controllable Expensive Multi-objective Learning with Warm-starting
Bayesian Optimization | Pareto Set Learning (PSL) is a promising approach for approximating the entire Pareto front in multi-objective optimization (MOO) problems. However, existing derivative-free PSL methods are often unstable and inefficient, especially for expensive black-box MOO problems where objective function evaluations are costly. In this work, we propose to address the instability and inefficiency of existing PSL methods with a novel controllable PSL method, called Co-PSL. Particularly, Co-PSL consists of two stages: (1) warm-starting Bayesian optimization to obtain quality Gaussian Process priors and (2) controllable Pareto set learning to accurately acquire a parametric mapping from preferences to the corresponding Pareto solutions. The former helps stabilize the PSL process and reduce the number of expensive function evaluations. The latter supports real-time trade-off control between conflicting objectives. Performance across synthetic and real-world MOO problems showcases the effectiveness of our Co-PSL for expensive multi-objective optimization tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 410,456
1906.02243 | Energy and Policy Considerations for Deep Learning in NLP | Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 133,978 |
1511.04511 | Sequential Optimization for Efficient High-Quality Object Proposal
Generation | We are motivated by the need for a generic object proposal generation algorithm which achieves a good balance between object detection recall, proposal localization quality, and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the good computational efficiency of BING but significantly improves its proposal localization quality. At a high level, we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically, BING++ runs at half the speed of BING on CPU, but significantly improves localization quality, by 18.5% and 16.7% on the VOC2007 and Microsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ achieves comparable performance but runs significantly faster. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,899
2402.12629 | Television Discourse Decoded: Comprehensive Multimodal Analytics at
Scale | In this paper, we tackle the complex task of analyzing televised debates, with a focus on a prime time news debate show from India. Previous methods, which often relied solely on text, fall short in capturing the multimodal essence of these debates. To address this gap, we introduce a comprehensive automated toolkit that employs advanced computer vision and speech-to-text techniques for large-scale multimedia analysis. Utilizing state-of-the-art computer vision algorithms and speech-to-text methods, we transcribe, diarize, and analyze thousands of YouTube videos of a prime-time television debate show in India. These debates are a central part of Indian media but have been criticized for compromised journalistic integrity and excessive dramatization. Our toolkit provides concrete metrics to assess bias and incivility, capturing a comprehensive multimedia perspective that includes text, audio utterances, and video frames. Our findings reveal significant biases in topic selection and panelist representation, along with alarming levels of incivility. This work offers a scalable, automated approach for future research in multimedia analysis, with profound implications for the quality of public discourse and democratic debate. To catalyze further research in this area, we also release the code, dataset collected and supplemental pdf. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | true | 430,919 |
2305.16471 | Bias, Consistency, and Partisanship in U.S. Asylum Cases: A Machine
Learning Analysis of Extraneous Factors in Immigration Court Decisions | In this study, we introduce a novel two-pronged scoring system to measure individual and systemic bias in immigration courts under the U.S. Executive Office of Immigration Review (EOIR). We analyze nearly 6 million immigration court proceedings and 228 case features to build on prior research showing that U.S. asylum decisions vary dramatically based on factors that are extraneous to the merits of a case. We close a critical gap in the literature of variability metrics that can span space and time. Using predictive modeling, we explain 58.54% of the total decision variability using two metrics: partisanship and inter-judge cohort consistency. Thus, whether the EOIR grants asylum to an applicant or not depends mostly on the combined effects of the political climate and the individual variability of the presiding judge - not the individual merits of the case. Using time series analysis, we also demonstrate that partisanship increased in the early 1990s but plateaued following the turn of the century. These conclusions are striking to the extent that they diverge from the U.S. immigration system's commitments to independence and due process. Our contributions expose systemic inequities in the U.S. asylum decision-making process, and we recommend improved and standardized variability metrics to better diagnose and monitor these issues. | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 368,101
1906.09321 | Automatic Acrostic Couplet Generation with Three-Stage Neural Network
Pipelines | As a quintessential element of traditional Chinese culture, a couplet comprises two syntactically symmetric clauses of equal length, namely, an antecedent and a subsequent clause. Moreover, corresponding characters and phrases at the same position of the two clauses are paired with each other under certain constraints of semantic and/or syntactic relatedness. Automatic couplet generation is recognized as a challenging problem even in the Artificial Intelligence field. In this paper, we comprehensively study the automatic generation of acrostic couplets with the first characters defined by users. The complete couplet generation is mainly divided into three stages, that is, an antecedent clause generation pipeline, a subsequent clause generation pipeline, and a clause re-ranker. To realize semantic and/or syntactic relatedness between the two clauses, an attention-based Sequence-to-Sequence (S2S) neural network is employed. Moreover, to provide diverse couplet candidates for re-ranking, a cluster-based beam search approach is incorporated into the S2S network. Both BLEU metrics and human judgments have demonstrated the effectiveness of our proposed method. Eventually, a mini-program based on this generation system is developed and deployed on WeChat for real users. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 136,115
2102.08820 | Crop mapping from image time series: deep learning with multi-scale
label hierarchies | The aim of this paper is to map agricultural crops by classifying satellite image time series. Domain experts in agriculture work with crop type labels that are organised in a hierarchical tree structure, where coarse classes (like orchards) are subdivided into finer ones (like apples, pears, vines, etc.). We develop a crop classification method that exploits this expert knowledge and significantly improves the mapping of rare crop types. The three-level label hierarchy is encoded in a convolutional, recurrent neural network (convRNN), such that for each pixel the model predicts three labels at different levels of granularity. This end-to-end trainable, hierarchical network architecture allows the model to learn joint feature representations of rare classes (e.g., apples, pears) at a coarser level (e.g., orchard), thereby boosting classification performance at the fine-grained level. Additionally, labelling at different granularity also makes it possible to adjust the output according to the classification scores, as coarser labels with high confidence are sometimes more useful for agricultural practice than fine-grained but very uncertain labels. We validate the proposed method on a new, large dataset that we make public. ZueriCrop covers an area of 50 km x 48 km in the Swiss cantons of Zurich and Thurgau with a total of 116'000 individual fields spanning 48 crop classes, and 28,000 (multi-temporal) image patches from Sentinel-2. We compare our proposed hierarchical convRNN model with several baselines, including methods designed for imbalanced class distributions. The hierarchical approach is superior by at least 9.9 percentage points in F1-score. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 220,583
2303.05214 | Taming Contrast Maximization for Learning Sequential, Low-latency,
Event-based Optical Flow | Event cameras have recently gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems. To unlock these solutions, it is necessary to develop algorithms that can leverage the unique nature of event data. However, the current state-of-the-art is still highly influenced by the frame-based literature, and usually fails to deliver on these promises. In this work, we take this into consideration and propose a novel self-supervised learning pipeline for the sequential estimation of event-based optical flow that allows for the scaling of the models to high inference frequencies. At its core, we have a continuously-running stateful neural model that is trained using a novel formulation of contrast maximization that makes it robust to nonlinearities and varying statistics in the input events. Results across multiple datasets confirm the effectiveness of our method, which establishes a new state of the art in terms of accuracy for approaches trained or optimized without ground truth. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 350,387 |
2106.05848 | Deep Probabilistic Time Series Forecasting using Augmented Recurrent
Input for Dynamic Systems | The demand for probabilistic time series forecasting has recently risen in various dynamic system scenarios, for example, system identification and prognostics and health management of machines. To this end, we combine advances in both deep generative models and state space models (SSM) to come up with a novel, data-driven deep probabilistic sequence model. Specifically, we follow the popular encoder-decoder generative structure to build a recurrent neural network (RNN) assisted variational sequence model on an augmented recurrent input space, which could induce rich stochastic sequence dependency. Besides, in order to alleviate the inconsistency issue of the posterior between training and predicting as well as improving the mining of dynamic patterns, we (i) propose using a lagged hybrid output as input for the posterior at the next time step, which brings training and predicting into alignment; and (ii) further devise a generalized auto-regressive strategy that encodes all the historical dependencies for the posterior. Thereafter, we first investigate the methodological characteristics of the proposed deep probabilistic sequence model on toy cases, and then comprehensively demonstrate the superiority of our model against existing deep probabilistic SSM models through extensive numerical experiments on eight system identification benchmarks from various dynamic systems. Finally, we apply our sequence model to a real-world centrifugal compressor forecasting problem, and again verify its outstanding performance by quantifying the time series predictive distribution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 240,252
1705.04658 | Inverse, forward and other dynamic computations computationally
optimized with sparse matrix factorizations | We propose an algorithm to compute the dynamics of articulated rigid-bodies with different sensor distributions. Prior to the on-line computations, the proposed algorithm performs an off-line optimisation step to simplify the computational complexity of the underlying solution. This optimisation step consists in formulating the dynamic computations as a system of linear equations. The computational complexity of computing the associated solution is reduced by performing a permuted LU-factorisation with off-line optimised permutations. We apply our algorithm to solve classical dynamic problems: inverse and forward dynamics. The computational complexity of the proposed solution is compared to `gold standard' algorithms: recursive Newton-Euler and articulated body algorithm. It is shown that our algorithm reduces the number of floating point operations with respect to previous approaches. We also evaluate the numerical complexity of our algorithm by performing tests on dynamic computations for which no gold standard is available. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 73,365 |
2203.07998 | Reinforcement Learning Framework for Server Placement and Workload
Allocation in Multi-Access Edge Computing | Cloud computing is a reliable solution to provide distributed computation power. However, real-time response is still challenging regarding the enormous amount of data generated by the IoT devices in 5G and 6G networks. Thus, multi-access edge computing (MEC), which consists of distributing edge servers in the proximity of end-users to achieve low latency alongside higher processing power, is increasingly becoming a vital factor for the success of modern applications. This paper addresses the problem of minimizing both the network delay, which is the main objective of MEC, and the number of edge servers to provide a MEC design with minimum cost. This MEC design consists of edge server placement and base station allocation, which makes it a joint combinatorial optimization problem (COP). Recently, reinforcement learning (RL) has shown promising results for COPs. However, modeling real-world problems using RL when the state and action spaces are large still needs investigation. We propose a novel RL framework with an efficient representation and modeling of the state space, action space and the penalty function in the design of the underlying Markov Decision Process (MDP) for solving our problem. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 285,637
1201.5338 | On Constrained Spectral Clustering and Its Applications | Constrained clustering has been well-studied for algorithms such as $K$-means and hierarchical clustering. However, how to satisfy many constraints in these algorithmic settings has been shown to be intractable. One alternative to encode many constraints is to use spectral clustering, which remains a developing area. In this paper, we propose a flexible framework for constrained spectral clustering. In contrast to some previous efforts that implicitly encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian or constraining the underlying eigenspace, we present a more natural and principled formulation, which explicitly encodes the constraints as part of a constrained optimization problem. Our method offers several practical advantages: it can encode the degree of belief in Must-Link and Cannot-Link constraints; it guarantees to lower-bound how well the given constraints are satisfied using a user-specified threshold; it can be solved deterministically in polynomial time through generalized eigendecomposition. Furthermore, by inheriting the objective function from spectral clustering and encoding the constraints explicitly, much of the existing analysis of unconstrained spectral clustering techniques remains valid for our formulation. We validate the effectiveness of our approach by empirical results on both artificial and real datasets. We also demonstrate an innovative use of encoding large number of constraints: transfer learning via constraints. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 13,954 |
2011.04437 | Interpretable collaborative data analysis on distributed data | This paper proposes an interpretable non-model sharing collaborative data analysis method as one of the federated learning systems, which is an emerging technology to analyze distributed data. Analyzing distributed data is essential in many applications such as medical, financial, and manufacturing data analyses due to privacy and confidentiality concerns. In addition, the interpretability of the obtained model plays an important role in practical applications of federated learning systems. By centralizing intermediate representations, which are individually constructed in each party, the proposed method obtains an interpretable model, achieving a collaborative analysis without revealing the individual data and learning model distributed over local parties. Numerical experiments indicate that the proposed method achieves better recognition performance for artificial and real-world problems than individual analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,582
2406.12289 | Stability of Data-Dependent Ridge-Regularization for Inverse Problems | Theoretical guarantees for the robust solution of inverse problems have important implications for applications. To achieve both guarantees and high reconstruction quality, we propose learning a pixel-based ridge regularizer with a data-dependent and spatially varying regularization strength. For this architecture, we establish the existence of solutions to the associated variational problem and the stability of its solution operator. Further, we prove that the reconstruction forms a maximum-a-posteriori approach. Simulations for biomedical imaging and material sciences demonstrate that the approach yields high-quality reconstructions even if only a small instance-specific training set is available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 465,323 |
1208.6247 | Solving Quadratic Equations via PhaseLift when There Are About As Many
Equations As Unknowns | This note shows that we can recover a complex vector x in C^n exactly from on the order of n quadratic equations of the form |<a_i, x>|^2 = b_i, i = 1, ..., m, by using a semidefinite program known as PhaseLift. This improves upon earlier bounds in [3], which required the number of equations to be at least on the order of n log n. We also demonstrate optimal recovery results from noisy quadratic measurements; these results are much sharper than previously known results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 18,312 |
2110.04684 | Can Audio Captions Be Evaluated with Image Caption Metrics? | Automated audio captioning aims at generating textual descriptions for an audio clip. To evaluate the quality of generated audio captions, previous works directly adopt image captioning metrics like SPICE and CIDEr, without justifying their suitability in this new domain, which may mislead the development of advanced models. This problem is still unstudied due to the lack of human judgment datasets on caption quality. Therefore, we firstly construct two evaluation benchmarks, AudioCaps-Eval and Clotho-Eval. They are established with pairwise comparison instead of absolute rating to achieve better inter-annotator agreement. Current metrics are found in poor correlation with human annotations on these datasets. To overcome their limitations, we propose a metric named FENSE, where we combine the strength of Sentence-BERT in capturing similarity, and a novel Error Detector to penalize erroneous sentences for robustness. On the newly established benchmarks, FENSE outperforms current metrics by 14-25% accuracy. Code, data and web demo available at: https://github.com/blmoistawinde/fense | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 259,995 |
2403.18922 | Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D | In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer or scene editing, enabled by large-scale 2D image datasets. At the same time, there has been renewed interest in 3D scene representations such as neural radiance fields from multi-view images. However, the availability of 3D or multiview data is still substantially limited compared to 2D image datasets, making extending 2D vision models to 3D data highly desirable but also very challenging. Indeed, extending a single 2D vision operator like scene editing to 3D typically requires a highly creative method specialized to that task and often requires per-scene optimization. In this paper, we ask the question of whether any 2D vision model can be lifted to make 3D consistent predictions. We answer this question in the affirmative; our new Lift3D method trains to predict unseen views on feature spaces generated by a few visual models (i.e. DINO and CLIP), but then generalizes to novel vision operators and tasks, such as style transfer, super-resolution, open vocabulary segmentation and image colorization; for some of these tasks, there is no comparable previous 3D method. In many cases, we even outperform state-of-the-art methods specialized for the task in question. Moreover, Lift3D is a zero-shot method, in the sense that it requires no task-specific training, nor scene-specific optimization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 442,128 |
2305.14000 | Node-wise Diffusion for Scalable Graph Learning | Graph Neural Networks (GNNs) have shown superior performance for semi-supervised learning of numerous web applications, such as classification on web services and pages, analysis of online social networks, and recommendation in e-commerce. The state of the art derives representations for all nodes in graphs following the same diffusion (message passing) model without discriminating their uniqueness. However, (i) labeled nodes involved in model training usually account for a small portion of graphs in the semi-supervised setting, and (ii) different nodes sit in different local graph contexts, and treating them indiscriminately in diffusion inevitably degrades representation quality. To address the above issues, we develop NDM, a universal node-wise diffusion model, to capture the unique characteristics of each node in diffusion, by which NDM is able to yield high-quality node representations. We then customize NDM for semi-supervised learning and design the NIGCN model. In particular, NIGCN advances efficiency significantly since it (i) produces representations for labeled nodes only and (ii) adopts well-designed neighbor sampling techniques tailored for node representation generation. Extensive experimental results on various types of web datasets, including citation, social and co-purchasing graphs, not only verify the state-of-the-art effectiveness of NIGCN but also strongly support its remarkable scalability. In particular, NIGCN completes representation generation and training within 10 seconds on a dataset with hundreds of millions of nodes and billions of edges, up to orders of magnitude faster than the baselines, while achieving the highest F1-scores on classification. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 366,803 |
2206.07365 | Modern Machine-Learning Predictive Models for Diagnosing Infectious
Diseases | Controlling infectious diseases is a major health priority because they can spread and infect humans, thus evolving into epidemics or pandemics. Early detection of infectious diseases is therefore a significant need, and many researchers have developed models to diagnose them in the early stages. This paper reviews research articles on recent machine-learning (ML) algorithms applied to infectious disease diagnosis. We searched the Web of Science, ScienceDirect, PubMed, Springer, and IEEE databases from 2015 to 2022, identified the pros and cons of the reviewed ML models, and discussed possible recommendations to advance the studies in this field. We found that most of the articles used small datasets, and few of them used real-time data. Our results demonstrate that a suitable ML technique depends on the nature of the dataset and the desired goal. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,716 |
2305.14269 | Source-Free Domain Adaptation for RGB-D Semantic Segmentation with
Vision Transformers | With the increasing availability of depth sensors, multimodal frameworks that combine color information with depth data are gaining interest. However, ground truth data for semantic segmentation is burdensome to provide, thus making domain adaptation a significant research area. Yet most domain adaptation methods are not able to effectively handle multimodal data. Specifically, we address the challenging source-free domain adaptation setting where the adaptation is performed without reusing source data. We propose MISFIT: MultImodal Source-Free Information fusion Transformer, a depth-aware framework which injects depth data into a segmentation module based on vision transformers at multiple stages, namely at the input, feature and output levels. Color and depth style transfer helps early-stage domain alignment while re-wiring self-attention between modalities creates mixed features, allowing the extraction of better semantic content. Furthermore, a depth-based entropy minimization strategy is also proposed to adaptively weight regions at different distances. Our framework, which is also the first approach using RGB-D vision transformers for source-free semantic segmentation, shows noticeable performance improvements with respect to standard strategies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 366,940 |
2405.15124 | Scaling Law for Time Series Forecasting | A scaling law that rewards large datasets, complex models, and enhanced data granularity has been observed in various fields of deep learning. Yet studies on time series forecasting have cast doubt on the scaling behavior of deep learning methods in this domain: while more training data improves performance, more capable models do not always outperform less capable ones, and longer input horizons may hurt performance for some models. We propose a theory of scaling laws for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect unexplored in previous theories. Furthermore, we empirically evaluate various models using a diverse set of time series forecasting datasets, which (1) verifies the validity of the scaling law with respect to dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of the look-back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting in future work. Code for our experiments has been made public at https://github.com/JingzheShi/ScalingLawForTimeSeriesForecasting. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 456,758 |
2110.15639 | Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition | Gesture recognition is increasingly popular due to its many applications in human-machine interaction. Existing multi-modal gesture recognition systems take multi-modal data as input to improve accuracy, but such methods require more modality sensors, which greatly limits their application scenarios. We therefore propose an end-to-end multi-task learning framework for training 2D convolutional neural networks. The framework can use the depth modality to improve accuracy during training and save costs by using only the RGB modality during inference. Our framework is trained to learn a representation for two tasks: gesture segmentation and gesture recognition. The depth modality contains prior information on the location of the gesture and can therefore serve as supervision for gesture segmentation. A plug-and-play module named Multi-Scale-Decoder (MSD) is designed to realize gesture segmentation; it contains two sub-decoders, used in the lower and higher stages respectively, which help the network attend to key target areas, ignore irrelevant information, and extract more discriminative features. The MSD module and depth modality are used only in the training stage to improve gesture recognition performance; only the RGB modality and the network without MSD are required during inference. Experimental results on three public gesture recognition datasets show that our proposed method provides superior performance compared with existing gesture recognition frameworks. Moreover, using the proposed plug-and-play MSD in other 2D CNN-based frameworks also yields an excellent accuracy improvement. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 263,942 |
2406.05726 | Region of Interest Loss for Anonymizing Learned Image Compression | The use of AI in public spaces continually raises concerns about privacy and the protection of sensitive data. An example is the deployment of detection and recognition methods on humans, where images are provided by surveillance cameras. This results in the acquisition of large amounts of sensitive data, since images captured by such cameras are transmitted unaltered to a server on the network. However, many applications do not explicitly require the identity of a given person in a scene; an anonymized representation that conveys the person's position while preserving their context in the scene suffices. We show how using a customized loss function on regions of interest (ROI) can achieve sufficient anonymization, such that human faces become unrecognizable while persons remain detectable, by training an end-to-end optimized autoencoder for learned image compression that utilizes the flexibility of the learned analysis and reconstruction transforms to mutate parts of the compression result. This approach enables compression and anonymization in one step on the capture device, instead of transmitting sensitive, non-anonymized data over the network. Additionally, we evaluate how this anonymization impacts the average precision of pre-trained foundation models on detecting faces (MTCNN) and humans (YOLOv8) in comparison to non-ANN-based methods, while considering compression rate and latency. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 462,267 |
2404.04996 | Fantastic Animals and Where to Find Them: Segment Any Marine Animal with
Dual SAM | As an important pillar of underwater intelligence, Marine Animal Segmentation (MAS) involves segmenting animals within marine environments. Previous methods do not excel at extracting long-range contextual features and overlook the connectivity between discrete pixels. Recently, the Segment Anything Model (SAM) has offered a universal framework for general segmentation tasks. Unfortunately, since it was trained on natural images, SAM lacks prior knowledge of marine images. In addition, SAM's single-position prompt is insufficient for prior guidance. To address these issues, we propose a novel feature learning framework, named Dual-SAM, for high-performance MAS. To this end, we first introduce a dual structure with SAM's paradigm to enhance feature learning on marine images. Then, we propose a Multi-level Coupled Prompt (MCP) strategy to incorporate comprehensive underwater prior information and enhance the multi-level features of SAM's encoder with adapters. Subsequently, we design a Dilated Fusion Attention Module (DFAM) to progressively integrate multi-level features from SAM's encoder. Finally, instead of directly predicting the masks of marine animals, we propose a Criss-Cross Connectivity Prediction (C$^3$P) paradigm to capture the inter-connectivity between discrete pixels. With dual decoders, it generates pseudo-labels and achieves mutual supervision for complementary feature representations, resulting in considerable improvements over previous techniques. Extensive experiments verify that our proposed method achieves state-of-the-art performance on five widely-used MAS datasets. The code is available at https://github.com/Drchip61/Dual_SAM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 444,895 |
2312.07254 | The GUA-Speech System Description for CNVSRC Challenge 2023 | This study describes our system for Task 1 Single-speaker Visual Speech Recognition (VSR) fixed track in the Chinese Continuous Visual Speech Recognition Challenge (CNVSRC) 2023. Specifically, we use intermediate connectionist temporal classification (Inter CTC) residual modules to relax the conditional independence assumption of CTC in our model. Then we use a bi-transformer decoder to enable the model to capture both past and future contextual information. In addition, we use Chinese characters as the modeling units to improve the recognition accuracy of our model. Finally, we use a recurrent neural network language model (RNNLM) for shallow fusion in the inference stage. Experiments show that our system achieves a character error rate (CER) of 38.09% on the Eval set which reaches a relative CER reduction of 21.63% over the official baseline, and obtains a second place in the challenge. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 414,854 |
2308.11406 | Designing an attack-defense game: how to increase robustness of
financial transaction models via a competition | Banks routinely use neural networks to make decisions. While these models offer higher accuracy, they are susceptible to adversarial attacks, a risk often overlooked in the context of event sequences, particularly sequences of financial transactions, as most works consider computer vision and NLP modalities. We propose a thorough approach to studying these risks: a novel type of competition that allows a realistic and detailed investigation of problems in financial transaction data. The participants directly oppose each other, proposing attacks and defenses -- so they are examined in close-to-real-life conditions. The paper outlines our unique competition structure with direct opposition of participants, presents results for several different top submissions, and analyzes the competition results. We also introduce a new open dataset featuring financial transactions with credit default labels, enhancing the scope for practical research and development. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 387,124 |
2403.14821 | Learning Gaussian Representation for Eye Fixation Prediction | Existing eye fixation prediction methods perform the mapping from input images to the corresponding dense fixation maps generated from raw fixation points. However, due to the stochastic nature of human fixation, the generated dense fixation maps may be a less-than-ideal representation of human fixation. To provide a robust fixation model, we introduce Gaussian Representation for eye fixation modeling. Specifically, we propose to model the eye fixation map as a mixture of probability distributions, namely a Gaussian Mixture Model. In this new representation, we use several Gaussian distribution components as an alternative to the provided fixation map, which makes the model more robust to the randomness of fixation. Meanwhile, we design our framework upon some lightweight backbones to achieve real-time fixation prediction. Experimental results on three public fixation prediction datasets (SALICON, MIT1003, TORONTO) demonstrate that our method is fast and effective. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 440,267 |
2410.05227 | The Dawn of Video Generation: Preliminary Explorations with SORA-like
Models | High-quality video generation, encompassing text-to-video (T2V), image-to-video (I2V), and video-to-video (V2V) generation, holds considerable significance both for content creation, helping anyone express their inherent creativity in new ways, and for world simulation, modeling and understanding the world. Models like SORA have advanced the generation of videos with higher resolution, more natural motion, better vision-language alignment, and increased controllability, particularly for long video sequences. These improvements have been driven by the evolution of model architectures, shifting from UNet to more scalable and parameter-rich DiT models, along with large-scale data expansion and refined training strategies. However, despite the emergence of DiT-based closed-source and open-source models, a comprehensive investigation into their capabilities and limitations remains lacking. Furthermore, the rapid pace of development has made it challenging for recent benchmarks to fully cover SORA-like models and recognize their significant advancements. Additionally, evaluation metrics often fail to align with human preferences. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 495,621 |
2407.12216 | Mindful-RAG: A Study of Points of Failure in Retrieval Augmented
Generation | Large Language Models (LLMs) are proficient at generating coherent and contextually relevant text but face challenges when addressing knowledge-intensive queries in domain-specific and factual question-answering tasks. Retrieval-augmented generation (RAG) systems mitigate this by incorporating external knowledge sources, such as structured knowledge graphs (KGs). However, LLMs often struggle to produce accurate answers despite access to KG-extracted information containing necessary facts. Our study investigates this dilemma by analyzing error patterns in existing KG-based RAG methods and identifying eight critical failure points. We observed that these errors predominantly occur due to insufficient focus on discerning the question's intent and adequately gathering relevant context from the knowledge graph facts. Drawing on this analysis, we propose the Mindful-RAG approach, a framework designed for intent-based and contextually aligned knowledge retrieval. This method explicitly targets the identified failures and offers improvements in the correctness and relevance of responses provided by LLMs, representing a significant step forward from existing methods. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 473,816 |
2206.02348 | Finite-Sample Maximum Likelihood Estimation of Location | We consider 1-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$. For fixed $f$ the maximum-likelihood estimate (MLE) is well-known to be optimal in the limit as $n \to \infty$: it is asymptotically normal with variance matching the Cram\'er-Rao lower bound of $\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of $f$. However, this bound does not hold for finite $n$, or when $f$ varies with $n$. We show for arbitrary $f$ and $n$ that one can recover a similar theory based on the Fisher information of a smoothed version of $f$, where the smoothing radius decays with $n$. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 300,858 |
2304.11763 | The Case for Hierarchical Deep Learning Inference at the Network Edge | Resource-constrained Edge Devices (EDs), e.g., IoT sensors and microcontroller units, are expected to make intelligent decisions using Deep Learning (DL) inference at the edge of the network. Toward this end, there is a significant research effort in developing tinyML models - Deep Learning (DL) models with reduced computation and memory storage requirements - that can be embedded on these devices. However, tinyML models have lower inference accuracy. On a different front, DNN partitioning and inference offloading techniques were studied for distributed DL inference between EDs and Edge Servers (ESs). In this paper, we explore Hierarchical Inference (HI), a novel approach proposed by Vishnu et al. 2023, arXiv:2304.00891v1 , for performing distributed DL inference at the edge. Under HI, for each data sample, an ED first uses a local algorithm (e.g., a tinyML model) for inference. Depending on the application, if the inference provided by the local algorithm is incorrect or further assistance is required from large DL models on edge or cloud, only then the ED offloads the data sample. At the outset, HI seems infeasible as the ED, in general, cannot know if the local inference is sufficient or not. Nevertheless, we present the feasibility of implementing HI for machine fault detection and image classification applications. We demonstrate its benefits using quantitative analysis and argue that using HI will result in low latency, bandwidth savings, and energy savings in edge AI systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 359,964 |
2108.09598 | SERF: Towards better training of deep neural networks using log-Softplus
ERror activation Function | Activation functions play a pivotal role in determining the training dynamics and neural network performance. The widely adopted activation function ReLU, despite being simple and effective, has a few disadvantages, including the Dying ReLU problem. To tackle such problems, we propose a novel activation function called Serf, which is self-regularized and non-monotonic in nature. Like Mish, Serf belongs to the Swish family of functions. Based on several experiments on computer vision (image classification and object detection) and natural language processing (machine translation, sentiment classification and multimodal entailment) tasks with different state-of-the-art architectures, we observe that Serf vastly outperforms ReLU (baseline) and other activation functions, including both Swish and Mish, with a markedly bigger margin on deeper architectures. Ablation studies further demonstrate that Serf-based architectures perform better than those of Swish and Mish in varying scenarios, validating the effectiveness and compatibility of Serf with varying depth, complexity, optimizers, learning rates, batch sizes, initializers and dropout rates. Finally, we investigate the mathematical relation between Swish and Serf, thereby showing the impact of the preconditioner function ingrained in the first derivative of Serf, which provides a regularization effect that makes gradients smoother and optimization faster. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 251,652 |
2203.08991 | AdapLeR: Speeding up Inference by Adaptive Length Reduction | Pre-trained language models have shown stellar performance in various downstream tasks. But, this usually comes at the cost of high latency and computation, hindering their usage in resource-limited settings. In this work, we propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance. Our method dynamically eliminates less contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost. To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method. Our experiments on several diverse classification tasks show speedups up to 22x during inference time without much sacrifice in performance. We also validate the quality of the selected tokens in our method using human annotations in the ERASER benchmark. In comparison to other widely used strategies for selecting important tokens, such as saliency and attention, our proposed method has a significantly lower false positive rate in generating rationales. Our code is freely available at https://github.com/amodaresi/AdapLeR . | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 285,983 |
2310.19256 | Online Data-Driven Safety Certification for Systems Subject to Unknown
Disturbances | Deploying autonomous systems in safety critical settings necessitates methods to verify their safety properties. This is challenging because real-world systems may be subject to disturbances that affect their performance, but are unknown a priori. This work develops a safety-verification strategy wherein data is collected online and incorporated into a reachability analysis approach to check in real-time that the system avoids dangerous regions of the state space. Specifically, we employ an optimization-based moving horizon estimator (MHE) to characterize the disturbance affecting the system, which is incorporated into an online reachability calculation. Reachable sets are calculated using a computational graph analysis tool to predict the possible future states of the system and verify that they satisfy safety constraints. We include theoretical arguments proving our approach generates reachable sets that bound the future states of the system, as well as numerical results demonstrating how it can be used for safety verification. Finally, we present results from hardware experiments demonstrating our approach's ability to perform online reachability calculations for an unmanned surface vehicle subject to currents and actuator failures. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 403,919 |
2306.14033 | Efficient and Scalable MIV-transistor with Extended Gate in Monolithic
3D Integration | Monolithic 3D integration has become a promising solution for future computing needs. The metal inter-layer via (MIV) forms interconnects between substrate layers in monolithic 3D integration. Despite the small size of the MIV, its area overhead can become a major limitation for efficient M3D integration and thus needs to be addressed. Previous works focused on utilizing the substrate area around the MIV to reduce this area overhead significantly, but suffer from increased leakage and poor scaling. In this work, we discuss an MIV-transistor realization that addresses both the leakage and scaling issues while achieving an area-overhead reduction similar to previous works, and thus can be utilized efficiently. Our simulation results suggest that the leakage current $(I_{D,leak})$ is reduced by $14K\times$ and the maximum current $(I_{D,max})$ is increased by $58\%$ for the proposed MIV-transistor compared with the previous implementation. In addition, the performance metrics of an inverter realized with our proposed MIV-transistor, specifically the delay, slew time and power consumption, are reduced by $11.6\%$, $17.9\%$ and $4.5\%$ respectively compared with the previous implementation, with the same MIV area-overhead reduction. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 375,506 |
2311.02922 | Truly Scale-Equivariant Deep Nets with Fourier Layers | In computer vision, models must be able to adapt to changes in image resolution to effectively carry out tasks such as image segmentation; This is known as scale-equivariance. Recent works have made progress in developing scale-equivariant convolutional neural networks, e.g., through weight-sharing and kernel resizing. However, these networks are not truly scale-equivariant in practice. Specifically, they do not consider anti-aliasing as they formulate the down-scaling operation in the continuous domain. To address this shortcoming, we directly formulate down-scaling in the discrete domain with consideration of anti-aliasing. We then propose a novel architecture based on Fourier layers to achieve truly scale-equivariant deep nets, i.e., absolute zero equivariance-error. Following prior works, we test this model on MNIST-scale and STL-10 datasets. Our proposed model achieves competitive classification performance while maintaining zero equivariance-error. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 405,647 |
2405.02693 | TV White Space and LTE Network Optimization towards Energy Efficiency in
Suburban and Rural Scenarios | The radio spectrum is a limited resource. Demand for wireless communication services is increasing exponentially, stressing the availability of radio spectrum to accommodate new services. TV White Space (TVWS) technologies allow a dynamic usage of the spectrum. These technologies provide wireless connectivity, in the channels of the Very High Frequency (VHF) and Ultra High Frequency (UHF) television broadcasting bands. In this paper, we investigate and compare the coverage range, network capacity, and network energy efficiency for TVWS technologies and LTE. We consider Ghent, Belgium and Boyeros, Havana, Cuba to evaluate a realistic outdoor suburban and rural area, respectively. The comparison shows that TVWS networks have an energy efficiency 9-12 times higher than LTE networks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 451,874 |
2407.15852 | BSH for Collision Detection in Point Cloud models | Point cloud models are a common shape representation for several reasons. Three-dimensional scanning devices are widely used nowadays, and points are an attractive primitive for rendering complex geometry. Nevertheless, there is not much literature on collision detection for point cloud models. This paper presents a novel collision detection algorithm for large point cloud models using voxels, octrees and bounding sphere hierarchies (BSH). The scene graph is divided into voxels. The objects of each voxel are organized into an octree. Due to the high number of points in the scene, each non-empty cell of the octree is organized into a bounding sphere hierarchy, based on an R-tree-like structure. The BSH hierarchies are used to group neighboring points and to very quickly filter out parts of objects that do not interact with other models. Points derived from laser-scanned data are typically not segmented and can have arbitrary spatial resolution, thus introducing computational and modeling issues. We address these issues, and our results show that the proposed collision detection algorithm effectively finds intersections between point cloud models, since it is able to reduce the number of bounding volume checks and updates. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 475,368 |
2012.05009 | A Gumbel-based Rating Prediction Framework for Imbalanced Recommendation | Rating prediction is a core problem in recommender systems to quantify users' preferences towards items; however, rating imbalance naturally roots in real-world user ratings, causing biased predictions and poor performance on tail ratings. While existing approaches in the rating prediction task deploy weighted cross-entropy to re-weight training samples, such approaches commonly assume a normal distribution, a symmetrical and balanced space. In contrast to the normal assumption, we propose a novel \underline{\emph{G}}umbel-based \underline{\emph{V}}ariational \underline{\emph{N}}etwork framework (GVN) to model rating imbalance and augment feature representations by the Gumbel distributions. First, we propose a Gumbel-based variational encoder to transform features into a non-normal vector space. Second, we deploy a multi-scale convolutional fusion network to integrate comprehensive views of users and items from the rating matrix and user reviews. Third, we adopt a skip connection module to personalize final rating predictions. We conduct extensive experiments on five datasets with both error- and ranking-based metrics. Experiments on ranking and regression evaluation tasks prove that the GVN can effectively achieve state-of-the-art performance across the datasets and reduce the biased predictions of tail ratings. We compare with various distributions (e.g., normal and Poisson) and demonstrate the effectiveness of Gumbel-based methods on class-imbalance modeling. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 210,650
2206.09065 | Free-form Lesion Synthesis Using a Partial Convolution Generative
Adversarial Network for Enhanced Deep Learning Liver Tumor Segmentation | Automatic deep learning segmentation models have been shown to improve both segmentation efficiency and accuracy. However, training a robust segmentation model requires considerably large labeled training samples, which may be impractical. This study aimed to develop a deep learning framework for generating synthetic lesions that can be used to enhance network training. The lesion synthesis network is a modified generative adversarial network (GAN). Specifically, we innovated a partial convolution strategy to construct an Unet-like generator. The discriminator is designed using Wasserstein GAN with gradient penalty and spectral normalization. A mask generation method based on principal component analysis was developed to model various lesion shapes. The generated masks are then converted into liver lesions through a lesion synthesis network. The lesion synthesis framework was evaluated for lesion textures, and the synthetic lesions were used to train a lesion segmentation network to further validate the effectiveness of this framework. All the networks are trained and tested on the public dataset from LITS. The synthetic lesions generated by the proposed approach have very similar histogram distributions compared to the real lesions for the two employed texture parameters, GLCM-energy and GLCM-correlation. The Kullback-Leibler divergence of GLCM-energy and GLCM-correlation were 0.01 and 0.10, respectively. Including the synthetic lesions in the tumor segmentation network improved the segmentation dice performance of U-Net significantly from 67.3% to 71.4% (p<0.05). Meanwhile, the volume precision and sensitivity improved from 74.6% to 76.0% (p=0.23) and 66.1% to 70.9% (p<0.01), respectively. The synthetic data significantly improves the segmentation performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 303,416
2009.02095 | SEANet: A Multi-modal Speech Enhancement Network | We explore the possibility of leveraging accelerometer data to perform speech enhancement in very noisy conditions. Although it is possible to only partially reconstruct the user's speech from the accelerometer, the latter provides a strong conditioning signal that is not influenced by noise sources in the environment. Based on this observation, we feed a multi-modal input to SEANet (Sound EnhAncement Network), a wave-to-wave fully convolutional model, which adopts a combination of feature losses and adversarial losses to reconstruct an enhanced version of the user's speech. We trained our model with data collected by sensors mounted on an earbud and synthetically corrupted by adding different kinds of noise sources to the audio signal. Our experimental results demonstrate that it is possible to achieve very high quality results, even in the case of interfering speech at the same level of loudness. A sample of the output produced by our model is available at https://google-research.github.io/seanet/multimodal/speech. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 194,466
2412.19839 | Multi-View Fusion Neural Network for Traffic Demand Prediction | The extraction of spatial-temporal features is a crucial research in transportation studies, and current studies typically use a unified temporal modeling mechanism and fixed spatial graph for this purpose. However, the fixed spatial graph restricts the extraction of spatial features for similar but not directly connected nodes, while the unified temporal modeling mechanism overlooks the heterogeneity of temporal variation of different nodes. To address these challenges, a multi-view fusion neural network (MVFN) approach is proposed. In this approach, spatial local features are extracted through the use of a graph convolutional network (GCN), and spatial global features are extracted using a cosine re-weighting linear attention mechanism (CLA). The GCN and CLA are combined to create a graph-cosine module (GCM) for the extraction of overall spatial features. Additionally, the multi-channel separable temporal convolutional network (MSTCN) makes use of a multi-channel temporal convolutional network (MTCN) at each layer to extract unified temporal features, and a separable temporal convolutional network (STCN) to extract independent temporal features. Finally, the spatial-temporal feature data is input into the prediction layer to obtain the final result. The model has been validated on two traffic demand datasets and achieved the best prediction accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 520,994 |
2410.13250 | Perceptions of Discriminatory Decisions of Artificial Intelligence:
Unpacking the Role of Individual Characteristics | This study investigates how personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) and demographic factors (age, education, and income) are associated with perceptions of artificial intelligence (AI) outcomes exhibiting gender and racial bias and with general attitudes towards AI. Analyses of a large-scale experiment dataset (N = 1,206) indicate that digital self-efficacy and technical knowledge are positively associated with attitudes toward AI, while liberal ideologies are negatively associated with outcome trust, higher negative emotion, and greater skepticism. Furthermore, age and income are closely connected to cognitive gaps in understanding discriminatory AI outcomes. These findings highlight the importance of promoting digital literacy skills and enhancing digital self-efficacy to maintain trust in AI and beliefs in AI usefulness and safety. The findings also suggest that the disparities in understanding problematic AI outcomes may be aligned with economic inequalities and generational gaps in society. Overall, this study sheds light on the socio-technological system in which complex interactions occur between social hierarchies, divisions, and machines that reflect and exacerbate the disparities. | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 499,445 |
2106.08970 | Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks
Trained from Scratch | As the curation of data for machine learning becomes increasingly automated, dataset tampering is a mounting threat. Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data. This vulnerability is then activated at inference time by placing a "trigger" into the model's input. Typical backdoor attacks insert the trigger directly into the training data, although the presence of such an attack may be visible upon inspection. In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all. However, this hidden trigger attack is ineffective at poisoning neural networks trained from scratch. We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process. Sleeper Agent is the first hidden trigger backdoor attack to be effective against neural networks trained from scratch. We demonstrate its effectiveness on ImageNet and in black-box settings. Our implementation code can be found at https://github.com/hsouri/Sleeper-Agent. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 241,500 |
1507.00095 | Secret Key Agreement with Large Antenna Arrays under the Pilot
Contamination Attack | We present a secret key agreement (SKA) protocol for a multi-user time-division duplex system where a base-station (BS) with a large antenna array (LAA) shares secret keys with users in the presence of non-colluding eavesdroppers. In the system, when the BS transmits random sequences to legitimate users for sharing common randomness, the eavesdroppers can attempt the pilot contamination attack (PCA) in which each of eavesdroppers transmits its target user's training sequence in hopes of acquiring possible information leak by steering beam towards the eavesdropper. We show that there exists a crucial complementary relation between the received signal strengths at the eavesdropper and its target user. This relation tells us that the eavesdropper inevitably leaves a trace that enables us to devise a way of measuring the amount of information leakage to the eavesdropper even if PCA parameters are unknown. To this end, we derive an estimator for the channel gain from the BS to the eavesdropper and propose a rate-adaptation scheme for adjusting the length of secret key under the PCA. Extensive analysis and evaluations are carried out under various setups, which show that the proposed scheme adequately takes advantage of the LAA to establish the secret keys under the PCA. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 44,716 |
2103.06256 | A registration error estimation framework for correlative imaging | Correlative imaging workflows are now widely used in bioimaging and aim to image the same sample using at least two different and complementary imaging modalities. Part of the workflow relies on finding the transformation linking a source image to a target image. We are specifically interested in the estimation of registration error in point-based registration. We propose an application of multivariate linear regression to solve the registration problem, allowing us to propose a framework for the estimation of the associated error in the case of rigid and affine transformations and with anisotropic noise. These developments can be used as a decision-support tool for the biologist to analyze multimodal correlative images and are available in Ec-CLEM, an open-source plugin for ICY. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 224,244
1611.07056 | The Recycling Gibbs Sampler for Efficient Learning | Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, employed to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to efficiently draw samples from the full-conditional probability density functions. Since in the general case this is not possible, in order to speed up the convergence of the chain, it is required to generate auxiliary samples whose information is eventually disregarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. This novel scheme arises naturally after pointing out the relationship between the standard Gibbs sampler and the chain rule used for sampling purposes. Numerical simulations involving simple and real inference problems confirm the excellent performance of the proposed scheme in terms of accuracy and computational efficiency. In particular, we give empirical evidence of performance in a toy example, inference of Gaussian processes hyperparameters, and learning dependence graphs through regression. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 64,298
2112.00985 | Evaluation of mathematical questioning strategies using data collected
through weak supervision | A large body of research demonstrates how teachers' questioning strategies can improve student learning outcomes. However, developing new scenarios is challenging because of the lack of training data for a specific scenario and the costs associated with labeling. This paper presents a high-fidelity, AI-based classroom simulator to help teachers rehearse research-based mathematical questioning skills. Using a human-in-the-loop approach, we collected a high-quality training dataset for a mathematical questioning scenario. Using recent advances in uncertainty quantification, we evaluated our conversational agent for usability and analyzed the practicality of incorporating a human-in-the-loop approach for data collection and system evaluation for a mathematical questioning scenario. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 269,324 |
2101.12459 | On $f$-divergences between Cauchy distributions | We prove that the $f$-divergences between univariate Cauchy distributions are all symmetric, and can be expressed as strictly increasing scalar functions of the symmetric chi-squared divergence. We report the corresponding scalar functions for the total variation distance, the Kullback-Leibler divergence, the squared Hellinger divergence, and the Jensen-Shannon divergence among others. Next, we give conditions to expand the $f$-divergences as converging infinite series of higher-order power chi divergences, and illustrate the criterion for converging Taylor series expressing the $f$-divergences between Cauchy distributions. We then show that the symmetric property of $f$-divergences holds for multivariate location-scale families with prescribed matrix scales provided that the standard density is even which includes the cases of the multivariate normal and Cauchy families. However, the $f$-divergences between multivariate Cauchy densities with different scale matrices are shown asymmetric. Finally, we present several metrizations of $f$-divergences between univariate Cauchy distributions and further report geometric embedding properties of the Kullback-Leibler divergence. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 217,570 |
2403.17428 | Aligning Large Language Models for Enhancing Psychiatric Interviews
Through Symptom Delineation and Summarization: Pilot Study | Background: Advancements in large language models (LLMs) have opened new possibilities in psychiatric interviews, an underexplored area where LLMs could be valuable. This study focuses on enhancing psychiatric interviews by analyzing counseling data from North Korean defectors who have experienced trauma and mental health issues. Objective: The study investigates whether LLMs can (1) identify parts of conversations that suggest psychiatric symptoms and recognize those symptoms, and (2) summarize stressors and symptoms based on interview transcripts. Methods: LLMs are tasked with (1) extracting stressors from transcripts, (2) identifying symptoms and their corresponding sections, and (3) generating interview summaries using the extracted data. The transcripts were labeled by mental health experts for training and evaluation. Results: In the zero-shot inference setting using GPT-4 Turbo, 73 out of 102 segments demonstrated a recall mid-token distance d < 20 in identifying symptom-related sections. For recognizing specific symptoms, fine-tuning outperformed zero-shot inference, achieving an accuracy, precision, recall, and F1-score of 0.82. For the generative summarization task, LLMs using symptom and stressor information scored highly on G-Eval metrics: coherence (4.66), consistency (4.73), fluency (2.16), and relevance (4.67). Retrieval-augmented generation showed no notable performance improvement. Conclusions: LLMs, with fine-tuning or appropriate prompting, demonstrated strong accuracy (over 0.8) for symptom delineation and achieved high coherence (4.6+) in summarization. This study highlights their potential to assist mental health practitioners in analyzing psychiatric interviews. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 441,465 |
quant-ph/0102108 | Quantum Kolmogorov Complexity Based on Classical Descriptions | We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper bounded and can be effectively approximated from above under certain conditions. With high probability a quantum object is incompressible. Upper- and lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on the no-cloning properties of quantum states. In the quantum situation complexity is not sub-additive. We discuss some relations with ``no-cloning'' and ``approximate cloning'' properties. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 540,863 |
2107.12765 | Resource Optimization with Interference Coupling in Multi-RIS-assisted
Multi-cell Systems | Deploying reconfigurable intelligent surface (RIS) to enhance wireless transmission is a promising approach. In this paper, we investigate large-scale multi-RIS-assisted multi-cell systems, where multiple RISs are deployed in each cell. Different from the full-buffer scenario, the mutual interference in our system is not known a priori, and for this reason we apply the load coupling model to analyze this system. The objective is to minimize the total resource consumption subject to user demand requirement by optimizing the reflection coefficients in the cells. The cells are highly coupled and the overall problem is non-convex. To tackle this, we first investigate the single-cell case with given interference, and propose a low-complexity algorithm based on the Majorization-Minimization method to obtain a locally optimal solution. Then, we embed this algorithm into an algorithmic framework for the overall multi-cell problem, and prove its feasibility and convergence to a solution that is at least locally optimal. Simulation results demonstrate the benefit of RIS in time-frequency resource utilization in the multi-cell system. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 248,001 |
2103.10619 | Scalable Vision Transformers with Hierarchical Pooling | The recently proposed Visual image Transformers (ViT) with pure attention have achieved promising performance on image recognition tasks, such as image classification. However, the routine of the current ViT model is to maintain a full-length patch sequence during inference, which is redundant and lacks hierarchical representation. To this end, we propose a Hierarchical Visual Transformer (HVT) which progressively pools visual tokens to shrink the sequence length and hence reduces the computational cost, analogous to the feature maps downsampling in Convolutional Neural Networks (CNNs). It brings a great benefit that we can increase the model capacity by scaling dimensions of depth/width/resolution/patch size without introducing extra computational complexity due to the reduced sequence length. Moreover, we empirically find that the average pooled visual tokens contain more discriminative information than the single class token. To demonstrate the improved scalability of our HVT, we conduct extensive experiments on the image classification task. With comparable FLOPs, our HVT outperforms the competitive baselines on ImageNet and CIFAR-100 datasets. Code is available at https://github.com/MonashAI/HVT | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,513 |
1608.04996 | Open Problem: Approximate Planning of POMDPs in the class of Memoryless
Policies | Planning plays an important role in the broad class of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. Multi Armed Bandit for a static environment, or Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods of learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment inner state and MDP is not a suitable model. More complex models like Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where the partial observation is given to the agent, generally gives dramatic performance improvement, sometimes unbounded improvement, compared to MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or an approximate optimal stochastic memoryless policy for POMDP models. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 59,908 |
1901.00785 | A^2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes | Constructing of molecular structural models from Cryo-Electron Microscopy (Cryo-EM) density volumes is the critical last step of structure determination by Cryo-EM technologies. Methods have evolved from manual construction by structural biologists to perform 6D translation-rotation searching, which is extremely compute-intensive. In this paper, we propose a learning-based method and formulate this problem as a vision-inspired 3D detection and pose estimation task. We develop a deep learning framework for amino acid determination in a 3D Cryo-EM density volume. We also design a sequence-guided Monte Carlo Tree Search (MCTS) to thread over the candidate amino acids to form the molecular structure. This framework achieves 91% coverage on our newly proposed dataset and takes only a few minutes for a typical structure with a thousand amino acids. Our method is hundreds of times faster and several times more accurate than existing automated solutions without any human intervention. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 117,856 |
2105.01282 | Winter wheat yield prediction using convolutional neural networks from
environmental and phenological data | Crop yield forecasting depends on many interactive factors, including crop genotype, weather, soil, and management practices. This study analyzes the performance of machine learning and deep learning methods for winter wheat yield prediction using an extensive dataset of weather, soil, and crop phenology variables in 271 counties across Germany from 1999 to 2019. We proposed a Convolutional Neural Network (CNN) model, which uses a 1-dimensional convolution operation to capture the time dependencies of environmental variables. We used eight supervised machine learning models as baselines and evaluated their predictive performance using RMSE, MAE, and correlation coefficient metrics to benchmark the yield prediction results. Our findings suggested that nonlinear models such as the proposed CNN, Deep Neural Network (DNN), and XGBoost were more effective in understanding the relationship between the crop yield and input data compared to the linear models. Our proposed CNN model outperformed all other baseline models used for winter wheat yield prediction (7 to 14% lower RMSE, 3 to 15% lower MAE, and 4 to 50% higher correlation coefficient than the best performing baseline across test data). We aggregated soil moisture and meteorological features at the weekly resolution to address the seasonality of the data. We also moved beyond prediction and interpreted the outputs of our proposed CNN model using SHAP and force plots which provided key insights in explaining the yield prediction results (importance of variables by time). We found DUL, wind speed at week ten, and radiation amount at week seven as the most critical features in winter wheat yield prediction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 233,479 |
2009.13116 | Neural Baselines for Word Alignment | Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems, or to perform quality estimation. In most areas of natural language processing, neural network models nowadays constitute the preferred approach, a situation that might also apply to word alignment models. In this work, we study and comprehensively evaluate neural models for unsupervised word alignment for four language pairs, contrasting several variants of neural models. We show that in most settings, neural versions of the IBM-1 and hidden Markov models vastly outperform their discrete counterparts. We also analyze typical alignment errors of the baselines that our models overcome to illustrate the benefits (and the limitations) of these new models for morphologically rich languages. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 197,644
2502.04317 | Factorized Implicit Global Convolution for Automotive Computational
Fluid Dynamics Prediction | Computational Fluid Dynamics (CFD) is crucial for automotive design, requiring the analysis of large 3D point clouds to study how vehicle geometry affects pressure fields and drag forces. However, existing deep learning approaches for CFD struggle with the computational complexity of processing high-resolution 3D data. We propose Factorized Implicit Global Convolution (FIGConv), a novel architecture that efficiently solves CFD problems for very large 3D meshes with arbitrary input and output geometries. FIGConv achieves quadratic complexity $O(N^2)$, a significant improvement over existing 3D neural CFD models that require cubic complexity $O(N^3)$. Our approach combines Factorized Implicit Grids to approximate high-resolution domains, efficient global convolutions through 2D reparameterization, and a U-shaped architecture for effective information gathering and integration. We validate our approach on the industry-standard Ahmed body dataset and the large-scale DrivAerNet dataset. In DrivAerNet, our model achieves an $R^2$ value of 0.95 for drag prediction, outperforming the previous state-of-the-art by a significant margin. This represents a 40% improvement in relative mean squared error and a 70% improvement in absolute mean squared error over previous methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,074 |
1910.10032 | GPU-Accelerated Viterbi Exact Lattice Decoder for Batched Online and
Offline Speech Recognition | We present an optimized weighted finite-state transducer (WFST) decoder capable of online streaming and offline batch processing of audio using Graphics Processing Units (GPUs). The decoder is efficient in memory utilization, input/output (I/O) bandwidth, and uses a novel Viterbi implementation designed to maximize parallelism. The reduced memory footprint allows the decoder to process significantly larger graphs than previously possible, while optimizing I/O increases the number of simultaneous streams supported. GPU preprocessing of lattice segments enables intermediate lattice results to be returned to the requestor during streaming inference. Collectively, the proposed algorithm yields up to a 240x speedup over single core CPU decoding, and up to 40x faster decoding than the current state-of-the-art GPU decoder, while returning equivalent results. This decoder design enables deployment of production-grade ASR models on a large spectrum of systems, ranging from large data center servers to low-power edge devices. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 150,370 |
2404.08540 | On the Robustness of Language Guidance for Low-Level Vision Tasks:
Findings from Depth Estimation | Recent advances in monocular depth estimation have been made by incorporating natural language as additional guidance. Although yielding impressive results, the impact of the language prior, particularly in terms of generalization and robustness, remains unexplored. In this paper, we address this gap by quantifying the impact of this prior and introduce methods to benchmark its effectiveness across various settings. We generate "low-level" sentences that convey object-centric, three-dimensional spatial relationships, incorporate them as additional language priors and evaluate their downstream impact on depth estimation. Our key finding is that current language-guided depth estimators perform optimally only with scene-level descriptions and counter-intuitively fare worse with low-level descriptions. Despite leveraging additional data, these methods are not robust to directed adversarial attacks and decline in performance with an increase in distribution shift. Finally, to provide a foundation for future research, we identify points of failure and offer insights to better understand these shortcomings. With an increasing number of methods using language for depth estimation, our findings highlight the opportunities and pitfalls that require careful consideration for effective deployment in real-world settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,282
2111.07775 | Learning Representations for Pixel-based Control: What Matters and Why? | Learning representations for pixel-based control has garnered significant attention recently in reinforcement learning. A wide range of methods have been proposed to enable efficient learning, leading to sample complexities similar to those in the full state setting. However, moving beyond carefully curated pixel datasets (centered crop, appropriate lighting, clear background, etc.) remains challenging. In this paper, we adopt a more difficult setting, incorporating background distractors, as a first step towards addressing this challenge. We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning. We then analyze when and why previously proposed methods are likely to fail or reduce to the same performance as the baseline in this harder setting, and why we should think carefully about extending such methods beyond well-curated environments. Our results show that finer categorization of benchmarks on the basis of characteristics like density of reward, planning horizon of the problem, presence of task-irrelevant components, etc., is crucial in evaluating algorithms. Based on these observations, we propose different metrics to consider when evaluating an algorithm on benchmark tasks. We hope such a data-centric view can motivate researchers to rethink representation learning when investigating how to best apply RL to real-world tasks. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 266,476
2407.00188 | A Novel Labeled Human Voice Signal Dataset for Misbehavior Detection | Voice signal classification based on human behaviours involves analyzing various aspects of speech patterns and delivery styles. In this study, a real-time dataset collection is performed where participants are instructed to speak twelve psychology questions in two distinct manners: first, in a harsh voice, which is categorized as "misbehaved"; and second, in a polite manner, categorized as "normal". These classifications are crucial in understanding how different vocal behaviours affect the interpretation and classification of voice signals. This research highlights the significance of voice tone and delivery in automated machine-learning systems for voice analysis and recognition. This research contributes to the broader field of voice signal analysis by elucidating the impact of human behaviour on the perception and categorization of voice signals, thereby enhancing the development of more accurate and context-aware voice recognition technologies. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 468,760 |
1606.04761 | Probabilistic Interpretation for Correntropy with Complex Data | Recent studies have demonstrated that correntropy is an efficient tool for analyzing higher-order statistical moments in non-Gaussian noise environments. Although it has been used with complex data, some adaptations were then necessary without deriving a generic form so that similarities between complex random variables can be aggregated. This paper presents a novel probabilistic interpretation for correntropy using complex-valued data, called complex correntropy. An analytical recursive solution for the maximum complex correntropy criterion (MCCC) is introduced, based on the fixed-point solution. This technique is applied to a simple system identification case study, and the results demonstrate prominent advantages of the proposed cost function compared to the complex recursive least squares (RLS) algorithm. By using such a probabilistic interpretation, correntropy can be applied to solve several problems involving complex data in a more straightforward way. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 57,310
2112.10170 | Analysis of the HiSCORE Simulated Events in TAIGA Experiment Using Convolutional Neural Networks | TAIGA is a hybrid observatory for gamma-ray astronomy at high energies in the range from 10 TeV to several EeV. It consists of instruments such as TAIGA-IACT, TAIGA-HiSCORE, and others. TAIGA-HiSCORE, in particular, is an array of wide-angle timing Cherenkov light stations. TAIGA-HiSCORE data enable the reconstruction of air shower characteristics, such as air shower energy, arrival direction, and axis coordinates. In this report, we propose the use of convolutional neural networks for the task of determining air shower characteristics. We use Convolutional Neural Networks (CNN) to analyze HiSCORE events, treating them like images. For this, the times and amplitudes of events recorded at HiSCORE stations are used. The work discusses a simple convolutional neural network and its training. In addition, we present some preliminary results on the determination of air shower parameters such as the direction and position of the shower axis and the energy of the primary particle, and compare them with the results obtained by the traditional method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 272,367
1607.05962 | Indoor occupancy estimation from carbon dioxide concentration | This paper presents an indoor occupancy estimator with which we can estimate the number of real-time indoor occupants based on the carbon dioxide (CO2) measurement. The estimator is actually a dynamic model of the occupancy level. To identify the dynamic model, we propose the Feature Scaled Extreme Learning Machine (FS-ELM) algorithm, which is a variation of the standard Extreme Learning Machine (ELM) but is shown to perform better for the occupancy estimation problem. The measured CO2 concentration suffers from serious spikes. We find that pre-smoothing the CO2 data can greatly improve the estimation accuracy. In real applications, however, we cannot obtain the real-time globally smoothed CO2 data. We provide a way to use the locally smoothed CO2 data instead, which is real-time available. We introduce a new criterion, i.e. $x$-tolerance accuracy, to assess the occupancy estimator. The proposed occupancy estimator was tested in an office room with 24 cubicles and 11 open seats. The accuracy is up to 94 percent with a tolerance of four occupants. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 58,821 |
2006.14361 | An updated version of "Leader-following consensus for linear multi-agent systems via asynchronous sampled-data control," IEEE Transactions on Automatic Control, DOI:10.1109/TAC.2019.2948256 | In this article, we update the reference [14] in two aspects. First, we note that in order for the control law (12) in [14] to be equivalent to the control law (3) in [14], we need to assume that the samplings for all subsystems must be synchronous, i.e., we need to assume that $T_{i}=T$ for all $i=1,\cdots,N$. Second, we extend our results from periodic sampling to aperiodic sampling. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 184,197
1910.02442 | Joint Stereo Video Deblurring, Scene Flow Estimation and Moving Object Segmentation | Stereo videos for dynamic scenes often show unpleasant blurred effects due to camera motion and multiple moving objects with large depth variations. Given consecutive blurred stereo video frames, we aim to recover the latent clean images, estimate the 3D scene flow and segment the multiple moving objects. These three tasks have been previously addressed separately, which fails to exploit the internal connections among them and cannot achieve optimality. In this paper, we propose to jointly solve these three tasks in a unified framework by exploiting their intrinsic connections. To this end, we represent the dynamic scenes with a piece-wise planar model, which exploits the local structure of the scene and can express various dynamic scenes. Under our model, these three tasks are naturally connected and expressed as the parameter estimation of 3D scene structure and camera motion (structure and motion for the dynamic scenes). By exploiting the blur model constraint, the moving objects and the 3D scene structure, we reach an energy minimization formulation for joint deblurring, scene flow and segmentation. We evaluate our approach extensively on both synthetic datasets and publicly available real datasets with fast-moving objects, camera motion, uncontrolled lighting conditions and shadows. Experimental results demonstrate that our method achieves significant improvement in stereo video deblurring, scene flow estimation and moving object segmentation over state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 148,246
2501.14115 | Passivity-Based Robust Shape Control of a Cable-Driven Solar Sail Boom for the CABLESSail Concept | Solar sails provide a means of propulsion using solar radiation pressure, which offers the possibility of exciting new spacecraft capabilities. However, solar sails have attitude control challenges because of the significant disturbance torques that they encounter due to imperfections in the sail and its supporting structure, as well as limited actuation capabilities. The Cable-Actuated Bio-inspired Lightweight Elastic Solar Sail (CABLESSail) concept was previously proposed to overcome these challenges by controlling the shape of the sail through cable actuation. The structural flexibility of CABLESSail introduces control challenges, which necessitate the design of a robust feedback controller for this system. The purpose of the proposed research here is to design a robust controller to ensure precise and reliable control of CABLESSail's boom. Taking into account the system dynamics and the dynamic properties of the CABLESSail concept, a passivity-based proportional-derivative (PD) controller for a single boom on the CABLESSail system is designed. To reach the nonzero desired setpoints, a feedforward input is additionally applied to the control law and a time-varying feedforward input is used instead of the constant one to effectively track a time-varying desired boom tip deflection. This control law is assessed by numerical simulations and by tests using a smaller-scale prototype of Solar Cruiser. Both the simulation and the test results show that this PD control with the time-varying feedforward input robustly controls the flexible cable-actuated solar sail. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 526,984
2010.15303 | Automatic joint damage quantification using computer vision and deep learning | Joint raveled or spalled damage (henceforth called joint damage) can affect the safety and long-term performance of concrete pavements. It is important to assess and quantify the joint damage over time to assist in building action plans for maintenance, predicting maintenance costs, and maximizing the concrete pavement service life. A framework for the accurate, autonomous, and rapid quantification of joint damage with a low-cost camera is proposed using a computer vision technique with a deep learning (DL) algorithm. The DL model is trained on 263 images of sawcuts with joint damage. The trained DL model is used for pixel-wise color-masking joint damage in a series of query 2D images, which are used to reconstruct a 3D image using an open-source structure-from-motion algorithm. Another damage quantification algorithm using a color threshold is applied to detect and compute the surface area of the damage in the 3D reconstructed image. The effectiveness of the framework was validated by inspecting joint damage at four transverse contraction joints in Illinois, USA, including three acceptable joints and one unacceptable joint by visual inspection. The results show the framework achieves 76% recall and 10% error. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 203,725
1105.0319 | The Arbitrarily Varying Multiple-Access Channel with Conferencing Encoders | We derive the capacity region of arbitrarily varying multiple-access channels with conferencing encoders for both deterministic and random coding. For a complete description it is sufficient that one conferencing capacity is positive. We obtain a dichotomy: either the channel's deterministic capacity region is zero or it equals the two-dimensional random coding region. We determine exactly when either case holds. We also discuss the benefits of conferencing. We give the example of an AV-MAC which does not achieve any non-zero rate pair without encoder cooperation, but the two-dimensional random coding capacity region if conferencing is possible. Unlike compound multiple-access channels, arbitrarily varying multiple-access channels may exhibit a discontinuous increase of the capacity region when conferencing in at least one direction is enabled. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 10,211