id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2101.00401 | Border basis computation with gradient-weighted normalization | Normalization of polynomials plays a vital role in the approximate basis computation of vanishing ideals. Coefficient normalization, which normalizes a polynomial with its coefficient norm, is the most common method in computer algebra. This study proposes the gradient-weighted normalization method for the approximate border basis computation of vanishing ideals, inspired by recent developments in machine learning. The data-dependent nature of gradient-weighted normalization leads to better stability against perturbation and consistency in the scaling of input points, which cannot be attained by coefficient normalization. Only a subtle change is needed to introduce gradient-weighted normalization in the existing algorithms with coefficient normalization. The analysis of algorithms still works with a small modification, and the order of magnitude of time complexity of algorithms remains unchanged. We also prove that, with coefficient normalization, which does not provide the scaling consistency property, scaling of points (e.g., as a preprocessing step) can cause an approximate basis computation to fail. This study is the first to theoretically highlight the crucial effect of scaling in approximate basis computation and presents the utility of data-dependent normalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 214,075 |
2412.08680 | Distinguishing Scams and Fraud with Ensemble Learning | Users increasingly query LLM-enabled web chatbots for help with scam defense. The Consumer Financial Protection Bureau's complaints database is a rich data source for evaluating LLM performance on user scam queries, but currently the corpus does not distinguish between scam and non-scam fraud. We developed an LLM ensemble approach to distinguishing scam and fraud CFPB complaints and describe initial findings regarding the strengths and weaknesses of LLMs in the scam defense context. | true | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 516,198 |
2006.00894 | Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving | Deep Neural Networks (DNNs) are rapidly being adopted by the automotive industry, due to their impressive performance in tasks that are essential for autonomous driving. Object segmentation is one such task: its aim is to precisely locate boundaries of objects and classify the identified objects, helping autonomous cars to recognise the road environment and the traffic situation. Not only is this task safety critical, but developing a DNN based object segmentation module presents a set of challenges that are significantly different from traditional development of safety critical software. The development process in use consists of multiple iterations of data collection, labelling, training, and evaluation. Among these stages, training and evaluation are computation intensive while data collection and labelling are manual labour intensive. This paper shows how development of DNN based object segmentation can be improved by exploiting the correlation between Surprise Adequacy (SA) and model performance. The correlation allows us to predict model performance for inputs without manually labelling them. This, in turn, enables understanding of model performance, more guided data collection, and informed decisions about further training. In our industrial case study the technique allows cost savings of up to 50% with negligible evaluation inaccuracy. Furthermore, engineers can trade off cost savings versus the tolerable level of inaccuracy depending on different development phases and scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 179,605 |
2502.05491 | Lie-algebra Adaptive Tracking Control for Rigid Body Dynamics | Adaptive tracking control for rigid body dynamics is of critical importance in control and robotics, particularly for addressing uncertainties or variations in system model parameters. However, most existing adaptive control methods are designed for systems with states in vector spaces, often neglecting the manifold constraints inherent to robotic systems. In this work, we propose a novel Lie-algebra-based adaptive control method that leverages the intrinsic relationship between the special Euclidean group and its associated Lie algebra. By transforming the state space from the group manifold to a vector space, we derive a linear error dynamics model that decouples model parameters from the system state. This formulation enables the development of an adaptive optimal control method that is both geometrically consistent and computationally efficient. Extensive simulations demonstrate the effectiveness and efficiency of the proposed method. We have made our source code publicly available to the community to support further research and collaboration. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 531,638 |
1803.02914 | Translating Questions into Answers using DBPedia n-triples | In this paper we present a question answering system using a neural network to interpret questions learned from the DBpedia repository. We train a sequence-to-sequence neural network model with n-triples extracted from the DBpedia Infobox Properties. Since these properties do not represent natural language, we further used question-answer dialogues from movie subtitles. Although the automatic evaluation shows a low overlap of the generated answers compared to the gold standard set, a manual inspection of the results showed promising outcomes and motivates further work. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 92,154 |
2304.07646 | Herder Ants: Ant Colony Optimization with Aphids for Discrete Event-Triggered Dynamic Optimization Problems | Currently available dynamic optimization strategies for the Ant Colony Optimization (ACO) algorithm offer a trade-off of slower algorithm convergence or a significant penalty to solution quality after each dynamic change occurs. This paper proposes a discrete dynamic optimization strategy called Ant Colony Optimization (ACO) with Aphids, modelled after a real-world symbiotic relationship between ants and aphids. The ACO with Aphids strategy is designed to improve the solution quality of discrete domain Dynamic Optimization Problems (DOPs) with event-triggered discrete dynamism. The proposed strategy aims to improve the inter-state convergence rate throughout the entire dynamic optimization. It does so by minimizing the fitness penalty and maximizing the convergence speed that occurs after the dynamic change. This strategy is tested against the Full-Restart and Pheromone-Sharing strategies implemented on the same ACO core algorithm solving Dynamic Multidimensional Knapsack Problem (DMKP) benchmarks. ACO with Aphids has demonstrated superior performance over the Pheromone-Sharing strategy in every test, with the average gap reduced by 29.2%. ACO with Aphids has also outperformed the Full-Restart strategy for large dataset groups, with the overall average gap reduced by 52.5%. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 358,423 |
2107.14549 | Evaluating the COVID-19 Identification ResNet (CIdeR) on the INTERSPEECH COVID-19 from Audio Challenges | We report on cross-running the recent COVID-19 Identification ResNet (CIdeR) on the two Interspeech 2021 COVID-19 diagnosis from cough and speech audio challenges: ComParE and DiCOVA. CIdeR is an end-to-end deep learning neural network originally designed to classify whether an individual is COVID-positive or COVID-negative based on coughing and breathing audio recordings from a published crowdsourced dataset. In the current study, we demonstrate the potential of CIdeR at binary COVID-19 diagnosis from both the COVID-19 Cough and Speech Sub-Challenges of INTERSPEECH 2021, ComParE and DiCOVA. CIdeR achieves significant improvements over several baselines. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,492 |
1802.07417 | Breaking the gridlock in Mixture-of-Experts: Consistent and Efficient Algorithms | Mixture-of-Experts (MoE) is a widely popular model for ensemble learning and is a basic building block of highly successful modern neural networks as well as a component in Gated Recurrent Units (GRU) and Attention networks. However, present algorithms for learning MoE, including the EM algorithm and gradient descent, are known to get stuck in local optima. From a theoretical viewpoint, finding an efficient and provably consistent algorithm to learn the parameters has remained a long-standing open problem for more than two decades. In this paper, we introduce the first algorithm that learns the true parameters of a MoE model for a wide class of non-linearities with global consistency guarantees. While existing algorithms jointly or iteratively estimate the expert parameters and the gating parameters in the MoE, we propose a novel algorithm that breaks the deadlock and can directly estimate the expert parameters by sensing its echo in a carefully designed cross-moment tensor between the inputs and the output. Once the experts are known, the recovery of gating parameters still requires an EM algorithm; however, we show that the EM algorithm for this simplified problem, unlike the joint EM algorithm, converges to the true parameters. We empirically validate our algorithm on both synthetic and real data sets in a variety of settings, and show superior performance to standard baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,888 |
2411.15844 | Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation | In domain adaptation, there are two popular paradigms: Unsupervised Domain Adaptation (UDA), which aligns distributions using source data, and Source-Free Domain Adaptation (SFDA), which leverages pre-trained source models without accessing source data. Evaluating the superiority of UDA versus SFDA is an open and timely question with significant implications for deploying adaptive algorithms in practical applications. In this study, we demonstrate through predictive coding theory and extensive experiments on multiple benchmark datasets that SFDA generally outperforms UDA in real-world scenarios. Specifically, SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting. Notably, SFDA is particularly effective in mitigating negative transfer when there are substantial distribution discrepancies between source and target domains. Additionally, we introduce a novel data-model fusion scenario, where data sharing among stakeholders varies (e.g., some provide raw data while others provide only models), and reveal that traditional UDA and SFDA methods do not fully exploit their potential in this context. To address this limitation and capitalize on the strengths of SFDA, we propose a novel weight estimation method that effectively integrates available source data into multi-SFDA (MSFDA) approaches, thereby enhancing model performance within this scenario. This work provides a thorough analysis of UDA versus SFDA and advances a practical approach to model adaptation across diverse real-world environments. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 510,798 |
2406.13626 | Fine-Tuning Gemma-7B for Enhanced Sentiment Analysis of Financial News Headlines | In this study, we explore the application of sentiment analysis on financial news headlines to understand investor sentiment. By leveraging Natural Language Processing (NLP) and Large Language Models (LLM), we analyze sentiment from the perspective of retail investors. The FinancialPhraseBank dataset, which contains categorized sentiments of financial news headlines, serves as the basis for our analysis. We fine-tuned several models, including distilbert-base-uncased, Llama, and gemma-7b, to evaluate their effectiveness in sentiment classification. Our experiments demonstrate that the fine-tuned gemma-7b model outperforms others, achieving the highest precision, recall, and F1 score. Specifically, the gemma-7b model showed significant improvements in accuracy after fine-tuning, indicating its robustness in capturing the nuances of financial sentiment. This model can be instrumental in providing market insights, risk management, and aiding investment decisions by accurately predicting the sentiment of financial news. The results highlight the potential of advanced LLMs in transforming how we analyze and interpret financial information, offering a powerful tool for stakeholders in the financial industry. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 465,924 |
2406.11463 | Just How Flexible are Neural Networks in Practice? | It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters, underpinning notions of overparameterized and underparameterized models. In practice, however, we only find solutions accessible via our training procedure, including the optimizer and regularizers, limiting flexibility. Moreover, the exact parameterization of the function class, built into an architecture, shapes its loss surface and impacts the minima we find. In this work, we examine the ability of neural networks to fit data in practice. Our findings indicate that: (1) standard optimizers find minima where the model can only fit training sets with significantly fewer samples than it has parameters; (2) convolutional networks are more parameter-efficient than MLPs and ViTs, even on randomly labeled data; (3) while stochastic training is thought to have a regularizing effect, SGD actually finds minima that fit more training data than full-batch gradient descent; (4) the difference in capacity to fit correctly labeled and incorrectly labeled samples can be predictive of generalization; (5) ReLU activation functions result in finding minima that fit more data despite being designed to avoid vanishing and exploding gradients in deep architectures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 464,904 |
1912.03437 | Early Prediction for Merged vs Abandoned Code Changes in Modern Code Reviews | The modern code review process is an integral part of current software development practice. Considerable effort is given here to inspect code changes, find defects, suggest improvements, and address the suggestions of the reviewers. In a code review process, usually several iterations take place where an author submits code changes and a reviewer gives feedback until the reviewer is happy to accept the change. In around 12% of cases, the changes are abandoned, eventually wasting all the effort. In this research, our objective is to design a tool that can predict whether a code change will be merged or abandoned at an early stage, to reduce the wasted effort of all stakeholders (e.g., program author, reviewer, project management, etc.) involved. The real-world demand for such a tool was formally identified by a study by Fan et al. [1]. We have mined 146,612 code changes from the code reviews of three large and popular open-source software projects and trained and tested a suite of supervised machine learning classifiers, both shallow and deep learning based. We consider a total of 25 features in each code change during the training and testing of the models. The best performing model, named PredCR (Predicting Code Review), a LightGBM-based classifier, achieves around 85% AUC score on average and relatively improves the state-of-the-art [1] by 14-23%. In our empirical study on the 146,612 code changes from the three software projects, we find that (1) the new features like reviewer dimensions that are introduced in PredCR are the most informative; (2) compared to the baseline, PredCR is more effective towards reducing bias against new developers; (3) PredCR uses historical data in the code review repository, and as such the performance of PredCR improves as a software system evolves with new and more data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 156,587 |
2305.18969 | MS-DETR: Natural Language Video Localization with Sampling Moment-Moment Interaction | Given a query, the task of Natural Language Video Localization (NLVL) is to localize a temporal moment in an untrimmed video that semantically matches the query. In this paper, we adopt a proposal-based solution that generates proposals (i.e., candidate moments) and then selects the best matching proposal. On top of modeling the cross-modal interaction between candidate moments and the query, our proposed Moment Sampling DETR (MS-DETR) enables efficient moment-moment relation modeling. The core idea is to sample a subset of moments guided by the learnable templates with an adopted DETR (DEtection TRansformer) framework. To achieve this, we design a multi-scale visual-linguistic encoder, and an anchor-guided moment decoder paired with a set of learnable templates. Experimental results on three public datasets demonstrate the superior performance of MS-DETR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 369,322 |
2310.20030 | Scaling Riemannian Diffusion Models | Riemannian diffusion models draw inspiration from standard Euclidean space diffusion models to learn distributions on general manifolds. Unfortunately, the additional geometric complexity renders the diffusion transition term inexpressible in closed form, so prior methods resort to imprecise approximations of the score matching training objective that degrade performance and preclude applications in high dimensions. In this work, we reexamine these approximations and propose several practical improvements. Our key observation is that most relevant manifolds are symmetric spaces, which are much more amenable to computation. By leveraging and combining various ansätze, we can quickly compute relevant quantities to high precision. On low dimensional datasets, our correction produces a noticeable improvement, allowing diffusion to compete with other methods. Additionally, we show that our method enables us to scale to high dimensional tasks on nontrivial manifolds. In particular, we model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,220 |
2308.15870 | Deontic Paradoxes in ASP with Weak Constraints | The rise of powerful AI technology for a range of applications that are sensitive to legal, social, and ethical norms demands decision-making support in presence of norms and regulations. Normative reasoning is the realm of deontic logics, that are challenged by well-known benchmark problems (deontic paradoxes), and lack efficient computational tools. In this paper, we use Answer Set Programming (ASP) for addressing these shortcomings and showcase how to encode and resolve several well-known deontic paradoxes utilizing weak constraints. By abstracting and generalizing this encoding, we present a methodology for translating normative systems in ASP with weak constraints. This methodology is applied to "ethical" versions of Pac-man, where we obtain a comparable performance with related works, but ethically preferable results. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | true | false | false | true | 388,826 |
2312.05248 | Topology-Based Reconstruction Prevention for Decentralised Learning | Decentralised learning has recently gained traction as an alternative to federated learning in which both data and coordination are distributed. To preserve the confidentiality of users' data, decentralised learning relies on differential privacy, multi-party computation, or both. However, running multiple privacy-preserving summations in sequence may allow adversaries to perform reconstruction attacks. Current reconstruction countermeasures either cannot trivially be adapted to the distributed setting, or add excessive amounts of noise. In this work, we first show that passive honest-but-curious adversaries can infer other users' private data after several privacy-preserving summations. For example, in subgraphs with 18 users, we show that only three passive honest-but-curious adversaries succeed at reconstructing private data 11.0% of the time, requiring an average of 8.8 summations per adversary. The success rate depends only on the adversaries' direct neighbourhood, and is independent of the size of the full network. We consider weak adversaries that do not control the graph topology, cannot exploit the summation's inner workings, and do not have auxiliary knowledge; and show that these adversaries can still infer private data. We analyse how reconstruction relates to topology and propose the first topology-based decentralised defence against reconstruction attacks. We show that reconstruction requires a number of adversaries linear in the length of the network's shortest cycle. Consequently, exact attacks over privacy-preserving summations are impossible in acyclic networks. Our work is a stepping stone for a formal theory of topology-based decentralised reconstruction defences. Such a theory would generalise our countermeasure beyond summation, define confidentiality in terms of entropy, and describe the interactions with (topology-aware) differential privacy. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 413,997 |
1912.02105 | Influence Maximization for Social Good: Use of Social Networks in Low Resource Communities | This thesis proposal makes the following technical contributions: (i) we provide a definition of the Dynamic Influence Maximization Under Uncertainty (or DIME) problem, which accurately models the problem faced by homeless shelters; (ii) we propose a novel Partially Observable Markov Decision Process (POMDP) model for solving the DIME problem; (iii) we design two scalable POMDP algorithms (PSINET and HEALER) for solving the DIME problem, since conventional POMDP solvers fail to scale up to sizes of interest; and (iv) we test our algorithms' effectiveness in the real world by conducting a pilot study with actual homeless youth in Los Angeles. The success of this pilot (as explained later) shows the promise of using influence maximization for social good on a larger scale. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 156,259 |
2312.06706 | UNeR3D: Versatile and Scalable 3D RGB Point Cloud Generation from 2D Images in Unsupervised Reconstruction | In the realm of 3D reconstruction from 2D images, a persisting challenge is to achieve high-precision reconstructions devoid of 3D Ground Truth data reliance. We present UNeR3D, a pioneering unsupervised methodology that sets a new standard for generating detailed 3D reconstructions solely from 2D views. Our model significantly cuts down the training costs tied to supervised approaches and introduces RGB coloration to 3D point clouds, enriching the visual experience. Employing an inverse distance weighting technique for color rendering, UNeR3D ensures seamless color transitions, enhancing visual fidelity. Our model's flexible architecture supports training with any number of views, and uniquely, it is not constrained by the number of views used during training when performing reconstructions. It can infer with an arbitrary count of views during inference, offering unparalleled versatility. Additionally, the model's continuous spatial input domain allows the generation of point clouds at any desired resolution, empowering the creation of high-resolution 3D RGB point clouds. We solidify the reconstruction process with a novel multi-view geometric loss and color loss, demonstrating that our model excels with single-view inputs and beyond, thus reshaping the paradigm of unsupervised learning in 3D vision. Our contributions signal a substantial leap forward in 3D vision, offering new horizons for content creation across diverse applications. Code is available at https://github.com/HongbinLin3589/UNeR3D. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 414,643 |
2409.01092 | Two-Timescale Synchronization and Migration for Digital Twin Networks: A Multi-Agent Deep Reinforcement Learning Approach | Digital twins (DTs) have emerged as a promising enabler for representing the real-time states of physical worlds and realizing self-sustaining systems. In practice, DTs of physical devices, such as mobile users (MUs), are commonly deployed in multi-access edge computing (MEC) networks for the sake of reducing latency. To ensure the accuracy and fidelity of DTs, it is essential for MUs to regularly synchronize their status with their DTs. However, MU mobility introduces significant challenges to DT synchronization. Firstly, MU mobility triggers DT migration which could cause synchronization failures. Secondly, MUs require frequent synchronization with their DTs to ensure DT fidelity. Nonetheless, DT migration among MEC servers, caused by MU mobility, may occur infrequently. Accordingly, we propose a two-timescale DT synchronization and migration framework with reliability consideration by establishing a non-convex stochastic problem to minimize the long-term average energy consumption of MUs. We use Lyapunov theory to convert the reliability constraints and reformulate the new problem as a partially observable Markov decision process (POMDP). Furthermore, we develop a heterogeneous agent proximal policy optimization with Beta distribution (Beta-HAPPO) method to solve it. Numerical results show that our proposed Beta-HAPPO method achieves significant improvements in energy savings when compared with other benchmarks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 485,208 |
1812.10033 | MMFNet: A Multi-modality MRI Fusion Network for Segmentation of Nasopharyngeal Carcinoma | Segmentation of nasopharyngeal carcinoma (NPC) from Magnetic Resonance Images (MRI) is a crucial prerequisite for NPC radiotherapy. However, manually segmenting NPC is time-consuming and labor-intensive. Additionally, single-modality MRI generally cannot provide enough information for its accurate delineation. Therefore, a multi-modality MRI fusion network (MMFNet) based on three modalities of MRI (T1, T2 and contrast-enhanced T1) is proposed to complete accurate segmentation of NPC. The backbone of MMFNet is designed as a multi-encoder-based network, consisting of several encoders to capture modality-specific features and one single decoder to fuse them and obtain high-level features for NPC segmentation. A fusion block is presented to effectively fuse features from multi-modality MRI. It first recalibrates low-level features captured from modality-specific encoders to highlight both informative features and regions of interest, then fuses the weighted features by a residual fusion block to keep a balance between the fused features and the high-level features from the decoder. Moreover, a training strategy named self-transfer, which utilizes pre-trained modality-specific encoders to initialize the multi-encoder-based network, is proposed to fully mine information from the different modalities of MRI. The proposed method based on multi-modality MRI can effectively segment NPC, and its advantages are validated by extensive experiments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 117,278 |
1602.04450 | Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics | Robotic algorithms typically depend on various parameters, the choice of which significantly affects the robot's performance. While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance. Optimization algorithms, such as Bayesian optimization, have been used to automate this process. However, these methods may evaluate unsafe parameters during the optimization process that lead to safety-critical system failures. Recently, a safe Bayesian optimization algorithm, called SafeOpt, has been developed, which guarantees that the performance of the system never falls below a critical value; that is, safety is defined based on the performance function. However, coupling performance and safety is often not desirable in robotics. For example, high-gain controllers might achieve low average tracking error (performance), but can overshoot and violate input constraints. In this paper, we present a generalized algorithm that allows for multiple safety constraints separate from the objective. Given an initial set of safe parameters, the algorithm maximizes performance but only evaluates parameters that satisfy safety for all constraints with high probability. To this end, it carefully explores the parameter space by exploiting regularity assumptions in terms of a Gaussian process prior. Moreover, we show how context variables can be used to safely transfer knowledge to new situations and tasks. We provide a theoretical analysis and demonstrate that the proposed algorithm enables fast, automatic, and safe optimization of tuning parameters in experiments on a quadrotor vehicle. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 52,136 |
2012.12599 | Dynamics of a Stratified Population of Optimum Seeking Agents on a Network -- Part I: Modeling and Convergence Analysis | In this work, we consider a population composed of a continuum of agents that seek to maximize a payoff function by moving on a network. The nodes in the network may represent physical locations or abstract choices. The population is stratified and hence agents opting for the same choice may not get the same payoff. In particular, we assume payoff functions that model diminishing returns, that is, agents in "newer" strata of a node receive a smaller payoff compared to "older" strata. In this first part of two-part work, we model the population dynamics under three choice revision policies, each having varying levels of coordination -- i. no coordination and the agents are selfish, ii. coordination among agents in each node and iii. coordination across the entire population. To model the case with selfish agents, we generalize the Smith dynamics to our setting, where we have a stratified population and network constraints. To model nodal coordination, we allow the fraction of population in a node, as a whole, to take the 'best response' to the state of the population in the node's neighborhood. For the case of population-wide coordination, we explore a dynamics where the population evolves according to centralized gradient ascent of the social utility, though constrained by the network. In each case, we show that the dynamics has existence and uniqueness of solutions and also show that the solutions from any initial condition asymptotically converge to the set of Nash equilibria. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 212,991 |
1107.2781 | Face Recognition using Curvelet Transform | Face recognition has been studied extensively for more than 20 years now. Since the beginning of the 90s the subject has become a major issue. This technology is used in many important real-world applications, such as video surveillance, smart cards, database security, internet and intranet access. This report reviews two recent algorithms for face recognition which take advantage of a relatively new multiscale geometric analysis tool - Curvelet transform, for facial processing and feature extraction. This transform proves to be efficient especially due to its good ability to detect curves and lines, which characterize the human face. An algorithm which is based on the two algorithms mentioned above is proposed, and its performance is evaluated on three databases of faces: AT&T (ORL), Essex Grimace and Georgia-Tech. k-nearest neighbour (k-NN) and Support vector machine (SVM) classifiers are used, along with Principal Component Analysis (PCA) for dimensionality reduction. This algorithm shows good results, and it even outperforms other algorithms in some cases. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 11,290
2011.01436 | Developing High Quality Training Samples for Deep Learning Based Local Climate Zone Classification in Korea | Two out of three people will be living in urban areas by 2050, as projected by the United Nations, emphasizing the need for sustainable urban development and monitoring. Common urban footprint data provide high-resolution city extents but lack essential information on the distribution, pattern, and characteristics. The Local Climate Zone (LCZ) offers an efficient and standardized framework that can delineate the internal structure and characteristics of urban areas. Global-scale LCZ mapping has been explored, but is limited by low accuracy, variable labeling quality, or domain adaptation challenges. Instead, this study developed custom LCZ data to map key Korean cities using a multi-scale convolutional neural network. Results demonstrated that using novel, custom LCZ data with deep learning can generate more accurate LCZ map results compared to conventional community-based LCZ mapping with machine learning as well as transfer learning of the global So2Sat dataset. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 204,581
2408.02253 | Advancing Post-OCR Correction: A Comparative Study of Synthetic Data | This paper explores the application of synthetic data in the post-OCR domain on multiple fronts by conducting experiments to assess the impact of data volume, augmentation, and synthetic data generation methods on model performance. Furthermore, we introduce a novel algorithm that leverages computer vision feature detection algorithms to calculate glyph similarity for constructing post-OCR synthetic data. Through experiments conducted across a variety of languages, including several low-resource ones, we demonstrate that models like ByT5 can significantly reduce Character Error Rates (CER) without the need for manually annotated data, and our proposed synthetic data generation method shows advantages over traditional methods, particularly in low-resource languages. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 478,564 |
2407.11236 | Toward RAPS: the Robot Autonomy Perception Scale | Human-robot interactions can change significantly depending on how autonomous humans perceive a robot to be. Yet, while previous work in the HRI community measured perceptions of human autonomy, there is little work on measuring perceptions of robot autonomy. In this paper, we present our progress toward the creation of the Robot Autonomy Perception Scale (RAPS): a theoretically motivated scale for measuring human perceptions of robot autonomy. We formulated a set of fifteen Likert scale items that are based on the definition of autonomy from Beer et al.'s work, which identifies five key autonomy components: ability to sense, ability to plan, ability to act, ability to act with an intent towards some goal, and an ability to do so without external control. We applied RAPS to an experimental context in which a robot communicated with a human teammate through different levels of Performative Autonomy (PA): an autonomy-driven strategy in which robots may "perform" a lower level of autonomy than they are truly capable of to increase human situational awareness. Our results present preliminary validation for RAPS by demonstrating its sensitivity to PA and motivate the further validation of RAPS. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 473,354 |
2407.19503 | Discrete Spectrum Analysis of Vector OFDM Signals | Vector OFDM (VOFDM) is equivalent to OTFS and is good for time-varying channels. However, due to its vector form, its signal spectrum is not as clear as that of the conventional OFDM. In this paper, we study the discrete spectrum of discrete VOFDM signals. We obtain a linear relationship between a vector of information symbols and a vector of the same size of components evenly distributed in the discrete VOFDM signal spectrum, and show that if a vector of information symbols is set to 0, then a corresponding vector of the same size of the discrete VOFDM signal spectrum is 0 as well, where the components of the 0 vector are not together but evenly distributed in the spectrum. With the linear relationship, the information symbol vectors can be locally precoded so that any of the discrete spectrum of VOFDM signals can be set to 0, similar to that of the conventional OFDM signals. These results are verified by simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 476,814 |
2308.04082 | Application-Oriented Benchmarking of Quantum Generative Learning Using QUARK | Benchmarking of quantum machine learning (QML) algorithms is challenging due to the complexity and variability of QML systems, e.g., regarding model ansatzes, data sets, training techniques, and hyper-parameters selection. The QUantum computing Application benchmaRK (QUARK) framework simplifies and standardizes benchmarking studies for quantum computing applications. Here, we propose several extensions of QUARK to include the ability to evaluate the training and deployment of quantum generative models. We describe the updated software architecture and illustrate its flexibility through several example applications: (1) We trained different quantum generative models using several circuit ansatzes, data sets, and data transformations. (2) We evaluated our models on GPU and real quantum hardware. (3) We assessed the generalization capabilities of our generative models using a broad set of metrics that capture, e.g., the novelty and validity of the generated data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 384,287
2111.04475 | Identifying the Leading Factors of Significant Weight Gains Using a New Rule Discovery Method | Overweight and obesity remain a major global public health concern and identifying the individualized patterns that increase the risk of future weight gains has a crucial role in preventing obesity and numerous subsequent diseases associated with obesity. In this work, we use a rule discovery method to study this problem, by presenting an approach that offers genuine interpretability and concurrently optimizes the accuracy (being correct often) and support (applying to many samples) of the identified patterns. Specifically, we extend an established subgroup-discovery method to generate the desired rules of type X -> Y and show how top features can be extracted from the X side, functioning as the best predictors of Y. In our obesity problem, X refers to the extracted features from very large and multi-site EHR data, and Y indicates significant weight gains. Using our method, we also extensively compare the differences and inequities in patterns across 22 strata determined by the individual's gender, age, race, insurance type, neighborhood type, and income level. Through an extensive series of experiments, we show new and complementary findings regarding the predictors of future dangerous weight gains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,496
1806.00600 | Semantic-Aware Generative Adversarial Nets for Unsupervised Domain Adaptation in Chest X-ray Segmentation | In spite of the compelling achievements that deep neural networks (DNNs) have made in medical image computing, these deep models often suffer from degraded performance when being applied to new test datasets with domain shift. In this paper, we present a novel unsupervised domain adaptation approach for segmentation tasks by designing semantic-aware generative adversarial networks (GANs). Specifically, we transform the test image into the appearance of source domain, with the semantic structural information being well preserved, which is achieved by imposing a nested adversarial learning in semantic label space. In this way, the segmentation DNN learned from the source domain is able to be directly generalized to the transformed test image, eliminating the need of training a new model for every new target dataset. Our domain adaptation procedure is unsupervised, without using any target domain labels. The adversarial learning of our network is guided by a GAN loss for mapping data distributions, a cycle-consistency loss for retaining pixel-level content, and a semantic-aware loss for enhancing structural information. We validated our method on two different chest X-ray public datasets for left/right lung segmentation. Experimental results show that the segmentation performance of our unsupervised approach is highly competitive with the upper bound of supervised transfer learning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 99,345
2204.10850 | Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation | We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing such as shape manipulation, or combining scenes is not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 292,940
2206.10996 | ProtoCLIP: Prototypical Contrastive Language Image Pretraining | Contrastive Language Image Pretraining (CLIP) has received widespread attention, since its learned representations can be transferred well to various downstream tasks. During the training process of the CLIP model, the InfoNCE objective aligns positive image-text pairs and separates negative ones. We show an underlying representation grouping effect during this process: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerged within-modal anchors. Based on this understanding, in this paper, Prototypical Contrastive Language Image Pretraining (ProtoCLIP) is introduced to enhance such grouping by boosting its efficiency and increasing its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between image and text spaces, which efficiently transfers higher-level structural knowledge. Further, Prototypical Back Translation (PBT) is proposed to decouple representation grouping from representation alignment, resulting in effective learning of meaningful representations under large modality gap. The PBT also enables us to introduce additional external teachers with richer prior language knowledge. ProtoCLIP is trained with an online episodic training strategy, which allows it to be scaled up to unlimited amounts of data. We train our ProtoCLIP on Conceptual Captions and achieve a +5.81% ImageNet linear probing improvement and a +2.01% ImageNet zero-shot classification improvement. On the larger YFCC-15M dataset, ProtoCLIP matches the performance of CLIP with 33% of training time. Codes are available at https://github.com/megvii-research/protoclip. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 304,114
2411.02126 | Unsupervised detection of semantic correlations in big data | In real-world data, information is stored in extremely large feature vectors. These variables are typically correlated due to complex interactions involving many features simultaneously. Such correlations qualitatively correspond to semantic roles and are naturally recognized by both the human brain and artificial neural networks. This recognition enables, for instance, the prediction of missing parts of an image or text based on their context. We present a method to detect these correlations in high-dimensional data represented as binary numbers. We estimate the binary intrinsic dimension of a dataset, which quantifies the minimum number of independent coordinates needed to describe the data, and is therefore a proxy of semantic complexity. The proposed algorithm is largely insensitive to the so-called curse of dimensionality, and can therefore be used in big data analysis. We test this approach identifying phase transitions in model magnetic systems and we then apply it to the detection of semantic correlations of images and text inside deep neural networks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 505,365 |
2208.14552 | Optimal possibly nonlinear 3-PIR codes of small size | First, we state a generalization of the minimum-distance bound for PIR codes. Then we describe a construction for linear PIR codes using packing designs and use it to construct some new 5-PIR codes. Finally, we show that no encoder (linear or nonlinear) for the binary $r$-th order Hamming code produces a 3-PIR code except when $r=2$. We use these results to determine the smallest length of a binary (possibly nonlinear) 3-PIR code of combinatorial dimension up to 6. A binary 3-PIR code of length 11 and size $2^7$ is necessarily nonlinear, and we pose the existence of such a code as an open problem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 315,347
1001.1221 | Boosting k-NN for categorization of natural scenes | The k-nearest neighbors (k-NN) classification rule has proven extremely successful in countless computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good properties, the classic k-NN rule suffers from high variance when dealing with sparse prototype datasets in high dimensions. A few techniques have been proposed to improve k-NN classification, which rely on either deforming the nearest neighborhood relationship or modifying the input space. In this paper, we propose a novel boosting algorithm, called UNN (Universal Nearest Neighbors), which induces leveraged k-NN, thus generalizing the classic k-NN rule. We redefine the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. Weak classifiers are learned by UNN so as to minimize a surrogate risk. A major feature of UNN is the ability to learn which prototypes are the most relevant for a given class, thus allowing for effective data reduction. Experimental results on the synthetic two-class dataset of Ripley show that such a filtering strategy is able to reject "noisy" prototypes. We carried out image categorization experiments on a database containing eight classes of natural scenes. We show that our method outperforms significantly the classic k-NN classification, while enabling significant reduction of the computational cost by means of data filtering. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 5,293
2004.10908 | Taskflow: A Lightweight Parallel and Heterogeneous Task Graph Computing System | Taskflow aims to streamline the building of parallel and heterogeneous applications using a lightweight task graph-based approach. Taskflow introduces an expressive task graph programming model to assist developers in the implementation of parallel and heterogeneous decomposition strategies on a heterogeneous computing platform. Our programming model distinguishes itself as a very general class of task graph parallelism with in-graph control flow to enable end-to-end parallel optimization. To support our model with high performance, we design an efficient system runtime that solves many of the new scheduling challenges arising out of our models and optimizes the performance across latency, energy efficiency, and throughput. We have demonstrated the promising performance of Taskflow in real-world applications. As an example, Taskflow solves a large-scale machine learning workload up to 29% faster, 1.5x less memory, and 1.9x higher throughput than the industrial system, oneTBB, on a machine of 40 CPUs and 4 GPUs. We have opened the source of Taskflow and deployed it to large numbers of users in the open-source community. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 173,753
2310.19642 | Consistent Query Answering for Primary Keys on Rooted Tree Queries | We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal subset of the database satisfying the primary key constraints. For a Boolean query q, the problem CERTAINTY(q) takes a database as input, and asks whether or not each repair satisfies q. The computational complexity of CERTAINTY(q) has been established whenever q is a self-join-free Boolean conjunctive query, or a (not necessarily self-join-free) Boolean path query. In this paper, we take one more step towards a general classification for all Boolean conjunctive queries by considering the class of rooted tree queries. In particular, we show that for every rooted tree query q, CERTAINTY(q) is in FO, NL-hard $\cap$ LFP, or coNP-complete, and it is decidable (in polynomial time), given q, which of the three cases applies. We also extend our classification to larger classes of queries with simple primary keys. Our classification criteria rely on query homomorphisms and our polynomial-time fixpoint algorithm is based on a novel use of context-free grammar (CFG). | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 404,067 |
2201.00785 | Implicit Autoencoder for Point-Cloud Self-Supervised Representation Learning | This paper advocates the use of implicit surface representation in autoencoder-based self-supervised 3D representation learning. The most popular and accessible 3D representation, i.e., point clouds, involves discrete samples of the underlying continuous 3D surface. This discretization process introduces sampling variations on the 3D shape, making it challenging to develop transferable knowledge of the true 3D geometry. In the standard autoencoding paradigm, the encoder is compelled to encode not only the 3D geometry but also information on the specific discrete sampling of the 3D shape into the latent code. This is because the point cloud reconstructed by the decoder is considered unacceptable unless there is a perfect mapping between the original and the reconstructed point clouds. This paper introduces the Implicit AutoEncoder (IAE), a simple yet effective method that addresses the sampling variation issue by replacing the commonly-used point-cloud decoder with an implicit decoder. The implicit decoder reconstructs a continuous representation of the 3D shape, independent of the imperfections in the discrete samples. Extensive experiments demonstrate that the proposed IAE achieves state-of-the-art performance across various self-supervised learning benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 274,068
2005.02762 | Recurrent neural networks and Koopman-based frameworks for temporal predictions in a low-order model of turbulence | The capabilities of recurrent neural networks and Koopman-based frameworks are assessed in the prediction of temporal dynamics of the low-order model of near-wall turbulence by Moehlis et al. (New J. Phys. 6, 56, 2004). Our results show that it is possible to obtain excellent reproductions of the long-term statistics and the dynamic behavior of the chaotic system with properly trained long-short-term memory (LSTM) networks, leading to relative errors in the mean and the fluctuations below $1\%$. Besides, a newly developed Koopman-based framework, called Koopman with nonlinear forcing (KNF), leads to the same level of accuracy in the statistics at a significantly lower computational expense. Furthermore, the KNF framework outperforms the LSTM network when it comes to short-term predictions. We also observe that using a loss function based only on the instantaneous predictions of the chaotic system can lead to suboptimal reproductions in terms of long-term statistics. Thus, we propose a model-selection criterion based on the computed statistics which allows us to achieve excellent statistical reconstruction even on small datasets, with minimal loss of accuracy in the instantaneous predictions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 175,965
1412.1732 | Statistical models and regularization strategies in statistical image reconstruction of low-dose X-ray CT: a survey | Statistical image reconstruction (SIR) methods have shown potential to substantially improve the image quality of low-dose X-ray computed tomography (CT) as compared to the conventional filtered back-projection (FBP) method for various clinical tasks. According to the maximum a posteriori (MAP) estimation, the SIR methods can be typically formulated by an objective function consisting of two terms: (1) data-fidelity (or equivalently, data-fitting or data-mismatch) term modeling the statistics of projection measurements, and (2) regularization (or equivalently, prior or penalty) term reflecting prior knowledge or expectation on the characteristics of the image to be reconstructed. Existing SIR methods for low-dose CT can be divided into two groups: (1) those that use calibrated transmitted photon counts (before log-transform) with penalized maximum likelihood (pML) criterion, and (2) those that use calibrated line-integrals (after log-transform) with penalized weighted least-squares (PWLS) criterion. Accurate statistical modeling of the projection measurements is a prerequisite for SIR, while the regularization term in the objective function also plays a critical role for successful image reconstruction. This paper reviews several statistical models on CT projection measurements and various regularization strategies incorporating prior knowledge or expected properties of the image to be reconstructed, which together formulate the objective function of the SIR methods for low-dose X-ray CT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 38,131
2104.06410 | Reward Shaping with Subgoals for Social Navigation | Social navigation has been gaining attention with the growth in machine intelligence. Since reinforcement learning can select an action in the prediction phase at a low computational cost, it has been formulated for social navigation tasks. However, reinforcement learning takes an enormous number of iterations until acquiring a behavior policy in the learning phase. This negatively affects the learning of robot behaviors in the real world. In particular, social navigation includes humans who are unpredictable moving obstacles in an environment. We proposed a reward shaping method with subgoals to accelerate learning. The main part is an aggregation method that uses subgoals to shape a reinforcement learning algorithm. We performed a learning experiment with a social navigation task in which a robot avoided collisions and then reached its goal. The experimental results show that our method improved the learning efficiency from a base algorithm in the task. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 230,069
1807.02425 | Beamforming in Millimeter Wave Systems: Prototyping and Measurement Results | Demonstrating the feasibility of large antenna array beamforming is essential for realizing mmWave communication systems. This is due to the dependency of these systems on the large array beamforming gains to provide sufficient received signal power. In this paper, the design of a proof-of-concept prototype that demonstrates these gains in practice is explained in detail. We develop a mmWave system with digitally controlled analog front-end. The developed prototype uses 60 GHz phased arrays and universal software radio peripheral (USRP) controllers. The software interface of our design is easily reproducible and can be leveraged for future mmWave prototypes and demonstrations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 102,267
1701.08528 | Self-Adaptation of Activity Recognition Systems to New Sensors | Traditional activity recognition systems work on the basis of training, taking a fixed set of sensors into account. In this article, we focus on the question how pattern recognition can leverage new information sources without any, or with minimal user input. Thus, we present an approach for opportunistic activity recognition, where ubiquitous sensors lead to dynamically changing input spaces. Our method is a variation of well-established principles of machine learning, relying on unsupervised clustering to discover structure in data and inferring cluster labels from a small number of labeled data in a semi-supervised manner. Elaborating the challenges, evaluations of over 3000 sensor combinations from three multi-user experiments are presented in detail and show the potential benefit of our approach. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 67,484
2309.07887 | Some notes concerning a generalized KMM-type optimization method for density ratio estimation | In the present paper we introduce new optimization algorithms for the task of density ratio estimation. More precisely, we consider extending the well-known KMM method using the construction of a suitable loss function, in order to encompass more general situations involving the estimation of density ratio with respect to subsets of the training data and test data, respectively. The associated codes can be found at https://github.com/CDAlecsa/Generalized-KMM. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 391,941
0810.5663 | Effective Complexity and its Relation to Logical Depth | Effective complexity measures the information content of the regularities of an object. It has been introduced by M. Gell-Mann and S. Lloyd to avoid some of the disadvantages of Kolmogorov complexity, also known as algorithmic information content. In this paper, we give a precise formal definition of effective complexity and rigorous proofs of its basic properties. In particular, we show that incompressible binary strings are effectively simple, and we prove the existence of strings that have effective complexity close to their lengths. Furthermore, we show that effective complexity is related to Bennett's logical depth: If the effective complexity of a string $x$ exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,590 |
2106.06911 | An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | The field of Explainable Artificial Intelligence (XAI) aims to build explainable and interpretable machine learning (or deep learning) methods without sacrificing prediction performance. Convolutional Neural Networks (CNNs) have been successful in making predictions, especially in image classification. However, these famous deep learning models use tens of millions of parameters based on a large number of pre-trained filters which have been repurposed from previous data sets. We propose a novel Interaction-based Convolutional Neural Network (ICNN) that does not make assumptions about the relevance of local information. Instead, we use a model-free Influence Score (I-score) to directly extract the influential information from images to form important variable modules. We demonstrate that the proposed method produces state-of-the-art prediction performance of 99.8% on a real-world data set classifying COVID-19 Chest X-ray images without sacrificing the explanatory power of the model. This proposed design can efficiently screen COVID-19 patients before human diagnosis, and will be the benchmark for addressing future XAI problems in large-scale data sets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 240,681
1601.00524 | Ideal Databases | From an algebraic geometry perspective, database relations are succinctly defined as Finite Varieties. After establishing the basic framework, we give an analytic proof of the Heath theorem from Database Dependency theory. Next, we leverage the Algebra/Geometry dictionary and focus on algebraic counterparts of finite varieties, polynomial ideals. It is well known that intersection and sum of ideals are lattice operations. We generalize this fact to ideals from different rings, therefore establishing that the algebra of ideals is a Relational Lattice. The final stop is casting the framework into Linear Algebra, and traversing to Quantum Theory. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 50,650
1911.13029 | Progressive-Growing of Generative Adversarial Networks for Metasurface Optimization | Generative adversarial networks, which can generate metasurfaces based on a training set of high performance device layouts, have the potential to significantly reduce the computational cost of the metasurface design process. However, basic GAN architectures are unable to fully capture the detailed features of topologically complex metasurfaces, and generated devices therefore require additional computationally-expensive design refinement. In this Letter, we show that GANs can better learn spatially fine features from high-resolution training data by progressively growing its network architecture and training set. Our results indicate that with this training methodology, the best generated devices have performances that compare well with the best devices produced by gradient-based topology optimization, thereby eliminating the need for additional design refinement. We envision that this network training method can generalize to other physical systems where device performance is strongly correlated with fine geometric structuring. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 155,562
2403.13737 | EthioLLM: Multilingual Large Language Models for Ethiopian Languages with Task Evaluation | Large language models (LLMs) have gained popularity recently due to their outstanding performance in various downstream Natural Language Processing (NLP) tasks. However, low-resource languages are still lagging behind current state-of-the-art (SOTA) developments in the field of NLP due to insufficient resources to train LLMs. Ethiopian languages exhibit remarkable linguistic diversity, encompassing a wide array of scripts, and are imbued with profound religious and cultural significance. This paper introduces EthioLLM -- multilingual large language models for five Ethiopian languages (Amharic, Ge'ez, Afan Oromo, Somali, and Tigrinya) and English, and Ethiobenchmark -- a new benchmark dataset for various downstream NLP tasks. We evaluate the performance of these models across five downstream NLP tasks. We open-source our multilingual language models, new benchmark datasets for various downstream tasks, and task-specific fine-tuned language models and discuss the performance of the models. Our dataset and models are available at the https://huggingface.co/EthioNLP repository. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 439,754
1009.5249 | Defining and Generating Axial Lines from Street Center Lines for better Understanding of Urban Morphologies | Axial lines are defined as the longest visibility lines for representing individual linear spaces in urban environments. The least number of axial lines that cover the free space of an urban environment or the space between buildings constitute what is often called an axial map. This is a fundamental tool in space syntax, a theory developed by Bill Hillier and his colleagues for characterizing the underlying urban morphologies. For a long time, generating axial lines with help of some graphic software has been a tedious manual process that is criticized for being time consuming, subjective, or even arbitrary. In this paper, we redefine axial lines as the least number of individual straight line segments mutually intersected along natural streets that are generated from street center lines using the Gestalt principle of good continuity. Based on this new definition, we develop an automatic solution to generating the newly defined axial lines from street center lines. We apply this solution to six typical street networks (three from North America and three from Europe), and generate a new set of axial lines for analyzing the urban morphologies. Through a comparison study between the new axial lines and the conventional or old axial lines, and between the new axial lines and natural streets, we demonstrate with empirical evidence that the newly defined axial lines are a better alternative in capturing the underlying urban structure. Keywords: Space syntax, street networks, topological analysis, traffic, head/tail division rule | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 7,691
2412.08029 | NeRF-NQA: No-Reference Quality Assessment for Scenes Generated by NeRF and Neural View Synthesis Methods | Neural View Synthesis (NVS) has demonstrated efficacy in generating high-fidelity dense viewpoint videos using an image set with sparse views. However, existing quality assessment methods like PSNR, SSIM, and LPIPS are not tailored for the scenes with dense viewpoints synthesized by NVS and NeRF variants, thus, they often fall short in capturing the perceptual quality, including spatial and angular aspects of NVS-synthesized scenes. Furthermore, the lack of dense ground truth views makes the full reference quality assessment on NVS-synthesized scenes challenging. For instance, datasets such as LLFF provide only sparse images, insufficient for complete full-reference assessments. To address the issues above, we propose NeRF-NQA, the first no-reference quality assessment method for densely-observed scenes synthesized from the NVS and NeRF variants. NeRF-NQA employs a joint quality assessment strategy, integrating both viewwise and pointwise approaches, to evaluate the quality of NVS-generated scenes. The viewwise approach assesses the spatial quality of each individual synthesized view and the overall inter-views consistency, while the pointwise approach focuses on the angular qualities of scene surface points and their compound inter-point quality. Extensive evaluations are conducted to compare NeRF-NQA with 23 mainstream visual quality assessment methods (from fields of image, video, and light-field assessment). The results demonstrate NeRF-NQA outperforms the existing assessment methods significantly and it shows substantial superiority on assessing NVS-synthesized scenes without references. An implementation of this paper is available at https://github.com/VincentQQu/NeRF-NQA. | true | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 515,907
2502.13344 | K-Paths: Reasoning over Graph Paths for Drug Repurposing and Drug Interaction Prediction | Drug discovery is a complex and time-intensive process that requires identifying and validating new therapeutic candidates. Computational approaches using large-scale biomedical knowledge graphs (KGs) offer a promising solution to accelerate this process. However, extracting meaningful insights from large-scale KGs remains challenging due to the complexity of graph traversal. Existing subgraph-based methods are tailored to graph neural networks (GNNs), making them incompatible with other models, such as large language models (LLMs). We introduce K-Paths, a retrieval framework that extracts structured, diverse, and biologically meaningful paths from KGs. Integrating these paths enables LLMs and GNNs to effectively predict unobserved drug-drug and drug-disease interactions. Unlike traditional path-ranking approaches, K-Paths retrieves and transforms paths into a structured format that LLMs can directly process, facilitating explainable reasoning. K-Paths employs a diversity-aware adaptation of Yen's algorithm to retrieve the K shortest loopless paths between entities in an interaction query, prioritizing biologically relevant and diverse relationships. Our experiments on benchmark datasets show that K-Paths improves the zero-shot performance of Llama 8.1B's F1-score by 12.45 points on drug repurposing and 13.42 points on interaction severity prediction. We also show that Llama 70B achieves F1-score gains of 6.18 and 8.46 points, respectively. K-Paths also improves the supervised training efficiency of EmerGNN, a state-of-the-art GNN, by reducing KG size by 90% while maintaining strong predictive performance. Beyond its scalability and efficiency, K-Paths uniquely bridges the gap between KGs and LLMs, providing explainable rationales for predicted interactions. These capabilities show that K-Paths is a valuable tool for efficient data-driven drug discovery. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 535,322
2408.12590 | xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations | We present xGen-VideoSyn-1, a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions. Building on recent advancements, such as OpenAI's Sora, we explore the latent diffusion model (LDM) architecture and introduce a video variational autoencoder (VidVAE). VidVAE compresses video data both spatially and temporally, significantly reducing the length of visual tokens and the computational demands associated with generating long-sequence videos. To further address the computational costs, we propose a divide-and-merge strategy that maintains temporal consistency across video segments. Our Diffusion Transformer (DiT) model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios. We have devised a data processing pipeline from the very beginning and collected over 13M high-quality video-text pairs. The pipeline includes multiple steps such as clipping, text detection, motion estimation, aesthetics scoring, and dense captioning based on our in-house video-LLM model. Training the VidVAE and DiT models required approximately 40 and 642 H100 days, respectively. Our model supports over 14-second 720p video generation in an end-to-end way and demonstrates competitive performance against state-of-the-art T2V models. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,793
2412.04896 | Comprehensive Analysis and Improvements in Pansharpening Using Deep Learning | Pansharpening is a crucial task in remote sensing, enabling the generation of high-resolution multispectral images by fusing low-resolution multispectral data with high-resolution panchromatic images. This paper provides a comprehensive analysis of traditional and deep learning-based pansharpening methods. While state-of-the-art deep learning methods have significantly improved image quality, issues like spectral distortions persist. To address this, we propose enhancements to the PSGAN framework by introducing novel regularization techniques for the generator loss function. Experimental results on images from the Worldview-3 dataset demonstrate that the proposed modifications improve spectral fidelity and achieve superior performance across multiple quantitative metrics while delivering visually superior results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 514,618
2005.13449 | Segmentation Loss Odyssey | Loss functions are one of the crucial ingredients in deep learning-based medical image segmentation methods. Many loss functions have been proposed in existing literature, but are studied separately or only investigated with few other losses. In this paper, we present a systematic taxonomy to sort existing loss functions into four meaningful categories. This helps to reveal links and fundamental similarities between them. Moreover, we explore the relationship between the traditional region-based and the more recent boundary-based loss functions. The PyTorch implementations of these loss functions are publicly available at \url{https://github.com/JunMa11/SegLoss}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 179,012 |
2011.10508 | Planning Folding Motion with Simulation in the Loop Using Laser Forming Origami and Thermal Behaviors as an Example | Designing a robot or structure that can fold itself into a target shape is a process that involves challenges originating from multiple sources. For example, the designer of rigid self-folding robots must consider foldability from geometric and kinematic aspects to avoid self-intersection and undesired deformations. Recent works have shown success in estimating foldability of a design using robot motion planners. However, many foldable structures are actuated using physically coupled reactions (i.e., folding originated from thermal, chemical, or electromagnetic loads). Therefore, a reliable foldability analysis must consider additional constraints that result from these critical phenomena. This work investigates the idea of efficiently incorporating computationally expensive physics simulation within the folding motion planner to provide a better estimation of the foldability. In this paper, we will use laser forming origami as an example to demonstrate the benefits of considering the properties beyond geometry. We show that the design produced by the proposed method can be folded more efficiently. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 207,531
1610.02481 | Frequency Estimation of Multiple Sinusoids with Three Sub-Nyquist Channels | Frequency estimation of multiple sinusoids is significant in both theory and application. In some application scenarios, only sub-Nyquist samples are available to estimate the frequencies. A conventional approach is to sample the signals at several lower rates. In this paper, we address frequency estimation of the signals in the time domain through undersampled data. We analyze the impact of undersampling and demonstrate that three sub-Nyquist channels are generally enough to estimate the frequencies provided the undersampling ratios are pairwise coprime. We deduce the condition that leads to the failure of resolving frequency ambiguity when two coprime undersampling channels are utilized. When three-channel sub-Nyquist samples are used jointly, the frequencies can be determined uniquely and the correct frequencies are estimated. Numerical experiments verify the correctness of our analysis and conclusion. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 62,104
2006.07212 | Task-similarity Aware Meta-learning through Nonparametric Kernel Regression | This paper investigates the use of nonparametric kernel-regression to obtain a task-similarity aware meta-learning algorithm. Our hypothesis is that the use of task-similarity helps meta-learning when the available tasks are limited and may contain outlier/dissimilar tasks. While existing meta-learning approaches implicitly assume the tasks as being similar, it is generally unclear how this task-similarity could be quantified and used in the learning. As a result, most popular meta-learning approaches do not actively use the similarity/dissimilarity between the tasks, but rely on availability of huge number of tasks for their working. Our contribution is a novel framework for meta-learning that explicitly uses task-similarity in the form of kernels and an associated meta-learning algorithm. We model the task-specific parameters to belong to a reproducing kernel Hilbert space where the kernel function captures the similarity across tasks. The proposed algorithm iteratively learns a meta-parameter which is used to assign a task-specific descriptor for every task. The task descriptors are then used to quantify the task-similarity through the kernel function. We show how our approach conceptually generalizes the popular meta-learning approaches of model-agnostic meta-learning (MAML) and Meta-stochastic gradient descent (Meta-SGD) approaches. Numerical experiments with regression tasks show that our algorithm outperforms these approaches when the number of tasks is limited, even in the presence of outlier or dissimilar tasks. This supports our hypothesis that task-similarity helps improve the meta-learning performance in task-limited and adverse settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,721
2004.04699 | Scalable Active Learning for Object Detection | Deep Neural Networks trained in a fully supervised fashion are the dominant technology in perception-based autonomous driving systems. While collecting large amounts of unlabeled data is already a major undertaking, only a subset of it can be labeled by humans due to the effort needed for high-quality annotation. Therefore, finding the right data to label has become a key challenge. Active learning is a powerful technique to improve data efficiency for supervised learning methods, as it aims at selecting the smallest possible training set to reach a required performance. We have built a scalable production system for active learning in the domain of autonomous driving. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, present our current results at scale, and briefly describe the open problems and future directions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 171,966 |
1901.05744 | The Oracle of DLphi | We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results. Having access to a sufficiently large amount of labelled training data, our methodology is capable of predicting the labels of the test data almost always even if the training data is entirely unrelated to the test data. In other words, we prove in a specific setting that as long as one has access to enough data points, the quality of the data is irrelevant. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 118,849 |
2403.11706 | Generalized Multi-Source Inference for Text Conditioned Music Diffusion Models | Multi-Source Diffusion Models (MSDM) allow for compositional musical generation tasks: generating a set of coherent sources, creating accompaniments, and performing source separation. Despite their versatility, they require estimating the joint distribution over the sources, necessitating pre-separated musical data, which is rarely available, and fixing the number and type of sources at training time. This paper generalizes MSDM to arbitrary time-domain diffusion models conditioned on text embeddings. These models do not require separated data as they are trained on mixtures, can parameterize an arbitrary number of sources, and allow for rich semantic control. We propose an inference procedure enabling the coherent generation of sources and accompaniments. Additionally, we adapt the Dirac separator of MSDM to perform source separation. We experiment with diffusion models trained on Slakh2100 and MTG-Jamendo, showcasing competitive generation and separation results in a relaxed data setting. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 438,824
2408.08543 | Language-Driven Interactive Shadow Detection | Traditional shadow detectors often identify all shadow regions of static images or video sequences. This work presents the Referring Video Shadow Detection (RVSD), which is an innovative task that rejuvenates the classic paradigm by facilitating the segmentation of particular shadows in videos based on descriptive natural language prompts. This novel RVSD not only achieves segmentation of arbitrary shadow areas of interest based on descriptions (flexibility) but also allows users to interact with visual content more directly and naturally by using natural language prompts (interactivity), paving the way for abundant applications ranging from advanced video editing to virtual reality experiences. To pioneer the RVSD research, we curated a well-annotated RVSD dataset, which encompasses 86 videos and a rich set of 15,011 paired textual descriptions with corresponding shadows. To the best of our knowledge, this dataset is the first one for addressing RVSD. Based on this dataset, we propose a Referring Shadow-Track Memory Network (RSM-Net) for addressing the RVSD task. In our RSM-Net, we devise a Twin-Track Synergistic Memory (TSM) to store intra-clip memory features and hierarchical inter-clip memory features, and then pass these memory features into a memory read module to refine features of the current video frame for referring shadow detection. We also develop a Mixed-Prior Shadow Attention (MSA) to utilize physical priors to obtain a coarse shadow map for learning more visual features by weighting it with the input video frame. Experimental results show that our RSM-Net achieves state-of-the-art performance for RVSD with a notable Overall IOU increase of 4.4\%. Our code and dataset are available at https://github.com/whq-xxh/RVSD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,034 |
1405.1359 | Latent semantics of action verbs reflect phonetic parameters of intensity and emotional content | Conjuring up our thoughts, language reflects statistical patterns of word co-occurrences which in turn come to describe how we perceive the world. Whether counting how frequently nouns and verbs combine in Google search queries, or extracting eigenvectors from term document matrices made up of Wikipedia lines and Shakespeare plots, the resulting latent semantics capture not only the associative links which form concepts, but also spatial dimensions embedded within the surface structure of language. As both the shape and movements of objects have been found to be associated with phonetic contrasts already in toddlers, this study explores whether articulatory and acoustic parameters may likewise differentiate the latent semantics of action verbs. Selecting 3 x 20 emotion, face, and hand related verbs known to activate premotor areas in the brain, their mutual cosine similarities were computed using latent semantic analysis (LSA), and the resulting adjacency matrices were compared based on two different large scale text corpora; HAWIK and TASA. Applying hierarchical clustering to identify common structures across the two text corpora, the verbs largely divide into combined mouth and hand movements versus emotional expressions. Transforming the verbs into their constituent phonemes, the clustered small and large size movements appear differentiated by front versus back vowels corresponding to increasing levels of arousal. Whereas the clustered emotional verbs seem characterized by sequences of close versus open jaw produced phonemes, generating up- or downwards shifts in formant frequencies that may influence their perceived valence. This suggests that the latent semantics of action verbs reflect parameters of intensity and emotional polarity that appear correlated with the articulatory contrasts and acoustic characteristics of phonemes. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 32,862
1912.05344 | Reconfigurable Intelligent Surfaces: Bridging the gap between scattering and reflection | In this work we address the distance dependence of reconfigurable intelligent surfaces (RIS). As differentiating factor to other works in the literature, we focus on the array near-field, what allows us to comprehend and expose the promising potential of RIS. The latter mostly implies an interplay between the physical size of the RIS and the size of the Fresnel zones at the RIS location, highlighting the major role of the phase. To be specific, the point-like (or zero-dimensional) conventional scattering characterization results in the well-known dependence with the fourth power of the distance. On the contrary, the characterization of its near-field region exposes a reflective behavior following a dependence with the second and third power of distance, respectively, for a two-dimensional (planar) and one-dimensional (linear) RIS. Furthermore, a smart RIS implementing an optimized phase control can result in a power exponent of four that, paradoxically, outperforms free-space propagation when operated in its near-field vicinity. All these features have a major impact on the practical applicability of the RIS concept. As one contribution of this work, the article concludes by presenting a complete signal characterization for a wireless link in the presence of RIS on all such regions of operation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 157,085
1701.04645 | Une mesure d'expertise pour le crowdsourcing | Crowdsourcing, a major economic issue, is the fact that the firm outsources internal tasks to the crowd. It is a form of digital subcontracting for the general public. The evaluation of the participants' work quality is a major issue in crowdsourcing. Indeed, contributions must be controlled to ensure the effectiveness and relevance of the campaign. We are particularly interested in small, fast and not automatable tasks. Several methods have been proposed to solve this problem, but they are applicable when the "golden truth" is not always known. This work has the particularity to propose a method for calculating the degree of expertise in the presence of gold data in crowdsourcing. This method is based on the belief function theory and proposes a structuring of data using graphs. The proposed approach will be assessed and applied to the data. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 66,876
2203.09390 | A Cube Algebra with Comparative Operations: Containment, Overlap, Distance and Usability | In this paper, we provide a comprehensive rigorous modeling for multidimensional spaces with hierarchically structured dimensions in several layers of abstractions and data cubes that live in such spaces. We model cube queries and their semantics and define typical OLAP operators like Selections, Roll-Up, Drill-Down, etc. The model serves as the basis to offer the main contribution of this paper which includes theorems and algorithms for being able to associate data cube queries via comparative operations that are evaluated only on the syntax of the queries involved. Specifically, these operations include: (a) foundational containment, referring to the coverage of common parts of the most detailed level of aggregation of the multidimensional space, (b/c) same-level containment and intersection, referring to the inclusion/existence of common parts of the multidimensional space in two query results of the same aggregation levels, (d) query distance, referring to being able to assess the similarity of two queries in the same multidimensional space, and, (e) cube usability, i.e., the possibility of computing a new cube from a previous one, defined at a different level of abstraction. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 286,141
2012.06644 | Regularizing Action Policies for Smooth Control with Reinforcement Learning | A critical problem with the practical utility of controllers trained with deep Reinforcement Learning (RL) is the notable lack of smoothness in the actions learned by the RL policies. This trend often presents itself in the form of control signal oscillation and can result in poor control, high power consumption, and undue system wear. We introduce Conditioning for Action Policy Smoothness (CAPS), an effective yet intuitive regularization on action policies, which offers consistent improvement in the smoothness of the learned state-to-action mappings of neural network controllers, reflected in the elimination of high-frequency components in the control signal. Tested on a real system, improvements in controller smoothness on a quadrotor drone resulted in an almost 80% reduction in power consumption while consistently training flight-worthy controllers. Project website: http://ai.bu.edu/caps | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 211,168
2107.03465 | An audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild | In this work we tackle the task of video-based audio-visual emotion recognition, within the premises of the 2nd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW2). Poor illumination conditions, head/body orientation and low image resolution constitute factors that can potentially hinder performance in case of methodologies that solely rely on the extraction and analysis of facial features. In order to alleviate this problem, we leverage both bodily and contextual features, as part of a broader emotion recognition framework. We choose to use a standard CNN-RNN cascade as the backbone of our proposed model for sequence-to-sequence (seq2seq) learning. Apart from learning through the RGB input modality, we construct an aural stream which operates on sequences of extracted mel-spectrograms. Our extensive experiments on the challenging and newly assembled Aff-Wild2 dataset verify the validity of our intuitive multi-stream and multi-modal approach towards emotion recognition in-the-wild. Emphasis is being laid on the beneficial influence of the human body and scene context, as aspects of the emotion recognition process that have been left relatively unexplored up to this point. All the code was implemented using PyTorch and is publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 245,170
2403.18438 | Global Vegetation Modeling with Pre-Trained Weather Transformers | Accurate vegetation models can produce further insights into the complex interaction between vegetation activity and ecosystem processes. Previous research has established that long-term trends and short-term variability of temperature and precipitation affect vegetation activity. Motivated by the recent success of Transformer-based Deep Learning models for medium-range weather forecasting, we adapt the publicly available pre-trained FourCastNet to model vegetation activity while accounting for the short-term dynamics of climate variability. We investigate how the learned global representation of the atmosphere's state can be transferred to model the normalized difference vegetation index (NDVI). Our model globally estimates vegetation activity at a resolution of \SI{0.25}{\degree} while relying only on meteorological data. We demonstrate that leveraging pre-trained weather models improves the NDVI estimates compared to learning an NDVI model from scratch. Additionally, we compare our results to other recent data-driven NDVI modeling approaches from machine learning and ecology literature. We further provide experimental evidence on how much data and training time is necessary to turn FourCastNet into an effective vegetation model. Code and models will be made available upon publication. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 441,939 |
1902.02992 | A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based
Learning | Hyperbolic space is a geometry that is known to be well-suited for representation learning of data with an underlying hierarchical structure. In this paper, we present a novel hyperbolic distribution called \textit{pseudo-hyperbolic Gaussian}, a Gaussian-like distribution on hyperbolic space whose density can be evaluated analytically and differentiated with respect to the parameters. Our distribution enables the gradient-based learning of the probabilistic models on hyperbolic space that could never have been considered before. Also, we can sample from this hyperbolic probability distribution without resorting to auxiliary means like rejection sampling. As applications of our distribution, we develop a hyperbolic-analog of variational autoencoder and a method of probabilistic word embedding on hyperbolic space. We demonstrate the efficacy of our distribution on various datasets including MNIST, Atari 2600 Breakout, and WordNet. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,992 |
2209.13850 | Bimanual rope manipulation skill synthesis through context dependent
correction policy learning from human demonstration | Learning from demonstration (LfD) provides a convenient means to equip robots with dexterous skills when demonstration can be obtained in robot intrinsic coordinates. However, the problem of compounding errors in long and complex skills reduces its wide deployment. Since most such complex skills are composed of smaller movements that are combined, considering the target skill as a sequence of compact motor primitives seems reasonable. Here the problem that needs to be tackled is to ensure that a motor primitive ends in a state that allows the successful execution of the subsequent primitive. In this study, we focus on this problem by proposing to learn an explicit correction policy when the expected transition state between primitives is not achieved. The correction policy is itself learned via behavior cloning by the use of a state-of-the-art movement primitive learning architecture, Conditional Neural Motor Primitives (CNMPs). The learned correction policy is then able to produce diverse movement trajectories in a context dependent way. The advantage of the proposed system over learning the complete task as a single action is shown with a table-top setup in simulation, where an object has to be pushed through a corridor in two steps. Then, the applicability of the proposed method to bi-manual knotting in the real world is shown by equipping an upper-body humanoid robot with the skill of making knots over a bar in 3D space. The experiments show that the robot can perform successful knotting even when the faced correction cases are not part of the human demonstration set. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 320,051 |
2102.07704 | Multi-Class Unsourced Random Access via Coded Demixing | Unsourced random access (URA) is a recently proposed communication paradigm attuned to machine-driven data transfers. In the original URA formulation, all the active devices share the same number of bits per packet. The scenario where several classes of devices transmit concurrently has so far received little attention. An initial solution to this problem takes the form of group successive interference cancellation, where codewords from a class of devices with more resources are recovered first, followed by the decoding of the remaining messages. This article introduces a joint iterative decoding approach rooted in approximate message passing. This framework has a concatenated coding structure borrowed from the single-class coded compressed sensing and admits a solution that offers performance improvement at little added computational complexity. Our findings point to new connections between multi-class URA and compressive demixing. The performance of the envisioned algorithm is validated through numerical simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 220,194 |
2403.17844 | Mechanistic Design and Scaling of Hybrid Architectures | The development of deep learning architectures is a resource-demanding process, due to a vast design space, long prototyping times, and high compute costs associated with at-scale model training and evaluation. We set out to simplify this process by grounding it in an end-to-end mechanistic architecture design (MAD) pipeline, encompassing small-scale capability unit tests predictive of scaling laws. Through a suite of synthetic token manipulation tasks such as compression and recall, designed to probe capabilities, we identify and test new hybrid architectures constructed from a variety of computational primitives. We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis, training over 500 language models between 70M to 7B parameters. Surprisingly, we find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures via isolated proxy tasks. The new architectures found via MAD, based on simple ideas such as hybridization and sparsity, outperform state-of-the-art Transformer, convolutional, and recurrent architectures (Transformer++, Hyena, Mamba) in scaling, both at compute-optimal budgets and in overtrained regimes. Overall, these results provide evidence that performance on curated synthetic tasks can be predictive of scaling laws, and that an optimal architecture should leverage specialized layers via a hybrid topology. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 441,653 |
2310.06282 | MuseChat: A Conversational Music Recommendation System for Videos | Music recommendation for videos attracts growing interest in multi-modal research. However, existing systems focus primarily on content compatibility, often ignoring the users' preferences. Their inability to interact with users for further refinements or to provide explanations leads to a less satisfying experience. We address these issues with MuseChat, a first-of-its-kind dialogue-based recommendation system that personalizes music suggestions for videos. Our system consists of two key functionalities with associated modules: recommendation and reasoning. The recommendation module takes a video along with optional information including previous suggested music and user's preference as inputs and retrieves appropriate music matching the context. The reasoning module, equipped with the power of Large Language Model (Vicuna-7B) and extended to multi-modal inputs, is able to provide reasonable explanation for the recommended music. To evaluate the effectiveness of MuseChat, we build a large-scale dataset, conversational music recommendation for videos, that simulates a two-turn interaction between a user and a recommender based on accurate music track information. Experiment results show that MuseChat achieves significant improvements over existing video-based music retrieval methods as well as offers strong interpretability and interactability. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 398,506
2412.12654 | CALA: A Class-Aware Logit Adapter for Few-Shot Class-Incremental
Learning | Few-Shot Class-Incremental Learning (FSCIL) defines a practical but challenging task where models are required to continuously learn novel concepts with only a few training samples. Due to data scarcity, existing FSCIL methods resort to training a backbone with abundant base data and then keeping it frozen afterward. However, the above operation often causes the backbone to overfit to base classes while overlooking the novel ones, leading to severe confusion between them. To address this issue, we propose Class-Aware Logit Adapter (CALA). Our method involves a lightweight adapter that learns to rectify biased predictions through a pseudo-incremental learning paradigm. In the real FSCIL process, we use the learned adapter to dynamically generate robust balancing factors. These factors can adjust confused novel instances back to their true label space based on their similarity to base classes. Specifically, when confusion is more likely to occur in novel instances that closely resemble base classes, greater rectification is required. Notably, CALA operates on the classifier level, preserving the original feature space, thus it can be flexibly plugged into most of the existing FSCIL works for improved performance. Experiments on three benchmark datasets consistently validate the effectiveness and flexibility of CALA. Codes will be available upon acceptance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 517,969 |
1111.2259 | A Survey on Open Problems for Mobile Robots | Gathering mobile robots is a widely studied problem in robotic research. This survey first introduces the related work, summarizing models and results. Then, the focus shifts on the open problem of gathering fat robots. In this context, "fat" means that the robot is not represented by a point in a bidimensional space, but it has an extent. Moreover, it can be opaque in the sense that other robots cannot "see through" it. All these issues lead to a redefinition of the original problem and an extension of the CORDA model. For at most 4 robots an algorithm is provided in the literature, but is gathering always possible for n>4 fat robots? Another open problem is considered: Boundary Patrolling by mobile robots. A set of mobile robots with constraints only on speed and visibility is working in a polygonal environment having boundary and possibly obstacles. The robots have to perform a perpetual movement (possibly within the environment) so that the maximum timespan in which a point of the boundary is not being watched by any robot is minimized. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 12,976 |
2311.05367 | Reducing Disorder: An Information-Theory Formulation of MEV | Maximal Extractable Value (MEV) has garnered significant attention in the cryptocurrency community. Such attention is a consequence of the revenue that can be generated from MEV, as well as the risks MEV poses to the fundamental value proposition of the underlying blockchain technology. In this work, we provide an information-theoretic formulation of MEV. With this formulation, we make common statements about MEV mathematically rigorous. For example, we show that i) all non-trivial blockchains and decentralised applications must generate MEV; ii) how MEV can be reduced at the expense of user expressibility; and iii) how MEV can be good or bad from an information theoretic standpoint. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 406,557 |
2312.10306 | Mapping Housing Stock Characteristics from Drone Images for Climate
Resilience in the Caribbean | Comprehensive information on housing stock is crucial for climate adaptation initiatives aiming to reduce the adverse impacts of climate-extreme hazards in high-risk regions like the Caribbean. In this study, we propose a workflow for rapidly generating critical baseline housing stock data using very high-resolution drone images and deep learning techniques. Specifically, our work leverages the Segment Anything Model and convolutional neural networks for the automated generation of building footprints and roof classification maps. By strengthening local capacity within government agencies to leverage AI and Earth Observation-based solutions, this work seeks to improve the climate resilience of the housing sector in small island developing states in the Caribbean. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 416,101 |
1301.0552 | A constraint satisfaction approach to the robust spanning tree problem
with interval data | Robust optimization is one of the fundamental approaches to deal with uncertainty in combinatorial optimization. This paper considers the robust spanning tree problem with interval data, which arises in a variety of telecommunication applications. It proposes a constraint satisfaction approach using a combinatorial lower bound, a pruning component that removes infeasible and suboptimal edges, as well as a search strategy exploring the most uncertain edges first. The resulting algorithm is shown to produce very dramatic improvements over the mathematical programming approach of Yaman et al. and to enlarge considerably the class of problems amenable to effective solutions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 20,734
2410.16013 | Information-Theoretic Minimax Regret Bounds for Reinforcement Learning
based on Duality | We study agents acting in an unknown environment where the agent's goal is to find a robust policy. We consider robust policies as policies that achieve high cumulative rewards for all possible environments. To this end, we consider agents minimizing the maximum regret over different environment parameters, leading to the study of minimax regret. This research focuses on deriving information-theoretic bounds for minimax regret in Markov Decision Processes (MDPs) with a finite time horizon. Building on concepts from supervised learning, such as minimum excess risk (MER) and minimax excess risk, we use recent bounds on the Bayesian regret to derive minimax regret bounds. Specifically, we establish minimax theorems and use bounds on the Bayesian regret to perform minimax regret analysis using these minimax theorems. Our contributions include defining a suitable minimax regret in the context of MDPs, finding information-theoretic bounds for it, and applying these bounds in various scenarios. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 500,841 |
1909.00657 | Economic Evaluation of the Portuguese PV and Energy Storage Residential
Applications | In the residential sector, energy micro-generation and its intelligent management have been creating novel energy market models, considering new concepts of energy use and distribution, in which the prosumer has an active role in the energy generation and its self-consumption. The configuration of the solar photovoltaic system with a battery energy storage in Portugal is unclear in the technical, energetic and mostly in the economical point of view. The energy generation and consumption management, jointly with the battery operation, have a great influence in the profitability value of the configuration. The present work evaluates different photovoltaic configurations with and without energy storage for the normal low voltage C consumer profile, for a contracted power of 3.45 kVA, to evaluate the cost-effectiveness of the systems, framed in the regulation in force in Portugal, the decree-law 153/2014, which promotes the micro-generation and self-consumption. The analysis consists of three different geographical locations in the country, considering distinct electric tariffs. These are relevant parameters in the choice of the configuration, concluding that although the solar photovoltaic system by itself is already economical presently, its integration with battery energy storage is not in most of the configurations, however it is already possible to find profitable PV and battery configurations, considering all the most relevant criteria, and supported by good energy management. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 143,686 |
2201.03450 | Leveraging Social Influence based on Users Activity Centers for
Point-of-Interest Recommendation | Recommender Systems (RSs) aim to model and predict the user preference while interacting with items, such as Points of Interest (POIs). These systems face several challenges, such as data sparsity, limiting their effectiveness. In this paper, we address this problem by incorporating social, geographical, and temporal information into the Matrix Factorization (MF) technique. To this end, we model social influence based on two factors: similarities between users in terms of common check-ins and the friendships between them. We introduce two levels of friendship based on explicit friendship networks and high check-in overlap between users. We base our friendship algorithm on users' geographical activity centers. The results show that our proposed model outperforms the state-of-the-art on two real-world datasets. More specifically, our ablation study shows that the social model improves the performance of our proposed POI recommendation system by 31% and 14% on the Gowalla and Yelp datasets in terms of Precision@10, respectively. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 274,857 |
1908.06327 | Language Features Matter: Effective Language Representations for
Vision-Language Tasks | Shouldn't language and vision features be treated equally in vision-language (VL) tasks? Many VL approaches treat the language component as an afterthought, using simple language models that are either built upon fixed word embeddings trained on text-only data or are learned from scratch. We believe that language features deserve more attention, and conduct experiments which compare different word embeddings, language models, and embedding augmentation steps on five common VL tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. Our experiments provide some striking results; an average embedding language model outperforms an LSTM on retrieval-style tasks; state-of-the-art representations such as BERT perform relatively poorly on vision-language tasks. From this comprehensive set of experiments we propose a set of best practices for incorporating the language component of VL tasks. To further elevate language features, we also show that knowledge in vision-language problems can be transferred across tasks to gain performance with multi-task training. This multi-task training is applied to a new Graph Oriented Vision-Language Embedding (GrOVLE), which we adapt from Word2Vec using WordNet and an original visual-language graph built from Visual Genome, providing a ready-to-use vision-language embedding: http://ai.bu.edu/grovle. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 141,981 |
2405.14437 | Combining Denoising Autoencoders with Contrastive Learning to fine-tune
Transformer Models | Recently, using large pretrained Transformer models for transfer learning tasks has evolved to the point where they have become one of the flagship trends in the Natural Language Processing (NLP) community, giving rise to various outlooks such as prompt-based, adapters or combinations with unsupervised approaches, among many others. This work proposes a 3 Phase technique to adjust a base model for a classification task. First, we adapt the model's signal to the data distribution by performing further training with a Denoising Autoencoder (DAE). Second, we adjust the representation space of the output to the corresponding classes by clustering through a Contrastive Learning (CL) method. In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to correct the unbalanced datasets. Third, we apply fine-tuning to delimit the predefined categories. These different phases provide relevant and complementary knowledge to the model to learn the final task. We supply extensive experimental results on several datasets to demonstrate these claims. Moreover, we include an ablation study and compare the proposed method against other ways of combining these techniques. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 456,427 |
2411.12906 | Experimental Study of Underwater Acoustic Reconfigurable Intelligent
Surfaces with In-Phase and Quadrature Modulation | This paper presents an underwater acoustic reconfigurable intelligent surfaces (UA-RIS) designed for long-range, high-speed, and environmentally friendly communication in oceanic environments. The proposed UA-RIS comprises multiple pairs of acoustic reflectors that utilize in-phase and quadrature (IQ) modulation to flexibly control the amplitude and phase of reflected waves. This capability enables precise beam steering to enhance or attenuate sound levels in specific directions. A prototype UA-RIS with 4*6 acoustic reflection units is constructed and tested in both tank and lake environments to evaluate performance. The experimental results indicate that the prototype is capable of effectively pointing reflected waves to targeted directions while minimizing side lobes using passive IQ modulation. Field tests reveal that deploying the UA-RIS on the sender side considerably extends communication ranges by 28% in deep water and 46% in shallow waters. Furthermore, with a fixed communication distance, positioning the UA-RIS at the transmitter side substantially boosts data rates, with an average increase of 63.8% and peaks up to 96%. When positioned on the receiver side, the UA-RIS can expand the communication range in shallow and deep water environments by 40.6% and 66%, respectively. Moreover, placing the UA-RIS close to the receiver enhances data rates by an average of 80.3%, reaching up to 163% under certain circumstances. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 509,591 |
1807.07761 | Controllability of Social Networks and the Strategic Use of Random
Information | This work is aimed at studying realistic social control strategies for social networks based on the introduction of random information into the state of selected driver agents. Deliberately exposing selected agents to random information is a technique already experimented in recommender systems or search engines, and represents one of the few options for influencing the behavior of a social context that could be accepted as ethical, could be fully disclosed to members, and does not involve the use of force or of deception. Our research is based on a model of knowledge diffusion applied to a time-varying adaptive network, and considers two well-known strategies for influencing social contexts. One is the selection of few influencers for manipulating their actions in order to drive the whole network to a certain behavior; the other, instead, drives the network behavior acting on the state of a large subset of ordinary, scarcely influencing users. The two approaches have been studied in terms of network and diffusion effects. The network effect is analyzed through the changes induced on network average degree and clustering coefficient, while the diffusion effect is based on two ad-hoc metrics defined to measure the degree of knowledge diffusion and skill level, as well as the polarization of agent interests. The results, obtained through simulations on synthetic networks, show a rich dynamics and strong effects on the communication structure and on the distribution of knowledge and skills, supporting our hypothesis that the strategic use of random information could represent a realistic approach to social network controllability, and that with both strategies, in principle, the control effect could be remarkable. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 103,376 |
2305.13904 | Deep GEM-Based Network for Weakly Supervised UWB Ranging Error
Mitigation | Ultra-wideband (UWB)-based techniques, while becoming mainstream approaches for highly accurate positioning, tend to be challenged by ranging bias in harsh environments. The emerging learning-based methods for error mitigation have shown great performance improvement via exploiting high semantic features from raw data. However, these methods rely heavily on fully labeled data, leading to a high cost for data acquisition. We present a learning framework based on weak supervision for UWB ranging error mitigation. Specifically, we propose a deep learning method based on the generalized expectation-maximization (GEM) algorithm for robust UWB ranging error mitigation under weak supervision. Such a method integrates probabilistic modeling into the deep learning scheme, and adopts weakly supervised labels as prior information. Extensive experiments in various supervision scenarios illustrate the superiority of the proposed method. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 366,754
1401.1549 | Optimal Demand Response Using Device Based Reinforcement Learning | Demand response (DR) for residential and small commercial buildings is estimated to account for as much as 65% of the total energy savings potential of DR, and previous work shows that a fully automated Energy Management System (EMS) is a necessary prerequisite to DR in these areas. In this paper, we propose a novel EMS formulation for DR problems in these sectors. Specifically, we formulate a fully automated EMS's rescheduling problem as a reinforcement learning (RL) problem, and argue that this RL problem can be approximately solved by decomposing it over device clusters. Compared with existing formulations, our new formulation (1) does not require explicitly modeling the user's dissatisfaction on job rescheduling, (2) enables the EMS to self-initiate jobs, (3) allows the user to initiate more flexible requests and (4) has a computational complexity linear in the number of devices. We also demonstrate the simulation results of applying Q-learning, one of the most popular and classical RL algorithms, to a representative example. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 29,663 |
2303.11407 | Distributed Resilient Interval Observers for Bounded-Error LTI Systems
Subject to False Data Injection Attacks | This paper proposes a novel distributed interval-valued simultaneous state and input observer for linear time-invariant (LTI) systems that are subject to attacks or unknown inputs injected both on their sensors and actuators. Each agent in the network leverages a singular value decomposition (SVD) based transformation to decompose its observations into two components, one of them unaffected by the attack signal, which helps to obtain local interval estimates of the state and unknown input and then uses intersection to compute the best interval estimate among neighboring nodes. We show that the computed intervals are guaranteed to contain the true state and input trajectories, and we provide conditions under which the observer is stable. Furthermore, we provide a method for designing stabilizing gains that minimize an upper bound on the worst-case steady-state observer error. We demonstrate our algorithm on an IEEE 14-bus power system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 352,832 |
1901.02495 | Presence-absence estimation in audio recordings of tropical frog
communities | One non-invasive way to study frog communities is by analyzing long-term samples of acoustic material containing calls. This immense task has been optimized by the development of Machine Learning tools to extract ecological information. We explored a likelihood-ratio audio detector based on Gaussian mixture model classification of 10 frog species, and applied it to estimate presence-absence in audio recordings from an actual amphibian monitoring performed at Yasun\'i National Park in the Ecuadorian Amazonia. A modified filter-bank was used to extract 20 cepstral features that model the spectral content of frog calls. Experiments were carried out to investigate the hyperparameters and the minimum frog-call time needed to train an accurate GMM classifier. With 64 Gaussians and 12 seconds of training time, the classifier achieved an average weighted error rate of 0.9% on the 10-fold cross-validation for nine species classification, as compared to 3% with MFCC and 1.8% with PLP features. For testing, 10 GMMs were trained using all the available training-validation dataset to study 23.5 hours in 141, 10-minute long samples of unidentified real-world audio recorded at two frog communities in 2001 with analog equipment. To evaluate automatic presence-absence estimation, we characterized the audio samples with 10 binary variables each corresponding to a frog species, and manually labeled a sub-set of 18 samples using headphones. A recall of 87.5% and precision of 100% with average accuracy of 96.66% suggests good generalization ability of the algorithm, and provides evidence of the validity of this approach to study real-world audio recorded in a tropical acoustic environment. Finally, we applied the algorithm to the available corpus, and show its potentiality to gain insights into the temporal reproductive behavior of frogs. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 118,217
2304.07358 | Exact Subspace Diffusion for Decentralized Multitask Learning | Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents. While these strategies have proven effective in i.i.d. scenarios, they can result in significant performance degradation when agents follow heterogeneous objectives or data. Distributed strategies for multitask learning, on the other hand, induce relationships between agents in a more nuanced manner, and encourage collaboration without enforcing consensus. We develop a generalization of the exact diffusion algorithm for subspace constrained multitask learning over networks, and derive an accurate expression for its mean-squared deviation when utilizing noisy gradient approximations. We verify numerically the accuracy of the predicted performance expressions, as well as the improved performance of the proposed approach over alternatives based on approximate projections. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 358,316 |
2407.11433 | CycleHOI: Improving Human-Object Interaction Detection with Cycle
Consistency of Detection and Generation | Recognition and generation are two fundamental tasks in computer vision, which are often investigated separately in the existing literature. However, these two tasks are highly correlated in essence as they both require understanding the underlying semantics of visual concepts. In this paper, we propose a new learning framework, coined as CycleHOI, to boost the performance of human-object interaction (HOI) detection by bridging the DETR-based detection pipeline and the pre-trained text-to-image diffusion model. Our key design is to introduce a novel cycle consistency loss for the training of HOI detector, which is able to explicitly leverage the knowledge captured in the powerful diffusion model to guide the HOI detector training. Specifically, we build an extra generation task on top of the decoded instance representations from HOI detector to enforce a detection-generation cycle consistency. Moreover, we perform feature distillation from diffusion model to detector encoder to enhance its representation power. In addition, we further utilize the generation power of diffusion model to augment the training set in both aspects of label correction and sample generation. We perform extensive experiments to verify the effectiveness and generalization power of our CycleHOI with three HOI detection frameworks on two public datasets: HICO-DET and V-COCO. The experimental results demonstrate our CycleHOI can significantly improve the performance of the state-of-the-art HOI detectors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,457
2211.16462 | Will My Robot Achieve My Goals? Predicting the Probability that an MDP Policy Reaches a User-Specified Behavior Target | As an autonomous system performs a task, it should maintain a calibrated estimate of the probability that it will achieve the user's goal. If that probability falls below some desired level, it should alert the user so that appropriate interventions can be made. This paper considers settings where the user's goal is specified as a target interval for a real-valued performance summary, such as the cumulative reward, measured at a fixed horizon $H$. At each time $t \in \{0, \ldots, H-1\}$, our method produces a calibrated estimate of the probability that the final cumulative reward will fall within a user-specified target interval $[y^-,y^+].$ Using this estimate, the autonomous system can raise an alarm if the probability drops below a specified threshold. We compute the probability estimates by inverting conformal prediction. Our starting point is the Conformalized Quantile Regression (CQR) method of Romano et al., which applies split-conformal prediction to the results of quantile regression. CQR is not invertible, but by using the conditional cumulative distribution function (CDF) as the non-conformity measure, we show how to obtain an invertible modification that we call Probability-space Conformalized Quantile Regression (PCQR). Like CQR, PCQR produces well-calibrated conditional prediction intervals with finite-sample marginal guarantees. By inverting PCQR, we obtain guarantees for the probability that the cumulative reward of an autonomous system will fall below a threshold sampled from the marginal distribution of the response variable (i.e., a calibrated CDF estimate) that we employ to predict coverage probabilities for user-specified target intervals. Experiments on two domains confirm that these probabilities are well-calibrated. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,638
2208.04541 | Coverage Increase at THz Frequencies: A Cooperative Rate-Splitting Approach | Numerous studies claim that terahertz (THz) communication will be an essential piece of sixth-generation wireless communication systems. Its promising potential also comes with major challenges, in particular the reduced coverage due to harsh propagation loss, hardware constraints, and blockage vulnerability. To increase the coverage of THz communication, we revisit cooperative communication. We propose a new type of cooperative rate-splitting (CRS) called extraction-based CRS (eCRS). Furthermore, we explore two extreme cases of eCRS, namely, identical eCRS and distinct eCRS. To enable the proposed eCRS framework, we design a novel THz cooperative channel model by considering unique characteristics of THz communication. Through mathematical derivations and convex optimization techniques considering the THz cooperative channel model, we derive local optimal solutions for the two cases of eCRS and a global optimal closed form solution for a specific scenario. Finally, we propose a novel channel estimation technique that not only specifies the channel value, but also the time delay of the channel from each cooperating user equipment to fully utilize the THz cooperative channel. In simulation results, we verify the validity of the two cases of our proposed framework and channel estimation technique. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 312,144
2101.02338 | Max-Affine Spline Insights Into Deep Network Pruning | In this paper, we study the importance of pruning in Deep Networks (DNs) and the yin & yang relationship between (1) pruning highly overparametrized DNs that have been trained from random initialization and (2) training small DNs that have been "cleverly" initialized. As in most cases practitioners can only resort to random initialization, there is a strong need to develop a grounded understanding of DN pruning. Current literature remains largely empirical, lacking a theoretical understanding of how pruning affects DNs' decision boundary, how to interpret pruning, and how to design corresponding principled pruning techniques. To tackle those questions, we propose to employ recent advances in the theoretical analysis of Continuous Piecewise Affine (CPA) DNs. From this perspective, we will be able to detect the early-bird (EB) ticket phenomenon, provide interpretability into current pruning techniques, and develop a principled pruning strategy. In each step of our study, we conduct extensive experiments supporting our claims and results; while our main goal is to enhance the current understanding towards DN pruning instead of developing a new pruning method, our spline pruning criteria in terms of layerwise and global pruning is on par with or even outperforms state-of-the-art pruning methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 214,589 |
2109.04411 | Non-autoregressive End-to-end Speech Translation with Parallel Autoregressive Rescoring | This article describes an efficient end-to-end speech translation (E2E-ST) framework based on non-autoregressive (NAR) models. End-to-end speech translation models have several advantages over traditional cascade systems such as inference latency reduction. However, conventional AR decoding methods are not fast enough because each token is generated incrementally. NAR models, however, can accelerate the decoding speed by generating multiple tokens in parallel on the basis of the token-wise conditional independence assumption. We propose a unified NAR E2E-ST framework called Orthros, which has an NAR decoder and an auxiliary shallow AR decoder on top of the shared encoder. The auxiliary shallow AR decoder selects the best hypothesis by rescoring multiple candidates generated from the NAR decoder in parallel (parallel AR rescoring). We adopt conditional masked language model (CMLM) and a connectionist temporal classification (CTC)-based model as NAR decoders for Orthros, referred to as Orthros-CMLM and Orthros-CTC, respectively. We also propose two training methods to enhance the CMLM decoder. Experimental evaluations on three benchmark datasets with six language directions demonstrated that Orthros achieved large improvements in translation quality with a very small overhead compared with the baseline NAR model. Moreover, the Conformer encoder architecture enabled large quality improvements, especially for CTC-based models. Orthros-CTC with the Conformer encoder increased decoding speed by 3.63x on CPU with translation quality comparable to that of an AR model. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,398
2411.05224 | Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards | This paper investigates the transparency in the creation of benchmarks and the use of leaderboards for measuring progress in NLP, with a focus on the relation extraction (RE) task. Existing RE benchmarks often suffer from insufficient documentation, lacking crucial details such as data sources, inter-annotator agreement, the algorithms used for the selection of instances for datasets, and information on potential biases like dataset imbalance. Progress in RE is frequently measured by leaderboards that rank systems based on evaluation methods, typically limited to aggregate metrics like F1-score. However, the absence of detailed performance analysis beyond these metrics can obscure the true generalisation capabilities of models. Our analysis reveals that widely used RE benchmarks, such as TACRED and NYT, tend to be highly imbalanced and contain noisy labels. Moreover, the lack of class-based performance metrics fails to accurately reflect model performance across datasets with a large number of relation types. These limitations should be carefully considered when reporting progress in RE. While our discussion centers on the transparency of RE benchmarks and leaderboards, the observations we discuss are broadly applicable to other NLP tasks as well. Rather than undermining the significance and value of existing RE benchmarks and the development of new models, this paper advocates for improved documentation and more rigorous evaluation to advance the field. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 506,583
1407.3474 | Multichannel group sparsity methods for compressive channel estimation in doubly selective multicarrier MIMO systems (extended version) | We consider channel estimation within pulse-shaping multicarrier multiple-input multiple-output (MIMO) systems transmitting over doubly selective MIMO channels. This setup includes MIMO orthogonal frequency-division multiplexing (MIMO-OFDM) systems as a special case. We show that the component channels tend to exhibit an approximate joint group sparsity structure in the delay-Doppler domain. We then develop a compressive channel estimator that exploits this structure for improved performance. The proposed channel estimator uses the methodology of multichannel group sparse compressed sensing, which combines the methodologies of group sparse compressed sensing and multichannel compressed sensing. We derive an upper bound on the channel estimation error and analyze the estimator's computational complexity. The performance of the estimator is further improved by introducing a basis expansion yielding enhanced joint group sparsity, along with a basis optimization algorithm that is able to utilize prior statistical information if available. Simulations using a geometry-based channel simulator demonstrate the performance gains due to leveraging the joint group sparsity and optimizing the basis. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,623
2004.14171 | SE-KGE: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting | Learning knowledge graph (KG) embeddings is an emerging technique for a variety of downstream tasks such as summarization, link prediction, information retrieval, and question answering. However, most existing KG embedding models neglect space and, therefore, do not perform well when applied to (geo)spatial data and tasks. For those models that consider space, most of them primarily rely on some notions of distance. These models suffer from higher computational complexity during training while still losing information beyond the relative distance between entities. In this work, we propose a location-aware KG embedding model called SE-KGE. It directly encodes spatial information such as point coordinates or bounding boxes of geographic entities into the KG embedding space. The resulting model is capable of handling different types of spatial reasoning. We also construct a geographic knowledge graph as well as a set of geographic query-answer pairs called DBGeo to evaluate the performance of SE-KGE in comparison to multiple baselines. Evaluation results show that SE-KGE outperforms these baselines on the DBGeo dataset for geographic logic query answering task. This demonstrates the effectiveness of our spatially-explicit model and the importance of considering the scale of different geographic entities. Finally, we introduce a novel downstream task called spatial semantic lifting which links an arbitrary location in the study area to entities in the KG via some relations. Evaluation on DBGeo shows that our model outperforms the baseline by a substantial margin. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | true | false | 174,798
2303.15792 | SDAT: Sub-Dataset Alternation Training for Improved Image Demosaicing | Image demosaicing is an important step in the image processing pipeline for digital cameras. In data centric approaches, such as deep learning, the distribution of the dataset used for training can impose a bias on the networks' outcome. For example, in natural images most patches are smooth, and high-content patches are much rarer. This can lead to a bias in the performance of demosaicing algorithms. Most deep learning approaches address this challenge by utilizing specific losses or designing special network architectures. We propose a novel approach, SDAT, Sub-Dataset Alternation Training, that tackles the problem from a training protocol perspective. SDAT is comprised of two essential phases. In the initial phase, we employ a method to create sub-datasets from the entire dataset, each inducing a distinct bias. The subsequent phase involves an alternating training process, which trains on the derived sub-datasets in addition to the entire dataset. SDAT can be applied regardless of the chosen architecture as demonstrated by various experiments we conducted for the demosaicing task. The experiments are performed across a range of architecture sizes and types, namely CNNs and transformers. We show improved performance in all cases. We are also able to achieve state-of-the-art results on three highly popular image demosaicing benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,624