id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1707.06066 | Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding | The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been recently proposed, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded Circulant matrices. While several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly-effective guarantees are known for the success of such methods. In this work, we address the theoretical aspects of the convolutional sparse model providing the first meaningful answers to questions of uniqueness of solutions and success of pursuit algorithms, both greedy and convex relaxations, in ideal and noisy regimes. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, mutual coherence, Spark and RIP to their counterparts in the convolutional setting, intrinsically capturing local measures of the global model. On the algorithmic side, we demonstrate how to solve the global pursuit problem by using simple local processing, thus offering a first of its kind bridge between global modeling of signals and their patch-based local treatment. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,347 |
2208.09201 | Improving Post-Processing of Audio Event Detectors Using Reinforcement Learning | We apply post-processing to the class probability distribution outputs of audio event classification models and employ reinforcement learning to jointly discover the optimal parameters for various stages of a post-processing stack, such as the classification thresholds and the kernel sizes of median filtering algorithms used to smooth out model predictions. To achieve this we define a reinforcement learning environment where: 1) a state is the class probability distribution provided by the model for a given audio sample, 2) an action is the choice of a candidate optimal value for each parameter of the post-processing stack, 3) the reward is based on the classification accuracy metric we aim to optimize, which is the audio event-based macro F1-score in our case. We apply our post-processing to the class probability distribution outputs of two audio event classification models submitted to the DCASE Task4 2020 challenge. We find that by using reinforcement learning to discover the optimal per-class parameters for the post-processing stack that is applied to the outputs of audio event classification models, we can improve the audio event-based macro F1-score (the main metric used in the DCASE challenge to compare audio event classification accuracy) by 4-5% compared to using the same post-processing stack with manually tuned parameters. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 313,625 |
2502.01074 | Omni-Mol: Exploring Universal Convergent Space for Omni-Molecular Tasks | Building generalist models has recently demonstrated remarkable capabilities in diverse scientific domains. Within the realm of molecular learning, several studies have explored unifying diverse tasks across diverse domains. However, negative conflicts and interference between molecules and knowledge from different domains may have adverse impacts in three ways. First, conflicting molecular representations can lead to optimization difficulties for the models. Second, mixing and scaling up training data across diverse tasks is inherently challenging. Third, the computational cost of refined pretraining is prohibitively high. To address these limitations, this paper presents Omni-Mol, a scalable and unified LLM-based framework for direct instruction tuning. Omni-Mol builds on three key components to tackle conflicts: (1) a unified encoding mechanism for any task input; (2) an active-learning-driven data selection strategy that significantly reduces dataset size; (3) a novel design of the adaptive gradient stabilization module and anchor-and-reconcile MoE framework that ensures stable convergence. Experimentally, Omni-Mol achieves state-of-the-art performance across 15 molecular tasks, demonstrates the presence of scaling laws in the molecular domain, and is supported by extensive ablation studies and analyses validating the effectiveness of its design. The code and weights of the powerful AI-driven chemistry generalist are open-sourced at: https://anonymous.4open.science/r/Omni-Mol-8EDB. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 529,681 |
2407.19594 | Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge | Large Language Models (LLMs) are rapidly surpassing human knowledge in many domains. While improving these models traditionally relies on costly human data, recent self-rewarding mechanisms (Yuan et al., 2024) have shown that LLMs can improve by judging their own responses instead of relying on human labelers. However, existing methods have primarily focused on improving model responses rather than judgment capabilities, resulting in rapid saturation during iterative training. To address this issue, we introduce a novel Meta-Rewarding step to the self-improvement process, where the model judges its own judgements and uses that feedback to refine its judgment skills. Surprisingly, this unsupervised approach improves the model's ability to judge {\em and} follow instructions, as demonstrated by a win rate improvement of Llama-3-8B-Instruct from 22.9% to 39.4% on AlpacaEval 2, and 20.6% to 29.1% on Arena-Hard. These results strongly suggest the potential for self-improving models without human supervision. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 476,848 |
1601.04724 | Interference Alignment in MIMO Interference Channels using SDP Relaxation | Nowadays, providing a higher data rate is a major goal for wireless communication systems, and interference is one of the main obstacles to reaching it. Interference alignment is a management technique that aligns the interference from other transmitters in the smallest possible subspace at each receiver, leaving the remaining dimensions for the interference-free signal. An uncoordinated interference source is one whose interference cannot be aligned jointly with the interference from the coordinated part, and it consequently degrades the performance of interference alignment approaches. In this paper, we propose two rank minimization methods to enhance the performance of interference alignment in the presence of uncoordinated interference sources. First, a new objective function is chosen and a new class of convex relaxations is proposed with respect to the uncoordinated interference, which decreases the optimal value of our optimization problem. Moreover, we use the Schatten p-norm as a surrogate of the rank function and implement an iteratively reweighted algorithm to solve the optimization problem. In addition, we apply our proposed methods to mitigate interference in the relay-aided MIMO interference channel, and propose a weighted-sum method to improve the performance of interference alignment in the amplify-and-forward relay-aided MIMO system based on the rank minimization approach. Finally, our simulation results show that the proposed methods obtain considerably higher multiplexing gain and sum rate than other approaches in the interference alignment framework. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 51,052 |
2409.08277 | Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor | High frame rate and accurate depth estimation plays an important role in several tasks crucial to robotics and automotive perception. To date, this can be achieved through ToF and LiDAR devices for indoor and outdoor applications, respectively. However, their applicability is limited by low frame rate, energy consumption, and spatial sparsity. Depth on Demand (DoD) allows for accurate temporal and spatial depth densification achieved by exploiting a high frame rate RGB sensor coupled with a potentially lower frame rate and sparse active depth sensor. Our proposal jointly enables lower energy consumption and denser shape reconstruction, by significantly reducing the streaming requirements on the depth sensor thanks to its three core stages: i) multi-modal encoding, ii) iterative multi-modal integration, and iii) depth decoding. We present extended evidence assessing the effectiveness of DoD on indoor and outdoor video datasets, covering both environment scanning and automotive perception use cases. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 487,843 |
2308.05870 | UFed-GAN: A Secure Federated Learning Framework with Constrained Computation and Unlabeled Data | To satisfy broad applications and the insatiable hunger for deploying low-latency multimedia data classification with data privacy in a cloud-based setting, federated learning (FL) has emerged as an important learning paradigm. For practical cases involving limited computational power and only unlabeled data, common in many wireless communication applications, this work investigates the FL paradigm in a resource-constrained and label-missing environment. Specifically, we propose a novel framework of UFed-GAN: Unsupervised Federated Generative Adversarial Network, which can capture user-side data distribution without local classification training. We also analyze the convergence and privacy of the proposed UFed-GAN. Our experimental results demonstrate the strong potential of UFed-GAN in addressing limited computational resources and unlabeled data while preserving privacy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,934 |
1906.02702 | A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent | This paper is concerned with minimizing the average of $n$ cost functions over a network in which agents may communicate and exchange information with each other. We consider the setting where only noisy gradient information is available. To solve the problem, we study the distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, DSGD asymptotically achieves the optimal network independent convergence rate compared to centralized stochastic gradient descent (SGD). Our main contribution is to characterize the transient time needed for DSGD to approach the asymptotic convergence rate, which we show behaves as $K_T=\mathcal{O}\left(\frac{n}{(1-\rho_w)^2}\right)$, where $1-\rho_w$ denotes the spectral gap of the mixing matrix. Moreover, we construct a "hard" optimization problem for which we show the transient time needed for DSGD to approach the asymptotic convergence rate is lower bounded by $\Omega \left(\frac{n}{(1-\rho_w)^2} \right)$, implying the sharpness of the obtained result. Numerical experiments demonstrate the tightness of the theoretical results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | true | 134,150 |
1702.07203 | Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT | We investigate pivot-based translation between related languages in a low resource, phrase-based SMT setting. We show that a subword-level pivot-based SMT model using a related pivot language is substantially better than word and morpheme-level pivot models. It is also highly competitive with the best direct translation model, which is encouraging as no direct source-target training corpus is used. We also show that combining multiple related language pivot models can rival a direct translation model. Thus, the use of subwords as translation units coupled with multiple related pivot languages can compensate for the lack of a direct parallel corpus. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 68,745 |
2309.09336 | Unleashing the Power of Dynamic Mode Decomposition and Deep Learning for Rainfall Prediction in North-East India | Accurate rainfall forecasting is crucial for effective disaster preparedness and mitigation in the North-East region of India, which is prone to extreme weather events such as floods and landslides. In this study, we investigated the use of two data-driven methods, Dynamic Mode Decomposition (DMD) and Long Short-Term Memory (LSTM), for rainfall forecasting using daily rainfall data collected from the India Meteorological Department for the northeast region over a period of 118 years. We conducted a comparative analysis of these methods to determine their relative effectiveness in predicting rainfall patterns. Using historical rainfall data from multiple weather stations, we trained and validated our models to forecast future rainfall patterns. Our results indicate that both DMD and LSTM are effective in forecasting rainfall, with LSTM outperforming DMD in terms of accuracy, revealing that LSTM has the ability to capture complex nonlinear relationships in the data, making it a powerful tool for rainfall forecasting. Our findings suggest that data-driven methods such as DMD and deep learning approaches like LSTM can significantly improve rainfall forecasting accuracy in the North-East region of India, helping to mitigate the impact of extreme weather events and enhance the region's resilience to climate change. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 392,568 |
1009.0571 | Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization | Relative to the large literature on upper bounds on complexity of convex optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 7,466 |
2107.09225 | Discriminator-Free Generative Adversarial Attack | Deep Neural Networks are vulnerable to adversarial examples (Figure 1): adding inconspicuous perturbations to images makes DNN-based systems collapse. Most of the existing works on adversarial attack are gradient-based and suffer from latency inefficiency and heavy GPU memory load. Generative-based adversarial attacks can get rid of this limitation, and some related works propose GAN-based approaches. However, because training a GAN is difficult to converge, the resulting adversarial examples have either bad attack ability or bad visual quality. In this work, we find that the discriminator may not be necessary for a generative-based adversarial attack, and propose the Symmetric Saliency-based Auto-Encoder (SSAE) to generate the perturbations, which is composed of a saliency map module and an angle-norm disentanglement of features module. The advantage of our proposed method is that it does not depend on a discriminator, and it uses the generated saliency map to pay more attention to label-relevant regions. Extensive experiments across various tasks, datasets, and models demonstrate that the adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality. The code is available at https://github.com/BravoLu/SSAE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 246,968 |
2007.15576 | Dense Scene Multiple Object Tracking with Box-Plane Matching | Multiple Object Tracking (MOT) is an important task in computer vision. MOT is still challenging due to the occlusion problem, especially in dense scenes. Following the tracking-by-detection framework, we propose the Box-Plane Matching (BPM) method to improve the MOT performance in dense scenes. First, we design the Layer-wise Aggregation Discriminative Model (LADM) to filter the noisy detections. Then, to associate remaining detections correctly, we introduce the Global Attention Feature Model (GAFM) to extract appearance feature and use it to calculate the appearance similarity between history tracklets and current detections. Finally, we propose the Box-Plane Matching strategy to achieve data association according to the motion similarity and appearance similarity between tracklets and detections. With the effectiveness of the three modules, our team achieves the 1st place on the Track-1 leaderboard in the ACM MM Grand Challenge HiEve 2020. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 189,702 |
2011.08575 | Audience Creation for Consumables -- Simple and Scalable Precision Merchandising for a Growing Marketplace | Consumable categories, such as grocery and fast-moving consumer goods, are quintessential to the growth of e-commerce marketplaces in developing countries. In this work, we present the design and implementation of a precision merchandising system, which creates audience sets from over 10 million consumers and is deployed at Flipkart Supermart, one of the largest online grocery stores in India. We employ temporal point process to model the latent periodicity and mutual-excitation in the purchase dynamics of consumables. Further, we develop a likelihood-free estimation procedure that is robust against data sparsity, censure and noise typical of a growing marketplace. Lastly, we scale the inference by quantizing the triggering kernels and exploiting sparse matrix-vector multiplication primitive available on a commercial distributed linear algebra backend. In operation spanning more than a year, we have witnessed a consistent increase in click-through rate in the range of 25-70% for banner-based merchandising in the storefront, and in the range of 12-26% for push notification-based campaigns. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 206,924 |
2210.13432 | Towards Better Few-Shot and Finetuning Performance with Forgetful Causal Language Models | Large language models (LLM) trained using the next-token-prediction objective, such as GPT3 and PaLM, have revolutionized natural language processing in recent years by showing impressive zero-shot and few-shot capabilities across a wide range of tasks. In this work, we propose a simple technique that significantly boosts the performance of LLMs without adding computational cost. Our key observation is that, by performing the next token prediction task with randomly selected past tokens masked out, we can improve the quality of the learned representations for downstream language understanding tasks. We hypothesize that randomly masking past tokens prevents over-attending to recent tokens and encourages attention to tokens in the distant past. We find that our method, Forgetful Causal Masking (FCM), significantly improves both few-shot and finetuning performance of PaLM. We further consider a simple extension, T-FCM, which introduces bidirectional context to causal language model without altering the sequence order, and further improves finetuning performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 326,158 |
1801.00121 | Resource Allocation for Downlink NOMA Systems: Key Techniques and Open Issues | This article presents advances in resource allocation (RA) for downlink non-orthogonal multiple access (NOMA) systems, focusing on user pairing (UP) and power allocation (PA) algorithms. The former pairs the users to obtain a high capacity gain by exploiting the channel gain difference between the users, while the latter allocates power to users in each cluster to balance system throughput and user fairness. Additionally, the article introduces the concept of cluster fairness and proposes the divide-and-next-largest-difference-based UP algorithm to distribute the capacity gain among the NOMA clusters in a controlled manner. Furthermore, performance comparison between multiple-input multiple-output NOMA (MIMO-NOMA) and MIMO-OMA is conducted when users have pre-defined quality of service. Simulation results are presented, which validate the advantages of NOMA over OMA. Finally, the article provides avenues for further research on RA for downlink NOMA. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 87,513 |
2104.02000 | Can audio-visual integration strengthen robustness under multimodal attacks? | In this paper, we propose to make a systematic study on machines' multisensory perception under attacks. We use the audio-visual event recognition task against multimodal adversarial attacks as a proxy to investigate the robustness of audio-visual learning. We attack audio, visual, and both modalities to explore whether audio-visual integration still strengthens perception and how different fusion mechanisms affect the robustness of audio-visual models. For interpreting the multimodal interactions under attacks, we learn a weakly-supervised sound source visual localization model to localize sounding regions in videos. To mitigate multimodal attacks, we propose an audio-visual defense approach based on an audio-visual dissimilarity constraint and external feature memory banks. Extensive experiments demonstrate that audio-visual models are susceptible to multimodal adversarial attacks; audio-visual integration could decrease the model robustness rather than strengthen it under multimodal attacks; even a weakly-supervised sound source visual localization model can be successfully fooled; and our defense method can improve the invulnerability of audio-visual networks without significantly sacrificing clean model performance. | false | false | true | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 228,552 |
2401.15569 | Efficient Tuning and Inference for Large Language Models on Textual Graphs | Rich textual and topological information of textual graphs needs to be modeled in real-world applications such as webpages, e-commerce, and academic articles. Practitioners have long followed the path of adopting a shallow text encoder and a subsequent graph neural network (GNN) to solve this problem. In light of recent advancements in large language models (LLMs), it is apparent that integrating LLMs for enhanced textual encoding can substantially improve the performance of textual graphs. Nevertheless, the efficiency of these methods poses a significant challenge. In this paper, we propose ENGINE, a parameter- and memory-efficient fine-tuning method for textual graphs with an LLM encoder. The key insight is to combine the LLMs and GNNs through a tunable side structure, which significantly reduces the training complexity without impairing the joint model's capacity. Extensive experiments on textual graphs demonstrate our method's effectiveness by achieving the best model performance, meanwhile having the lowest training cost compared to previous methods. Moreover, we introduce two variants with caching and dynamic early exit to further enhance training and inference speed. Specifically, caching accelerates ENGINE's training by 12x, and dynamic early exit achieves up to 5x faster inference with a negligible performance drop (at maximum a 1.17% relative drop across 7 datasets). Our codes are available at: https://github.com/ZhuYun97/ENGINE | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 424,502 |
1006.2565 | State-Dependent Relay Channel with Private Messages with Partial Causal and Non-Causal Channel State Information | In this paper, we introduce a discrete memoryless State-Dependent Relay Channel with Private Messages (SD-RCPM) as a generalization of the state-dependent relay channel. We investigate two main cases: SD-RCPM with non-causal Channel State Information (CSI), and SD-RCPM with causal CSI. In each case, it is assumed that partial CSI is available at the source and relay. For non-causal case, we establish an achievable rate region using Gel'fand-Pinsker type coding scheme at the nodes informed of CSI, and Compress-and-Forward (CF) scheme at the relay. Using Shannon's strategy and CF scheme, an achievable rate region for causal case is obtained. As an example, the Gaussian version of SD-RCPM is considered, and an achievable rate region for Gaussian SD-RCPM with non-causal perfect CSI only at the source, is derived. Providing numerical examples, we illustrate the comparison between achievable rate regions derived using CF and Decode-and-Forward (DF) schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 6,774 |
1708.03065 | Heterogeneous Networks with Power-Domain NOMA: Coverage, Throughput and Power Allocation Analysis | In a heterogeneous cellular network (HetNet), consider that a base station in the HetNet is able to simultaneously schedule and serve K users in the downlink by performing the power-domain non-orthogonal multiple access (NOMA) scheme. This paper aims at the preliminary study on the downlink coverage and throughput performances of the HetNet with the non-cooperative and the (proposed) cooperative NOMA schemes. First, the coverage probability and link throughput of K users in each cell are studied and their accurate expressions are derived for the non-cooperative NOMA scheme in which no BSs are coordinated to jointly transmit the NOMA signals for a particular user. We show that the coverage and link throughput can be largely reduced if transmit power allocations among the K users do not satisfy the constraint derived. Next, we analyze the coverage and link throughput of K users for the cooperative NOMA scheme in which the void BSs without users are coordinated to enhance the farthest NOMA user in a cell. The derived accurate results show that cooperative NOMA can significantly improve the coverage and link throughput of all users. Finally, we show that there exist optimal power allocation schemes that maximize the average cell coverage and throughput under some derived power allocation constraints and numerical results validate our analytical findings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 78,711 |
1805.11728 | Sapphire: Querying RDF Data Made Simple | RDF data in the linked open data (LOD) cloud is very valuable for many different applications. In order to unlock the full value of this data, users should be able to issue complex queries on the RDF datasets in the LOD cloud. SPARQL can express such complex queries, but constructing SPARQL queries can be a challenge to users since it requires knowing the structure and vocabulary of the datasets being queried. In this paper, we introduce Sapphire, a tool that helps users write syntactically and semantically correct SPARQL queries without prior knowledge of the queried datasets. Sapphire interactively helps the user while typing the query by providing auto-complete suggestions based on the queried data. After a query is issued, Sapphire provides suggestions on ways to change the query to better match the needs of the user. We evaluated Sapphire based on performance experiments and a user study and showed it to be superior to competing approaches. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 98,991 |
2007.10534 | Check_square at CheckThat! 2020: Claim Detection in Social Media via Fusion of Transformer and Syntactic Features | In this digital age of news consumption, a news reader has the ability to react, express and share opinions with others in a highly interactive and fast manner. As a consequence, fake news has made its way into our daily life because of very limited capacity to verify news on the Internet by large companies as well as individuals. In this paper, we focus on solving two problems which are part of the fact-checking ecosystem that can help to automate fact-checking of claims in an ever increasing stream of content on social media. For the first problem, claim check-worthiness prediction, we explore the fusion of syntactic features and deep transformer Bidirectional Encoder Representations from Transformers (BERT) embeddings, to classify check-worthiness of a tweet, i.e. whether it includes a claim or not. We conduct a detailed feature analysis and present our best performing models for English and Arabic tweets. For the second problem, claim retrieval, we explore the pre-trained embeddings from a Siamese network transformer model (sentence-transformers) specifically trained for semantic textual similarity, and perform KD-search to retrieve verified claims with respect to a query tweet. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 188,297 |
2202.00995 | MD-GAN with multi-particle input: the machine learning of long-time molecular behavior from short-time MD data | MD-GAN is a machine learning-based method that can evolve part of the system at any time step, accelerating the generation of molecular dynamics data. For the accurate prediction of MD-GAN, sufficient information on the dynamics of a part of the system should be included with the training data. Therefore, the selection of the part of the system is important for efficient learning. In a previous study, only one particle (or vector) of each molecule was extracted as part of the system. Therefore, we investigated the effectiveness of adding information from other particles to the learning process. In the experiment of the polyethylene system, when the dynamics of three particles of each molecule were used, the diffusion was successfully predicted using one-third of the time length of the training data, compared to the single-particle input. Surprisingly, the unobserved transition of diffusion in the training data was also predicted using this method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,336 |
2101.12252 | Gaussian Process Latent Class Choice Models | We present a Gaussian Process - Latent Class Choice Model (GP-LCCM) to integrate a non-parametric class of probabilistic machine learning within discrete choice models (DCMs). Gaussian Processes (GPs) are kernel-based algorithms that incorporate expert knowledge by assuming priors over latent functions rather than priors over parameters, which makes them more flexible in addressing nonlinear problems. By integrating a Gaussian Process within a LCCM structure, we aim at improving discrete representations of unobserved heterogeneity. The proposed model would assign individuals probabilistically to behaviorally homogeneous clusters (latent classes) using GPs and simultaneously estimate class-specific choice models by relying on random utility models. Furthermore, we derive and implement an Expectation-Maximization (EM) algorithm to jointly estimate/infer the hyperparameters of the GP kernel function and the class-specific choice parameters by relying on a Laplace approximation and gradient-based numerical optimization methods, respectively. The model is tested on two different mode choice applications and compared against different LCCM benchmarks. Results show that GP-LCCM allows for a more complex and flexible representation of heterogeneity and improves both in-sample fit and out-of-sample predictive power. Moreover, behavioral and economic interpretability is maintained at the class-specific choice model level while local interpretation of the latent classes can still be achieved, although the non-parametric characteristic of GPs lessens the transparency of the model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 217,522 |
2104.10868 | Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting | Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems. Especially, deep neural network (DNN) methods have significantly reduced estimation errors for crowd counting missions. Recent studies have demonstrated that DNNs are vulnerable to adversarial attacks, i.e., normal images with human-imperceptible perturbations could mislead DNNs to make false predictions. In this work, we propose a robust attack strategy called Adversarial Patch Attack with Momentum (APAM) to systematically evaluate the robustness of crowd counting models, where the attacker's goal is to create an adversarial perturbation that severely degrades their performances, thus leading to public safety accidents (e.g., stampede accidents). Especially, the proposed attack leverages the extreme-density background information of input images to generate robust adversarial patches via a series of transformations (e.g., interpolation, rotation, etc.). We observe that by perturbing less than 6\% of image pixels, our attacks severely degrade the performance of crowd counting systems, both digitally and physically. To better enhance the adversarial robustness of crowd counting models, we propose the first regression model-based Randomized Ablation (RA), which is more sufficient than Adversarial Training (ADT) (Mean Absolute Error of RA is 5 lower than ADT on clean samples and 30 lower than ADT on adversarial examples). Extensive experiments on five crowd counting models demonstrate the effectiveness and generality of the proposed method. The supplementary materials and certificate retrained models are available at \url{https://www.dropbox.com/s/hc4fdx133vht0qb/ACM_MM2021_Supp.pdf?dl=0} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,745 |
1811.07350 | Policy Optimization with Model-based Explorations | Model-free reinforcement learning methods such as the Proximal Policy Optimization algorithm (PPO) have been successfully applied in complex decision-making problems such as Atari games. However, these methods suffer from high variances and high sample complexity. On the other hand, model-based reinforcement learning methods that learn the transition dynamics are more sample efficient, but they often suffer from the bias of the transition estimation. How to make use of both model-based and model-free learning is a central problem in reinforcement learning. In this paper, we present a new technique to address the trade-off between exploration and exploitation, which regards the difference between model-free and model-based estimations as a measure of exploration value. We apply this new technique to the PPO algorithm and arrive at a new policy optimization method, named Policy Optimization with Model-based Explorations (POME). POME uses two components to predict the actions' target values: a model-free one estimated by Monte-Carlo sampling and a model-based one which learns a transition model and predicts the value of the next state. POME adds the error of these two target estimations as the additional exploration value for each state-action pair, i.e., encourages the algorithm to explore the states with larger target errors which are hard to estimate. We compare POME with PPO on Atari 2600 games, and it shows that POME outperforms PPO on 33 games out of 49 games. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 113,744
1008.3306 | Modelling of Multi-Agent Systems: Experiences with Membrane Computing
and Future Challenges | Formal modelling of Multi-Agent Systems (MAS) is a challenging task due to high complexity, interaction, parallelism and continuous change of roles and organisation between agents. In this paper we record our research experience on formal modelling of MAS. We review our research throughout the last decade, by describing the problems we have encountered and the decisions we have made towards resolving them and providing solutions. Much of this work involved membrane computing and classes of P Systems, such as Tissue and Population P Systems, targeted to the modelling of MAS whose dynamic structure is a prominent characteristic. More particularly, social insects (such as colonies of ants, bees, etc.), biology inspired swarms and systems with emergent behaviour are indicative examples for which we developed formal MAS models. Here, we aim to review our work and disseminate our findings to fellow researchers who might face similar challenges and, furthermore, to discuss important issues for advancing research on the application of membrane computing in MAS modelling. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 7,312 |
2010.07610 | A Methodology for Ethics-by-Design AI Systems: Dealing with Human Value
Conflicts | The introduction of artificial intelligence into activities traditionally carried out by human beings produces brutal changes. This is not without consequences for human values. This paper is about designing and implementing models of ethical behaviors in AI-based systems, and more specifically it presents a methodology for designing systems that take ethical aspects into account at an early stage while finding an innovative solution to prevent human values from being affected. Two case studies where AI-based innovations complement economic and social proposals with this methodology are presented: one in the field of culture and operated by a private company, the other in the field of scientific research and supported by a state organization. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 200,887 |
2108.02664 | A method to compute the communicability of nodes through causal paths in
temporal networks | We present a method aimed to compute the communicability (broadcast and receive) of nodes through causal paths in temporal networks. The method considers all possible combinations of chronologically ordered products of adjacency matrices of the network snapshots and by means of a damping procedure favors the paths that have high communication efficiency. We apply the method to four real-world networks of face-to-face human contacts and identify the nodes with high communicability. The accuracy of the method is proved by studying the spread of an epidemic in the networks using the susceptible-infected-recovered model. We show that if a node with high broadcast is chosen as the origin of the outbreak of infection then the epidemic spreads early while it is delayed and inhibited if the origin of infection is a node with low broadcast. Receiving nodes can be treated as broadcasters if the arrow of time is reversed. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 249,406 |
1909.10114 | Gridless Angular Domain Channel Estimation for mmWave Massive MIMO
System With One-Bit Quantization Via Approximate Message Passing | We develop a direction of arrival (DoA) and channel estimation algorithm for the one-bit quantized millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) system. By formulating the estimation problem as a noisy one-bit compressed sensing problem, we propose a computationally efficient gridless solution based on the expectation-maximization generalized approximate message passing (EMGAMP) approach. The proposed algorithm does not need the prior knowledge about the number of DoAs and outperforms the existing methods in distinguishing extremely close DoAs for the case of one-bit quantization. Both the DoAs and the channel coefficients are estimated for the case of one-bit quantization. The simulation results show that the proposed algorithm has effective estimation performances when the DoAs are very close to each other. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 146,450 |
1506.05900 | Representation Learning for Clustering: A Statistical Framework | We address the problem of communicating domain knowledge from a user to the designer of a clustering algorithm. We propose a protocol in which the user provides a clustering of a relatively small random sample of a data set. The algorithm designer then uses that sample to come up with a data representation under which $k$-means clustering results in a clustering (of the full data set) that is aligned with the user's clustering. We provide a formal statistical model for analyzing the sample complexity of learning a clustering representation with this paradigm. We then introduce a notion of capacity of a class of possible representations, in the spirit of the VC-dimension, showing that classes of representations that have finite such dimension can be successfully learned with sample size error bounds, and end our discussion with an analysis of that dimension for classes of representations induced by linear embeddings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 44,350 |
2206.03795 | NOMA-based Improper Signaling for Multicell MISO RIS-assisted Broadcast
Channels | In this paper, we study the performance of reconfigurable intelligent surfaces (RISs) in a multicell broadcast channel (BC) that employs improper Gaussian signaling (IGS) jointly with non-orthogonal multiple access (NOMA) to optimize either the minimum-weighted rate or the energy efficiency (EE) of the network. We show that although the RIS can significantly improve the system performance, it cannot mitigate interference completely, so we have to employ other interference-management techniques to further improve performance. We show that the proposed NOMA-based IGS scheme can substantially outperform proper Gaussian signaling (PGS) and IGS schemes that treat interference as noise (TIN), in particular when the number of users per cell is larger than the number of base station (BS) antennas (referred to as overloaded networks). In other words, IGS and NOMA complement each other as interference-management techniques in multicell RIS-assisted BCs. Furthermore, we consider three different feasibility sets for the RIS components, showing that even a RIS with a small number of elements provides considerable gains for all the feasibility sets. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 301,409
2407.08861 | A Hybrid Spiking-Convolutional Neural Network Approach for Advancing
Machine Learning Models | In this article, we propose a novel standalone hybrid Spiking-Convolutional Neural Network (SC-NN) model and test it on image inpainting tasks. Our approach uses the unique capabilities of SNNs, such as event-based computation and temporal processing, along with the strong representation learning abilities of CNNs, to generate high-quality inpainted images. The model is trained on a custom dataset specifically designed for image inpainting, where missing regions are created using masks. The hybrid model consists of SNNConv2d layers and traditional CNN layers. The SNNConv2d layers implement the leaky integrate-and-fire (LIF) neuron model, capturing spiking behavior, while the CNN layers capture spatial features. In this study, a mean squared error (MSE) loss function is used for training, where a training loss value of 0.015 indicates accurate performance on the training set, and the model achieved a validation loss value as low as 0.0017 on the testing set. Furthermore, extensive experimental results demonstrate state-of-the-art performance, showcasing the potential of integrating temporal dynamics and feature extraction in a single network for image inpainting. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 472,334
2106.14617 | Optimized Wireless Control and Telemetry Network for Mobile Soccer
Robots | In a diverse set of robotics applications, including RoboCup categories, mobile robots require control commands to interact with the surrounding environment correctly. These control commands should be sent wirelessly so as not to interfere with the robots' movement; also, the communication has a set of requirements, including low latency and consistent delivery. This paper presents a complete communication architecture consisting of computer communication with a base station, which transmits the data to the robots and returns robot telemetry to the computer. With the proposed communication, it is possible to send messages in less than 4.5 ms for six robots with telemetry enabled in all of them. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 243,465
1508.02959 | Mountain Peak Detection in Online Social Media | We present a system for the classification of mountain panoramas from user-generated photographs followed by identification and extraction of mountain peaks from those panoramas. We have developed an automatic technique that, given as input a geo-tagged photograph, estimates its FOV (Field Of View) and the direction of the camera using a matching algorithm on the photograph edge maps and a rendered view of the mountain silhouettes that should be seen from the observer's point of view. The extraction algorithm then identifies the mountain peaks present in the photograph and their profiles. We discuss possible applications in social fields such as photograph peak tagging on social portals, augmented reality on mobile devices when viewing a mountain panorama, and generation of collective intelligence systems (such as environmental models) from massive social media collections (e.g. snow water availability maps based on mountain peak states extracted from photograph hosting services). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 45,955 |
2305.09758 | A Video Is Worth 4096 Tokens: Verbalize Videos To Understand Them In
Zero Shot | Multimedia content, such as advertisements and story videos, exhibit a rich blend of creativity and multiple modalities. They incorporate elements like text, visuals, audio, and storytelling techniques, employing devices like emotions, symbolism, and slogans to convey meaning. There is a dearth of large annotated training datasets in the multimedia domain hindering the development of supervised learning models with satisfactory performance for real-world applications. On the other hand, the rise of large language models (LLMs) has witnessed remarkable zero-shot performance in various natural language processing (NLP) tasks, such as emotion classification, question-answering, and topic classification. To leverage such advanced techniques to bridge this performance gap in multimedia understanding, we propose verbalizing long videos to generate their descriptions in natural language, followed by performing video-understanding tasks on the generated story as opposed to the original video. Through extensive experiments on fifteen video-understanding tasks, we demonstrate that our method, despite being zero-shot, achieves significantly better results than supervised baselines for video understanding. Furthermore, to alleviate a lack of story understanding benchmarks, we publicly release the first dataset on a crucial task in computational social science on persuasion strategy identification. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 364,758 |
2306.04528 | PromptRobust: Towards Evaluating the Robustness of Large Language Models
on Adversarial Prompts | The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts. In response to this vital need, we introduce PromptRobust, a robustness benchmark designed to measure LLMs' resilience to adversarial prompts. This study uses a plethora of adversarial textual attacks targeting prompts across multiple levels: character, word, sentence, and semantic. The adversarial prompts, crafted to mimic plausible user errors like typos or synonyms, aim to evaluate how slight deviations can affect LLM outcomes while maintaining semantic integrity. These prompts are then employed in diverse tasks including sentiment analysis, natural language inference, reading comprehension, machine translation, and math problem-solving. Our study generates 4,788 adversarial prompts, meticulously evaluated over 8 tasks and 13 datasets. Our findings demonstrate that contemporary LLMs are not robust to adversarial prompts. Furthermore, we present a comprehensive analysis to understand the mystery behind prompt robustness and its transferability. We then offer insightful robustness analysis and pragmatic recommendations for prompt composition, beneficial to both researchers and everyday users. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 371,784 |
2208.01537 | Optimal Friendly Jamming and Transmit Power Allocation in RIS-assisted
Secure Communication | This paper analyzes the secrecy performance of a reconfigurable intelligent surface (RIS) assisted wireless communication system with a friendly jammer in the presence of an eavesdropper. The friendly jammer enhances the secrecy by introducing artificial noise towards the eavesdropper without degrading the reception at the destination. Approximate secrecy outage probability (SOP) is derived in closed form. We also provide a simpler approximate closed-form expression for the SOP in order to understand the effect of system parameters on the performance and to find the optimal power allocation for the transmitter and jammer. The optimal transmit and jamming power allocation factor is derived by minimizing the SOP assuming a total power constraint. It is shown that the SOP performance is significantly improved by the introduction of the jammer and a gain of approximately $3$ dB is achieved at an SOP of $10^{-4}$ by optimally allocating power compared to the case of equal power allocation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 311,191 |
2206.08181 | ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural
Networks via Normalization | Graph Neural Networks (GNNs) have attracted much attention due to their ability in learning representations from graph-structured data. Despite the successful applications of GNNs in many domains, the optimization of GNNs is less well studied, and the performance on node classification heavily suffers from the long-tailed node degree distribution. This paper focuses on improving the performance of GNNs via normalization. In detail, by studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs, which is termed ResNorm (\textbf{Res}haping the long-tailed distribution into a normal-like distribution via \textbf{norm}alization). The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes (\textit{i}.\textit{e}., low-degree nodes). We provide a theoretical interpretation and empirical evidence for understanding the mechanism of the above $scale$. In addition to the long-tailed distribution issue, over-smoothing is also a fundamental issue plaguing the community. To this end, we analyze the behavior of the standard shift and prove that the standard shift serves as a preconditioner on the weight matrix, increasing the risk of over-smoothing. With the over-smoothing issue in mind, we design a $shift$ operation for ResNorm that simulates the degree-specific parameter strategy in a low-cost manner. Extensive experiments have validated the effectiveness of ResNorm on several node classification benchmark datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,032 |
2006.09239 | Posterior Network: Uncertainty Estimation without OOD Samples via
Density-Based Pseudo-Counts | Accurate estimation of aleatoric and epistemic uncertainty is crucial to build safe and reliable systems. Traditional approaches, such as dropout and ensemble methods, estimate uncertainty by sampling probability predictions from different submodels, which leads to slow uncertainty estimation at inference time. Recent works address this drawback by directly predicting parameters of prior distributions over the probability predictions with a neural network. While this approach has demonstrated accurate uncertainty estimation, it requires defining arbitrary target parameters for in-distribution data and makes the unrealistic assumption that out-of-distribution (OOD) data is known at training time. In this work we propose the Posterior Network (PostNet), which uses Normalizing Flows to predict an individual closed-form posterior distribution over predicted probabilities for any input sample. The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- without requiring access to OOD data at training time. PostNet achieves state-of-the-art results in OOD detection and in uncertainty calibration under dataset shifts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,482
1707.08115 | A novel CS Beamformer root-MUSIC algorithm and its subspace deviation
analysis | Subspace based techniques for direction of arrival (DOA) estimation need large amount of snapshots to detect source directions accurately. This poses a problem in the form of computational burden on practical applications. The introduction of compressive sensing (CS) to solve this issue has become a norm in the last decade. In this paper, a novel CS beamformer root-MUSIC algorithm is presented with a revised optimal measurement matrix bound. With regards to this algorithm, the effect of signal subspace deviation under low snapshot scenario (e.g. target tracking) is analysed. The CS beamformer greatly reduces computational complexity without affecting resolution of the algorithm, works on par with root-MUSIC under low snapshot scenario and also, gives an option of non-uniform linear array sensors unlike the case of root-MUSIC algorithm. The effectiveness of the algorithm is demonstrated with simulations under various scenarios. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,751 |
2405.20387 | Sensitivity Analysis for Piecewise-Affine Approximations of Nonlinear
Programs with Polytopic Constraints | Nonlinear Programs (NLPs) are prevalent in optimization-based control of nonlinear systems. Solving general NLPs is computationally expensive, necessitating the development of fast hardware or tractable suboptimal approximations. This paper investigates the sensitivity of the solutions of NLPs with polytopic constraints when the nonlinear continuous objective function is approximated by a PieceWise-Affine (PWA) counterpart. By leveraging perturbation analysis using a convex modulus, we derive guaranteed bounds on the distance between the optimal solution of the original polytopically-constrained NLP and that of its approximated formulation. Our approach aids in determining criteria for achieving desired solution bounds. Two case studies on the Eggholder function and nonlinear model predictive control of an inverted pendulum demonstrate the theoretical results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 459,319 |
2411.09820 | WelQrate: Defining the Gold Standard in Small Molecule Drug Discovery
Benchmarking | While deep learning has revolutionized computer-aided drug discovery, the AI community has predominantly focused on model innovation and placed less emphasis on establishing best benchmarking practices. We posit that without a sound model evaluation framework, the AI community's efforts cannot reach their full potential, thereby slowing the progress and transfer of innovation into real-world drug discovery. Thus, in this paper, we seek to establish a new gold standard for small molecule drug discovery benchmarking, WelQrate. Specifically, our contributions are threefold: WelQrate Dataset Collection - we introduce a meticulously curated collection of 9 datasets spanning 5 therapeutic target classes. Our hierarchical curation pipelines, designed by drug discovery experts, go beyond the primary high-throughput screen by leveraging additional confirmatory and counter screens along with rigorous domain-driven preprocessing, such as Pan-Assay Interference Compounds (PAINS) filtering, to ensure the high-quality data in the datasets; WelQrate Evaluation Framework - we propose a standardized model evaluation framework considering high-quality datasets, featurization, 3D conformation generation, evaluation metrics, and data splits, which provides a reliable benchmarking for drug discovery experts conducting real-world virtual screening; Benchmarking - we evaluate model performance through various research questions using the WelQrate dataset collection, exploring the effects of different models, dataset quality, featurization methods, and data splitting strategies on the results. In summary, we recommend adopting our proposed WelQrate as the gold standard in small molecule drug discovery benchmarking. The WelQrate dataset collection, along with the curation codes, and experimental scripts are all publicly available at WelQrate.org. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 508,377
1006.2977 | Algebraic Constructions of Graph-Based Nested Codes from Protographs | Nested codes have been employed in a large number of communication applications as a specific case of superposition codes, for example to implement binning schemes in the presence of noise, in joint network-channel coding, or in physical-layer secrecy. Whereas nested lattice codes have been proposed recently for continuous-input channels, in this paper we focus on the construction of nested linear codes for joint channel-network coding problems based on algebraic protograph LDPC codes. In particular, over the past few years several constructions of codes have been proposed that are based on random lifts of suitably chosen base graphs. More recently, an algebraic analog of this approach was introduced using the theory of voltage graphs. In this paper we illustrate how these methods can be used in the construction of nested codes from algebraic lifts of graphs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 6,798 |
1811.08622 | Angular Triplet-Center Loss for Multi-view 3D Shape Retrieval | How to obtain the desirable representation of a 3D shape, which is discriminative across categories and polymerized within classes, is a significant challenge in 3D shape retrieval. Most existing 3D shape retrieval methods focus on capturing strong discriminative shape representation with softmax loss for the classification task, while shape feature learning with metric loss is neglected for 3D shape retrieval. In this paper, we address this problem based on the intuition that the cosine distance of shape embeddings should be close enough within the same class and far away across categories. Since most 3D shape retrieval tasks use the cosine distance of shape features for measuring shape similarity, we propose a novel metric loss named angular triplet-center loss, which directly optimizes the cosine distances between the features. It inherits the triplet-center loss property to achieve larger inter-class distance and smaller intra-class distance simultaneously. Unlike previous metric losses used in 3D shape retrieval methods, where Euclidean distance is adopted and the margin design is difficult, the proposed method is more convenient to train feature embeddings and more suitable for 3D shape retrieval. Moreover, the angle margin is adopted to replace the cosine margin in order to provide more explicit discriminative constraints on an embedding space. Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore 55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,090
2308.15464 | A Comparative Study of Loss Functions: Traffic Predictions in Regular
and Congestion Scenarios | Spatiotemporal graph neural networks have achieved state-of-the-art performance in traffic forecasting. However, they often struggle to forecast congestion accurately due to the limitations of traditional loss functions. While accurate forecasting of regular traffic conditions is crucial, a reliable AI system must also accurately forecast congestion scenarios to maintain safe and efficient transportation. In this paper, we explore various loss functions inspired by heavy tail analysis and imbalanced classification problems to address this issue. We evaluate the efficacy of these loss functions in forecasting traffic speed, with an emphasis on congestion scenarios. Through extensive experiments on real-world traffic datasets, we discovered that when optimizing for Mean Absolute Error (MAE), the MAE-Focal Loss function stands out as the most effective. When optimizing Mean Squared Error (MSE), Gumbel Loss proves to be the superior choice. These choices effectively forecast traffic congestion events without compromising the accuracy of regular traffic speed forecasts. This research enhances deep learning models' capabilities in forecasting sudden speed changes due to congestion and underscores the need for more research in this direction. By elevating the accuracy of congestion forecasting, we advocate for AI systems that are reliable, secure, and resilient in practical traffic management scenarios. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 388,690 |
2207.06569 | Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting | The practical success of overparameterized neural networks has motivated the recent scientific study of interpolating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks, can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from statistical learning theory. Aiming to explain this, a body of recent work has studied benign overfitting, a phenomenon where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time, implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this intermediate regime tempered overfitting, and we initiate its systematic study. We first explore this phenomenon in the context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and kernel eigenspectrum under which KR exhibits each of the three behaviors. We find that kernels with powerlaw spectra, including Laplace kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 307,919 |
2411.02068 | Model Integrity when Unlearning with T2I Diffusion Models | The rapid advancement of text-to-image Diffusion Models has led to their widespread public accessibility. However these models, trained on large internet datasets, can sometimes generate undesirable outputs. To mitigate this, approximate Machine Unlearning algorithms have been proposed to modify model weights to reduce the generation of specific types of images, characterized by samples from a ``forget distribution'', while preserving the model's ability to generate other images, characterized by samples from a ``retain distribution''. While these methods aim to minimize the influence of training data in the forget distribution without extensive additional computation, we point out that they can compromise the model's integrity by inadvertently affecting generation for images in the retain distribution. Recognizing the limitations of FID and CLIPScore in capturing these effects, we introduce a novel retention metric that directly assesses the perceptual difference between outputs generated by the original and the unlearned models. We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines. Given their straightforward implementation, these algorithms serve as valuable benchmarks for future advancements in approximate Machine Unlearning for Diffusion Models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 505,341 |
2303.01428 | PuSHR: A Multirobot System for Nonprehensile Rearrangement | We focus on the problem of rearranging a set of objects with a team of car-like robot pushers built using off-the-shelf components. Maintaining control of pushed objects while avoiding collisions in a tight space demands highly coordinated motion that is challenging to execute on constrained hardware. Centralized replanning approaches become intractable even for small-sized problems whereas decentralized approaches often get stuck in deadlocks. Our key insight is that by carefully assigning pushing tasks to robots, we could reduce the complexity of the rearrangement task, enabling robust performance via scalable decentralized control. Based on this insight, we built PuSHR, a system that optimally assigns pushing tasks and trajectories to robots offline, and performs trajectory tracking via decentralized control online. Through an ablation study in simulation, we demonstrate that PuSHR dominates baselines ranging from purely decentralized to fully decentralized in terms of success rate and time efficiency across challenging tasks with up to 4 robots. Hardware experiments demonstrate the transfer of our system to the real world and highlight its robustness to model inaccuracies. Our code can be found at https://github.com/prl-mushr/pushr, and videos from our experiments at https://youtu.be/DIWmZerF_O8. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 348,965 |
2310.19574 | Skip-WaveNet: A Wavelet based Multi-scale Architecture to Trace Snow
Layers in Radar Echograms | Airborne radar sensors capture the profile of snow layers present on top of an ice sheet. Accurate tracking of these layers is essential to calculate their thicknesses, which are required to investigate the contribution of polar ice cap melt to sea-level rise. However, automatically processing the radar echograms to detect the underlying snow layers is a challenging problem. In our work, we develop wavelet-based multi-scale deep learning architectures for these radar echograms to improve snow layer detection. These architectures estimate the layer depths with a mean absolute error of 3.31 pixels and 94.3% average precision, achieving higher generalizability as compared to state-of-the-art snow layer detection networks. These depth estimates also agree well with physically drilled stake measurements. Such robust architectures can be used on echograms from future missions to efficiently trace snow layers, estimate their individual thicknesses and thus support sea-level rise projection models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 404,042 |
1311.6227 | Experience of Developing a Meta-Semantic Search Engine | Today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines often fail to provide relevant answers to users' queries due to their dependency on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the Semantic Web manages information with well-defined meaning using ontology. A meta-search engine is a search tool that forwards a user's query to several existing search engines and provides combined results by using its own page-ranking algorithm. SemanTelli is a meta semantic search engine that fetches results from different semantic search engines such as Hakia, DuckDuckGo and SenseBot through intelligent agents. This paper proposes enhancement of SemanTelli with an improved snippet-analysis-based page-ranking algorithm and support for image and news search. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 28,634 |
2109.13441 | DynG2G: An Efficient Stochastic Graph Embedding Method for Temporal
Graphs | Dynamic graph embedding has gained great attention recently due to its capability of learning low dimensional graph representations for complex temporal graphs with high accuracy. However, recent advances mostly focus on learning node embeddings as deterministic "vectors" for static graphs yet disregarding the key graph temporal dynamics and the evolving uncertainties associated with node embedding in the latent space. In this work, we propose an efficient stochastic dynamic graph embedding method (DynG2G) that applies an inductive feed-forward encoder trained with node triplet-based contrastive loss. Every node per timestamp is encoded as a time-dependent probabilistic multivariate Gaussian distribution in the latent space, hence we can quantify the node embedding uncertainty on-the-fly. We adopted eight different benchmarks that represent diversity in size (from 96 nodes to 87,626 and from 13,398 edges to 4,870,863) and diversity in dynamics. We demonstrate via extensive experiments on these eight dynamic graph benchmarks that DynG2G achieves new state-of-the-art performance in capturing the underlying temporal node embeddings. We also demonstrate that DynG2G can predict the evolving node embedding uncertainty, which plays a crucial role in quantifying the intrinsic dimensionality of the dynamical system over time. We obtain a universal relation of the optimal embedding dimension, $L_o$, versus the effective dimensionality of uncertainty, $D_u$, and we infer that $L_o=D_u$ for all cases. This implies that the uncertainty quantification approach we employ in the DynG2G correctly captures the intrinsic dimensionality of the dynamics of such evolving graphs despite the diverse nature and composition of the graphs at each timestamp. Moreover, this $L_o - D_u$ correlation provides a clear path to select adaptively the optimum embedding size at each timestamp by setting $L \ge D_u$. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 257,636 |
1208.4316 | An Online Character Recognition System to Convert Grantha Script to
Malayalam | This paper presents a novel approach to recognize Grantha, an ancient script in South India and converting it to Malayalam, a prevalent language in South India using online character recognition mechanism. The motivation behind this work owes its credit to (i) developing a mechanism to recognize Grantha script in this modern world and (ii) affirming the strong connection among Grantha and Malayalam. A framework for the recognition of Grantha script using online character recognition is designed and implemented. The features extracted from the Grantha script comprises mainly of time-domain features based on writing direction and curvature. The recognized characters are mapped to corresponding Malayalam characters. The framework was tested on a bed of medium length manuscripts containing 9-12 sample lines and printed pages of a book titled Soundarya Lahari written in Grantha by Sri Adi Shankara to recognize the words and sentences. The manuscript recognition rates with the system are for Grantha as 92.11%, Old Malayalam 90.82% and for new Malayalam script 89.56%. The recognition rates of pages of the printed book are for Grantha as 96.16%, Old Malayalam script 95.22% and new Malayalam script as 92.32% respectively. These results show the efficiency of the developed system. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 18,201 |
2407.17783 | How Lightweight Can A Vision Transformer Be | In this paper, we explore a strategy that uses Mixture-of-Experts (MoE) to streamline, rather than augment, vision transformers. Each expert in an MoE layer is a SwiGLU feedforward network, where V and W2 are shared across the layer. No complex attention or convolutional mechanisms are employed. Depth-wise scaling is applied to progressively reduce the size of the hidden layer and the number of experts is increased in stages. Grouped query attention is used. We studied the proposed approach with and without pre-training on small datasets and investigated whether transfer learning works at this scale. We found that the architecture is competitive even at a size of 0.67M parameters. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 476,113 |
2109.12085 | Text-based NP Enrichment | Understanding the relations between entities denoted by NPs in a text is a critical part of human-like natural language understanding. However, only a fraction of such relations is covered by standard NLP tasks and benchmarks nowadays. In this work, we propose a novel task termed text-based NP enrichment (TNE), in which we aim to enrich each NP in a text with all the preposition-mediated relations -- either explicit or implicit -- that hold between it and other NPs in the text. The relations are represented as triplets, each denoted by two NPs related via a preposition. Humans recover such relations seamlessly, while current state-of-the-art models struggle with them due to the implicit nature of the problem. We build the first large-scale dataset for the problem, provide the formal framing and scope of annotation, analyze the data, and report the results of fine-tuned language models on the task, demonstrating the challenge it poses to current technology. A webpage with a data-exploration UI, a demo, and links to the code, models, and leaderboard, to foster further research into this challenging problem can be found at: yanaiela.github.io/TNE/. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 257,157 |
2104.08631 | Training Humans to Train Robots Dynamic Motor Skills | Learning from demonstration (LfD) is commonly considered to be a natural and intuitive way to allow novice users to teach motor skills to robots. However, it is important to acknowledge that the effectiveness of LfD is heavily dependent on the quality of teaching, something that may not be assured with novices. It remains an open question as to the most effective way of guiding demonstrators to produce informative demonstrations beyond ad hoc advice for specific teaching tasks. To this end, this paper investigates the use of machine teaching to derive an index for determining the quality of demonstrations and evaluates its use in guiding and training novices to become better teachers. Experiments with a simple learner robot suggest that guidance and training of teachers through the proposed approach can lead to up to 66.5% decrease in error in the learnt skill. | true | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 230,882 |
2311.12821 | Advancing The Rate-Distortion-Computation Frontier For Neural Image
Compression | The rate-distortion performance of neural image compression models has exceeded the state-of-the-art for non-learned codecs, but neural codecs are still far from widespread deployment and adoption. The largest obstacle is having efficient models that are feasible on a wide variety of consumer hardware. Comparative research and evaluation is difficult due to the lack of standard benchmarking platforms and due to variations in hardware architectures and test environments. Through our rate-distortion-computation (RDC) study we demonstrate that neither floating-point operations (FLOPs) nor runtime are sufficient on their own to accurately rank neural compression methods. We also explore the RDC frontier, which leads to a family of model architectures with the best empirical trade-off between computational requirements and RD performance. Finally, we identify a novel neural compression architecture that yields state-of-the-art RD performance with rate savings of 23.1% over BPG (7.0% over VTM and 3.0% over ELIC) without requiring significantly more FLOPs than other learning-based codecs. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 409,495 |
2104.07365 | D-Cliques: Compensating for Data Heterogeneity with Topology in
Decentralized Federated Learning | The convergence speed of machine learning models trained with Federated Learning is significantly affected by heterogeneous data partitions, even more so in a fully decentralized setting without a central server. In this paper, we show that the impact of label distribution skew, an important type of data heterogeneity, can be significantly reduced by carefully designing the underlying communication topology. We present D-Cliques, a novel topology that reduces gradient bias by grouping nodes in sparsely interconnected cliques such that the label distribution in a clique is representative of the global label distribution. We also show how to adapt the updates of decentralized SGD to obtain unbiased gradients and implement an effective momentum with D-Cliques. Our extensive empirical evaluation on MNIST and CIFAR10 demonstrates that our approach provides similar convergence speed as a fully-connected topology, which provides the best convergence in a data heterogeneous setting, with a significant reduction in the number of edges and messages. In a 1000-node topology, D-Cliques require 98% less edges and 96% less total messages, with further possible gains using a small-world topology across cliques. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 230,389 |
1303.5431 | Intuitions about Ordered Beliefs Leading to Probabilistic Models | The general use of subjective probabilities to model belief has been justified using many axiomatic schemes. For example, 'consistent betting behavior' arguments are well-known. To those not already convinced of the unique fitness and generality of probability models, such justifications are often unconvincing. The present paper explores another rationale for probability models. 'Qualitative probability,' which is known to provide stringent constraints on belief representation schemes, is derived from five simple assumptions about relationships among beliefs. While counterparts of familiar rationality concepts such as transitivity, dominance, and consistency are used, the betting context is avoided. The gap between qualitative probability and probability proper can be bridged by any of several additional assumptions. The discussion here relies on results common in the recent AI literature, introducing a sixth simple assumption. The narrative emphasizes models based on unique complete orderings, but the rationale extends easily to motivate set-valued representations of partial orderings as well. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,119 |
2410.06481 | Leaf Stripping on Uniform Attachment Trees | In this note we analyze the performance of a simple root-finding algorithm in uniform attachment trees. The leaf-stripping algorithm recursively removes all leaves of the tree for a carefully chosen number of rounds. We show that, with probability $1 - \epsilon$, the set of remaining vertices contains the root and has a size only depending on $\epsilon$ but not on the size of the tree. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 496,221 |
2311.08835 | Correlation-Guided Query-Dependency Calibration for Video Temporal
Grounding | Temporal Grounding is to identify specific moments or highlights from a video corresponding to textual descriptions. Typical approaches in temporal grounding treat all video clips equally during the encoding process regardless of their semantic relevance with the text query. Therefore, we propose Correlation-Guided DEtection TRansformer (CG-DETR), exploring to provide clues for query-associated video clips within the cross-modal attention. First, we design an adaptive cross-attention with dummy tokens. Dummy tokens conditioned by text query take portions of the attention weights, preventing irrelevant video clips from being represented by the text query. Yet, not all words equally inherit the text query's correlation to video clips. Thus, we further guide the cross-attention map by inferring the fine-grained correlation between video clips and words. We enable this by learning a joint embedding space for high-level concepts, i.e., moment and sentence level, and inferring the clip-word correlation. Lastly, we exploit the moment-specific characteristics and combine them with the context of each video to form a moment-adaptive saliency detector. By exploiting the degrees of text engagement in each video clip, it precisely measures the highlightness of each clip. CG-DETR achieves state-of-the-art results on various benchmarks for temporal grounding. Codes are available at https://github.com/wjun0830/CGDETR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 407,887 |
2310.13258 | ManiCast: Collaborative Manipulation with Cost-Aware Human Forecasting | Seamless human-robot manipulation in close proximity relies on accurate forecasts of human motion. While there has been significant progress in learning forecast models at scale, when applied to manipulation tasks, these models accrue high errors at critical transition points leading to degradation in downstream planning performance. Our key insight is that instead of predicting the most likely human motion, it is sufficient to produce forecasts that capture how future human motion would affect the cost of a robot's plan. We present ManiCast, a novel framework that learns cost-aware human forecasts and feeds them to a model predictive control planner to execute collaborative manipulation tasks. Our framework enables fluid, real-time interactions between a human and a 7-DoF robot arm across a number of real-world tasks such as reactive stirring, object handovers, and collaborative table setting. We evaluate both the motion forecasts and the end-to-end forecaster-planner system against a range of learned and heuristic baselines while additionally contributing new datasets. We release our code and datasets at https://portal-cornell.github.io/manicast/. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 401,365 |
1306.5850 | Practical Secrecy: Bridging the Gap between Cryptography and Physical
Layer Security | Current security techniques can be implemented either by requiring a secret key exchange or depending on assumptions about the communication channels. In this paper, we show that, by using a physical layer technique known as artificial noise, it is feasible to protect secret data without any form of secret key exchange and any restriction on the communication channels. Specifically, we analyze how the artificial noise can achieve practical secrecy. By treating the artificial noise as an unshared one-time pad secret key, we show that the proposed scheme also achieves Shannon's perfect secrecy. Moreover, we show that achieving perfect secrecy is much easier than ensuring non-zero secrecy capacity, especially when the eavesdropper has more antennas than the transmitter. Focusing on the practical applications, we show that practical secrecy and strong secrecy can be guaranteed even if the eavesdropper attempts to remove the artificial noise. We finally show the connections between traditional cryptography and physical layer security. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 25,437 |
2209.08924 | HVC-Net: Unifying Homography, Visibility, and Confidence Learning for
Planar Object Tracking | Robust and accurate planar tracking over a whole video sequence is vitally important for many vision applications. The key to planar object tracking is to find object correspondences, modeled by homography, between the reference image and the tracked image. Existing methods tend to obtain wrong correspondences with changing appearance variations, camera-object relative motions and occlusions. To alleviate this problem, we present a unified convolutional neural network (CNN) model that jointly considers homography, visibility, and confidence. First, we introduce correlation blocks that explicitly account for the local appearance changes and camera-object relative motions as the base of our model. Second, we jointly learn the homography and visibility that links camera-object relative motions with occlusions. Third, we propose a confidence module that actively monitors the estimation quality from the pixel correlation distributions obtained in correlation blocks. All these modules are plugged into a Lucas-Kanade (LK) tracking pipeline to obtain both accurate and robust planar object tracking. Our approach outperforms the state-of-the-art methods on public POT and TMT datasets. Its superior performance is also verified on a real-world application, synthesizing high-quality in-video advertisements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 318,316 |
2405.06964 | ManiFoundation Model for General-Purpose Robotic Manipulation of Contact
Synthesis with Arbitrary Objects and Robots | To substantially enhance robot intelligence, there is a pressing need to develop a large model that enables general-purpose robots to proficiently undertake a broad spectrum of manipulation tasks, akin to the versatile task-planning ability exhibited by LLMs. The vast diversity in objects, robots, and manipulation tasks presents huge challenges. Our work introduces a comprehensive framework to develop a foundation model for general robotic manipulation that formalizes a manipulation task as contact synthesis. Specifically, our model takes as input object and robot manipulator point clouds, object physical attributes, target motions, and manipulation region masks. It outputs contact points on the object and associated contact forces or post-contact motions for robots to achieve the desired manipulation task. We perform extensive experiments both in the simulation and real-world settings, manipulating articulated rigid objects, rigid objects, and deformable objects that vary in dimensionality, ranging from one-dimensional objects like ropes to two-dimensional objects like cloth and extending to three-dimensional objects such as plasticine. Our model achieves average success rates of around 90\%. Supplementary materials and videos are available on our project website at https://manifoundationmodel.github.io/. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 453,520 |
1209.4316 | Critical Parameter Values and Reconstruction Properties of Discrete
Tomography: Application to Experimental Fluid Dynamics | We analyze representative ill-posed scenarios of tomographic PIV with a focus on conditions for unique volume reconstruction. Based on sparse random seedings of a region of interest with small particles, the corresponding systems of linear projection equations are probabilistically analyzed in order to determine (i) the ability of unique reconstruction in terms of the imaging geometry and the critical sparsity parameter, and (ii) sharpness of the transition to non-unique reconstruction with ghost particles when choosing the sparsity parameter improperly. The sparsity parameter directly relates to the seeding density used for PIV in experimental fluid dynamics that is chosen empirically to date. Our results provide a basic mathematical characterization of the PIV volume reconstruction problem that is an essential prerequisite for any algorithm used to actually compute the reconstruction. Moreover, we connect the sparse volume function reconstruction problem from few tomographic projections to major developments in compressed sensing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 18,637 |
1911.06479 | On Model Robustness Against Adversarial Examples | We study the model robustness against adversarial examples, referred to as small perturbed input data that may however fool many state-of-the-art deep learning models. Unlike previous research, we establish a novel theory addressing the robustness issue from the perspective of stability of the loss function in the small neighborhood of natural examples. We propose to exploit an energy function to describe the stability and prove that reducing such energy guarantees the robustness against adversarial examples. We also show that the traditional training methods including adversarial training with the $l_2$ norm constraint (AT) and Virtual Adversarial Training (VAT) tend to minimize the lower bound of our proposed energy function. We make an analysis showing that minimization of such lower bound can however lead to insufficient robustness within the neighborhood around the input sample. Furthermore, we design a more rational method with the energy regularization which proves to achieve better robustness than previous methods. Through a series of experiments, we demonstrate the superiority of our model on both supervised tasks and semi-supervised tasks. In particular, our proposed adversarial framework achieves the best performance compared with previous adversarial training methods on benchmark datasets MNIST, CIFAR-10, and SVHN. Importantly, they demonstrate much better robustness against adversarial examples than all the other comparison methods. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 153,552 |
2405.10933 | Learning low-degree quantum objects | We consider the problem of learning low-degree quantum objects up to $\varepsilon$-error in $\ell_2$-distance. We show the following results: $(i)$ unknown $n$-qubit degree-$d$ (in the Pauli basis) quantum channels and unitaries can be learned using $O(1/\varepsilon^d)$ queries (independent of $n$), $(ii)$ polynomials $p:\{-1,1\}^n\rightarrow [-1,1]$ arising from $d$-query quantum algorithms can be classically learned from $O((1/\varepsilon)^d\cdot \log n)$ many random examples $(x,p(x))$ (which implies learnability even for $d=O(\log n)$), and $(iii)$ degree-$d$ polynomials $p:\{-1,1\}^n\to [-1,1]$ can be learned through $O(1/\varepsilon^d)$ queries to a quantum unitary $U_p$ that block-encodes $p$. Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded~polynomials. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 454,933 |
1811.01437 | QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural
Network against Adversarial Attacks | Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to the convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a "fixed" number of quantization levels, while in TQ, the quantization levels are "iteratively learned during the training phase", thereby providing a stronger defense mechanism. We apply the proposed techniques on undefended CNNs against different state-of-the-art adversarial attacks from the open-source \textit{Cleverhans} library. The experimental results demonstrate 50%-96% and 10%-50% increase in the classification accuracy of the perturbed images generated from the MNIST and the CIFAR-10 datasets, respectively, on commonly used CNN (Conv2D(64, 8x8) - Conv2D(128, 6x6) - Conv2D(128, 5x5) - Dense(10) - Softmax()) available in \textit{Cleverhans} library. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 112,364 |
2403.01112 | Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning | In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still require significant learning time and often get trapped in local optima by complex tasks, subsequently failing to discover a goal-reaching policy. To address this, we introduce Efficient episodic Memory Utilization (EMU) for MARL, with two primary objectives: (a) accelerating reinforcement learning by leveraging semantically coherent memory from an episodic buffer and (b) selectively promoting desirable transitions to prevent local convergence. To achieve (a), EMU incorporates a trainable encoder/decoder structure alongside MARL, creating coherent memory embeddings that facilitate exploratory memory recall. To achieve (b), EMU introduces a novel reward structure called episodic incentive based on the desirability of states. This reward improves the TD target in Q-learning and acts as an additional incentive for desirable transitions. We provide theoretical support for the proposed incentive and demonstrate the effectiveness of EMU compared to conventional episodic control. The proposed method is evaluated in StarCraft II and Google Research Football, and empirical results indicate further performance improvement over state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 434,264
2304.10712 | Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal Infrared Detectors in Physical World | Infrared imaging systems have a vast array of potential applications in pedestrian detection and autonomous driving, and their safety performance is of great concern. However, few studies have explored the safety of infrared imaging systems in real-world settings. Previous research has used physical perturbations such as small bulbs and thermal "QR codes" to attack infrared imaging detectors, but such methods are highly visible and lack stealthiness. Other researchers have used hot and cold blocks to deceive infrared imaging detectors, but this method is limited in its ability to execute attacks from various angles. To address these shortcomings, we propose a novel physical attack called adversarial infrared blocks (AdvIB). By optimizing the physical parameters of the adversarial infrared blocks, this method can execute a stealthy black-box attack on thermal imaging system from various angles. We evaluate the proposed method based on its effectiveness, stealthiness, and robustness. Our physical tests show that the proposed method achieves a success rate of over 80% under most distance and angle conditions, validating its effectiveness. For stealthiness, our method involves attaching the adversarial infrared block to the inside of clothing, enhancing its stealthiness. Additionally, we test the proposed method on advanced detectors, and experimental results demonstrate an average attack success rate of 51.2%, proving its robustness. Overall, our proposed AdvIB method offers a promising avenue for conducting stealthy, effective and robust black-box attacks on thermal imaging system, with potential implications for real-world safety and security applications. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 359,523
2010.11366 | Random Coordinate Underdamped Langevin Monte Carlo | The Underdamped Langevin Monte Carlo (ULMC) is a popular Markov chain Monte Carlo sampling method. It requires the computation of the full gradient of the log-density at each iteration, an expensive operation if the dimension of the problem is high. We propose a sampling method called Random Coordinate ULMC (RC-ULMC), which selects a single coordinate at each iteration to be updated and leaves the other coordinates untouched. We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions. We show that RC-ULMC is always cheaper than the classical ULMC, with a significant cost reduction when the problem is highly skewed and high dimensional. Our complexity bound for RC-ULMC is also tight in terms of dimension dependence. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 202,222 |
2404.09943 | Novel Joint Estimation and Decoding Metrics for Short-Block length Transmission Systems | This paper presents Bit-Interleaved Coded Modulation metrics for joint estimation detection using training or reference signal transmission strategies for short to long block length channels. We show that it is possible to enhance the performance and sensitivity through joint detection-estimation compared to standard receivers, especially when the channel state information is unknown and the density of the training dimensions is low. The performance analysis makes use of a full 5G transmitter and receiver chains for both Polar and LDPC coded transmissions paired with BPSK/QPSK modulation schemes. We consider transmissions where reference signals are interleaved with data and both are transmitted over a small number of OFDM symbols so that near-perfect channel estimation cannot be achieved. This is particularly adapted to mini-slot transmissions for ultra-reliable, low-latency communications (URLLC) or for short packet random access use cases. We characterize the performance for up to eight receiving antennas in order to determine the performance gain offered by the proposed BICM detection in realistic base station receiver scenarios. Our findings demonstrate that when the detection windows used in the metric units is on the order of four modulated symbols the proposed BICM metrics can be used to achieve detection performance that is close to that of a coherent receiver with perfect channel state information for both polar and LDPC coded configurations. Furthermore, we show that for transmissions with low DMRS density, a good trade-off can be achieved in terms of additional coding gain and improved channel estimation quality by adaptive DMRS power adjustment. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 446,894
1609.04117 | Network learning via multi-agent inverse transportation problems | Despite the ubiquity of transportation data, methods to infer the state parameters of a network either ignore sensitivity of route decisions, require route enumeration for parameterizing descriptive models of route selection, or require complex bilevel models of route assignment behavior. These limitations prevent modelers from fully exploiting ubiquitous data in monitoring transportation networks. Inverse optimization methods that capture network route choice behavior can address this gap, but they are designed to take observations of the same model to learn the parameters of that model, which is statistically inefficient (e.g. requires estimating population route and link flows). New inverse optimization models and supporting algorithms are proposed to learn the parameters of heterogeneous travelers' route behavior to infer shared network state parameters (e.g. link capacity dual prices). The inferred values are consistent with observations of each agent's optimization behavior. We prove that the method can obtain unique dual prices for a network shared by these agents in polynomial time. Four experiments are conducted. The first one, conducted on a 4-node network, verifies the methodology to obtain heterogeneous link cost parameters even when multinomial or mixed logit models would not be meaningfully estimated. The second is a parameter recovery test on the Nguyen-Dupuis network that shows that unique latent link capacity dual prices can be inferred using the proposed method. The third test on the same network demonstrates how a monitoring system in an online learning environment can be designed using this method. The last test demonstrates this learning on real data obtained from a freeway network in Queens, New York, using only real-time Google Maps queries. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 60,964
2004.11369 | Investigating similarities and differences between South African and Sierra Leonean school outcomes using Machine Learning | Available or adequate information to inform decision making for resource allocation in support of school improvement is a critical issue globally. In this paper, we apply machine learning and education data mining techniques on education big data to identify determinants of high schools' performance in two African countries: South Africa and Sierra Leone. The research objective is to build predictors for school performance and extract the importance of different community and school-level features. We deploy interpretable metrics from machine learning approaches such as SHAP values on tree models and odds ratios of LR to extract interactions of factors that can support policy decision making. Determinants of performance vary in these two countries, hence different policy implications and resource allocation recommendations. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 173,889
2411.09279 | A Comparative Analysis of Electricity Consumption Flexibility in Different Industrial Plant Configurations | The flexibility of industrial power consumption plays a key role in the transition to renewable energy systems, contributing to grid stability, cost reduction and decarbonization efforts. This paper presents a novel methodology to quantify and optimize the flexibility of electricity consumption in manufacturing plants. The proposed model is applied to actual cement and steel plant configurations. Comparative simulations performed with the model reveal significant differences in flexibility and cost-effectiveness, driven by factors such as production capacity, downstream process demand, storage capacity, and operational constraints. A comprehensive sensitivity analysis further clarifies the impact of various parameters on production optimization and flexibility savings. Specifically, as demand approaches production levels, flexibility decreases. Although increasing storage capacity typically reduces production costs, the benefits diminish above a certain threshold. The results provide valuable information for industrial operators wishing to improve operational efficiency, reduce costs and increase the flexibility of their operations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 508,197
2501.04733 | AI-Driven Reinvention of Hydrological Modeling for Accurate Predictions and Interpretation to Transform Earth System Modeling | Traditional equation-driven hydrological models often struggle to accurately predict streamflow in challenging regional Earth systems like the Tibetan Plateau, while hybrid and existing algorithm-driven models face difficulties in interpreting hydrological behaviors. This work introduces HydroTrace, an algorithm-driven, data-agnostic model that substantially outperforms these approaches, achieving a Nash-Sutcliffe Efficiency of 98% and demonstrating strong generalization on unseen data. Moreover, HydroTrace leverages advanced attention mechanisms to capture spatial-temporal variations and feature-specific impacts, enabling the quantification and spatial resolution of streamflow partitioning as well as the interpretation of hydrological behaviors such as glacier-snow-streamflow interactions and monsoon dynamics. Additionally, a large language model (LLM)-based application allows users to easily understand and apply HydroTrace's insights for practical purposes. These advancements position HydroTrace as a transformative tool in hydrological and broader Earth system modeling, offering enhanced prediction accuracy and interpretability. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 523,324
1508.00703 | Parameter Database : Data-centric Synchronization for Scalable Machine Learning | We propose a new data-centric synchronization framework for carrying out of machine learning (ML) tasks in a distributed environment. Our framework exploits the iterative nature of ML algorithms and relaxes the application agnostic bulk synchronization parallel (BSP) paradigm that has previously been used for distributed machine learning. Data-centric synchronization complements function-centric synchronization based on using stale updates to increase the throughput of distributed ML computations. Experiments to validate our framework suggest that we can attain substantial improvement over BSP while guaranteeing sequential correctness of ML tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 45,706
2306.07201 | LTCR: Long-Text Chinese Rumor Detection Dataset | False information can spread quickly on social media, negatively influencing the citizens' behaviors and responses to social events. To better detect all of the fake news, especially long texts which are harder to find completely, a Long-Text Chinese Rumor detection dataset named LTCR is proposed. The LTCR dataset provides a valuable resource for accurately detecting misinformation, especially in the context of complex fake news related to COVID-19. The dataset consists of 1,729 and 500 pieces of real and fake news, respectively. The average lengths of real and fake news are approximately 230 and 152 characters. We also propose \method, Salience-aware Fake News Detection Model, which achieves the highest accuracy (95.85%), fake news recall (90.91%) and F-score (90.60%) on the dataset. (https://github.com/Enderfga/DoubleCheck) | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 372,924 |
2404.01705 | Samba: Semantic Segmentation of Remotely Sensed Images with State Space Model | High-resolution remotely sensed images pose a challenge for commonly used semantic segmentation methods such as Convolutional Neural Network (CNN) and Vision Transformer (ViT). CNN-based methods struggle with handling such high-resolution images due to their limited receptive field, while ViT faces challenges in handling long sequences. Inspired by Mamba, which adopts a State Space Model (SSM) to efficiently capture global semantic information, we propose a semantic segmentation framework for high-resolution remotely sensed images, named Samba. Samba utilizes an encoder-decoder architecture, with Samba blocks serving as the encoder for efficient multi-level semantic information extraction, and UperNet functioning as the decoder. We evaluate Samba on the LoveDA, ISPRS Vaihingen, and ISPRS Potsdam datasets, comparing its performance against top-performing CNN and ViT methods. The results reveal that Samba achieved unparalleled performance on commonly used remote sensing datasets for semantic segmentation. Our proposed Samba demonstrates for the first time the effectiveness of SSM in semantic segmentation of remotely sensed images, setting a new benchmark in performance for Mamba-based techniques in this specific application. The source code and baseline implementations are available at https://github.com/zhuqinfeng1999/Samba. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,541
1902.02308 | Decentralized Flood Forecasting Using Deep Neural Networks | Predicting flood for any location at times of extreme storms is a longstanding problem that has utmost importance in emergency management. Conventional methods that aim to predict water levels in streams use advanced hydrological models still lack of giving accurate forecasts everywhere. This study aims to explore artificial deep neural networks' performance on flood prediction. While providing models that can be used in forecasting stream stage, this paper presents a dataset that focuses on the connectivity of data points on river networks. It also shows that neural networks can be very helpful in time-series forecasting as in flood events, and support improving existing models through data assimilation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,848 |
1110.0305 | Significant communities in large sparse networks | Researchers use community-detection algorithms to reveal large-scale organization in biological and social networks, but community detection is useful only if the communities are significant and not a result of noisy data. To assess the statistical significance of the network communities, or the robustness of the detected structure, one approach is to perturb the network structure by removing links and measure how much the communities change. However, perturbing sparse networks is challenging because they are inherently sensitive; they shatter easily if links are removed. Here we propose a simple method to perturb sparse networks and assess the significance of their communities. We generate resampled networks by adding extra links based on local information, then we aggregate the information from multiple resampled networks to find a coarse-grained description of significant clusters. In addition to testing our method on benchmark networks, we use our method on the sparse network of the European Court of Justice (ECJ) case law, to detect significant and insignificant areas of law. We use our significance analysis to draw a map of the ECJ case law network that reveals the relations between the areas of law. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 12,453 |
2007.01760 | Explainable Deep One-Class Classification | Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training and using even a few of these (~5) improves performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 185,525 |
2206.03592 | Click prediction boosting via Bayesian hyperparameter optimization based ensemble learning pipelines | Online travel agencies (OTA's) advertise their website offers on meta-search bidding engines. The problem of predicting the number of clicks a hotel would receive for a given bid amount is an important step in the management of an OTA's advertisement campaign on a meta-search engine, because bid times number of clicks defines the cost to be generated. Various regressors are ensembled in this work to improve click prediction performance. Following the preprocessing procedures, the feature set is divided into train and test groups depending on the logging date of the samples. The data collection is then subjected to feature elimination via utilizing XGBoost, which significantly reduces the dimension of features. The optimum hyper-parameters are then found by applying Bayesian hyperparameter optimization to XGBoost, LightGBM, and SGD models. The different trained models are tested separately as well as combined to form ensemble models. Four alternative ensemble solutions have been suggested. The same test set is used to test both individual and ensemble models, and the results of 46 model combinations demonstrate that stack ensemble models yield the desired R2 score of all. In conclusion, the ensemble model improves the prediction performance by about 10%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,335
2402.18719 | MaxCUCL: Max-Consensus with Deterministic Convergence in Networks with Unreliable Communication | In this paper, we present a novel distributed algorithm (herein called MaxCUCL) designed to guarantee that max-consensus is reached in networks characterized by unreliable communication links (i.e., links suffering from packet drops). Our proposed algorithm is the first algorithm that achieves max-consensus in a deterministic manner (i.e., nodes always calculate the maximum of their states regardless of the nature of the probability distribution of the packet drops). Furthermore, it allows nodes to determine whether convergence has been achieved (enabling them to transition to subsequent tasks). The operation of MaxCUCL relies on the deployment of narrowband error-free feedback channels used for acknowledging whether a packet transmission between nodes was successful. We analyze the operation of our algorithm and show that it converges after a finite number of time steps. Finally, we demonstrate our algorithm's effectiveness and practical applicability by applying it to a sensor network deployed for environmental monitoring. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 433,531
2307.13510 | HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird's Eye View | Vision-based Bird's Eye View (BEV) representation is an emerging perception formulation for autonomous driving. The core challenge is to construct BEV space with multi-camera features, which is a one-to-many ill-posed problem. Diving into all previous BEV representation generation methods, we found that most of them fall into two types: modeling depths in image views or modeling heights in the BEV space, mostly in an implicit way. In this work, we propose to explicitly model heights in the BEV space, which needs no extra data like LiDAR and can fit arbitrary camera rigs and types compared to modeling depths. Theoretically, we give proof of the equivalence between height-based methods and depth-based methods. Considering the equivalence and some advantages of modeling heights, we propose HeightFormer, which models heights and uncertainties in a self-recursive way. Without any extra data, the proposed HeightFormer could estimate heights in BEV accurately. Benchmark results show that the performance of HeightFormer achieves SOTA compared with those camera-only methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 381,610
2308.07187 | On the Asymptotic Nonnegative Rank of Matrices and its Applications in Information Theory | In this paper, we study the asymptotic nonnegative rank of matrices, which characterizes the asymptotic growth of the nonnegative rank of fixed nonnegative matrices under the Kronecker product. This quantity is important since it governs several notions in information theory such as the so-called exact R\'enyi common information and the amortized communication complexity. By using the theory of asymptotic spectra of V. Strassen (J. Reine Angew. Math. 1988), we define formally the asymptotic spectrum of nonnegative matrices and give a dual characterization of the asymptotic nonnegative rank. As a complementary of the nonnegative rank, we introduce the notion of the subrank of a nonnegative matrix and show that it is exactly equal to the size of the maximum induced matching of the bipartite graph defined on the support of the matrix (therefore, independent of the value of entries). Finally, we show that two matrix parameters, namely rank and fractional cover number, belong to the asymptotic spectrum of nonnegative matrices. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 385,417
2305.07465 | Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems | Generative Artificial Intelligence systems have been developed for image, code, story, and game generation with the goal of facilitating human creativity. Recent work on neural generative systems has emphasized one particular means of interacting with AI systems: the user provides a specification, usually in the form of prompts, and the AI system generates the content. However, there are other configurations of human and AI coordination, such as co-creativity (CC) in which both human and AI systems can contribute to content creation, and mixed-initiative (MI) in which both human and AI systems can initiate content changes. In this paper, we define a hypothetical human-AI configuration design space consisting of different means for humans and AI systems to communicate creative intent to each other. We conduct a human participant study with 185 participants to understand how users want to interact with differently configured MI-CC systems. We find out that MI-CC systems with more extensive coverage of the design space are rated higher or on par on a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve user experience and achievement when using the system; Preference varies greatly between expertise groups, suggesting the development of adaptive, personalized MI-CC systems; Participants identified new design space dimensions including scrutability -- the ability to poke and prod at models -- and explainability. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 363,906
2209.08618 | Koopman-theoretic Approach for Identification of Exogenous Anomalies in Nonstationary Time-series Data | In many scenarios, it is necessary to monitor a complex system via a time-series of observations and determine when anomalous exogenous events have occurred so that relevant actions can be taken. Determining whether current observations are abnormal is challenging. It requires learning an extrapolative probabilistic model of the dynamics from historical data, and using a limited number of current observations to make a classification. We leverage recent advances in long-term probabilistic forecasting, namely {\em Deep Probabilistic Koopman}, to build a general method for classifying anomalies in multi-dimensional time-series data. We also show how to utilize models with domain knowledge of the dynamics to reduce type I and type II error. We demonstrate our proposed method on the important real-world task of global atmospheric pollution monitoring, integrating it with NASA's Global Earth System Model. The system successfully detects localized anomalies in air quality due to events such as COVID-19 lockdowns and wildfires. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,193
1810.10789 | Perceptual Visual Interactive Learning | Supervised learning methods are widely used in machine learning. However, the lack of labels in existing data limits the application of these technologies. Visual interactive learning (VIL) compared with computers can avoid semantic gap, and solve the labeling problem of small label quantity (SLQ) samples in a groundbreaking way. In order to fully understand the importance of VIL to the interaction process, we re-summarize the interactive learning related algorithms (e.g. clustering, classification, retrieval etc.) from the perspective of VIL. Note that, perception and cognition are two main visual processes of VIL. On this basis, we propose a perceptual visual interactive learning (PVIL) framework, which adopts gestalt principle to design interaction strategy and multi-dimensionality reduction (MDR) to optimize the process of visualization. The advantage of PVIL framework is that it combines computer's sensitivity of detailed features and human's overall understanding of global tasks. Experimental results validate that the framework is superior to traditional computer labeling methods (such as label propagation) in both accuracy and efficiency, which achieves significant classification results on dense distribution and sparse classes dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 111,364 |
2206.01176 | From Cities to Series: Complex Networks and Deep Learning for Improved Spatial and Temporal Analytics* | Graphs have often been used to answer questions about the interaction between real-world entities by taking advantage of their capacity to represent complex topologies. Complex networks are known to be graphs that capture such non-trivial topologies; they are able to represent human phenomena such as epidemic processes, the dynamics of populations, and the urbanization of cities. The investigation of complex networks has been extrapolated to many fields of science, with particular emphasis on computing techniques, including artificial intelligence. In such a case, the analysis of the interaction between entities of interest is transposed to the internal learning of algorithms, a paradigm whose investigation is able to expand the state of the art in Computer Science. By exploring this paradigm, this thesis puts together complex networks and machine learning techniques to improve the understanding of the human phenomena observed in pandemics, pendular migration, and street networks. Accordingly, we contribute with: (i) a new neural network architecture capable of modeling dynamic processes observed in spatial and temporal data with applications in epidemics propagation, weather forecasting, and patient monitoring in intensive care units; (ii) a machine-learning methodology for analyzing and predicting links in the scope of human mobility between all the cities of Brazil; and, (iii) techniques for identifying inconsistencies in the urban planning of cities while tracking the most influential vertices, with applications over Brazilian and worldwide cities. We obtained results sustained by sound evidence of advances to the state of the art in artificial intelligence, rigorous formalisms, and ample experimentation. Our findings rely upon real-world applications in a range of domains, demonstrating the applicability of our methodologies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 300,374
2210.08083 | Reference Based Color Transfer for Medical Volume Rendering | The benefits of medical imaging are enormous. Medical images provide considerable amounts of anatomical information and this facilitates medical practitioners in performing effective disease diagnosis and deciding upon the best course of medical treatment. A transition from traditional monochromatic medical images like CT scans, X-Rays or MRI images to a colored 3D representation of the anatomical structure further enhances the capabilities of medical professionals in extracting valuable medical information. The proposed framework in our research starts with performing color transfer by finding deep semantic correspondence between two medical images: a colored reference image, and a monochromatic CT scan or an MRI image. We extend this idea of reference-based colorization technique to perform colored volume rendering from a stack of grayscale medical images. Furthermore, we also propose to use an effective reference image recommendation system to aid in the selection of good reference images. With our approach, we successfully perform colored medical volume visualization and essentially eliminate the painstaking process of user interaction with a transfer function to obtain color and opacity parameters for volume rendering. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 323,973 |
2410.22730 | Extensional Properties of Recurrent Neural Networks | A property of a recurrent neural network (RNN) is called \emph{extensional} if, loosely speaking, it is a property of the function computed by the RNN rather than a property of the RNN algorithm. Many properties of interest in RNNs are extensional, for example, robustness against small changes of input or good clustering of inputs. Given an RNN, it is natural to ask whether it has such a property. We give a negative answer to the general question about testing extensional properties of RNNs. Namely, we prove a version of Rice's theorem for RNNs: any nontrivial extensional property of RNNs is undecidable. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 503,747 |
1810.05357 | On The Equivalence of Tries and Dendrograms - Efficient Hierarchical Clustering of Traffic Data | The widespread use of GPS-enabled devices generates voluminous and continuous amounts of traffic data, but analyzing such data for interpretable and actionable insights poses challenges. A hierarchical clustering of the trips has many uses, such as discovering shortest paths, common routes, and often-traversed areas. However, hierarchical clustering typically has time complexity of $O(n^2 \log n)$ where $n$ is the number of instances, and is difficult to scale to the large data sets associated with GPS data. Furthermore, incremental hierarchical clustering is still a developing area. Prefix trees (also called tries) can be efficiently constructed and updated in linear time (in $n$). We show how a specially constructed trie can compactly store the trips, and further show this trie is equivalent to a dendrogram that would have been built by classic agglomerative hierarchical algorithms using a specific distance metric. This allows creating hierarchical clusterings of GPS trip data and updating this hierarchy in linear time. We demonstrate the usefulness of our proposed approach on a real-world data set of half a million taxis' GPS traces, well beyond the capabilities of agglomerative clustering methods. Our work is not limited to trip data and can be used with other data with a string representation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 110,210 |
2112.03298 | Automation Of Transiting Exoplanet Detection, Identification and Habitability Assessment Using Machine Learning Approaches | We are at a unique timeline in the history of human evolution where we may be able to discover earth-like planets around stars outside our solar system where conditions can support life or even find evidence of life on those planets. With the launch of several satellites in recent years by NASA, ESA, and other major space agencies, an ample amount of datasets are at our disposal which can be utilized to train machine learning models that can automate the arduous tasks of exoplanet detection, its identification, and habitability determination. Automating these tasks can save a considerable amount of time and minimize human errors due to manual intervention. To achieve this aim, we first analyze the light intensity curves from stars captured by the Kepler telescope to detect the potential curves that exhibit the characteristics of an existence of a possible planetary system. For this detection, along with training conventional models, we propose a stacked GBDT model that can be trained on multiple representations of the light signals simultaneously. Subsequently, we address the automation of exoplanet identification and habitability determination by leveraging several state-of-art machine learning and ensemble approaches. The identification of exoplanets aims to distinguish false positive instances from the actual instances of exoplanets whereas the habitability assessment groups the exoplanet instances into different clusters based on their habitable characteristics. Additionally, we propose a new metric called Adequate Thermal Adequacy (ATA) score to establish a potential linear relationship between habitable and non-habitable instances. Experimental results suggest that the proposed stacked GBDT model outperformed the conventional models in detecting transiting exoplanets. Furthermore, the incorporation of ATA scores in habitability classification enhanced the performance of models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,153 |
1601.05403 | Semantic Word Clusters Using Signed Normalized Graph Cuts | Vector space representations of words capture many aspects of word similarity, but such methods tend to make vector spaces in which antonyms (as well as synonyms) are close to each other. We present a new signed spectral normalized graph cut algorithm, signed clustering, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words which simultaneously capture distributional and synonym relations. We evaluate these clusters against the SimLex-999 dataset (Hill et al.,2014) of human judgments of word pair similarities, and also show the benefit of using our clusters to predict the sentiment of a given text. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 51,119 |
2203.16256 | Research topic trend prediction of scientific papers based on spatial enhancement and dynamic graph convolution network | In recent years, with the increase of social investment in scientific research, the number of research results in various fields has increased significantly. Accurately and effectively predicting the trends of future research topics can help researchers discover future research hotspots. However, due to the increasingly close correlation between various research themes, there is a certain dependency relationship between a large number of research themes. Viewing a single research theme in isolation and using traditional sequence problem processing methods cannot effectively explore the spatial dependencies between these research themes. To simultaneously capture the spatial dependencies and temporal changes between research topics, we propose a deep neural network-based research topic hotness prediction algorithm, a spatiotemporal convolutional network model. Our model combines a graph convolutional neural network (GCN) and a Temporal Convolutional Network (TCN); specifically, GCNs are used to learn the spatial dependencies of research topics and to strengthen spatial characteristics using this spatial dependence. The TCN is used to learn the dynamics of research topics' trends. Optimization is based on the calculation of weighted losses based on time distance. Compared with the current mainstream sequence prediction models and similar spatiotemporal models on the paper datasets, experiments show that, in research topic prediction tasks, our model can effectively capture spatiotemporal relationships and the predictions outperform state-of-the-art baselines. | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 288,715 |
1711.03525 | Improving the redundancy of Knuth's balancing scheme for packet transmission systems | A simple scheme was proposed by Knuth to generate binary balanced codewords from any information word. However, this method is limited in the sense that its redundancy is twice that of the full sets of balanced codes. The gap between Knuth's algorithm's redundancy and that of the full sets of balanced codes is significantly considerable. This paper attempts to reduce that gap. Furthermore, many constructions assume that a full balancing can be performed without showing the steps. A full balancing refers to the overall balancing of the encoded information together with the prefix. We propose an efficient way to perform a full balancing scheme that does not make use of lookup tables or enumerative coding. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 84,225 |
1409.4481 | Real-time Crowd Tracking using Parameter Optimized Mixture of Motion Models | We present a novel, real-time algorithm to track the trajectory of each pedestrian in moderately dense crowded scenes. Our formulation is based on an adaptive particle-filtering scheme that uses a combination of various multi-agent heterogeneous pedestrian simulation models. We automatically compute the optimal parameters for each of these different models based on prior tracked data and use the best model as motion prior for our particle-filter based tracking algorithm. We also use our "mixture of motion models" for adaptive particle selection and accelerate the performance of the online tracking algorithm. The motion model parameter estimation is formulated as an optimization problem, and we use an approach that solves this combinatorial optimization problem in a model-independent manner and is hence scalable to any multi-agent pedestrian motion model. We evaluate the performance of our approach on different crowd video datasets and highlight the improvement in accuracy over homogeneous motion models and a baseline mean-shift based tracker. In practice, our formulation can compute trajectories of tens of pedestrians on a multi-core desktop CPU in real time and offers higher accuracy as compared to prior real-time pedestrian tracking algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 36,079 |
2107.01858 | Automating Generative Deep Learning for Artistic Purposes: Challenges and Opportunities | We present a framework for automating generative deep learning with a specific focus on artistic applications. The framework provides opportunities to hand over creative responsibilities to a generative system as targets for automation. For the definition of targets, we adopt core concepts from automated machine learning and an analysis of generative deep learning pipelines, both in standard and artistic settings. To motivate the framework, we argue that automation aligns well with the goal of increasing the creative responsibility of a generative system, a central theme in computational creativity research. We understand automation as the challenge of granting a generative system more creative autonomy, by framing the interaction between the user and the system as a co-creative process. The development of the framework is informed by our analysis of the relationship between automation and creative autonomy. An illustrative example shows how the framework can give inspiration and guidance in the process of handing over creative responsibility. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,620 |