id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1508.02788 | The Effects of Hyperparameters on SGD Training of Neural Networks | The performance of neural network classifiers is determined by a number of hyperparameters, including learning rate, batch size, and depth. A number of attempts have been made to explore these parameters in the literature, and at times, to develop methods for optimizing them. However, exploration of parameter spaces has often been limited. In this note, I report the results of large-scale experiments exploring these different parameters and their interactions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 45,938 |
2207.13807 | Pose-NDF: Modeling Human Pose Manifolds with Neural Distance Fields | We present Pose-NDF, a continuous model for plausible human poses based on neural distance fields (NDFs). Pose or motion priors are important for generating realistic new poses and for reconstructing accurate poses from noisy or partial observations. Pose-NDF learns a manifold of plausible poses as the zero level set of a neural implicit function, extending the idea of modeling implicit surfaces in 3D to the high-dimensional domain SO(3)^K, where a human pose is defined by a single data point, represented by K quaternions. The resulting high-dimensional implicit function can be differentiated with respect to the input poses and thus can be used to project arbitrary poses onto the manifold by using gradient descent on the set of 3-dimensional hyperspheres. In contrast to previous VAE-based human pose priors, which transform the pose space into a Gaussian distribution, we model the actual pose manifold, preserving the distances between poses. We demonstrate that Pose-NDF outperforms existing state-of-the-art methods as a prior in various downstream tasks, ranging from denoising real-world human mocap data and pose recovery from occluded data to 3D pose reconstruction from images. Furthermore, we show that it can be used to generate more diverse poses by random sampling and projection than VAE-based methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 310,404 |
2006.11376 | StressGAN: A Generative Deep Learning Model for 2D Stress Distribution Prediction | Using deep learning to analyze mechanical stress distributions has been gaining interest with the demand for fast stress analysis methods. Deep learning approaches have achieved excellent outcomes when utilized to speed up stress computation and learn the physics without prior knowledge of underlying equations. However, most studies restrict the variation of geometry or boundary conditions, making these methods difficult to generalize to unseen configurations. We propose a conditional generative adversarial network (cGAN) model for predicting 2D von Mises stress distributions in solid structures. The cGAN learns to generate stress distributions conditioned on geometries, load, and boundary conditions through a two-player minimax game between two neural networks with no prior knowledge. By evaluating the generative network on two stress distribution datasets under multiple metrics, we demonstrate that our model can predict more accurate high-resolution stress distributions than a baseline convolutional neural network model, given various and complex cases of geometry, load and boundary conditions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 183,195 |
2003.13676 | Deep reinforcement learning for large-scale epidemic control | Epidemics of infectious diseases are an important threat to public health and global economies. Yet, the development of prevention strategies remains a challenging process, as epidemics are non-linear and complex processes. For this reason, we investigate a deep reinforcement learning approach to automatically learn prevention strategies in the context of pandemic influenza. Firstly, we construct a new epidemiological meta-population model, with 379 patches (one for each administrative district in Great Britain), that adequately captures the infection process of pandemic influenza. Our model balances complexity and computational efficiency such that the use of reinforcement learning techniques becomes attainable. Secondly, we set up a ground truth such that we can evaluate the performance of the 'Proximal Policy Optimization' algorithm to learn in a single district of this epidemiological model. Finally, we consider a large-scale problem, by conducting an experiment where we aim to learn a joint policy to control the districts in a community of 11 tightly coupled districts, for which no ground truth can be established. This experiment shows that deep reinforcement learning can be used to learn mitigation policies in complex epidemiological models with a large state space. Moreover, through this experiment, we demonstrate that there can be an advantage to consider collaboration between districts when designing prevention strategies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 170,275 |
0911.4874 | Non-photorealistic image processing: an Impressionist rendering | This paper describes an image processing algorithm for non-photorealistic rendering. The algorithm is based on a random choice of a set of pixels from those of the original image and their substitution with colour spots. An iterative procedure is applied to cover the canvas to a desired level. The resulting effect mimics Impressionist painting and Pointillism. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 5,017 |
2010.05696 | Deep Adversarial Domain Adaptation Based on Multi-layer Joint Kernelized Distance | Domain adaptation refers to the learning scenario in which a model learned from the source data is applied to target data that have the same categories but a different distribution. While it has been widely applied, the distribution discrepancy between source data and target data can substantially affect the adaptation performance. The problem has recently been addressed by employing adversarial learning, and distinctive adaptation performance has been reported. In this paper, a deep adversarial domain adaptation model based on a multi-layer joint kernelized distance metric is proposed. By utilizing the abstract features extracted from deep networks, the multi-layer joint kernelized distance (MJKD) between the $j$th target data point predicted as the $m$th category and all the source data of the $m'$th category is computed. Based on MJKD, a class-balanced selection strategy is utilized in each category to select the target data that are most likely to be classified correctly and treat them as labeled data using their pseudo labels. Then an adversarial architecture is used to draw the newly generated labeled training data and the remaining target data close to each other. In this way, the target data themselves provide valuable information to enhance the domain adaptation. An analysis of the proposed method is also given, and the experimental results demonstrate that the proposed method can achieve better performance than a number of state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 200,229 |
1401.5341 | Domain Views for Constraint Programming | Views are a standard abstraction in constraint programming: They make it possible to implement a single version of each constraint, while avoiding the creation of new variables and constraints that would slow down propagation. Traditional constraint-programming systems provide the concept of {\em variable views} which implement a view of the type $y = f(x)$ by delegating all (domain and constraint) operations on variable $y$ to variable $x$. This paper proposes the alternative concept of {\em domain views} which only delegate domain operations. Domain views preserve the benefits of variable views but simplify the implementation of value-based propagation. Domain views also support non-injective views compositionally, expanding the scope of views significantly. Experimental results demonstrate the practical benefits of domain views. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 30,192 |
2009.08205 | Generating Label Cohesive and Well-Formed Adversarial Claims | Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack is universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimising the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 196,166 |
2406.00282 | Adversarial 3D Virtual Patches using Integrated Gradients | LiDAR sensors are widely used in autonomous vehicles to better perceive the environment. However, prior works have shown that LiDAR signals can be spoofed to hide real objects from 3D object detectors. This study explores the feasibility of reducing the required spoofing area through a novel object-hiding strategy based on virtual patches (VPs). We first manually design VPs (MVPs) and show that VP-focused attacks can achieve success rates similar to prior work but with a fraction of the required spoofing area. Then we design a framework, Saliency-LiDAR (SALL), which can identify critical regions for LiDAR objects using Integrated Gradients. VPs crafted on critical regions (CVPs) reduce object detection recall by at least 15% compared to our baseline with an approximate 50% reduction in the spoofing area for vehicles of average size. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 459,777 |
2411.07320 | Richer Output for Richer Countries: Uncovering Geographical Disparities in Generated Stories and Travel Recommendations | While a large body of work inspects language models for biases concerning gender, race, occupation and religion, biases of a geographical nature are relatively less explored. Some recent studies benchmark the degree to which large language models encode geospatial knowledge. However, the impact of the encoded geographical knowledge (or lack thereof) on real-world applications has not been documented. In this work, we examine large language models for two common scenarios that require geographical knowledge: (a) travel recommendations and (b) geo-anchored story generation. Specifically, we study five popular language models and, across about $100$K travel requests and $200$K story generations, observe that travel recommendations corresponding to poorer countries are less unique with fewer location references, and stories from these regions more often convey emotions of hardship and sadness compared to those from wealthier nations. | false | false | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | 507,483 |
2410.03058 | DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images | The proliferation of digital microscopy images, driven by advances in automated whole slide scanning, presents significant opportunities for biomedical research and clinical diagnostics. However, accurately annotating densely packed information in these images remains a major challenge. To address this, we introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks. DiffKillR employs two complementary neural networks: one that learns a diffeomorphism-invariant feature space for robust cell matching and another that computes the precise warping field between cells for annotation mapping. Using a small set of annotated archetypes, DiffKillR efficiently propagates annotations across large microscopy images, reducing the need for extensive manual labeling. More importantly, it is suitable for any type of pixel-level annotation. We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 494,593 |
2309.05681 | Knowledge-based Refinement of Scientific Publication Knowledge Graphs | We consider the problem of identifying authorship by posing it as knowledge graph construction and refinement. To this end, we model the problem as learning a probabilistic logic model in the presence of human guidance (knowledge-based learning). Specifically, we learn relational regression trees using functional gradient boosting that outputs explainable rules. To incorporate human knowledge, advice in the form of first-order clauses is injected to refine the trees. We demonstrate the usefulness of human knowledge both quantitatively and qualitatively in seven authorship domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 391,170 |
2306.00212 | Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning | We examine online safe multi-agent reinforcement learning using constrained Markov games in which agents compete by maximizing their expected total rewards under a constraint on expected total utilities. Our focus is confined to an episodic two-player zero-sum constrained Markov game with independent transition functions that are unknown to agents, adversarial reward functions, and stochastic utility functions. For such a Markov game, we employ an approach based on the occupancy measure to formulate it as an online constrained saddle-point problem with an explicit constraint. We extend the Lagrange multiplier method in constrained optimization to handle the constraint by creating a generalized Lagrangian with minimax decision primal variables and a dual variable. Next, we develop an upper confidence reinforcement learning algorithm to solve this Lagrangian problem while balancing exploration and exploitation. Our algorithm updates the minimax decision primal variables via online mirror descent and the dual variable via a projected gradient step, and we prove that it enjoys a sublinear rate $O((|X|+|Y|) L \sqrt{T(|A|+|B|)})$ for both regret and constraint violation after playing $T$ episodes of the game. Here, $L$ is the horizon of each episode, and $(|X|,|A|)$ and $(|Y|,|B|)$ are the state/action space sizes of the min-player and the max-player, respectively. To the best of our knowledge, we provide the first provably efficient online safe reinforcement learning algorithm in constrained Markov games. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 369,915 |
2302.11298 | Approximate spectral clustering density-based similarity for noisy datasets | Approximate spectral clustering (ASC) was developed to overcome the heavy computational demands of spectral clustering (SC). It maintains SC's ability to predict non-convex clusters. Since it involves a preprocessing step, ASC defines new similarity measures to assign weights to graph edges. The connectivity matrix (CONN) is an efficient similarity measure for constructing graphs for ASC. It defines the weight between two vertices as the number of points assigned to them during vector quantization training. However, this relationship is undirected, and it is not clear which of the vertices contributes more to that edge. Also, CONN can be tricked by noisy density between clusters. We define a directed version of CONN, named DCONN, to gain insight into the contributions of vertices to edges. We also provide filtering schemes to ensure CONN edges highlight potential clusters. Experiments reveal that the proposed filtering is highly efficient when noise cannot be tolerated by CONN. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | true | false | false | 347,161 |
2009.07578 | Anomaly and Fraud Detection in Credit Card Transactions Using the ARIMA Model | This paper addresses the problem of unsupervised credit card fraud detection on an unbalanced dataset using the ARIMA model. The ARIMA model is fitted on the regular spending behaviour of the customer and is used to detect fraud when deviations or discrepancies appear. Our model is applied to credit card datasets and is compared to 4 anomaly detection approaches: K-Means, Box-Plot, Local Outlier Factor and Isolation Forest. The results show that the ARIMA model presents better detection power than the benchmark models. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 195,984 |
2207.14513 | Uncertainty-Driven Action Quality Assessment | Automatic action quality assessment (AQA) has attracted increasing attention due to its wide applications. However, most existing AQA methods employ deterministic models to predict the final score for each action, while overlooking the subjectivity and diversity among expert judges during the scoring process. In this paper, we propose a novel probabilistic model, named Uncertainty-Driven AQA (UD-AQA), to utilize and capture the diversity among multiple judge scores. Specifically, we design a Conditional Variational Auto-Encoder (CVAE)-based module to encode the uncertainty in expert assessment, where multiple judge scores can be produced by sampling latent features from the learned latent space multiple times. To further utilize the uncertainty, we generate the estimation of uncertainty for each prediction, which is employed to re-weight AQA regression loss, effectively reducing the influence of uncertain samples during training. Moreover, we further design an uncertainty-guided training strategy to dynamically adjust the learning order of the samples from low uncertainty to high uncertainty. The experiments show that our proposed method achieves competitive results on three benchmarks including the Olympic events MTL-AQA and FineDiving, and the surgical skill JIGSAWS datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 310,613 |
1905.08920 | Domain adaptation for part-of-speech tagging of noisy user-generated text | The performance of a part-of-speech (POS) tagger is highly dependent on the domain of the processed text, and for many domains there is no or only very little training data available. This work addresses the problem of POS tagging noisy user-generated text using a neural network. We propose an architecture that trains an out-of-domain model on a large newswire corpus, and transfers those weights by using them as a prior for a model trained on the target domain (a dataset of German Tweets) for which there are very few annotations available. The neural network has two standard bidirectional LSTMs at its core. However, we find it crucial to also encode a set of task-specific features, and to obtain reliable (source-domain and target-domain) word representations. Experiments with different regularization techniques such as early stopping, dropout and fine-tuning the domain adaptation prior weights are conducted. Our best model uses external weights from the out-of-domain model, as well as feature embeddings, pre-trained word and sub-word embeddings, and achieves a tagging accuracy of slightly over 90%, improving on the previous state of the art for this task. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 131,607 |
2312.00036 | Privacy-Preserving Load Forecasting via Personalized Model Obfuscation | The widespread adoption of smart meters provides access to detailed and localized load consumption data, suitable for training building-level load forecasting models. To mitigate privacy concerns stemming from model-induced data leakage, federated learning (FL) has been proposed. This paper addresses the performance challenges of short-term load forecasting models trained with FL on heterogeneous data, emphasizing privacy preservation through model obfuscation. Our proposed algorithm, Privacy Preserving Federated Learning (PPFL), incorporates personalization layers for localized training at each smart meter. Additionally, we employ a differentially private mechanism to safeguard against data leakage from shared layers. Simulations on the NREL ComStock dataset corroborate the effectiveness of our approach. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 411,864 |
2401.08049 | EmoTalker: Emotionally Editable Talking Face Generation via Diffusion Model | In recent years, the field of talking face generation has attracted considerable attention, with certain methods adept at generating virtual faces that convincingly imitate human expressions. However, existing methods face challenges related to limited generalization, particularly when dealing with challenging identities. Furthermore, methods for editing expressions are often confined to a single emotion, failing to adapt to intricate emotions. To overcome these challenges, this paper proposes EmoTalker, an emotionally editable portrait animation approach based on the diffusion model. EmoTalker modifies the denoising process to ensure preservation of the original portrait's identity during inference. To enhance emotion comprehension from text input, an Emotion Intensity Block is introduced to analyze fine-grained emotions and strengths derived from prompts. Additionally, a crafted dataset is harnessed to enhance emotion comprehension within prompts. Experiments show the effectiveness of EmoTalker in generating high-quality, emotionally customizable facial expressions. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 421,744 |
1706.05870 | Deep learning with spatiotemporal consistency for nerve segmentation in ultrasound images | Ultrasound-Guided Regional Anesthesia (UGRA) has been gaining importance in the last few years, offering numerous advantages over alternative methods of nerve localization (neurostimulation or paraesthesia). However, nerve detection is one of the most challenging tasks that anaesthetists encounter in the UGRA procedure. A computer-aided system that can automatically detect the nerve region would help the practitioner concentrate more on anaesthetic delivery. In this paper we propose a new method based on deep learning combined with spatiotemporal information to robustly segment the nerve region. The proposed method consists of two phases, localisation and segmentation. The first phase uses a convolutional neural network combined with spatial and temporal consistency to detect the nerve zone. The second phase utilises an active contour model to delineate the region of interest. The obtained results show the validity of the proposed approach and its robustness. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 75,589 |
1809.02768 | Generating Distractors for Reading Comprehension Questions from Real Examinations | We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations. In contrast to all previous works, we do not aim at preparing word or short-phrase distractors; instead, we endeavor to generate longer and semantically rich distractors which are closer to distractors in real reading comprehension from examinations. Taking a reading comprehension article, a question and its correct option as input, our goal is to generate several distractors which are somehow related to the answer, consistent with the semantic context of the question and have some trace in the article. We propose a hierarchical encoder-decoder framework with static and dynamic attention mechanisms to tackle this task. Specifically, the dynamic attention can combine sentence-level and word-level attention varying at each recurrent time step to generate a more readable sequence. The static attention modulates the dynamic attention not to focus on question-irrelevant sentences or sentences which contribute to the correct option. Our proposed framework outperforms several strong baselines on the first prepared distractor generation dataset of real reading comprehension questions. In human evaluation, compared with distractors generated by baselines, our generated distractors are more effective at confusing the annotators. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 107,127 |
2310.18291 | Addressing GAN Training Instabilities via Tunable Classification Losses | Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE loss GANs and $f$-GANs which minimize $f$-divergences. We also show that all symmetric $f$-divergences are equivalent in convergence. In the finite sample and model capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to $\alpha$-GANs, defined using $\alpha$-loss, a tunable CPE loss family parametrized by $\alpha\in(0,\infty]$. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player's objective using $\alpha$-loss to obtain $(\alpha_D,\alpha_G)$-GANs. We show that the resulting non-zero sum game simplifies to minimizing an $f$-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$. Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available Celeb-A and LSUN Classroom image datasets. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 403,460 |
2410.04801 | Improving Image Clustering with Artifacts Attenuation via Inference-Time Attention Engineering | The goal of this paper is to improve the performance of pretrained Vision Transformer (ViT) models, particularly DINOv2, on the image clustering task without requiring re-training or fine-tuning. As model size increases, high-norm artifact anomalies appear in the patches of multi-head attention. We observe that this anomaly leads to reduced accuracy in zero-shot image clustering. These artifacts are characterized by disproportionately large values in the attention map compared to other patch tokens. To address these artifacts, we propose an approach called Inference-Time Attention Engineering (ITAE), which manipulates the attention function during inference. Specifically, we identify the artifacts by investigating one of the Query-Key-Value (QKV) patches in the multi-head attention and attenuate their corresponding attention values inside the pretrained models. ITAE shows improved clustering accuracy on multiple datasets by exhibiting more expressive features in latent space. Our findings highlight the potential of ITAE as a practical solution for reducing artifacts in pretrained ViT models and improving model performance in clustering tasks without the need for re-training or fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 495,446 |
2409.02885 | CanvOI, an Oncology Intelligence Foundation Model: Scaling FLOPS Differently | The rapidly evolving field of digital oncopathology faces significant challenges, including the need to address diverse and complex clinical questions, often involving rare conditions, with limited availability of labeled data. These limitations hinder the development of robust AI-driven tools in the biomedical space, where accuracy in probabilistic determinations is of utmost importance. To address this, digital pathology foundation models have begun to emerge, typically developed with the size and diversity of the pre-training dataset and model parameters in mind. Here, we present CanvOI, a ViT-g/10-based foundation model designed to enhance the capabilities of digital pathology by addressing these challenges through a different approach. Considering the unique nature of oncologic histopathological images and the requirements from the embeddings to provide meaningful representations for Multiple Instance Learning (MIL) downstream models, we chose to modify the input image characteristics. By introducing larger tile sizes (380 x 380 pixels) and smaller patch sizes (10 x 10 pixels), we were able to optimize the model's performance, pushing computational resources in a new direction and achieving state-of-the-art performance on cancer-related benchmarks. CanvOI demonstrated a 1.5-7.4% improvement in averaged AUC compared to other leading foundation models built for digital pathology. Moreover, our results demonstrate that CanvOI significantly outperformed the other models, with the performance gap widening substantially when trained on just 10% of the initial cohort. This work highlights an alternative approach that, if integrated with traditional development approaches, has the potential to advance Oncology Intelligence (OI), overcome some of the current barriers and ultimately improve the clinical outcome of cancer patients. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 485,864 |
1407.0516 | Spatially Coupled Turbo Codes: Principles and Finite Length Performance | In this paper, we give an overview of spatially coupled turbo codes (SC-TCs), the spatial coupling of parallel and serially concatenated convolutional codes, recently introduced by the authors. For presentation purposes, we focus on spatially coupled serially concatenated codes (SC-SCCs). We review the main principles of SC-TCs and discuss their exact density evolution (DE) analysis on the binary erasure channel. We also consider the construction of a family of rate-compatible SC-SCCs with simple 4-state component encoders. For all considered code rates, threshold saturation of the belief propagation (BP) to the maximum a posteriori threshold of the uncoupled ensemble is demonstrated, and it is shown that the BP threshold approaches the Shannon limit as the coupling memory increases. Finally we give some simulation results for finite lengths. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,341 |
2407.05689 | Ten Years of Teaching Empirical Software Engineering in the context of
Energy-efficient Software | In this chapter we share our experience in running ten editions of the Green Lab course at the Vrije Universiteit Amsterdam, the Netherlands. The course is given in the Software Engineering and Green IT track of the Computer Science Master program of the VU. The course takes place every year over a 2-month period and teaches Computer Science students the fundamentals of Empirical Software Engineering in the context of energy-efficient software. The peculiarity of the course is its research orientation: at the beginning of the course the instructor presents a catalog of scientifically relevant goals, and each team of students signs up for one of them and works together for 2 months on their own experiment for achieving the goal. Each team goes over the classic steps of an empirical study, starting from a precise formulation of the goal and research questions to context definition, selection of experimental subjects and objects, definition of experimental variables, experiment execution, data analysis, and reporting. Over the years, the course became well-known within the Software Engineering community since it led to several scientific studies that have been published at various scientific conferences and journals. Also, students execute their experiments using \textit{open-source tools}, which are developed and maintained by researchers and other students within the program, thus creating a virtuous community of learners where students exchange ideas, help each other, and learn how to collaboratively contribute to open-source projects in a safe environment. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 471,091 |
math/0208155 | Toric codes over finite fields | In this note, a class of error-correcting codes is associated to a toric variety associated to a fan defined over a finite field $\fff_q$, analogous to the class of Goppa codes associated to a curve. For such a ``toric code'' satisfying certain additional conditions, we present an efficient decoding algorithm for the dual of a Goppa code. Many examples are given. For small $q$, many of these codes have parameters beating the Gilbert-Varshamov bound. In fact, using toric codes, we construct a $(n,k,d)=(49,11,28)$ code over $\fff_8$, which is better than any other known code listed in Brouwer's on-line tables for that $n$ and $k$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,634 |
1811.01774 | SCAV'18: Report of the 2nd International Workshop on Safe Control of
Autonomous Vehicles | This report summarizes the discussions, open issues, take-away messages, and conclusions of the 2nd SCAV workshop. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | true | 112,446 |
2311.17002 | Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following | Existing text-to-image (T2I) diffusion models usually struggle in interpreting complex prompts, especially those with quantity, object-attribute binding, and multi-subject descriptions. In this work, we introduce a semantic panel as the middleware in decoding texts to images, supporting the generator to better follow instructions. The panel is obtained through arranging the visual concepts parsed from the input text by the aid of large language models, and then injected into the denoising network as a detailed control signal to complement the text condition. To facilitate text-to-panel learning, we come up with a carefully designed semantic formatting protocol, accompanied by a fully-automatic data preparation pipeline. Thanks to such a design, our approach, which we call Ranni, manages to enhance a pre-trained T2I generator regarding its textual controllability. More importantly, the introduction of the generative middleware brings a more convenient form of interaction (i.e., directly adjusting the elements in the panel or using language instructions) and further allows users to finely customize their generation, based on which we develop a practical system and showcase its potential in continuous generation and chatting-based editing. Our project page is at https://ranni-t2i.github.io/Ranni. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,115 |
1709.02445 | Large Teams Have Developed Science and Technology; Small Teams Have
Disrupted It | Teams dominate the production of high-impact science and technology. Analyzing teamwork from more than 50 million papers, patents, and software products, 1954-2014, we demonstrate across this period that larger teams developed recent, popular ideas, while small teams disrupted the system by drawing on older and less prevalent ideas. Attention to work from large teams came immediately, while advances by small teams succeeded further into the future. Differences between small and large teams magnify with impact - small teams have become known for disruptive work and large teams for developing work. Differences in topic and research design account for part of the relationship between team size and disruption, but most of the effect occurs within people, controlling for detailed subject and article type. Our findings suggest the importance of supporting both small and large teams for the sustainable vitality of science and technology. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 80,263 |
2205.10117 | DDDM: a Brain-Inspired Framework for Robust Classification | Despite their outstanding performance in a broad spectrum of real-world tasks, deep artificial neural networks are sensitive to input noises, particularly adversarial perturbations. On the contrary, human and animal brains are much less vulnerable. In contrast to the one-shot inference performed by most deep neural networks, the brain often solves decision-making with an evidence accumulation mechanism that may trade time for accuracy when facing noisy inputs. The mechanism is well described by the Drift-Diffusion Model (DDM). In the DDM, decision-making is modeled as a process in which noisy evidence is accumulated toward a threshold. Drawing inspiration from the DDM, we propose the Dropout-based Drift-Diffusion Model (DDDM) that combines test-phase dropout and the DDM for improving the robustness for arbitrary neural networks. The dropouts create temporally uncorrelated noises in the network that counter perturbations, while the evidence accumulation mechanism guarantees a reasonable decision accuracy. Neural networks enhanced with the DDDM tested in image, speech, and text classification tasks all significantly outperform their native counterparts, demonstrating the DDDM as a task-agnostic defense against adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 297,567 |
1202.5618 | An equation-free approach to coarse-graining the dynamics of networks | We propose and illustrate an approach to coarse-graining the dynamics of evolving networks (networks whose connectivity changes dynamically). The approach is based on the equation-free framework: short bursts of detailed network evolution simulations are coupled with lifting and restriction operators that translate between actual network realizations and their (appropriately chosen) coarse observables. This framework is used here to accelerate temporal simulations (through coarse projective integration), and to implement coarse-grained fixed point algorithms (through matrix-free Newton-Krylov GMRES). The approach is illustrated through a simple network evolution example, for which analytical approximations to the coarse-grained dynamics can be independently obtained, so as to validate the computational results. The scope and applicability of the approach, as well as the issue of selection of good coarse observables are discussed. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 14,570 |
2405.06702 | Malayalam Sign Language Identification using Finetuned YOLOv8 and
Computer Vision Techniques | Technological advancements and innovations are improving our daily lives in many ways, but a large section of society is deprived of these benefits due to physical disabilities. To make these benefits truly accessible, such talented and gifted people should also be able to use these innovations without hurdles. Many applications developed these days address these challenges, but localized communities and other constrained linguistic groups may find it difficult to use them. Malayalam, a Dravidian language spoken in the Indian state of Kerala, is one of the twenty-two scheduled languages in India. Recent years have witnessed a surge in the development of systems and tools in Malayalam, addressing the needs of Kerala, but many of them are not empathetically designed to cater to the needs of hearing-impaired people. One of the major challenges is the limited or nonexistent availability of sign language data for the Malayalam language, and sufficient efforts have not been made in this direction. In this connection, this paper proposes an approach for sign language identification for the Malayalam language using advanced deep learning and computer vision techniques. We start by developing a labeled dataset for Malayalam letters, and for the identification we use advanced deep learning techniques such as YOLOv8 combined with computer vision. Experimental results show that the identification accuracy is comparable to that of other sign language identification systems, and researchers in sign language identification can use the model as a baseline to develop more advanced models. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 453,413 |
1509.01168 | Semi-described and semi-supervised learning with Gaussian processes | Propagating input uncertainty through non-linear Gaussian process (GP) mappings is intractable. This hinders the task of training GPs using uncertain and partially observed inputs. In this paper we refer to this task as "semi-described learning". We then introduce a GP framework that solves both, the semi-described and the semi-supervised learning problems (where missing values occur in the outputs). Auto-regressive state space simulation is also recognised as a special case of semi-described learning. To achieve our goal we develop variational methods for handling semi-described inputs in GPs, and couple them with algorithms that allow for imputing the missing values while treating the uncertainty in a principled, Bayesian manner. Extensive experiments on simulated and real-world data study the problems of iterative forecasting and regression/classification with missing values. The results suggest that the principled propagation of uncertainty stemming from our framework can significantly improve performance in these tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 46,573 |
2210.04676 | Learning "O" Helps for Learning More: Handling the Concealed Entity
Problem for Class-incremental NER | As the categories of named entities rapidly increase, the deployed NER models are required to keep updating toward recognizing more entity types, creating a demand for class-incremental learning for NER. Considering the privacy concerns and storage constraints, the standard paradigm for class-incremental NER updates the models with training data only annotated with the new classes, yet the entities from other entity classes are unlabeled, regarded as "Non-entity" (or "O"). In this work, we conduct an empirical study on the "Unlabeled Entity Problem" and find that it leads to severe confusion between "O" and entities, decreasing class discrimination of old classes and declining the model's ability to learn new classes. To solve the Unlabeled Entity Problem, we propose a novel representation learning method to learn discriminative representations for the entity classes and "O". Specifically, we propose an entity-aware contrastive learning method that adaptively detects entity clusters in "O". Furthermore, we propose two effective distance-based relabeling strategies for better learning the old classes. We introduce a more realistic and challenging benchmark for class-incremental NER, and the proposed method achieves up to 10.62\% improvement over the baseline methods. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 322,547 |
1307.1482 | Towards Combining HTN Planning and Geometric Task Planning | In this paper we present an interface between a symbolic planner and a geometric task planner, which is different to a standard trajectory planner in that the former is able to perform geometric reasoning on abstract entities---tasks. We believe that this approach facilitates a more principled interface to symbolic planning, while also leaving more room for the geometric planner to make independent decisions. We show how the two planners could be interfaced, and how their planning and backtracking could be interleaved. We also provide insights for a methodology for using the combined system, and experimental results to use as a benchmark with future extensions to both the combined system, as well as to the geometric task planner. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 25,634 |
2310.15846 | Optimal Spatial-Temporal Triangulation for Bearing-Only Cooperative
Motion Estimation | Vision-based cooperative motion estimation is an important problem for many multi-robot systems such as cooperative aerial target pursuit. This problem can be formulated as bearing-only cooperative motion estimation, where the visual measurement is modeled as a bearing vector pointing from the camera to the target. The conventional approaches for bearing-only cooperative estimation are mainly based on the framework of distributed Kalman filtering (DKF). In this paper, we propose a new optimal bearing-only cooperative estimation algorithm, named spatial-temporal triangulation, based on the method of distributed recursive least squares, which provides a more flexible framework for designing distributed estimators than DKF. The design of the algorithm fully incorporates all the available information and the specific triangulation geometric constraint. As a result, the algorithm achieves superior estimation performance over the state-of-the-art DKF algorithms in terms of both accuracy and convergence speed, as verified by numerical simulation. We rigorously prove the exponential convergence of the proposed algorithm. Moreover, to verify the effectiveness of the proposed algorithm under practical challenging conditions, we develop a vision-based cooperative aerial target pursuit system, which, to the best of our knowledge, is the first such fully autonomous system so far. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 402,473 |
2502.02406 | LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in
Multimodal Large Language Models | Cross-attention is commonly adopted in multimodal large language models (MLLMs) for integrating visual information into the language backbone. However, in applications with large visual inputs, such as video understanding, processing a large number of visual tokens in cross-attention layers leads to high memory demands and often necessitates distributed computation across multiple GPUs. Existing distributed attention mechanisms face significant communication overheads, making cross-attention layers a critical bottleneck for efficient training and inference of MLLMs. To address this, we propose LV-XAttn, a distributed, exact cross-attention mechanism with minimal communication overhead. We observe that in applications involving large visual inputs the size of the query block is typically much smaller than that of the key-value blocks. Thus, in LV-XAttn we keep the large key-value blocks locally on each GPU and exchange smaller query blocks across GPUs. We also introduce an efficient activation recomputation technique enabling support for longer visual context. We theoretically analyze the communication benefits of LV-XAttn and show that it can achieve speedups for a wide range of models. Our evaluations with mPLUG-Owl3 and OpenFlamingo models find that LV-XAttn achieves up to 5.58$\times$ end-to-end speedup compared to existing approaches. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 530,299 |
2202.02673 | PhysFad: Physics-Based End-to-End Channel Modeling of RIS-Parametrized
Environments with Adjustable Fading | Programmable radio environments parametrized by reconfigurable intelligent surfaces (RISs) are emerging as a new wireless communications paradigm, but currently used channel models for the design and analysis of signal-processing algorithms cannot include fading in a manner that is faithful to the underlying wave physics. To overcome this roadblock, we introduce a physics-based end-to-end model of RIS-parametrized wireless channels with adjustable fading (coined PhysFad) which is based on a first-principles coupled-dipole formalism. PhysFad naturally incorporates the notions of space and causality, dispersion (i.e., frequency selectivity) and the intertwinement of each RIS element's phase and amplitude response, as well as any arising mutual coupling effects including long-range mesoscopic correlations. PhysFad offers the to-date missing tuning knob for adjustable fading. We thoroughly characterize PhysFad and demonstrate its capabilities for a prototypical problem of RIS-enabled over-the-air channel equalization in rich-scattering wireless communications. We also share a user-friendly version of our code to help the community transition towards physics-based models with adjustable fading. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 278,904 |
2106.14564 | Two-point AG codes from the Beelen-Montanucci maximal curve | In this paper we investigate two-point algebraic-geometry codes (AG codes) coming from the Beelen-Montanucci (BM) maximal curve. We study properties of certain two-point Weierstrass semigroups of the curve and use them for determining a lower bound on the minimum distance of such codes. AG codes with better parameters with respect to comparable two-point codes from the Garcia-G\"uneri-Stichtenoth (GGS) curve are discovered. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 243,445 |
2412.10438 | Automatic Image Annotation for Mapped Features Detection | Detecting road features is a key enabler for autonomous driving and localization. For instance, a reliable detection of poles which are widespread in road environments can improve localization. Modern deep learning-based perception systems need a significant amount of annotated data. Automatic annotation avoids time-consuming and costly manual annotation. Because automatic methods are prone to errors, managing annotation uncertainty is crucial to ensure a proper learning process. Fusing multiple annotation sources on the same dataset can be an efficient way to reduce the errors. This not only improves the quality of annotations, but also improves the learning of perception models. In this paper, we consider the fusion of three automatic annotation methods in images: feature projection from a high accuracy vector map combined with a lidar, image segmentation and lidar segmentation. Our experimental results demonstrate the significant benefits of multi-modal automatic annotation for pole detection through a comparative evaluation on manually annotated images. Finally, the resulting multi-modal fusion is used to fine-tune an object detection model for pole base detection using unlabeled data, showing overall improvements achieved by enhancing network specialization. The dataset is publicly available. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 516,929 |
2309.17160 | Redistributing the Precision and Content in 3D-LUT-based Inverse
Tone-mapping for HDR/WCG Display | ITM (inverse tone-mapping) converts SDR (standard dynamic range) footage to HDR/WCG (high dynamic range/wide color gamut) for media production. It happens not only when remastering legacy SDR footage at the front-end content provider, but also when adapting on-the-air SDR services on user-end HDR displays. The latter requires more efficiency, thus the pre-calculated LUT (look-up table) has become a popular solution. Yet, a conventional fixed LUT lacks adaptability, so we learn from the research community and combine it with AI. Meanwhile, higher-bit-depth HDR/WCG requires a larger LUT than SDR, so we consult traditional ITM for an efficiency-performance trade-off: we use 3 smaller LUTs, each with a non-uniform packing (precision) that is denser in the dark, middle and bright luma ranges, respectively. In this case, their results will have less error only in their own range, so we use a contribution map to combine their best parts into the final result. With the guidance of this map, the elements (content) of the 3 LUTs will also be redistributed during training. We conduct ablation studies to verify the method's effectiveness, and subjective and objective experiments to show its practicability. Code is available at: https://github.com/AndreGuo/ITMLUT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 395,654 |
1812.01192 | Learning to Fuse Things and Stuff | We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 115,455 |
0911.5553 | Randomized vs. orthogonal spectrum allocation in decentralized networks:
Outage Analysis | We address a decentralized wireless communication network with a fixed number $u$ of frequency sub-bands to be shared among $N$ transmitter-receiver pairs. It is assumed that the number of users $N$ is a random variable with a given distribution and the channel gains are quasi-static Rayleigh fading. The transmitters are assumed to be unaware of the number of active users in the network as well as the channel gains and not capable of detecting the presence of other users in a given frequency sub-band. Moreover, the users are unaware of each other's codebooks and hence, no multiuser detection is possible. We consider a randomized Frequency Hopping (FH) scheme in which each transmitter randomly hops over a subset of the $u$ sub-bands from transmission to transmission. Developing a new upper bound on the differential entropy of a mixed Gaussian random vector and using entropy power inequality, we offer a series of lower bounds on the achievable rate of each user. Thereafter, we obtain lower bounds on the maximum transmission rate per user to ensure a specified outage probability at a given Signal-to-Noise Ratio (SNR) level. We demonstrate that the so-called outage capacity can be considerably higher in the FH scheme than in the Frequency Division (FD) scenario for reasonable distributions on the number of active users. This guarantees a higher spectral efficiency in FH compared to FD. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,050 |
1512.04052 | Big Data Scaling through Metric Mapping: Exploiting the Remarkable
Simplicity of Very High Dimensional Spaces using Correspondence Analysis | We present new findings in regard to data analysis in very high dimensional spaces. We use dimensionalities up to around one million. A particular benefit of Correspondence Analysis is its suitability for carrying out an orthonormal mapping, or scaling, of power law distributed data. Power law distributed data are found in many domains. Correspondence factor analysis provides a latent semantic or principal axes mapping. Our experiments use data from digital chemistry and finance, and other statistically generated data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 50,093 |
1402.5360 | Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity | In this paper, a new descriptor selection method for selecting an optimal combination of important descriptors of sulfonamide derivatives data, named self-tuned reweighted sampling (STRS), is developed. Important descriptors are defined as the descriptors with large absolute coefficients in a multivariate linear regression model such as partial least squares (PLS). In this study, the absolute values of regression coefficients of the PLS model are used as an index for evaluating the importance of each descriptor. Then, based on the importance level of each descriptor, STRS sequentially selects N subsets of descriptors from N Monte Carlo (MC) sampling runs in an iterative and competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly selected to establish a regression model. Next, based on the regression coefficients, a two-step procedure including rapidly decreasing function (RDF) based enforced descriptor selection and self-tuned sampling (STS) based competitive descriptor selection is adopted to select the important descriptors. After running the loops, a number of subsets of descriptors are obtained, and the root mean squared error of cross validation (RMSECV) of PLS models established with each subset of descriptors is computed. The subset of descriptors with the lowest RMSECV is considered the optimal descriptor subset. The performance of the proposed algorithm is evaluated on a sulfonamide derivative dataset. The results reveal a good characteristic of STRS: it can usually locate an optimal combination of important descriptors which are interpretable and of biological interest. Additionally, our study shows that better prediction is obtained by STRS when compared to full-descriptor-set PLS modeling and Monte Carlo uninformative variable elimination (MC-UVE).
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 31,050 |
cs/0405050 | Traffic Accident Analysis Using Decision Trees and Neural Networks | The costs of fatalities and injuries due to traffic accident have a great impact on society. This paper presents our research to model the severity of injury resulting from traffic accidents using artificial neural networks and decision trees. We have applied them to an actual data set obtained from the National Automotive Sampling System (NASS) General Estimates System (GES). Experiment results reveal that in all the cases the decision tree outperforms the neural network. Our research analysis also shows that the three most important factors in fatal injury are: driver's seat belt usage, light condition of the roadway, and driver's alcohol usage. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 538,200 |
1109.6638 | The Statistical Inefficiency of Sparse Coding for Images (or, One Gabor
to Rule them All) | Sparse coding is a proven principle for learning compact representations of images. However, sparse coding by itself often leads to very redundant dictionaries. With images, this often takes the form of similar edge detectors which are replicated many times at various positions, scales and orientations. An immediate consequence of this observation is that the estimation of the dictionary components is not statistically efficient. We propose a factored model in which factors of variation (e.g. position, scale and orientation) are untangled from the underlying Gabor-like filters. There is so much redundancy in sparse codes for natural images that our model requires only a single dictionary element (a Gabor-like edge detector) to outperform standard sparse coding. Our model scales naturally to arbitrary-sized images while achieving much greater statistical efficiency during learning. We validate this claim with a number of experiments showing, in part, superior compression of out-of-sample data using a sparse coding dictionary learned with only a single image. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 12,402 |
1406.0022 | Error Decay of (almost) Consistent Signal Estimations from Quantized
Gaussian Random Projections | This paper provides new error bounds on "consistent" reconstruction methods for signals observed from quantized random projections. Those signal estimation techniques guarantee a perfect matching between the available quantized data and a new observation of the estimated signal under the same sensing model. Focusing on dithered uniform scalar quantization of resolution $\delta>0$, we prove first that, given a Gaussian random frame of $\mathbb R^N$ with $M$ vectors, the worst-case $\ell_2$-error of consistent signal reconstruction decays with high probability as $O(\frac{N}{M}\log\frac{M}{\sqrt N})$ uniformly for all signals of the unit ball $\mathbb B^N \subset \mathbb R^N$. Up to a log factor, this matches a known lower bound in $\Omega(N/M)$ and former empirical validations in $O(N/M)$. Equivalently, if $M$ exceeds a minimal number of frame coefficients growing like $O(\frac{N}{\epsilon_0}\log \frac{\sqrt N}{\epsilon_0})$, any vectors in $\mathbb B^N$ with $M$ identical quantized projections are at most $\epsilon_0$ apart with high probability. Second, in the context of Quantized Compressed Sensing with $M$ Gaussian random measurements and under the same scalar quantization scheme, consistent reconstructions of $K$-sparse signals of $\mathbb R^N$ have a worst-case error that decreases with high probability as $O(\tfrac{K}{M}\log\tfrac{MN}{\sqrt K^3})$ uniformly for all such signals. Finally, we show that the proximity of vectors whose quantized random projections are only approximately consistent can still be bounded with high probability. A certain level of corruption is thus allowed in the quantization process, up to the appearance of a systematic bias in the reconstruction error of (almost) consistent signal estimates. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 33,512 |
2001.10190 | Time-Domain Audio Source Separation Based on Wave-U-Net Combined with Discrete Wavelet Transform | We propose a time-domain audio source separation method using down-sampling (DS) and up-sampling (US) layers based on a discrete wavelet transform (DWT). The proposed method is based on one of the state-of-the-art deep neural networks, Wave-U-Net, which successively down-samples and up-samples feature maps. We find that this architecture resembles that of multiresolution analysis, and reveal that the DS layers of Wave-U-Net cause aliasing and may discard information useful for the separation. Although the effects of these problems may be reduced by training, to achieve a more reliable source separation method, we should design DS layers capable of overcoming the problems. With this belief, focusing on the fact that the DWT has an anti-aliasing filter and the perfect reconstruction property, we design the proposed layers. Experiments on music source separation show the efficacy of the proposed method and the importance of simultaneously considering the anti-aliasing filters and the perfect reconstruction property. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 161,764
2502.01943 | DAMA: Data- and Model-aware Alignment of Multi-modal LLMs | Direct Preference Optimization (DPO) has shown effectiveness in aligning multi-modal large language models (MLLM) with human preferences. However, existing methods exhibit an imbalanced responsiveness to the data of varying hardness, tending to overfit on the easy-to-distinguish data while underfitting on the hard-to-distinguish data. In this paper, we propose Data- and Model-aware DPO (DAMA) to dynamically adjust the optimization process from two key aspects: (1) a data-aware strategy that incorporates data hardness, and (2) a model-aware strategy that integrates real-time model responses. By combining the two strategies, DAMA enables the model to effectively adapt to data with varying levels of hardness. Extensive experiments on five benchmarks demonstrate that DAMA not only significantly enhances the trustworthiness, but also improves the effectiveness over general tasks. For instance, on the Object-HalBench, our DAMA-7B reduces response-level and mentioned-level hallucination by 90.0% and 95.3%, respectively, surpassing the performance of GPT-4V. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 530,102 |
2501.06879 | Defect Detection Network In PCB Circuit Devices Based on GAN Enhanced YOLOv11 | This study proposes an advanced method for surface defect detection in printed circuit boards (PCBs) using an improved YOLOv11 model enhanced with a generative adversarial network (GAN). The approach focuses on identifying six common defect types: missing hole, rat bite, open circuit, short circuit, burr, and virtual welding. By employing GAN to generate synthetic defect images, the dataset is augmented with diverse and realistic patterns, improving the model's ability to generalize, particularly for complex and infrequent defects like burrs. The enhanced YOLOv11 model is evaluated on a PCB defect dataset, demonstrating significant improvements in accuracy, recall, and robustness, especially when dealing with defects in complex environments or small targets. This research contributes to the broader field of electronic design automation (EDA), where efficient defect detection is a crucial step in ensuring high-quality PCB manufacturing. By integrating advanced deep learning techniques, this approach enhances the automation and precision of defect detection, reducing reliance on manual inspection and accelerating design-to-production workflows. The findings underscore the importance of incorporating GAN-based data augmentation and optimized detection architectures in EDA processes, providing valuable insights for improving reliability and efficiency in PCB defect detection within industrial applications. | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 524,168
2210.02494 | Model Reference Gaussian Process Regression: Data-Driven Output Feedback Controller | Data-driven controls using Gaussian process regression have recently gained much attention. In such approaches, system identification by Gaussian process regression is mostly followed by model-based controller designs. However, the outcomes of Gaussian process regression are often too complicated to apply conventional control designs, which means numerical designs such as model predictive control are employed in many cases. To overcome this restriction, our idea is to perform Gaussian process regression on the inverse of the plant, using the same input/output data as the conventional regression. With the inverse, one can design a model reference controller without resorting to numerical control methods. This paper considers single-input single-output (SISO) discrete-time nonlinear systems of minimum phase with relative degree one. It is highlighted that the model reference Gaussian process regression controller is designed directly from pre-collected input/output data without system identification. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 321,660
2210.13497 | Subspace Recovery from Heterogeneous Data with Non-isotropic Noise | Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: the principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from $n$ users with user $i$ contributing data samples from a $d$-dimensional distribution with mean $\mu_i$. Our goal is to recover the linear subspace shared by $\mu_1,\ldots,\mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 326,186 |
2304.10241 | A geometry-aware deep network for depth estimation in monocular endoscopy | Monocular depth estimation is critical for endoscopists to perform spatial perception and 3D navigation of surgical sites. However, most of the existing methods ignore the important geometric structural consistency, which inevitably leads to performance degradation and distortion of 3D reconstruction. To address this issue, we introduce a gradient loss to penalize ambiguous edge fluctuations around stepped edge structures and a normal loss to explicitly express the sensitivity to frequently occurring small structures, and propose a geometric consistency loss to spread the spatial information across the sample grids to constrain the global geometric anatomy structures. In addition, we develop a synthetic RGB-Depth dataset that captures the anatomical structures under reflections and illumination variations. The proposed method is extensively validated across different datasets and clinical images and achieves mean RMSE values of 0.066 (stomach), 0.029 (small intestine), and 0.139 (colon) on the EndoSLAM dataset. The generalizability of the proposed method achieves mean RMSE values of 12.604 (T1-L1), 9.930 (T2-L2), and 13.893 (T3-L3) on the ColonDepth dataset. The experimental results show that our method exceeds previous state-of-the-art competitors and generates more consistent depth maps and reasonable anatomical structures. The quality of intraoperative 3D structure perception from endoscopic videos of the proposed method meets the accuracy requirements of video-CT registration algorithms for endoscopic navigation. The dataset and the source code will be available at https://github.com/YYM-SIA/LINGMI-MR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 359,336
2409.12926 | MaskMol: Knowledge-guided Molecular Image Pre-Training Framework for Activity Cliffs | Activity cliffs, which refer to pairs of molecules that are structurally similar but show significant differences in their potency, can lead to model representation collapse and make them challenging for the model to distinguish. Our research indicates that as molecular similarity increases, graph-based methods struggle to capture these nuances, whereas image-based approaches effectively retain the distinctions. Thus, we developed MaskMol, a knowledge-guided molecular image self-supervised learning framework. MaskMol accurately learns the representation of molecular images by considering multiple levels of molecular knowledge, such as atoms, bonds, and substructures. By utilizing pixel masking tasks, MaskMol extracts fine-grained information from molecular images, overcoming the limitations of existing deep learning models in identifying subtle structural changes. Experimental results demonstrate MaskMol's high accuracy and transferability in activity cliff estimation and compound potency prediction across 20 different macromolecular targets, outperforming 25 state-of-the-art deep learning and machine learning approaches. Visualization analyses reveal MaskMol's high biological interpretability in identifying activity cliff-relevant molecular substructures. Notably, through MaskMol, we identified candidate EP4 inhibitors that could be used to treat tumors. This study not only raises awareness about activity cliffs but also introduces a novel method for molecular image representation learning and virtual screening, advancing drug discovery and providing new insights into structure-activity relationships (SAR). | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,774
2201.05272 | Toward Fully Automated Robotic Platform for Remote Auscultation | Since most developed countries are facing an increase in the number of patients per healthcare worker due to a declining birth rate and an aging population, relatively simple and safe diagnosis tasks may need to be performed using robotics and automation technologies, without specialists and hospitals. This study presents an automated robotic platform for remote auscultation, which is a highly cost-effective screening tool for detecting abnormal clinical signs. The developed robotic platform is composed of a 6-degree-of-freedom cooperative robotic arm, light detection and ranging (LiDAR) camera, and a spring-based mechanism holding an electric stethoscope. The platform enables autonomous stethoscope positioning based on external body information acquired using the LiDAR camera-based multi-way registration; the platform also ensures safe and flexible contact, maintaining the contact force within a certain range through the passive mechanism. Our preliminary results confirm that the robotic platform enables estimation of the landing positions required for cardiac examinations based on the depth and landmark information of the body surface. It also handles the stethoscope while maintaining the contact force without relying on the push-in displacement by the robotic arm. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 275,337 |
2409.19691 | CERD: A Comprehensive Chinese Rhetoric Dataset for Rhetorical Understanding and Generation in Essays | Existing rhetorical understanding and generation datasets or corpora primarily focus on single coarse-grained categories or fine-grained categories, neglecting the common interrelations between different rhetorical devices by treating them as independent sub-tasks. In this paper, we propose the Chinese Essay Rhetoric Dataset (CERD), consisting of 4 commonly used coarse-grained categories including metaphor, personification, hyperbole and parallelism and 23 fine-grained categories across both form and content levels. CERD is a manually annotated and comprehensive Chinese rhetoric dataset with five interrelated sub-tasks. Unlike previous work, our dataset aids in understanding various rhetorical devices, recognizing corresponding rhetorical components, and generating rhetorical sentences under given conditions, thereby improving the author's writing proficiency and language usage skills. Extensive experiments are conducted to demonstrate the interrelations between multiple tasks in CERD, as well as to establish a benchmark for future research on rhetoric. The experimental results indicate that Large Language Models achieve the best performance across most tasks, and jointly fine-tuning with multiple tasks further enhances performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 492,807
2406.14653 | LLM Granularity for On-the-Fly Robot Control | Assistive robots have attracted significant attention due to their potential to enhance the quality of life for vulnerable individuals like the elderly. The convergence of computer vision, large language models, and robotics has introduced the `visuolinguomotor' mode for assistive robots, where visuals and linguistics are incorporated into assistive robots to enable proactive and interactive assistance. This raises the question: \textit{In circumstances where visuals become unreliable or unavailable, can we rely solely on language to control robots, i.e., the viability of the `linguomotor` mode for assistive robots?} This work takes the initial steps to answer this question by: 1) evaluating the responses of assistive robots to language prompts of varying granularities; and 2) exploring the necessity and feasibility of controlling the robot on-the-fly. We have designed and conducted experiments on a Sawyer cobot to support our arguments. A Turtlebot robot case is designed to demonstrate the adaptation of the solution to scenarios where assistive robots need to maneuver to assist. Codes will be released on GitHub soon to benefit the community. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 466,401 |
1211.0056 | Iterative Hard Thresholding Methods for $l_0$ Regularized Convex Cone Programming | In this paper we consider $l_0$ regularized convex cone programming problems. In particular, we first propose an iterative hard thresholding (IHT) method and its variant for solving $l_0$ regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an $\epsilon$-local-optimal solution. We then propose a method for solving $l_0$ regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and establish its iteration complexity for finding an $\epsilon$-approximate local minimizer. Finally, we propose a variant of this method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of the problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 19,507
2305.02261 | End-to-end Training and Decoding for Pivot-based Cascaded Translation Model | Utilizing a pivot language effectively can significantly improve low-resource machine translation. Usually, the two translation models, source-pivot and pivot-target, are trained individually and do not utilize the limited (source, target) parallel data. This work proposes an end-to-end training method for the cascaded translation model and devises an improved decoding algorithm. The input of the pivot-target model is modified to weighted pivot embedding based on the probability distribution output by the source-pivot model. This allows the model to be trained end-to-end. In addition, we mitigate the inconsistency between tokens and probability distributions while using beam search in pivot decoding. Experiments demonstrate that our method enhances the quality of translation. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 361,970
2312.09323 | Perspectives on the State and Future of Deep Learning - 2023 | The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time. The plan is to host this survey periodically until the AI singularity paperclip-frenzy-driven doomsday, keeping an updated list of topical questions and interviewing new community members for each edition. In this issue, we probed people's opinions on interpretable AI, the value of benchmarking in modern NLP, the state of progress towards understanding deep learning, and the future of academia. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,673 |
cs/0607098 | List decoding of noisy Reed-Muller-like codes | First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are two fundamental error-correcting codes which arise in communication as well as in probabilistically-checkable proofs and learning. In this paper, we take the first steps toward extending the quick randomized decoding tools of RM(1) into the realm of quadratic binary and, equivalently, Z_4 codes. Our main algorithmic result is an extension of the RM(1) techniques from Goldreich-Levin and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and RM(2). That is, given signal s of length N, we find a list that is a superset of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times the norm of s, in time polynomial in k and log(N). We also give a new and simple formulation of a known Kerdock code as a subcode of the Hankel code. As a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm for finding a sparse Kerdock approximation. That is, for k small compared with 1/sqrt{N} and for epsilon > 0, we find, in time polynomial in (k log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at most the factor (1+epsilon+O(k^2/sqrt{N})) times that of the best such approximation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 539,603 |
1910.00517 | Learning Multi-Stage Sparsification for Maximum Clique Enumeration | We propose a multi-stage learning approach for pruning the search space of maximum clique enumeration, a fundamental computationally difficult problem arising in various network analysis tasks. In each stage, our approach learns the characteristics of vertices in terms of various neighborhood features and leverages them to prune the set of vertices that are likely not contained in any maximum clique. Furthermore, we demonstrate that our approach is domain independent -- the same small set of features works well on graph instances from different domains. Compared to the state-of-the-art heuristics and preprocessing strategies, the advantages of our approach are that (i) it does not require any estimate on the maximum clique size at runtime and (ii) we demonstrate it to be effective also for dense graphs. In particular, for dense graphs, we typically prune around 30% of the vertices, resulting in speedups of up to 53 times for state-of-the-art solvers while generally preserving the size of the maximum clique (though some maximum cliques may be lost). For large real-world sparse graphs, we routinely prune over 99% of the vertices, resulting in several tenfold speedups at best, typically with no impact on solution quality. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 147,683
1605.04806 | Multilevel Thresholding Segmentation of T2 weighted Brain MRI images using Convergent Heterogeneous Particle Swarm Optimization | This paper proposes a new image thresholding segmentation approach using the heuristic Convergent Heterogeneous Particle Swarm Optimization algorithm. The proposed algorithm incorporates a new strategy for searching the problem space by dividing the swarm into subswarms. The particles of each subswarm search for better solutions separately, leading to better exploitation, while they cooperate with each other to find the best global position. The consequence of this cooperation is better exploration and convergence, and it enables the algorithm to jump from a local optimal solution to better spots. A practical application of this method is demonstrated for the problem of medical image thresholding segmentation. We considered the two classical thresholding techniques of Otsu and Kapur separately as the objective function for the optimization method and applied them to a set of brain MR images. Comparative experimental results reveal that the proposed method outperforms a state-of-the-art method from the literature in terms of accuracy, computation time and stability of results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 55,919
2208.11948 | Learning to Construct 3D Building Wireframes from 3D Line Clouds | Line clouds, though under-investigated in the previous work, potentially encode more compact structural information of buildings than point clouds extracted from multi-view images. In this work, we propose the first network to process line clouds for building wireframe abstraction. The network takes a line cloud as input, i.e., a nonstructural and unordered set of 3D line segments extracted from multi-view images, and outputs a 3D wireframe of the underlying building, which consists of a sparse set of 3D junctions connected by line segments. We observe that a line patch, i.e., a group of neighboring line segments, encodes sufficient contour information to predict the existence and even the 3D position of a potential junction, as well as the likelihood of connectivity between two query junctions. We therefore introduce a two-layer Line-Patch Transformer to extract junctions and connectivities from sampled line patches to form a 3D building wireframe model. We also introduce a synthetic dataset of multi-view images with ground-truth 3D wireframe. We extensively justify that our reconstructed 3D wireframe models significantly improve upon multiple baseline building reconstruction methods. The code and data can be found at https://github.com/Luo1Cheng/LC2WF. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 314,587
1308.3177 | Normalized Google Distance of Multisets with Applications | Normalized Google distance (NGD) is a relative semantic distance based on the World Wide Web (or any other large electronic database, for instance Wikipedia) and a search engine that returns aggregate page counts. The earlier NGD between pairs of search terms (including phrases) is not sufficient for all applications. We propose an NGD of finite multisets of search terms that is better for many applications. This gives a relative semantics shared by a multiset of search terms. We give applications and compare the results with those obtained using the pairwise NGD. The derivation of the NGD method is based on Kolmogorov complexity. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 26,440
2210.03929 | EgoTaskQA: Understanding Human Tasks in Egocentric Videos | Understanding human tasks through video observations is an essential capability of intelligent agents. The challenges of such capability lie in the difficulty of generating a detailed understanding of situated actions, their effects on object states (i.e., state changes), and their causal dependencies. These challenges are further aggravated by the natural parallelism from multi-tasking and partial observations in multi-agent collaboration. Most prior works leverage action localization or future prediction as an indirect metric for evaluating such task understanding from videos. To make a direct evaluation, we introduce the EgoTaskQA benchmark that provides a single home for the crucial dimensions of task understanding through question-answering on real-world egocentric videos. We meticulously design questions that target the understanding of (1) action dependencies and effects, (2) intents and goals, and (3) agents' beliefs about others. These questions are divided into four types, including descriptive (what status?), predictive (what will?), explanatory (what caused?), and counterfactual (what if?) to provide diagnostic analyses on spatial, temporal, and causal understandings of goal-oriented tasks. We evaluate state-of-the-art video reasoning models on our benchmark and show their significant gaps from humans in understanding complex goal-oriented egocentric videos. We hope this effort will drive the vision community to move onward with goal-oriented video understanding and reasoning. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 322,233
2305.07376 | DAISM: Digital Approximate In-SRAM Multiplier-based Accelerator for DNN Training and Inference | DNNs are widely used but face significant computational costs due to matrix multiplications, especially from data movement between the memory and processing units. One promising approach is therefore Processing-in-Memory, as it greatly reduces this overhead. However, most PIM solutions rely either on novel memory technologies that have yet to mature or bit-serial computations that have significant performance overhead and scalability issues. Our work proposes an in-SRAM digital multiplier that uses conventional memory to perform bit-parallel computations, leveraging multiple-wordline activation. We then introduce DAISM, an architecture leveraging this multiplier, which achieves up to two orders of magnitude higher area efficiency compared to the SOTA counterparts, with competitive energy efficiency. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 363,877
2406.09601 | Turns Out I'm Not Real: Towards Robust Detection of AI-Generated Videos | The impressive achievements of generative models in creating high-quality videos have raised concerns about digital integrity and privacy vulnerabilities. Recent works to combat Deepfake videos have developed detectors that are highly accurate at identifying GAN-generated samples. However, the robustness of these detectors on diffusion-generated videos generated from video creation tools (e.g., SORA by OpenAI, Runway Gen-2, and Pika, etc.) is still unexplored. In this paper, we propose a novel framework for detecting videos synthesized from multiple state-of-the-art (SOTA) generative models, such as Stable Video Diffusion. We find that the SOTA methods for detecting diffusion-generated images lack robustness in identifying diffusion-generated videos. Our analysis reveals that the effectiveness of these detectors diminishes when applied to out-of-domain videos, primarily because they struggle to track the temporal features and dynamic variations between frames. To address the above-mentioned challenge, we collect a new benchmark video dataset for diffusion-generated videos using SOTA video creation tools. We extract representation within explicit knowledge from the diffusion model for video frames and train our detector with a CNN + LSTM architecture. The evaluation shows that our framework can well capture the temporal features between frames, achieves 93.7% detection accuracy for in-domain videos, and improves the accuracy of out-of-domain videos by up to 16 points. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 464,000
2311.05486 | Disease Gene Prioritization With Quantum Walks | Disease gene prioritization assigns scores to genes or proteins according to their likely relevance for a given disease based on a provided set of seed genes. Here, we describe a new algorithm for disease gene prioritization based on continuous-time quantum walks using the adjacency matrix of a protein-protein interaction (PPI) network. Our algorithm can be seen as a quantum version of a previous method known as the diffusion kernel, but, importantly, has higher performance in predicting disease genes, and also permits the encoding of seed node self-loops into the underlying Hamiltonian, which offers yet another boost in performance. We demonstrate the success of our proposed method by comparing it to several well-known gene prioritization methods on three disease sets, across seven different PPI networks. In order to compare these methods, we use cross-validation and examine the mean reciprocal ranks and recall values. We further validate our method by performing an enrichment analysis of the predicted genes for coronary artery disease. We also investigate the impact of adding self-loops to the seeds, and argue that they allow the quantum walker to remain more local to low-degree seed nodes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 406,599 |
1711.03599 | Traffic Models of Periodic Event-Triggered Control Systems | Periodic event-triggered control (PETC) is a version of event-triggered control (ETC) that only requires measuring the plant output periodically instead of continuously. In this work, we present a construction of timing models for these PETC implementations to capture the dynamics of the traffic they generate. In the construction, we employ a two-step approach. We first partition the state space into a finite number of regions. Then in each region, the event-triggering behavior is analyzed with the help of LMIs. The state transitions among different regions result from computing the reachable state set starting from each region within the computed event time intervals. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 84,244
2407.19832 | ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2 | Multimodal Large Language Models (MLLMs) have attracted much attention for their multifunctionality. However, traditional Transformer architectures incur significant overhead due to their quadratic computational complexity. To address this issue, we introduce ML-Mamba, a multimodal language model, which utilizes the latest and efficient Mamba-2 model for inference. Mamba-2 is known for its linear scalability and fast processing of long sequences. We replace the Transformer-based backbone with a pre-trained Mamba-2 model and explore methods for integrating 2D visual selective scanning mechanisms into multimodal learning while also trying various visual encoders and Mamba-2 model variants. Our extensive experiments on various multimodal benchmark tests demonstrate the competitive performance of ML-Mamba and highlight the potential of state space models in multimodal tasks. The experimental results show that: (1) we empirically explore how to effectively apply the 2D vision selective scan mechanism for multimodal learning, proposing a novel multimodal connector called the Mamba-2 Scan Connector (MSC), which enhances representational capabilities; (2) ML-Mamba achieves performance comparable to state-of-the-art methods such as TinyLaVA and MobileVLM v2 through its linear sequential modeling while offering faster inference speed; (3) compared to multimodal models utilizing Mamba-1, the Mamba-2-based ML-Mamba exhibits superior inference performance and effectiveness. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 476,948 |
2206.02211 | Variable-rate hierarchical CPC leads to acoustic unit discovery in
speech | The success of deep learning comes from its ability to capture the hierarchical structure of data by learning high-level representations defined in terms of low-level ones. In this paper we explore self-supervised learning of hierarchical representations of speech by applying multiple levels of Contrastive Predictive Coding (CPC). We observe that simply stacking two CPC models does not yield significant improvements over single-level architectures. Inspired by the fact that speech is often described as a sequence of discrete units unevenly distributed in time, we propose a model in which the output of a low-level CPC module is non-uniformly downsampled to directly minimize the loss of a high-level CPC module. The latter is designed to also enforce a prior of separability and discreteness in its representations by enforcing dissimilarity of successive high-level representations through focused negative sampling, and by quantization of the prediction targets. Accounting for the structure of the speech signal improves upon single-level CPC features and enhances the disentanglement of the learned representations, as measured by downstream speech recognition tasks, while resulting in a meaningful segmentation of the signal that closely resembles phone boundaries. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | true | false | false | 300,796 |
2208.11231 | Exact Penalty Method for Federated Learning | Federated learning has burgeoned recently in machine learning, giving rise to a variety of research topics. Popular optimization algorithms are based on the frameworks of the (stochastic) gradient descent methods or the alternating direction method of multipliers. In this paper, we deploy an exact penalty method for federated learning and propose an algorithm, FedEPM, that tackles four critical issues in federated learning: communication efficiency, computational complexity, stragglers' effect, and data privacy. Moreover, it is proven to be convergent and is shown to have high numerical performance. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 314,348 |
1504.07682 | Shotgun assembly of labeled graphs | We consider the problem of reconstructing graphs or labeled graphs from neighborhoods of a given radius r. Special instances of this problem include the well known: DNA shotgun assembly; the lesser-known: neural network reconstruction; and a new problem: assembling random jigsaw puzzles. We provide some necessary and some sufficient conditions for correct recovery both in combinatorial terms and for some generative models including random labelings of lattices, Erdos-Renyi random graphs, and the random jigsaw puzzle model. Many open problems and conjectures are provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 42,568 |
1806.03368 | An Exploration of H-1B Visa Applications in the United States | The H-1B visa program is a very important tool for US-based businesses and educational institutes to recruit foreign talent. While the ultimate decision to certify an application lies with the United States Department of Labor, there are signals that can be used to determine whether an application is likely to be certified or denied. In this paper we first perform a data-driven exploratory analysis. We then leverage the features to train several classifiers and compare their performance. Finally, we discuss the implications of this work and future work that can be done in this area. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 99,978 |
cs/0412049 | Neural Networks in Mobile Robot Motion | This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method for constructing a collision-free path for the moving robot among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 538,430 |
1709.08325 | Pose-driven Deep Convolutional Model for Person Re-identification | Feature extraction and matching are two crucial components in person Re-Identification (ReID). The large pose deformations and the complex view variations exhibited by the captured person images significantly increase the difficulty of learning and matching of the features from person images. To overcome these difficulties, in this work we propose a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models from end to end. Our deep architecture explicitly leverages the human part cues to alleviate the pose variations and learn robust feature representations from both the global image and different local parts. To match the features from global human body and local body parts, a pose driven feature weighting sub-network is further designed to learn adaptive feature fusions. Extensive experimental analyses and results on three popular datasets demonstrate significant performance improvements of our model over all published state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 81,451 |
2406.04721 | End-to-End Design of Polar Coded Integrated Data and Energy Networking | In order to transmit data and transfer energy to low-power Internet of Things (IoT) devices, an integrated data and energy networking (IDEN) system may be harnessed. In this context, we propose a bitwise end-to-end design for polar coded IDEN systems, where the conventional encoding/decoding, modulation/demodulation, and energy harvesting (EH) modules are replaced by neural networks (NNs). In this way, the entire system can be treated as an AutoEncoder (AE) and trained in an end-to-end manner, hence achieving global optimization. Additionally, we improve the common NN-based belief propagation (BP) decoder by adding an extra hypernetwork, which generates the corresponding NN weights for the main network under different numbers of iterations, thus further enhancing the adaptability of the receiver architecture. Our numerical results demonstrate that our BP-based end-to-end design is superior to conventional BP-based counterparts in terms of both the BER and power transfer, but it is inferior to the successive cancellation list (SCL)-based conventional IDEN system, which may be due to the inherent performance gap between the BP and SCL decoders. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 461,811 |
2012.15695 | EfficientNet-Absolute Zero for Continuous Speech Keyword Spotting | Keyword spotting is the process of finding specific words or phrases in recorded speech by computers. Deep neural network algorithms, as a powerful engine, can handle this problem if they are trained over an appropriate dataset. To this end, the football keyword dataset (FKD), a new keyword spotting dataset in Persian, is collected with crowdsourcing. This dataset contains nearly 31000 samples in 18 classes. A continuous speech synthesis method is proposed to make FKD usable in practical applications that work with continuous speech. Besides, we propose a lightweight architecture called EfficientNet-A0 (absolute zero) by applying the compound scaling method to EfficientNet-B0 for the keyword spotting task. Finally, the proposed architecture is evaluated against various models. We find that EfficientNet-A0 and ResNet models outperform the other models on this dataset. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 213,869 |
1705.02073 | Cross-lingual Distillation for Text Classification | Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during the model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach achieved advantageous or comparable performance relative to other state-of-the-art methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 72,920 |
2402.16790 | Beyond Self-learned Attention: Mitigating Attention Bias in
Transformer-based Models Using Attention Guidance | Transformer-based models have demonstrated considerable potential for source code modeling tasks in software engineering. However, they are limited by their dependence solely on automatic self-attention weight learning mechanisms. Previous studies have shown that these models overemphasize delimiters added by tokenizers (e.g., [CLS], [SEP]), which may lead to overlooking essential information in the original input source code. To address this challenge, we introduce SyntaGuid, a novel approach that utilizes the observation that attention weights tend to be biased towards specific source code syntax tokens and abstract syntax tree (AST) elements in fine-tuned language models when they make correct predictions. SyntaGuid facilitates the guidance of attention-weight learning, leading to improved model performance on various software engineering tasks. We evaluate the effectiveness of SyntaGuid on multiple tasks and demonstrate that it outperforms existing state-of-the-art models in overall performance without requiring additional data. Experimental results show that SyntaGuid can improve overall performance by up to 3.25% and fix up to 28.3% of wrong predictions. Our work represents the first attempt to guide the attention of Transformer-based models towards critical source code tokens during fine-tuning, highlighting the potential for enhancing Transformer-based models in software engineering. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 432,695 |
2205.11948 | SHARP: Shape-Aware Reconstruction of People in Loose Clothing | Recent advancements in deep learning have enabled 3D human body reconstruction from a monocular image, which has broad applications in multiple domains. In this paper, we propose SHARP (SHape Aware Reconstruction of People in loose clothing), a novel end-to-end trainable network that accurately recovers the 3D geometry and appearance of humans in loose clothing from a monocular image. SHARP uses a sparse and efficient fusion strategy to combine a parametric body prior with a non-parametric 2D representation of clothed humans. The parametric body prior enforces geometrical consistency on the body shape and pose, while the non-parametric representation models loose clothing and handles self-occlusions as well. We also leverage the sparseness of the non-parametric representation for faster training of our network while using losses on 2D maps. Another key contribution is 3DHumans, our new life-like dataset of 3D human body scans with rich geometrical and textural details. We evaluate SHARP on 3DHumans and other publicly available datasets and show superior qualitative and quantitative performance over existing state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 298,345 |
1804.08766 | Real-Time Stochastic Predictive Control for Hybrid Vehicle Energy
Management | This work presents three computational methods for real time energy management in a hybrid hydraulic vehicle (HHV) when driver behavior and vehicle route are not known in advance. These methods, implemented in a receding horizon control (aka model predictive control) framework, are rather general and can be applied to systems with nonlinear dynamics subject to a Markov disturbance. State and input constraints are considered in each method. A mechanism based on the steady state distribution of the underlying Markov chain is developed for planning beyond a finite horizon in the HHV energy management problem. Road elevation information is forecasted along the horizon and then merged with the statistical model of driver behavior to increase accuracy of the horizon optimization. The characteristics of each strategy are compared and the benefit of learning driver behavior is analyzed through simulation on three drive cycles, including one real world drive cycle. A simulation is designed to explicitly demonstrate the benefit of adapting the Markov chain to real time driver behavior. Experimental results demonstrate the real time potential of the primary algorithm when implemented on a processor with limited computational resources. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 95,830 |
2105.01052 | Applied Language Technology: NLP for the Humanities | This contribution describes a two-course module that seeks to provide humanities majors with a basic understanding of language technology and its applications using Python. The learning materials consist of interactive Jupyter Notebooks and accompanying YouTube videos, which are openly available with a Creative Commons licence. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 233,420 |
2010.04637 | Recurrent babbling: evaluating the acquisition of grammar from limited
input data | Recurrent Neural Networks (RNNs) have been shown to capture various aspects of syntax from raw linguistic input. In most previous experiments, however, learning happens over unrealistic corpora, which do not reflect the type and amount of data a child would be exposed to. This paper remedies this state of affairs by training a Long Short-Term Memory network (LSTM) over a realistically sized subset of child-directed input. The behaviour of the network is analysed over time using a novel methodology which consists in quantifying the level of grammatical abstraction in the model's generated output (its "babbling"), compared to the language it has been exposed to. We show that the LSTM indeed abstracts new structures as learning proceeds. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 199,813 |
2411.01477 | DPCL-Diff: The Temporal Knowledge Graph Reasoning Based on Graph Node
Diffusion Model with Dual-Domain Periodic Contrastive Learning | Temporal knowledge graph (TKG) reasoning that infers future missing facts is an essential and challenging task. Predicting future events typically relies on closely related historical facts, yielding more accurate results for repetitive or periodic events. However, for future events with sparse historical interactions, the effectiveness of this method, which focuses on leveraging high-frequency historical information, diminishes. Recently, the capabilities of diffusion models in image generation have opened new opportunities for TKG reasoning. Therefore, we propose a graph node diffusion model with dual-domain periodic contrastive learning (DPCL-Diff). Graph node diffusion model (GNDiff) introduces noise into sparsely related events to simulate new events, generating high-quality data that better conforms to the actual distribution. This generative mechanism significantly enhances the model's ability to reason about new events. Additionally, the dual-domain periodic contrastive learning (DPCL) maps periodic and non-periodic event entities to Poincaré and Euclidean spaces, leveraging their characteristics to distinguish similar periodic events effectively. Experimental results on four public datasets demonstrate that DPCL-Diff significantly outperforms state-of-the-art TKG models in event prediction, confirming our approach's effectiveness. This study also investigates the combined effectiveness of GNDiff and DPCL in TKG tasks. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 505,092 |
2408.00004 | Handling Numeric Expressions in Automatic Speech Recognition | This paper addresses the problem of correctly formatting numeric expressions in automatic speech recognition (ASR) transcripts. This is challenging since the expected transcript format depends on the context, e.g., 1945 (year) vs. 19:45 (timestamp). We compare cascaded and end-to-end approaches to recognize and format numeric expressions, such as years, timestamps, currency amounts, and quantities. For the end-to-end approach we employed a data generation strategy using a large language model (LLM) together with a text-to-speech (TTS) model to generate adaptation data. The results on our test dataset show that while approaches based on LLMs perform well on recognizing formatted numeric expressions, adapted end-to-end models offer competitive performance with the advantage of lower latency and inference cost. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 477,678 |
1908.11360 | Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability
for STIT Logics | This work provides proof-search algorithms and automated counter-model extraction for a class of STIT logics. With this, we answer an open problem concerning syntactic decision procedures and cut-free calculi for STIT logics. A new class of cut-free complete labelled sequent calculi G3LdmL^m_n, for multi-agent STIT with at most n-many choices, is introduced. We refine the calculi G3LdmL^m_n through the use of propagation rules and demonstrate the admissibility of their structural rules, resulting in auxiliary calculi Ldm^m_nL. In the single-agent case, we show that the refined calculi Ldm^m_nL derive theorems within a restricted class of (forestlike) sequents, allowing us to provide proof-search algorithms that decide single-agent STIT logics. We prove that the proof-search algorithms are correct and terminate. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | true | 143,363 |
2412.19110 | A Selective Secure Precoding Framework for MU-MIMO Rate-Splitting
Multiple Access Networks Under Limited CSIT | In this paper, we propose a robust and adaptable secure precoding framework designed to encapsulate a intricate scenario where legitimate users have different information security: secure private or normal public information. Leveraging rate-splitting multiple access (RSMA), we formulate the sum secrecy spectral efficiency (SE) maximization problem in downlink multi-user multiple-input multiple-output (MIMO) systems with multi-eavesdropper. To resolve the challenges including the heterogeneity of security, non-convexity, and non-smoothness of the problem, we initially approximate the problem using a LogSumExp technique. Subsequently, we derive the first-order optimality condition in the form of a generalized eigenvalue problem. We utilize a power iteration-based method to solve the condition, thereby achieving a superior local optimal solution. The proposed algorithm is further extended to a more realistic scenario involving limited channel state information at the transmitter (CSIT). To effectively utilize the limited channel information, we employ a conditional average rate approach. Handling the conditional average by deriving useful bounds, we establish a lower bound for the objective function under the conditional average. Then we apply the similar optimization method as for the perfect CSIT case. In simulations, we validate the proposed algorithm in terms of the sum secrecy SE. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 520,715 |
2108.08911 | Explainable Deep Reinforcement Learning Using Introspection in a
Non-episodic Task | Explainable reinforcement learning allows artificial agents to explain their behavior in a human-like manner aimed at non-expert end-users. An efficient alternative for creating explanations is to use an introspection-based method that transforms Q-values into probabilities of success, which are used as the basis to explain the agent's decision-making process. This approach has been used effectively in episodic and discrete scenarios; however, computing the probability of success in non-episodic and more complex environments has not yet been addressed. In this work, we adapt the introspection method for use in a non-episodic task and try it in a continuous Atari game scenario solved with the Rainbow algorithm. Our initial results show that the probability of success can be computed directly from the Q-values for all possible actions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 251,428 |
2104.12622 | Towards Knowledge Graphs Validation through Weighted Knowledge Sources | The performance of applications, such as personal assistants and search engines, relies on high-quality knowledge bases, a.k.a. Knowledge Graphs (KGs). To ensure their quality, one important task is knowledge validation, which measures the degree to which statements or triples of KGs are semantically correct. KGs inevitably contain incorrect and incomplete statements, which may hinder their adoption in business applications as they are not trustworthy. In this paper, we propose and implement a Validator that computes a confidence score for every triple and instance in KGs. The computed score is based on finding the same instances across different weighted knowledge sources and comparing their features. We evaluate our approach by comparing its results against a baseline validation. Our results suggest that we can validate KGs with an f-measure of at least 75%. Time-wise, the Validator validated 2530 instances in approximately 15 minutes. Furthermore, we give insights and directions toward a better architecture to tackle KG validation. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 232,273 |
1905.03577 | Spatial-Spectral Feature Extraction via Deep ConvLSTM Neural Networks
for Hyperspectral Image Classification | In recent years, deep learning has presented a great advance in hyperspectral image (HSI) classification. Particularly, long short-term memory (LSTM), as a special deep learning structure, has shown great ability in modeling long-term dependencies in the time dimension of video or the spectral dimension of HSIs. However, the loss of spatial information makes it quite difficult to obtain better performance. In order to address this problem, two novel deep models are proposed to extract more discriminative spatial-spectral features by exploiting the Convolutional LSTM (ConvLSTM). By taking the data patch in a local sliding window as the input of each memory cell band by band, the 2-D extended architecture of LSTM is considered for building the spatial-spectral ConvLSTM 2-D Neural Network (SSCL2DNN) to model long-range dependencies in the spectral domain. To better preserve the intrinsic structure information of the hyperspectral data, the spatial-spectral ConvLSTM 3-D Neural Network (SSCL3DNN) is proposed by extending LSTM to a 3-D version for further improving the classification performance. The experiments, conducted on three commonly used HSI data sets, demonstrate that the proposed deep models have certain competitive advantages and can provide better classification performance than other state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 130,229 |
2502.00138 | JustAct+: Justified and Accountable Actions in Policy-Regulated,
Multi-Domain Data Processing | Inter-organisational data exchange is regulated by norms originating from sources ranging from (inter)national laws, to processing agreements, and individual consent. Verifying norm compliance is complex because laws (e.g., GDPR) distribute responsibility and require accountability. Moreover, in some application domains (e.g., healthcare), privacy requirements extend the norms (e.g., patient consent). In contrast, existing solutions such as smart contracts, access- and usage-control assume policies to be public, or otherwise, statically partition policy information at the cost of accountability and flexibility. Instead, our framework prescribes how decentralised agents justify their actions with policy fragments that the agents autonomously create, gossip, and assemble. Crucially, the permission of actions is always reproducible by any observer, even with a partial view of all the dynamic policies. Actors can be sure that future auditors will confirm their permissions. Systems centralise control by (re)configuring externally synchronised agreements, the bases of all justifications. As a result, control is centralised only to the extent desired by the agents. In this paper, we define the JustAct framework, detail its implementation in a particular data-processing system, and design a suitable policy language based on logic programming. A case study reproduces Brane - an existing policy-regulated, inter-domain, medical data processing system - and serves to demonstrate and assess the qualities of the framework. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 529,226 |
1010.0771 | Genetic Algorithm for Multicriteria Optimization of a Multi-Pickup and
Delivery Problem with Time Windows | In this paper we present a genetic algorithm for multicriteria optimization of a multi-pickup and delivery problem with time windows (m-PDPTW). The m-PDPTW is a vehicle routing optimization problem in which requests for transport between suppliers and customers must be met while satisfying precedence, capacity and time constraints. This paper provides a brief literature review of the PDPTW and presents an approach based on genetic algorithms and the Pareto dominance method to give a set of satisfying solutions to the m-PDPTW, minimizing total travel cost, total tardiness time and the number of vehicles. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 7,784 |
1809.00798 | Plastic Waste is Exponentially Filling our Oceans, but where are the
Robots? | Plastic waste is filling our oceans at an exponential rate. The situation is catastrophic and has now garnered worldwide attention. Despite the catastrophic conditions, little to no robotics research is conducted on the identification, collection, sorting, and removal of plastic waste from oceans and rivers at the macro- and micro-scale. Only a scarce number of individual efforts can be found, from private sources. This paper presents a cursory view of the current plastic water-waste catastrophe, associated robotics research, and other efforts currently underway to address the issue, as well as the call that, as a community, we must wait no longer to address the problem. Surely there is much potential for robots to help meet the challenges posed by the enormity of this problem. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 106,657 |
2410.24104 | Clustering to Minimize Cluster-Aware Norm Objectives | We initiate the study of the following general clustering problem. We seek to partition a given set $P$ of data points into $k$ clusters by finding a set $X$ of $k$ centers and assigning each data point to one of the centers. The cost of a cluster, represented by a center $x\in X$, is a monotone, symmetric norm $f$ (inner norm) of the vector of distances of points assigned to $x$. The goal is to minimize a norm $g$ (outer norm) of the vector of cluster costs. This problem, which we call $(f,g)$-Clustering, generalizes many fundamental clustering problems such as $k$-Center, $k$-Median, Min-Sum of Radii, and Min-Load $k$-Clustering. A recent line of research (Chakrabarty, Swamy [STOC'19]) studies norm objectives that are oblivious to the cluster structure such as $k$-Median and $k$-Center. In contrast, our problem models cluster-aware objectives including Min-Sum of Radii and Min-Load $k$-Clustering. Our main results are as follows. First, we design a constant-factor approximation algorithm for $(\textsf{top}_\ell,\mathcal{L}_1)$-Clustering where the inner norm ($\textsf{top}_\ell$) sums over the $\ell$ largest distances. Second, we design a constant-factor approximation for $(\mathcal{L}_\infty,\textsf{Ord})$-Clustering where the outer norm is a convex combination of $\textsf{top}_\ell$ norms (ordered weighted norm). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 504,330 |
2412.05438 | Granular Ball K-Class Twin Support Vector Classifier | This paper introduces the Granular Ball K-Class Twin Support Vector Classifier (GB-TWKSVC), a novel multi-class classification framework that combines Twin Support Vector Machines (TWSVM) with granular ball computing. The proposed method addresses key challenges in multi-class classification by utilizing granular ball representation for improved noise robustness and TWSVM's non-parallel hyperplane architecture solves two smaller quadratic programming problems, enhancing efficiency. Our approach introduces a novel formulation that effectively handles multi-class scenarios, advancing traditional binary classification methods. Experimental evaluation on diverse benchmark datasets shows that GB-TWKSVC significantly outperforms current state-of-the-art classifiers in both accuracy and computational performance. The method's effectiveness is validated through comprehensive statistical tests and complexity analysis. Our work advances classification algorithms by providing a mathematically sound framework that addresses the scalability and robustness needs of modern machine learning applications. The results demonstrate GB-TWKSVC's broad applicability across domains including pattern recognition, fault diagnosis, and large-scale data analytics, establishing it as a valuable addition to the classification algorithm landscape. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 514,824 |
1401.6500 | Holographic Transformation for Quantum Factor Graphs | Recently, a general tool called a holographic transformation, which transforms an expression of the partition function to another form, has been used for polynomial-time algorithms and for improvement and understanding of the belief propagation. In this work, the holographic transformation is generalized to quantum factor graphs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,365 |