id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2409.19967 | Magnet: We Never Know How Text-to-Image Diffusion Models Work, Until We Learn How Vision-Language Models Function | Text-to-image diffusion models, particularly Stable Diffusion, have revolutionized the field of computer vision. However, the synthesis quality often deteriorates when asked to generate images that faithfully represent complex prompts involving multiple attributes and objects. While previous studies suggest that blended text embeddings lead to improper attribute binding, few have explored this in depth. In this work, we critically examine the limitations of the CLIP text encoder in understanding attributes and investigate how this affects diffusion models. We discern a phenomenon of attribute bias in the text space and highlight a contextual issue in padding embeddings that entangle different concepts. We propose \textbf{Magnet}, a novel training-free approach to tackle the attribute binding problem. We introduce positive and negative binding vectors to enhance disentanglement, further with a neighbor strategy to increase accuracy. Extensive experiments show that Magnet significantly improves synthesis quality and binding accuracy with negligible computational cost, enabling the generation of unconventional and unnatural concepts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 492,937 |
2304.13538 | ElegansNet: a brief scientific report and initial experiments | This research report introduces ElegansNet, a neural network that mimics real-world neuronal network circuitry, with the goal of better understanding the interplay between connectome topology and deep learning systems. The proposed approach utilizes the powerful representational capabilities of living beings' neuronal circuitry to design and generate improved deep learning systems with a topology similar to natural networks. The Caenorhabditis elegans connectome is used as a reference due to its completeness, reasonable size, and functional neuron class annotations. It is demonstrated that the connectome of simple organisms exhibits specific functional relationships between neurons, and once transformed into learnable tensor networks and integrated into modern architectures, it offers bio-plausible structures that efficiently solve complex tasks. The performance of the models is demonstrated against randomly wired networks and compared to artificial networks ranked on global benchmarks. In the first case, ElegansNet outperforms randomly wired networks. Interestingly, ElegansNet models show performance similar only to those based on the Watts-Strogatz small-world property. When compared to state-of-the-art artificial neural networks, such as transformers or attention-based autoencoders, ElegansNet outperforms well-known deep learning and traditional models in both supervised image classification tasks and unsupervised hand-written digits reconstruction, achieving top-1 accuracy of 99.99% on Cifar10 and 99.84% on MNIST Unsup on the validation sets. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 360,616 |
2003.10674 | Towards Explainability of Machine Learning Models in Insurance Pricing | Machine learning methods have garnered increasing interest among actuaries in recent years. However, their adoption by practitioners has been limited, partly due to the lack of transparency of these methods, as compared to generalized linear models. In this paper, we discuss the need for model interpretability in property & casualty insurance ratemaking, propose a framework for explaining models, and present a case study to illustrate the framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,409 |
2402.00179 | De-identification is not always enough | For sharing privacy-sensitive data, de-identification is commonly regarded as adequate for safeguarding privacy. Synthetic data is also being considered as a privacy-preserving alternative. Recent successes with numerical and tabular data generative models and the breakthroughs in large generative language models raise the question of whether synthetically generated clinical notes could be a viable alternative to real notes for research purposes. In this work, we (i) demonstrated that de-identification of real clinical notes does not protect records against a membership inference attack, (ii) proposed a novel approach to generate synthetic clinical notes using the current state-of-the-art large language models, (iii) evaluated the performance of the synthetically generated notes in a clinical domain task, and (iv) proposed a way to mount a membership inference attack where the target model is trained with synthetic data. We observed that when synthetically generated notes closely match the performance of real data, they also exhibit similar privacy concerns to the real data. Whether other approaches to synthetically generated clinical notes could offer better trade-offs and become a better alternative to sensitive real notes warrants further investigation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 425,510 |
2311.06835 | Open-Set Graph Anomaly Detection via Normal Structure Regularisation | This paper considers an important Graph Anomaly Detection (GAD) task, namely open-set GAD, which aims to train a detection model using a small number of normal and anomaly nodes (referred to as seen anomalies) to detect both seen anomalies and unseen anomalies (i.e., anomalies not illustrated by the training anomalies). Those labelled training data provide crucial prior knowledge about abnormalities for GAD models, enabling substantially reduced detection errors. However, current supervised GAD methods tend to over-emphasise fitting the seen anomalies, leading to many errors of detecting the unseen anomalies as normal nodes. Further, existing open-set AD models were introduced to handle Euclidean data, failing to effectively capture discriminative features from graph structure and node attributes for GAD. In this work, we propose a novel open-set GAD approach, namely normal structure regularisation (NSReg), to achieve generalised detection ability to unseen anomalies, while maintaining its effectiveness on detecting seen anomalies. The key idea in NSReg is to introduce a regularisation term that enforces the learning of compact, semantically-rich representations of normal nodes based on their structural relations to other nodes. When optimised with supervised anomaly detection losses, the regularisation term helps incorporate strong normality into the modelling, and thus, it effectively avoids over-fitting the seen anomalies and learns a better normality decision boundary, largely reducing the false negatives of detecting unseen anomalies as normal. Extensive empirical results on seven real-world datasets show that NSReg significantly outperforms state-of-the-art competing methods by at least 14% AUC-ROC on the unseen anomaly classes and by 10% AUC-ROC on all anomaly classes. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 407,097 |
2205.03533 | Training Enhancement of Deep Learning Models for Massive MIMO CSI Feedback with Small Datasets | Accurate downlink channel state information (CSI) is vital to achieving high spectrum efficiency in massive MIMO systems. Existing works on the deep learning (DL) model for CSI feedback have shown efficient compression and recovery in frequency division duplex (FDD) systems. However, practical DL networks require sizeable wireless CSI datasets during training to achieve high model accuracy. To address this labor-intensive problem, this work develops an efficient training enhancement solution of DL-based feedback architecture based on a modest dataset by exploiting the complex CSI features, and augmenting CSI dataset based on domain knowledge. We first propose a spherical CSI feedback network, SPTM2-ISTANet+, which employs the spherical normalization framework to mitigate the effect of path loss variation. We exploit the trainable measurement matrix and residual recovery structure to improve the encoding efficiency and recovery accuracy. For limited CSI measurements, we propose a model-driven lightweight and universal augmentation strategy based on decoupling CSI magnitude and phase information, applying the circular shift in angular-delay domain, and randomizing the CSI phase to approximate phase distribution. Test results demonstrate the efficacy and efficiency of the proposed training strategy and feedback architecture for accurate CSI feedback under limited measurements. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 295,313 |
2402.01547 | Contingency Detection in Modern Power Systems: A Stochastic Hybrid System Method | This paper introduces a new stochastic hybrid system (SHS) framework for contingency detection in modern power systems (MPS). The framework uses stochastic hybrid system representations in state space models to expand and facilitate capability of contingency detection. In typical microgrids (MGs), buses may contain various synchronous generators, renewable generators, controllable loads, battery systems, regular loads, etc. For development of SHS models in power systems, this paper introduces the concept of dynamic and non-dynamic buses. By converting a physical power grid into a virtual linearized state space model and representing contingencies as random switching of system structures and parameters, this paper formulates the contingency detection problem as a joint estimation problem of discrete event and continuous states in stochastic hybrid systems. This method offers unique advantages, including using common measurement signals on voltage and current synchrophasors to detect different types and locations of contingencies, avoiding expensive local direct fault measurements and detecting certain contingencies that cannot be directly measured. The method employs a small and suitably-designed probing signal to sustain the ability of persistent contingency detection. Joint estimation algorithms are presented with their proven convergence and reliability properties. Examples that use an IEEE 5-bus system demonstrate the main ideas and derivation steps. Simulation case studies on an IEEE 33-bus system are used for detecting transmission line faults and sensor interruptions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 426,075 |
2501.01447 | Analyzing Country-Level Vaccination Rates and Determinants of Practical Capacity to Administer COVID-19 Vaccines | The COVID-19 vaccine development, manufacturing, transportation, and administration proved an extreme logistics operation of global magnitude. Global vaccination levels, however, remain a key concern in preventing the emergence of new strains and minimizing the impact of the pandemic's disruption of daily life. In this paper, country-level vaccination rates are analyzed through a queuing framework to extract service rates that represent the practical capacity of a country to administer vaccines. These rates are further characterized through regression and interpretable machine learning methods with country-level demographic, governmental, and socio-economic variates. Model results show that participation in multi-governmental collaborations such as COVAX may improve the ability to vaccinate. Similarly, improved transportation and accessibility variates such as roads per area for low-income countries and rail lines per area for high-income countries can improve rates. It was also found that for low-income countries specifically, improvements in basic and health infrastructure (as measured through spending on healthcare, number of doctors and hospital beds per 100k, population percent with access to electricity, life expectancy, and vehicles per 1000 people) resulted in higher vaccination rates. Of the high-income countries, those with larger 65-plus populations struggled to vaccinate at high rates, indicating potential accessibility issues for the elderly. This study finds that improving basic and health infrastructure, focusing on accessibility in the last mile, particularly for the elderly, and fostering global partnerships can improve logistical operations of such a scale. Such structural impediments and inequities in global health care must be addressed in preparation for future global public health crises. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 522,074 |
1912.05617 | Molecular Generative Model Based On Adversarially Regularized Autoencoder | Deep generative models are attracting great attention as a new promising approach for molecular design. All models reported so far are based on either variational autoencoder (VAE) or generative adversarial network (GAN). Here we propose a new type of model based on an adversarially regularized autoencoder (ARAE). It basically uses latent variables like VAE, but the distribution of the latent variables is obtained by adversarial training like in GAN. The latter is intended to avoid both inappropriate approximation of posterior distribution in VAE and difficulty in handling discrete variables in GAN. Our benchmark study showed that ARAE indeed outperformed conventional models in terms of validity, uniqueness, and novelty per generated molecule. We also demonstrated successful conditional generation of drug-like molecules with ARAE for both cases of single and multiple properties control. As a potential real-world application, we could generate EGFR inhibitors sharing the scaffolds of known active molecules while satisfying drug-like conditions simultaneously. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,146 |
2011.08337 | Optimal Transport-based Coverage Control for Swarm Robot Systems: Generalization of the Voronoi Tessellation-based Method | Swarm robot systems, which consist of many cooperating mobile robots, have attracted attention for their environmental adaptability and fault tolerance advantages. One of the most important tasks for such systems is coverage control, in which robots autonomously deploy to approximate a given spatial distribution. In this study, we formulate a coverage control paradigm using the concept of optimal transport and propose a novel control technique, which we have termed the optimal transport-based coverage control (OTCC) method. The proposed OTCC, derived via the gradient flow of the cost function in the Kantorovich dual problem, is shown to cover a widely used existing control method as a special case. We also perform a Lyapunov stability analysis of the controlled system, and provide numerical calculations to show that the OTCC reproduces target distributions with better performance than the existing control method. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 206,837 |
2209.05987 | Design of Negative Sampling Strategies for Distantly Supervised Skill Extraction | Skills play a central role in the job market and many human resources (HR) processes. In the wake of other digital experiences, today's online job market has candidates expecting to see the right opportunities based on their skill set. Similarly, enterprises increasingly need to use data to guarantee that the skills within their workforce remain future-proof. However, structured information about skills is often missing, and processes building on self- or manager-assessment have been shown to struggle with issues around adoption, completeness, and freshness of the resulting data. Extracting skills is a highly challenging task, given the many thousands of possible skill labels mentioned either explicitly or merely described implicitly and the lack of finely annotated training corpora. Previous work on skill extraction overly simplifies the task to an explicit entity detection task or builds on manually annotated training data that would be infeasible if applied to a complete vocabulary of skills. We propose an end-to-end system for skill extraction, based on distant supervision through literal matching. We propose and evaluate several negative sampling strategies, tuned on a small validation dataset, to improve the generalization of skill extraction towards implicitly mentioned skills, despite the lack of such implicit skills in the distantly supervised data. We observe that using the ESCO taxonomy to select negative examples from related skills yields the biggest improvements, and combining three different strategies in one model further increases the performance, up to 8 percentage points in RP@5. We introduce a manually annotated evaluation benchmark for skill extraction based on the ESCO taxonomy, on which we validate our models. We release the benchmark dataset for research purposes to stimulate further research on the task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 317,262 |
2411.14666 | Brain-Computer Interfaces for Emotional Regulation in Patients with Various Disorders | Neurological and physiological disorders that impact emotional regulation each have their own unique characteristics, which are important to understand in order to create a generalized solution to all of them. The purpose of this experiment is to explore the potential applications of EEG-based Brain-Computer Interfaces (BCIs) in enhancing emotional regulation for individuals with neurological and physiological disorders. The research focuses on the development of a novel neural network algorithm for understanding EEG data, with a particular emphasis on recognizing and regulating emotional states. The procedure involves the collection of EEG-based emotion data from open-Neuro. Using novel data modification techniques, information from the dataset can be altered to create a dataset that has neural patterns of patients with disorders whilst showing emotional change. The data analysis reveals promising results, as the algorithm is able to successfully classify emotional states with a high degree of accuracy. This suggests that EEG-based BCIs have the potential to be a valuable tool in aiding individuals with a range of neurological and physiological disorders in recognizing and regulating their emotions. To improve upon this work, data collection on patients with neurological disorders should be done to improve overall sample diversity. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,266 |
1809.02862 | A survey of food recommenders | Everyone eats. However, people do not always know what to eat. They need a little help and inspiration. Consequently, a number of apps, services, and programs have developed recommenders around food. These cover food, meal, recipe, and restaurant recommendations, which are the most common use cases, but also other areas such as substitute ingredients, menus, and diets. The latter is especially important in the area of health and wellness where users have more specific dietary needs and goals. In this survey, we review the food recommender literature. We cover the types of systems in terms of their goals and what they are recommending, the datasets and signals that they use to train models, the technical approaches and model types used, as well as some of the system constraints. | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | 107,160 |
1904.08928 | Class specific or shared? A cascaded dictionary learning framework for image classification | Dictionary learning methods can be split into: i) class specific dictionary learning and ii) class shared dictionary learning. The difference between the two categories is how to use discriminative information. With the first category, samples of different classes are mapped into different subspaces, which leads to some redundancy with the class specific base vectors. While for the second category, the samples in each specific class cannot be described accurately. In this paper, we first propose a novel class shared dictionary learning method named label embedded dictionary learning (LEDL). It is an improvement on LCKSVD that makes it easier to find the optimal solution. Then we propose a novel framework named cascaded dictionary learning framework (CDLF) to combine the specific dictionary learning with shared dictionary learning to describe the feature to boost the performance of classification sufficiently. Extensive experimental results on six benchmark datasets illustrate that our methods are capable of achieving superior performance compared to several state-of-the-art classification algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,220 |
0710.1511 | Demographic growth and the distribution of language sizes | It is argued that the present log-normal distribution of language sizes is, to a large extent, a consequence of demographic dynamics within the population of speakers of each language. A two-parameter stochastic multiplicative process is proposed as a model for the population dynamics of individual languages, and applied over a period spanning the last ten centuries. The model disregards language birth and death. A straightforward fitting of the two parameters, which statistically characterize the population growth rate, predicts a distribution of language sizes in excellent agreement with empirical data. Numerical simulations, and the study of the size distribution within language families, validate the assumptions at the basis of the model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 753 |
2409.05846 | An Introduction to Quantum Reinforcement Learning (QRL) | Recent advancements in quantum computing (QC) and machine learning (ML) have sparked considerable interest in the integration of these two cutting-edge fields. Among the various ML techniques, reinforcement learning (RL) stands out for its ability to address complex sequential decision-making problems. RL has already demonstrated substantial success in the classical ML community. Now, the emerging field of Quantum Reinforcement Learning (QRL) seeks to enhance RL algorithms by incorporating principles from quantum computing. This paper offers an introduction to this exciting area for the broader AI and ML community. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | true | 486,910 |
2304.10212 | The impact of the AI revolution on asset management | Recent progress in deep learning, a special form of machine learning, has led to remarkable capabilities machines can now be endowed with: they can read and understand free flowing text, reason and bargain with human counterparts, translate texts between languages, learn how to take decisions to maximize certain outcomes, etc. Today, machines have revolutionized the detection of cancer, the prediction of protein structures, the design of drugs, the control of nuclear fusion reactors, etc. Although these capabilities are still in their infancy, it seems clear that their continued refinement and application will result in a technological impact on nearly all social and economic areas of human activity, the likes of which we have not seen before. In this article, I will share my view as to how AI will likely impact asset management in general and I will provide a mental framework that will equip readers with a simple criterion to assess whether and to what degree a given fund really exploits deep learning and whether a large disruption risk from deep learning exists. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 359,329 |
2405.16287 | LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters | A good initialization of deep learning models is essential since it can help them converge better and faster. However, pretraining large models is unaffordable for many researchers, which makes a desired prediction for initial parameters more necessary nowadays. Graph HyperNetworks (GHNs), one approach to predicting model parameters, have recently shown strong performance in initializing large vision models. Unfortunately, predicting parameters of very wide networks relies on copying small chunks of parameters multiple times and requires an extremely large number of parameters to support full prediction, which greatly hinders its adoption in practice. To address this limitation, we propose LoGAH (Low-rank GrAph Hypernetworks), a GHN with a low-rank parameter decoder that expands to significantly wider networks without requiring as excessive an increase in parameters as previous attempts. LoGAH allows us to predict the parameters of 774-million-parameter neural networks in a memory-efficient manner. We show that vision and language models (i.e., ViT and GPT-2) initialized with LoGAH achieve better performance than those initialized randomly or using existing hypernetworks. Furthermore, we show promising transfer learning results w.r.t. training LoGAH on small datasets and using the predicted parameters to initialize for larger tasks. We provide the codes in https://github.com/Blackzxy/LoGAH . | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,349 |
2406.03333 | A Flexible Recursive Network for Video Stereo Matching Based on Residual Estimation | Due to the high similarity of disparity between consecutive frames in video sequences, the area where disparity changes is defined as the residual map, which can be calculated. Based on this, we propose RecSM, a network based on residual estimation with a flexible recursive structure for video stereo matching. The RecSM network accelerates stereo matching using a Multi-scale Residual Estimation Module (MREM), which employs the temporal context as a reference and rapidly calculates the disparity for the current frame by computing only the residual values between the current and previous frames. To further reduce the error of estimated disparities, we use the Disparity Optimization Module (DOM) and Temporal Attention Module (TAM) to enforce constraints between each module, and together with MREM, form a flexible Stackable Computation Structure (SCS), which allows for the design of different numbers of SCS based on practical scenarios. Experimental results demonstrate that with a stack count of 3, RecSM achieves a 4x speed improvement compared to ACVNet, running at 0.054 seconds based on one NVIDIA RTX 2080TI GPU, with an accuracy decrease of only 0.7%. Code is available at https://github.com/Y0uchenZ/RecSM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 461,194 |
0709.1227 | Efficient Algorithms for Node Disjoint Subgraph Homeomorphism Determination | Recently, great efforts have been dedicated to research on the management of large scale graph based data such as WWW, social networks, biological networks. In the study of graph based data management, node disjoint subgraph homeomorphism relation between graphs is more suitable than (sub)graph isomorphism in many cases, especially in those cases that node skipping and node mismatching are allowed. However, no efficient node disjoint subgraph homeomorphism determination (ndSHD) algorithms have been available. In this paper, we propose two computationally efficient ndSHD algorithms based on state spaces searching with backtracking, which employ many heuristics to prune the search spaces. Experimental results on synthetic data sets show that the proposed algorithms are efficient, require relatively little time in most of the testing cases, can scale to large or dense graphs, and can accommodate more complex fuzzy matching cases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 648 |
2306.10346 | Fast Fourier Inception Networks for Occluded Video Prediction | Video prediction is a pixel-level task that generates future frames by employing the historical frames. There often exist continuous complex motions, such as object overlapping and scene occlusion in video, which poses great challenges to this task. Previous works either fail to well capture the long-term temporal dynamics or do not handle the occlusion masks. To address these issues, we develop the fully convolutional Fast Fourier Inception Networks for video prediction, termed \textit{FFINet}, which includes two primary components, \ie, the occlusion inpainter and the spatiotemporal translator. The former adopts the fast Fourier convolutions to enlarge the receptive field, such that the missing areas (occlusion) with complex geometric structures are filled by the inpainter. The latter employs the stacked Fourier transform inception module to learn the temporal evolution by group convolutions and the spatial movement by channel-wise Fourier convolutions, which captures both the local and the global spatiotemporal features. This encourages generating more realistic and high-quality future frames. To optimize the model, the recovery loss is imposed to the objective, \ie, minimizing the mean square error between the ground-truth frame and the recovery frame. Both quantitative and qualitative experimental results on five benchmarks, including Moving MNIST, TaxiBJ, Human3.6M, Caltech Pedestrian, and KTH, have demonstrated the superiority of the proposed approach. Our code is available at GitHub. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 374,193 |
2403.09599 | Logical Discrete Graphical Models Must Supplement Large Language Models for Information Synthesis | Given the emergent reasoning abilities of large language models, information retrieval is becoming more complex. Rather than just retrieve a document, modern information retrieval systems advertise that they can synthesize an answer based on potentially many different documents, conflicting data sources, and using reasoning. We review recent literature and argue that the large language model has crucial flaws that prevent it, on its own, from ever constituting general intelligence or answering general information synthesis requests. This review shows that the following are problems for large language models: hallucinations, complex reasoning, planning under uncertainty, and complex calculations. We outline how logical discrete graphical models can solve all of these problems, and outline a method of training a logical discrete model from unlabeled text. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 437,830 |
2412.19542 | Interacted Object Grounding in Spatio-Temporal Human-Object Interactions | Spatio-temporal Human-Object Interaction (ST-HOI) understanding aims at detecting HOIs from videos, which is crucial for activity understanding. However, existing whole-body-object interaction video benchmarks overlook the truth that open-world objects are diverse; that is, they usually provide limited and predefined object classes. Therefore, we introduce a new open-world benchmark: Grounding Interacted Objects (GIO), including 1,098 interacted object classes and 290K interacted object box annotations. Accordingly, an object grounding task is proposed expecting vision systems to discover interacted objects. Even though today's detectors and grounding methods have succeeded greatly, they perform unsatisfactorily in localizing diverse and rare objects in GIO. This profoundly reveals the limitations of current vision systems and poses a great challenge. Thus, we explore leveraging spatio-temporal cues to address object grounding and propose a 4D question-answering framework (4D-QA) to discover interacted objects from diverse videos. Our method demonstrates significant superiority in extensive experiments compared to current baselines. Data and code will be publicly available at https://github.com/DirtyHarryLYL/HAKE-AVA. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 520,892 |
2208.10762 | Depth Map Decomposition for Monocular Depth Estimation | We propose a novel algorithm for monocular depth estimation that decomposes a metric depth map into a normalized depth map and scale features. The proposed network is composed of a shared encoder and three decoders, called G-Net, N-Net, and M-Net, which estimate gradient maps, a normalized depth map, and a metric depth map, respectively. M-Net learns to estimate metric depths more accurately using relative depth features extracted by G-Net and N-Net. The proposed algorithm has the advantage that it can use datasets without metric depth labels to improve the performance of metric depth estimation. Experimental results on various datasets demonstrate that the proposed algorithm not only provides competitive performance to state-of-the-art algorithms but also yields acceptable results even when only a small amount of metric depth data is available for its training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 314,192 |
2007.12578 | Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning | Computational histopathology image diagnosis is becoming increasingly popular and important, where images are segmented or classified for disease diagnosis by computers. While pathologists do not struggle with color variations in slides, computational solutions usually suffer from this critical issue. To address the issue of color variations in histopathology images, this study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on generative adversarial networks. By incorporating structure-preservation metrics and feedback from an auxiliary diagnosis net into learning, medical-relevant information presented by image texture, structure, and chroma-contrast features is preserved in color-normalized images. In particular, the treatment of chromatic image content in our DSCSI-GAN model helps achieve noticeable normalization improvements in image regions where stains mix due to co-localization of histological substances. Extensive experimentation on public histopathology image sets indicates that our methods outperform prior arts in terms of generating more stain-consistent images, better preserving histological information in images, and obtaining significantly higher learning efficiency. Our Python implementation is published at https://github.com/hanwen0529/DSCSI-GAN. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 188,863 |
2303.13757 | Structural Imbalance Aware Graph Augmentation Learning | Graph machine learning (GML) has made great progress in node classification, link prediction, graph classification and so on. However, graphs in reality are often structurally imbalanced, that is, only a few hub nodes have a denser local structure and higher influence. The imbalance may compromise the robustness of existing GML models, especially in learning tail nodes. This paper proposes a selective graph augmentation method (SAug) to solve this problem. Firstly, a Pagerank-based sampling strategy is designed to identify hub nodes and tail nodes in the graph. Secondly, a selective augmentation strategy is proposed, which drops the noisy neighbors of hub nodes on one side, and discovers the latent neighbors and generates pseudo neighbors for tail nodes on the other side. It can also alleviate the structural imbalance between two types of nodes. Finally, a GNN model will be retrained on the augmented graph. Extensive experiments demonstrate that SAug can significantly improve the backbone GNNs and achieve superior performance to its competitors of graph augmentation methods and hub/tail aware methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 353,806 |
2112.14893 | Reversible Upper Confidence Bound Algorithm to Generate Diverse Optimized Candidates | Most algorithms for the multi-armed bandit problem in reinforcement learning aim to maximize the expected reward and are thus useful for finding the single optimized candidate with the highest reward (function value) in diverse applications (e.g., AlphaGo). However, in some typical application scenarios such as drug discovery, the aim is to search for a diverse set of candidates with high reward. Here we propose a reversible upper confidence bound (rUCB) algorithm for this purpose, and demonstrate its application in virtual screening of intrinsically disordered proteins (IDPs). It is shown that rUCB greatly reduces the query times while achieving both high accuracy and low performance loss. The rUCB may have potential applications in multipoint optimization and other reinforcement-learning cases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 273,644 |
2102.06018 | Transparent FPGA Acceleration with TensorFlow | Today, artificial neural networks are one of the major drivers of progress in machine learning. This has particularly affected the development of neural network accelerating hardware. However, since most of these architectures require specialized toolchains, there is a certain amount of additional effort for developers each time they want to make use of a new deep learning accelerator. Furthermore, the flexibility of the device is bound to the architecture itself, as well as to the functionality of the runtime environment. In this paper we propose a toolflow using TensorFlow as the frontend, thus offering developers the opportunity of using a familiar environment. On the backend we use an FPGA, which is addressable via an HSA runtime environment. In this way we are able to hide the complexity of controlling new hardware from the user, while at the same time maintaining a high amount of flexibility. This can be achieved by our HSA toolflow, since the hardware is not statically configured with the structure of the network. Instead, it can be dynamically reconfigured during runtime with the respective kernels executed by the network, and simultaneously from other sources, e.g., OpenCL/OpenMP. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 219,613 |
2410.18457 | Integrating Deep Feature Extraction and Hybrid ResNet-DenseNet Model for Multi-Class Abnormality Detection in Endoscopic Images | This paper presents a deep learning framework for the multi-class classification of gastrointestinal abnormalities in Video Capsule Endoscopy (VCE) frames. The aim is to automate the identification of ten GI abnormality classes, including angioectasia, bleeding, and ulcers, thereby reducing the diagnostic burden on gastroenterologists. Utilizing an ensemble of DenseNet and ResNet architectures, the proposed model achieves an overall accuracy of 94% across a well-structured dataset. Precision scores range from 0.56 for erythema to 1.00 for worms, with recall rates peaking at 98% for normal findings. This study emphasizes the importance of robust data preprocessing techniques, including normalization and augmentation, in enhancing model performance. The contributions of this work lie in developing an effective AI-driven tool that streamlines the diagnostic process in gastroenterology, ultimately improving patient care and clinical outcomes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 501,893 |
2001.00991 | Human-robot co-manipulation of extended objects: Data-driven models and control from analysis of human-human dyads | Human teams are able to easily perform collaborative manipulation tasks. However, simultaneous manipulation of an extended object by a robot and a human remains a difficult task with existing methods from the literature. Our approach in this paper is to use data from human-human dyad experiments to determine motion intent, which we use for a physical human-robot co-manipulation task. We first present and analyze data from human-human dyads performing co-manipulation tasks. We show that our human-human dyad data has interesting trends, including that interaction forces are non-negligible compared to the force required to accelerate an object and that the beginning of a lateral movement is characterized by distinct torque triggers from the leader of the dyad. We also examine different metrics to quantify the performance of different dyads. We then develop a deep neural network based on motion data from human-human trials to predict human intent based on past motion. We then show how force and motion data can be used as a basis for robot control in a human-robot dyad. Finally, we compare the performance of two controllers for human-robot co-manipulation to human-human dyad performance. | true | false | false | false | true | false | true | true | false | false | true | false | false | false | false | false | false | false | 159,366 |
2410.12881 | MIND: Math Informed syNthetic Dialogues for Pretraining LLMs | The utility of synthetic data to enhance pretraining data quality, and hence to improve downstream task accuracy, has been widely explored in recent large language models (LLMs). Yet these approaches fall short on complex, multi-hop, and mathematical reasoning tasks, as the synthetic data typically fails to add complementary knowledge to the existing raw corpus. In this work, we propose a novel large-scale and diverse Math Informed syNthetic Dialogue (MIND) generation method that improves the mathematical reasoning ability of LLMs. Specifically, using MIND, we generate synthetic conversations based on OpenWebMath (OWM), resulting in a new math corpus, MIND-OWM. Our experiments with different conversational settings reveal that incorporating knowledge gaps between dialog participants is essential for generating high-quality math data. We further identify an effective way to format and integrate synthetic and raw data during pretraining to maximize the gain in mathematical reasoning, emphasizing the need to restructure raw data rather than use it as-is. Compared to pretraining just on raw data, a model pretrained on MIND-OWM shows a significant boost in mathematical reasoning (GSM8K: +13.42%, MATH: +2.30%), including superior performance in specialized knowledge (MMLU: +4.55%, MMLU-STEM: +4.28%) and general purpose reasoning tasks (GENERAL REASONING: +2.51%). | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 499,265 |
2407.02056 | Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation | Self-consistency (SC), leveraging multiple samples from LLMs, shows significant gains on various reasoning tasks but struggles with free-form generation due to the difficulty of aggregating answers. Its variants, UCS and USC, rely on sample selection or voting mechanisms to improve output quality. These methods, however, face limitations due to their inability to fully utilize the nuanced consensus knowledge present within multiple candidate samples, often resulting in suboptimal outputs. We propose Fine-Grained Self-Consistency (FSC) to address these limitations by extracting and integrating segment-level commonalities from candidate samples, enhancing the performance of LLMs both in open-ended and reasoning tasks. Based on this, we present two additional strategies: candidate filtering, which enhances overall quality by identifying highly similar candidate sets, and merging, which reduces input token requirements by combining similar samples. The effectiveness of FSC is demonstrated through extensive experiments on various tasks, including summarization, code generation, and mathematical reasoning, using GPT-3.5-turbo and GPT-4. The results indicate significant improvements over baseline methods, showcasing the potential of FSC to optimize output quality by effectively synthesizing fine-grained consensus knowledge from multiple samples. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 469,566 |
2005.07675 | How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy | We consider the problem of hiding wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect whether any transmission of interest is present or not. There exists one transmitter that transmits to its receiver in the presence of an eavesdropper, while a cooperative jammer (CJ) transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. The CJ puts an upper bound on the strength of perturbation signal to limit its impact on the bit error rate (BER) at the receiver. We show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with high probability while increasing the BER only slightly. On the other hand, the CJ cannot fool the eavesdropper by simply transmitting Gaussian noise as in conventional jamming and instead needs to craft perturbation signals built by adversarial machine learning to enable covert communications. Our results show that signals with different modulation types and eventually 5G communications can be effectively hidden from an eavesdropper even if it is equipped with a DL classifier to detect transmissions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 177,347 |
2307.16366 | Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans | In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features leads to better performance than using a single modality alone in building predictive models for AD diagnosis. However, current multi-modal approaches in deep learning, based on sMRI and PET, are mostly limited to convolutional neural networks, which do not facilitate integration of both image and phenotypic information of subjects. We propose to use graph neural networks (GNN) that are designed to deal with problems in non-Euclidean domains. In this study, we demonstrate how brain networks can be created from sMRI or PET images and be used in a population graph framework that can combine phenotypic information with imaging features of these brain networks. Then, we present a multi-modal GNN framework where each modality has its own branch of GNN and a technique is proposed to combine the multi-modal data at both the level of node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch and produce a final prediction. As multi-modal data become available, multi-source, multi-modal diagnosis is the trend in AD research. We conducted explorative experiments based on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of phenotypic information on diagnostic performance. Results from experiments demonstrated that our proposed multi-modal approach improves performance for AD diagnosis; this study also provides a technical reference and supports the need for multivariate multi-modal diagnosis methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,577 |
2309.14737 | Volumetric Semantically Consistent 3D Panoptic Mapping | We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating comprehensive, accurate, and efficient semantic 3D maps suitable for autonomous agents in unstructured environments. The proposed approach is based on a Voxel-TSDF representation used in recent algorithms. It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions. Further improvements are achieved by graph optimization-based semantic labeling and instance refinement. The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics. We also highlight a pitfall in the evaluation of recent studies: using the ground truth trajectory as input instead of a SLAM-estimated one substantially affects the accuracy, creating a large gap between the reported results and the actual performance on real-world data. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 394,719 |
1810.12503 | Unsupervised Meta-path Reduction on Heterogeneous Information Networks | Heterogeneous Information Network (HIN) has attracted much attention due to its wide applicability in a variety of data mining tasks, especially for tasks with multi-typed objects. A potentially large number of meta-paths can be extracted from heterogeneous networks, providing abundant semantic knowledge. Though a variety of meta-paths can be defined, many of them are redundant. Reducing the number of meta-paths can enhance effectiveness, since redundant meta-paths introduce interfering linkages into the task. Moreover, the reduced meta-paths can reflect the characteristics of the heterogeneous network. Previous endeavors try to reduce the number of meta-paths under the guidance of supervision information. Nevertheless, supervision information is expensive and may not always be available. In this paper, we propose a novel algorithm, SPMR (Semantic Preserving Meta-path Reduction), to reduce a set of pre-defined meta-paths in an unsupervised setting. The proposed method is able to evaluate a set of meta-paths to maximally preserve the semantics of the original meta-paths after reduction. Experimental results show that SPMR can select a succinct subset of meta-paths which can achieve comparable or even better performance with fewer meta-paths. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 111,782 |
2304.02233 | Ericson: An Interactive Open-Domain Conversational Search Agent | Open-domain conversational search (ODCS) aims to provide valuable, up-to-date information, while maintaining natural conversations to help users refine and ultimately answer information needs. However, creating an effective and robust ODCS agent is challenging. In this paper, we present a fully functional ODCS system, Ericson, which includes state-of-the-art question answering and information retrieval components, as well as intent inference and dialogue management models for proactive question refinement and recommendations. Our system was stress-tested in the Amazon Alexa Prize by engaging in live conversations with thousands of Alexa users, thus providing an empirical basis for the analysis of ODCS systems in real settings. Our interaction data analysis revealed that accurate intent classification, encouraging user engagement, and careful proactive recommendations contribute most to users' satisfaction. Our study further identifies limitations of existing search techniques, and can serve as a building block for the next generation of ODCS agents. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 356,371 |
2208.07918 | Ex-Ante Assessment of Discrimination in Dataset | Data owners face increasing liability for how the use of their data could harm under-privileged communities. Stakeholders would like to identify the characteristics of data that lead to algorithms being biased against particular demographic groups, for example, defined by their race, gender, age, and/or religion. Specifically, we are interested in identifying subsets of the feature space where the ground truth response function from features to observed outcomes differs across demographic groups. To this end, we propose FORESEE, a FORESt of decision trEEs algorithm, which generates a score that captures how likely an individual's response varies with sensitive attributes. Empirically, we find that our approach allows us to identify the individuals who are most likely to be misclassified by several classifiers, including Random Forest, Logistic Regression, Support Vector Machine, and k-Nearest Neighbors. The advantage of our approach is that it allows stakeholders to characterize risky samples that may contribute to discrimination, as well as use FORESEE to estimate the risk of upcoming samples. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 313,185 |
1210.4469 | A Rule-based Model of a Hypothetical Zombie Outbreak: Insights on the role of emotional factors during behavioral adaptation of an artificial population | Models of infectious diseases have been developed since the first half of the twentieth century. Most models have not considered the role that individual emotional factors may play in the population's behavioral adaptation during the spread of a pandemic disease. Considering that local interactions among individuals generate patterns that, at a large scale, govern the action of masses, we have studied the behavioral adaptation of a population induced by the spread of an infectious disease. Therefore, we have developed a rule-based model of a hypothetical zombie outbreak, written in Kappa language, and simulated using Gillespie's stochastic approach. Our study addresses the specificity and heterogeneity of the system at the individual level, a highly desirable characteristic, mostly overlooked in classic epidemic models. Together with the basic elements of a typical epidemiological model, our model includes an individual representation of the disease progression and the traveling of agents among affected cities. It also introduces an approximation to measure the effect of panic in the population as a function of the individual situational awareness. In addition, the effect of two possible countermeasures to overcome the zombie threat is considered: the availability of medical treatment and the deployment of special armed forces. However, due to the special characteristics of this hypothetical infectious disease, even using exaggerated numbers of countermeasures, only a small percentage of the population can be saved at the end of the simulations. As expected from a rule-based model approach, the global dynamics of our model resulted primarily governed by the mechanistic description of local interactions occurring at the individual level. As a whole, people's situational awareness resulted essential to modulate the inner dynamics of the system. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 19,142 |
2201.11691 | Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences | Hyperdimensional computing (HDC), also known as vector symbolic architectures (VSA), is a computing framework used within artificial intelligence and cognitive computing that operates with distributed vector representations of large fixed dimensionality. A critical step for designing the HDC/VSA solutions is to obtain such representations from the input data. Here, we focus on sequences and propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant to the sequence shift. These properties are enabled by forming representations of sequence positions using recursive binding and superposition operations. The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity. The obtained results are on a par with more sophisticated approaches from the literature. The proposed transformation was designed for the HDC/VSA model known as Fourier Holographic Reduced Representations. However, it can be adapted to some other HDC/VSA models. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 277,374 |
1906.04787 | Power Gradient Descent | The development of machine learning is promoting the search for fast and stable minimization algorithms. To this end, we suggest a change in the current gradient descent methods that should speed up the motion in flat regions and slow it down in steep directions of the function to minimize. It is based on a "power gradient", in which each component of the gradient is replaced by its sign-preserving $H$-th power, with $0<H<1$. We test three modern gradient descent methods fed with this variant and with standard gradients, finding the new version to achieve significantly better performance for the Nesterov accelerated gradient and AMSGrad. We also propose an effective new take on the ADAM algorithm, which includes power gradients with varying $H$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 134,838 |
2412.10730 | MAL: Cluster-Masked and Multi-Task Pretraining for Enhanced xLSTM Vision Performance | The Long Short-Term Memory (LSTM) networks have traditionally faced challenges in scaling and effectively capturing complex dependencies in visual tasks. The xLSTM architecture has emerged to address these limitations, incorporating exponential gating and a parallel matrix memory structure to enhance performance and scalability. Despite these advancements, the potential of xLSTM in visual computing has not been fully realized, particularly in leveraging autoregressive techniques for improved feature extraction. In this paper, we introduce MAL (Cluster-Masked and Multi-Task Pretraining for Enhanced xLSTM Vision Performance), a novel framework that enhances xLSTM's capabilities through innovative pretraining strategies. We propose a cluster-masked masking method that significantly improves local feature capture and optimizes image scanning efficiency. Additionally, our universal encoder-decoder pretraining approach integrates multiple tasks, including image autoregression, depth estimation, and image segmentation, thereby enhancing the model's adaptability and robustness across diverse visual tasks. Our experimental results demonstrate that MAL surpasses traditional supervised models and fully leverages the scaling potential of xLSTM, setting a new benchmark in visual task performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 517,077 |
2304.07254 | Dynamic Mobile-Former: Strengthening Dynamic Convolution with Attention and Residual Connection in Kernel Space | We introduce Dynamic Mobile-Former (DMF), which maximizes the capabilities of dynamic convolution by harmonizing it with efficient operators. Our Dynamic Mobile-Former effectively utilizes the advantages of Dynamic MobileNet (MobileNet equipped with dynamic convolution) using global information from light-weight attention. A Transformer in Dynamic Mobile-Former only requires a few randomly initialized tokens to calculate global features, making it computationally efficient. And a bridge between Dynamic MobileNet and the Transformer allows for bidirectional integration of local and global features. We also simplify the optimization process of vanilla dynamic convolution by splitting the convolution kernel into an input-agnostic kernel and an input-dependent kernel. This allows for optimization in a wider kernel space, resulting in enhanced capacity. By integrating lightweight attention and enhanced dynamic convolution, our Dynamic Mobile-Former achieves not only high efficiency but also strong performance. We benchmark Dynamic Mobile-Former on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection, and instance segmentation. For example, our DMF hits a top-1 accuracy of 79.4% on ImageNet-1K, much higher than PVT-Tiny by 4.3% with only 1/4 the FLOPs. Additionally, our proposed DMF-S model performs well on challenging vision datasets such as COCO, achieving a 39.0% mAP, which is 1% higher than that of the Mobile-Former 508M model, despite using 3 GFLOPs fewer computations. Code and models are available at https://github.com/ysj9909/DMF | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 358,283 |
2003.09990 | Variations on a Demonic Theme: Szilard's Other Engines | Szilard's now-famous single-molecule engine was only the first of three constructions he introduced in 1929 to resolve several paradoxes arising from Maxwell's demon. We analyze Szilard's remaining two demon models. We show that the second one, though a markedly different implementation employing a population of distinct molecular species and semi-permeable membranes, is informationally and thermodynamically equivalent to an ideal gas of the single-molecule engines. Since it is a gas of noninteracting particles one concludes, following Boyd and Crutchfield, that (i) it reduces to a chaotic dynamical system---called the Szilard Map, a composite of three piecewise linear maps that implement the thermodynamic transformations of measurement, control, and erasure; (ii) its transitory functioning as an engine that converts disorganized heat energy to work is governed by the Kolmogorov-Sinai entropy rate; (iii) the demon's minimum necessary "intelligence" for optimal functioning is given by the engine's statistical complexity, and (iv) its functioning saturates thermodynamic bounds and so it is a minimal, optimal implementation. We show that Szilard's third model is rather different and addresses the fundamental issue, raised by the first two, of measurement in and by thermodynamic systems and entropy generation. Taken together, Szilard's suite of constructions lays out a range of possible realizations of Maxwellian demons that anticipated by almost two decades Shannon's and Wiener's concept of information as surprise and cybernetics' notion of functional information. This, in turn, gives new insight into engineering implementations of novel nanoscale information engines that leverage microscopic fluctuations and into the diversity of thermodynamic mechanisms and intrinsic computation harnessed in physical, molecular, biochemical, and biological systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 169,204 |
2401.06437 | 3D-PreMise: Can Large Language Models Generate 3D Shapes with Sharp
Features and Parametric Control? | Recent advancements in implicit 3D representations and generative models have markedly propelled the field of 3D object generation forward. However, it remains a significant challenge to accurately model geometries with defined sharp features under parametric controls, which is crucial in fields like industrial design and manufacturing. To bridge this gap, we introduce a framework that employs Large Language Models (LLMs) to generate text-driven 3D shapes, manipulating 3D software via program synthesis. We present 3D-PreMise, a dataset specifically tailored for 3D parametric modeling of industrial shapes, designed to explore state-of-the-art LLMs within our proposed pipeline. Our work reveals effective generation strategies and delves into the self-correction capabilities of LLMs using a visual interface. Our work highlights both the potential and limitations of LLMs in 3D parametric modeling for industrial applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 421,158 |
2403.10733 | Incentive-Compatible and Distributed Allocation for Robotic Service
Provision Through Contract Theory | Robot allocation plays an essential role in facilitating robotic service provision across various domains. Yet the increasing number of users and the uncertainties regarding the users' true service requirements have posed challenges for the service provider in effectively allocating service robots to users to meet their needs. In this work, we first propose a contract-based approach to enable incentive-compatible service selection so that the service provider can effectively reduce the user's service uncertainties for precise service provision. Then, we develop a distributed allocation algorithm that incorporates robot dynamics and collision avoidance to allocate service robots and address scalability concerns associated with increasing numbers of service robots and users. We conduct simulations in eight scenarios to validate our approach. Comparative analysis against the robust allocation paradigm and two alternative uncertainty reduction strategies demonstrates that our approach achieves better allocation efficiency and accuracy. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 438,324 |
2104.04120 | Self-Weighted Ensemble Method to Adjust the Influence of Individual
Models based on Reliability | Image classification technology and performance based on Deep Learning have already achieved high standards. Nevertheless, many efforts have been made to improve the stability of classification via ensembling. However, the existing ensemble method has a limitation in that it requires extra effort, including time consumption, to find the weight for each model output. In this paper, we propose a simple but improved ensemble method, named Self-Weighted Ensemble (SWE), which assigns the weight of each model according to its verification reliability. The proposed ensemble method, SWE, reduces the overall effort of constructing a classification system with varied classifiers. The performance using SWE is 0.033% higher than the conventional ensemble method. Also, the percent of performance superiority to the previous model is up to 73.333% (ratio of 8:22). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,295 |
2211.03160 | Multi-GPU thermal lattice Boltzmann simulations using OpenACC and MPI | We assess the performance of the hybrid Open Accelerator (OpenACC) and Message Passing Interface (MPI) approach for multi-graphics processing units (GPUs) accelerated thermal lattice Boltzmann (LB) simulation. The OpenACC accelerates computation on a single GPU, and the MPI synchronizes the information between multiple GPUs. With a single GPU, the two-dimension (2D) simulation achieved 1.93 billion lattice updates per second (GLUPS) with a grid number of $8193^{2}$, and the three-dimension (3D) simulation achieved 1.04 GLUPS with a grid number of $385^{3}$, which is more than 76% of the theoretical maximum performance. On multi-GPUs, we adopt block partitioning, overlapping communications with computations, and concurrent computation to optimize parallel efficiency. We show that in the strong scaling test, using 16 GPUs, the 2D simulation achieved 30.42 GLUPS and the 3D simulation achieved 14.52 GLUPS. In the weak scaling test, the parallel efficiency remains above 99% up to 16 GPUs. Our results demonstrated that, with improved data and task management, the hybrid OpenACC and MPI technique is promising for thermal LB simulation on multi-GPUs. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 328,847 |
1001.1009 | Multi-path Probabilistic Available Bandwidth Estimation through Bayesian
Active Learning | Knowing the largest rate at which data can be sent on an end-to-end path such that the egress rate is equal to the ingress rate with high probability can be very practical when choosing transmission rates in video streaming or selecting peers in peer-to-peer applications. We introduce probabilistic available bandwidth, which is defined in terms of ingress rates and egress rates of traffic on a path, rather than in terms of capacity and utilization of the constituent links of the path like the standard available bandwidth metric. In this paper, we describe a distributed algorithm, based on a probabilistic graphical model and Bayesian active learning, for simultaneously estimating the probabilistic available bandwidth of multiple paths through a network. Our procedure exploits the fact that each packet train provides information not only about the path it traverses, but also about any path that shares a link with the monitored path. Simulations and PlanetLab experiments indicate that this process can dramatically reduce the number of probes required to generate accurate estimates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 5,277 |
2306.07644 | SRATTA : Sample Re-ATTribution Attack of Secure Aggregation in Federated
Learning | We consider a cross-silo federated learning (FL) setting where a machine learning model with a fully connected first layer is trained between different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA). We present SRATTA, an attack relying only on aggregated models which, under realistic assumptions, (i) recovers data samples from the different clients, and (ii) groups data samples coming from the same client together. While sample recovery has already been explored in an FL setting, the ability to group samples per client, despite the use of SA, is novel. This poses a significant unforeseen security threat to FL and effectively breaks SA. We show that SRATTA is both theoretically grounded and can be used in practice on realistic models and datasets. We also propose counter-measures, and claim that clients should play an active role to guarantee their privacy during training. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 373,095 |
2108.09020 | Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data | Continual learning is the problem of learning and retaining knowledge through time over multiple tasks and environments. Research has primarily focused on the incremental classification setting, where new tasks/classes are added at discrete time intervals. Such an "offline" setting does not evaluate the ability of agents to learn effectively and efficiently, since an agent can perform multiple learning epochs without any time limitation when a task is added. We argue that "online" continual learning, where data is a single continuous stream without task boundaries, enables evaluating both information retention and online learning efficacy. In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online. Trained models are later evaluated on historical data to assess information retention. We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts. Through a large-scale analysis, we identify critical and previously unobserved phenomena of gradient-based optimization in continual learning, and propose effective strategies for improving gradient-based online continual learning with real data. The source code and dataset are available in: https://github.com/IntelLabs/continuallearning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 251,467 |
2408.03847 | GAIA -- A Large Language Model for Advanced Power Dispatch | Power dispatch is essential for providing stable, cost-effective, and eco-friendly electricity to society. However, traditional methods falter as power systems grow in scale and complexity, struggling with multitasking, swift problem-solving, and human-machine collaboration. This paper introduces GAIA, the pioneering Large Language Model (LLM) tailored for power dispatch tasks. We have developed a novel dataset construction technique that harnesses a range of data sources to fine-tune GAIA for optimal performance in this domain. This approach streamlines LLM training, allowing for the seamless integration of multidimensional data in power system management. Additionally, we have crafted specialized prompt strategies to boost GAIA's input-output efficiency in dispatch scenarios. When evaluated on the ElecBench benchmark, GAIA surpasses the baseline model LLaMA2 on multiple metrics. In practical applications, GAIA has demonstrated its ability to enhance decision-making processes, improve operational efficiency, and facilitate better human-machine interactions in power dispatch operations. This paper expands the application of LLMs to power dispatch and validates their practical utility, paving the way for future innovations in this field. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 479,169 |
2301.12906 | Curvature Filtrations for Graph Generative Model Evaluation | Graph generative model evaluation necessitates understanding differences between graphs on the distributional level. This entails being able to harness salient attributes of graphs in an efficient manner. Curvature constitutes one such property that has recently proved its utility in characterising graphs. Its expressive properties, stability, and practical utility in model evaluation remain largely unexplored, however. We combine graph curvature descriptors with emerging methods from topological data analysis to obtain robust, expressive descriptors for evaluating graph generative models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 342,716 |
1304.2730 | Structuring Causal Tree Models with Continuous Variables | This paper considers the problem of invoking auxiliary, unobservable variables to facilitate the structuring of causal tree models for a given set of continuous variables. Paralleling the treatment of bi-valued variables in [Pearl 1986], we show that if a collection of coupled variables are governed by a joint normal distribution and a tree-structured representation exists, then both the topology and all internal relationships of the tree can be uncovered by observing pairwise dependencies among the observed variables (i.e., the leaves of the tree). Furthermore, the conditions for normally distributed variables are less restrictive than those governing bi-valued variables. The result extends the applications of causal tree models which were found useful in evidential reasoning tasks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,739 |
2210.12213 | SpaBERT: A Pretrained Language Model from Geographic Data for Geo-Entity
Representation | Named geographic entities (geo-entities for short) are the building blocks of many geographic datasets. Characterizing geo-entities is integral to various application domains, such as geo-intelligence and map comprehension, while a key challenge is to capture the spatial-varying context of an entity. We hypothesize that we shall know the characteristics of a geo-entity by its surrounding entities, similar to knowing word meanings by their linguistic context. Accordingly, we propose a novel spatial language model, SpaBERT, which provides a general-purpose geo-entity representation based on neighboring entities in geospatial data. SpaBERT extends BERT to capture linearized spatial context, while incorporating a spatial coordinate embedding mechanism to preserve spatial relations of entities in the 2-dimensional space. SpaBERT is pretrained with masked language modeling and masked entity prediction tasks to learn spatial dependencies. We apply SpaBERT to two downstream tasks: geo-entity typing and geo-entity linking. Compared with the existing language models that do not use spatial context, SpaBERT shows significant performance improvement on both tasks. We also analyze the entity representation from SpaBERT in various settings and the effect of spatial coordinate embedding. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 325,632 |
2208.04529 | Motif-based Graph Representation Learning with Application to Chemical
Molecules | This work considers the task of representation learning on the attributed relational graph (ARG). Both the nodes and edges in an ARG are associated with attributes/features allowing ARGs to encode rich structural information widely observed in real applications. Existing graph neural networks offer limited ability to capture complex interactions within local structural contexts, which hinders them from taking advantage of the expression power of ARGs. We propose Motif Convolution Module (MCM), a new motif-based graph representation learning technique to better utilize local structural information. The ability to handle continuous edge and node features is one of MCM's advantages over existing motif-based models. MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptron and/or message passing in graph neural networks. When compared with other graph learning approaches to classifying synthetic graphs, our approach is substantially better in capturing structural context. We also demonstrate the performance and explainability advantages of our approach by applying it to several molecular benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,139 |
1909.12807 | Research Commentary on Recommendations with Side Information: A Survey
and Research Directions | Recommender systems have become an essential tool to help resolve the information overload problem in recent decades. Traditional recommender systems, however, suffer from data sparsity and cold start problems. To address these issues, a great number of recommendation algorithms have been proposed to leverage side information of users or items (e.g., social network and item category), demonstrating a high degree of effectiveness in improving recommendation performance. This Research Commentary aims to provide a comprehensive and systematic survey of the recent research on recommender systems with side information. Specifically, we provide an overview of state-of-the-art recommendation algorithms with side information from two orthogonal perspectives. One involves the different methodologies of recommendation: the memory-based methods, latent factor, representation learning, and deep learning models. The other covers different representations of side information, including structural data (flat, network, and hierarchical features, and knowledge graphs); and non-structural data (text, image and video features). Finally, we discuss challenges and provide new potential directions in recommendation, along with the conclusion of this survey. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 147,222 |
1908.05204 | On The Evaluation of Machine Translation Systems Trained With
Back-Translation | Back-translation is a widely used data augmentation technique which leverages target monolingual data. However, its effectiveness has been challenged since automatic metrics such as BLEU only show significant improvements for test examples where the source itself is a translation, or translationese. This is believed to be due to translationese inputs better matching the back-translated training data. In this work, we show that this conjecture is not empirically supported and that back-translation improves translation quality of both naturally occurring text as well as translationese according to professional human translators. We provide empirical evidence to support the view that back-translation is preferred by humans because it produces more fluent outputs. BLEU cannot capture human preferences because references are translationese when source sentences are natural text. We recommend complementing BLEU with a language model score to measure fluency. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 141,663 |
2407.08489 | Projecting Points to Axes: Oriented Object Detection via Point-Axis
Representation | This paper introduces the point-axis representation for oriented object detection, emphasizing its flexibility and geometrically intuitive nature with two key components: points and axes. 1) Points delineate the spatial extent and contours of objects, providing detailed shape descriptions. 2) Axes define the primary directionalities of objects, providing essential orientation cues crucial for precise detection. The point-axis representation decouples location and rotation, addressing the loss discontinuity issues commonly encountered in traditional bounding box-based approaches. For effective optimization without introducing additional annotations, we propose the max-projection loss to supervise point set learning and the cross-axis loss for robust axis representation learning. Further, leveraging this representation, we present the Oriented DETR model, seamlessly integrating the DETR framework for precise point-axis prediction and end-to-end detection. Experimental results demonstrate significant performance improvements in oriented object detection tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,185 |
2401.15422 | A Survey on Data Augmentation in Large Model Era | Large models, encompassing large language and diffusion models, have shown exceptional promise in approximating human-level intelligence, garnering significant interest from both academic and industrial spheres. However, the training of these large models necessitates vast quantities of high-quality data, and with continuous updates to these models, the existing reservoir of high-quality data may soon be depleted. This challenge has catalyzed a surge in research focused on data augmentation methods. Leveraging large models, these data augmentation techniques have outperformed traditional approaches. This paper offers an exhaustive review of large model-driven data augmentation methods, adopting a comprehensive perspective. We begin by establishing a classification of relevant studies into three main categories: image augmentation, text augmentation, and paired data augmentation. Following this, we delve into various data post-processing techniques pertinent to large model-based data augmentation. Our discussion then expands to encompass the array of applications for these data augmentation methods within natural language processing, computer vision, and audio signal processing. We proceed to evaluate the successes and limitations of large model-based data augmentation across different scenarios. Concluding our review, we highlight prospective challenges and avenues for future exploration in the field of data augmentation. Our objective is to furnish researchers with critical insights, ultimately contributing to the advancement of more sophisticated large models. We consistently maintain the related open-source materials at: https://github.com/MLGroup-JLU/LLM-data-aug-survey. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 424,441 |
2308.16067 | Consensus of state of the art mortality prediction models: From
all-cause mortality to sudden death prediction | Worldwide, many millions of people die suddenly and unexpectedly each year, either with or without a prior history of cardiovascular disease. Such events are sparse (once in a lifetime), many victims will not have had prior investigations for cardiac disease and many different definitions of sudden death exist. Accordingly, sudden death is hard to predict. This analysis used NHS Electronic Health Records (EHRs) for people aged $\geq$50 years living in the Greater Glasgow and Clyde (GG\&C) region in 2010 (n = 380,000) to try to overcome these challenges. We investigated whether medical history, blood tests, prescription of medicines, and hospitalisations might, in combination, predict a heightened risk of sudden death. We compared the performance of models trained to predict either sudden death or all-cause mortality. We built six models for each outcome of interest: three taken from state-of-the-art research (BEHRT, Deepr and Deep Patient), and three of our own creation. We trained these using two different data representations: a language-based representation, and a sparse temporal matrix. We used global interpretability to understand the most important features of each model, and compared how much agreement there was amongst models using Rank Biased Overlap. It is challenging to account for correlated variables without increasing the complexity of the interpretability technique. We overcame this by clustering features into groups and comparing the most important groups for each model. We found the agreement between models to be much higher when accounting for correlated variables. Our analysis emphasises the challenge of predicting sudden death and emphasises the need for better understanding and interpretation of machine learning models applied to healthcare applications. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 388,891 |
1609.06268 | Semantic Similarity Strategies for Job Title Classification | Automatic and accurate classification of items enables numerous downstream applications in many domains. These applications can range from faceted browsing of items to product recommendations and big data analytics. In the online recruitment domain, we refer to classifying job ads to pre-defined or custom occupation categories as job title classification. A large-scale job title classification system can power various downstream applications such as semantic search, job recommendations and labor market analytics. In this paper, we discuss experiments conducted to improve our in-house job title classification system. The classification component of the system is composed of a two-stage coarse and fine level classifier cascade that classifies input text such as job title and/or job ads to one of the thousands of job titles in our taxonomy. To improve classification accuracy and effectiveness, we experiment with various semantic representation strategies such as average W2V vectors and document similarity measures such as Word Movers Distance (WMD). Our initial results show an overall improvement in accuracy of Carotene [1]. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 61,262 |
2205.12619 | Location-free Human Pose Estimation | Human pose estimation (HPE) usually requires large-scale training data to reach high performance. However, it is rather time-consuming to collect high-quality and fine-grained annotations for human body. To alleviate this issue, we revisit HPE and propose a location-free framework without supervision of keypoint locations. We reformulate the regression-based HPE from the perspective of classification. Inspired by the CAM-based weakly-supervised object localization, we observe that the coarse keypoint locations can be acquired through the part-aware CAMs but unsatisfactory due to the gap between the fine-grained HPE and the object-level localization. To this end, we propose a customized transformer framework to mine the fine-grained representation of human context, equipped with the structural relation to capture subtle differences among keypoints. Concretely, we design a Multi-scale Spatial-guided Context Encoder to fully capture the global human context while focusing on the part-aware regions and a Relation-encoded Pose Prototype Generation module to encode the structural relations. All these works together for strengthening the weak supervision from image-level category labels on locations. Our model achieves competitive performance on three datasets when only supervised at a category-level and importantly, it can achieve comparable results with fully-supervised methods with only 25\% location labels on MS-COCO and MPII. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 298,632 |
2404.03868 | Extract, Define, Canonicalize: An LLM-based Framework for Knowledge
Graph Construction | In this work, we are interested in automated methods for knowledge graph creation (KGC) from input text. Progress on large language models (LLMs) has prompted a series of recent works applying them to KGC, e.g., via zero/few-shot prompting. Despite successes on small domain-specific datasets, these models face difficulties scaling up to text common in many real-world applications. A principal issue is that, in prior methods, the KG schema has to be included in the LLM prompt to generate valid triplets; larger and more complex schemas easily exceed the LLMs' context window length. Furthermore, there are scenarios where a fixed pre-defined schema is not available and we would like the method to construct a high-quality KG with a succinct self-generated schema. To address these problems, we propose a three-phase framework named Extract-Define-Canonicalize (EDC): open information extraction followed by schema definition and post-hoc canonicalization. EDC is flexible in that it can be applied to settings where a pre-defined target schema is available and when it is not; in the latter case, it constructs a schema automatically and applies self-canonicalization. To further improve performance, we introduce a trained component that retrieves schema elements relevant to the input text; this improves the LLMs' extraction performance in a retrieval-augmented generation-like manner. We demonstrate on three KGC benchmarks that EDC is able to extract high-quality triplets without any parameter tuning and with significantly larger schemas compared to prior works. Code for EDC is available at https://github.com/clear-nus/edc. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 444,429 |
2206.03093 | Beyond spectral gap: The role of the topology in decentralized learning | In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset, and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the 'spectral gap' of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain that collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,148 |
2308.01117 | Optimization-Based Motion Planning for Autonomous Agricultural Vehicles
Turning in Constrained Headlands | Headland maneuvering is a crucial aspect of unmanned field operations for autonomous agricultural vehicles (AAVs). While motion planning for headland turning in open fields has been extensively studied and integrated into commercial auto-guidance systems, the existing methods primarily address scenarios with ample headland space and thus may not work in more constrained headland geometries. Commercial orchards often contain narrow and irregularly shaped headlands, which may include static obstacles, rendering the task of planning a smooth and collision-free turning trajectory difficult. To address this challenge, we propose an optimization-based motion planning algorithm for headland turning under geometrical constraints imposed by field geometry and obstacles. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 383,146 |
2109.00900 | Spatio-temporal-spectral-angular observation model that integrates
observations from UAV and mobile mapping vehicle for better urban mapping | In a complex urban scene, observation from a single sensor unavoidably leads to voids in observations, failing to describe urban objects in a comprehensive manner. In this paper, we propose a spatio-temporal-spectral-angular observation model to integrate observations from UAV and mobile mapping vehicle platform, realizing a joint, coordinated observation operation from both air and ground. We develop a multi-source remote sensing data acquisition system to effectively acquire multi-angle data of complex urban scenes. Multi-source data fusion solves the missing data problem caused by occlusion and achieves accurate, rapid, and complete collection of holographic spatial and temporal information in complex urban scenes. We carried out an experiment on Baisha Town, Chongqing, China and obtained multi-sensor, multi-angle data from UAV and mobile mapping vehicle. We first extracted the point cloud from UAV and then integrated the UAV and mobile mapping vehicle point cloud. The integrated results combined both the characteristic of UAV and mobile mapping vehicle point cloud, confirming the practicability of the proposed joint data acquisition platform and the effectiveness of spatio-temporal-spectral-angular observation model. Compared with the observation from UAV or mobile mapping vehicle alone, the integrated system provides an effective data acquisition solution towards comprehensive urban monitoring. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,268 |
2110.12894 | The Efficiency Misnomer | Model efficiency is a critical aspect of developing and deploying machine learning models. Inference time and latency directly affect the user experience, and some applications have hard requirements. In addition to inference costs, model training also has direct financial and environmental impacts. Although there are numerous well-established metrics (cost indicators) for measuring model efficiency, researchers and practitioners often assume that these metrics are correlated with each other and report only a few of them. In this paper, we thoroughly discuss common cost indicators, their advantages and disadvantages, and how they can contradict each other. We demonstrate how incomplete reporting of cost indicators can lead to partial conclusions and a blurred or incomplete picture of the practical considerations of different models. We further present suggestions to improve reporting of efficiency metrics. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 262,999
1804.05232 | An Unsupervised Approach to Detect Spam Campaigns that Use Botnets on
Twitter | In recent years, Twitter has seen a proliferation of automated accounts or bots that send spam, offer clickbait, compromise security using malware, and attempt to skew public opinion. Previous research estimates that around 9% to 17% of Twitter accounts are bots contributing to between 16% to 56% of tweets on the medium. This paper introduces an unsupervised approach to detect Twitter spam campaigns in real-time. The bot groups we detect tweet duplicate content with shortened embedded URLs over extended periods of time. Our experiments with the detection protocol reveal that bots consistently account for 10% to 50% of tweets generated from 7 popular URL shortening services on Twitter. More importantly, we discover that bots using shortened URLs are connected to large scale spam campaigns that control thousands of domains. There appear to be two distinct mechanisms used to control bot groups and we investigate both in this paper. Our detection system runs 24/7 and actively collects bots involved in spam campaigns and adds them to an evolving database of malicious bots. We make our database of detected bots available for query through a REST API so others can filter tweets from malicious bots to get high quality Twitter datasets for analysis. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 95,022 |
2106.00007 | DikpolaSat Mission: Improvement of Space Flight Performance and Optimal
Control Using Trained Deep Neural Network -- Trajectory Controller for Space
Objects Collision Avoidance | This paper introduces the DikpolaSat space mission, how this research fits into the mission, and the importance of having a trained DNN model instead of the usual GN&C functionality. This paper shows how the controller demonstration is carried out by having the spacecraft follow a desired path, specified in the referenced model. Improvements can be made by examining the route used to construct a DNN and understanding the effects of various activation functions on system efficiency. The obstacle avoidance algorithm is built into the control features to respond spontaneously using inputs from the neural network for collision avoidance while optimizing the modified trajectory. The action of a neural network to control the adaptive nature of the nonlinear mechanisms in the controller will make the control system capable of handling multiple nonlinear events and also uncertainties that have not been induced in the control algorithm. Multiple algorithms for optimizing flight controls and fuel consumption can be implemented using knowledge of flight dynamics in trajectory and also in the event of obstacle avoidance. This paper also explains how a DNN can learn to control the flight path and make the system more reliable with each launch, thereby improving the chances of predicting collisions of space objects. The data released from this research is used to design a more advanced DNN model capable of predicting other orbital events as well. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 237,953
1906.04379 | Band Attention Convolutional Networks For Hyperspectral Image
Classification | Redundancy and noise exist in the bands of hyperspectral images (HSIs). Thus, the ability to select suitable subsets of the hundreds of input bands is a desirable property for HSI classification methods. In this letter, a band attention module (BAM) is proposed to implement deep learning based HSI classification with the capacity of band selection or weighting. The proposed BAM can be seen as a plug-and-play complementary component of existing classification networks which fully considers the adverse effects caused by the redundancy of the bands when using convolutional neural networks (CNNs) for HSI classification. Unlike most deep learning methods used on HSIs, the band attention module, which is customized according to the characteristics of hyperspectral images, is embedded in ordinary CNNs for better performance. At the same time, unlike classical band selection or weighting methods, the proposed method achieves end-to-end training instead of separated stages. Experiments are carried out on two HSI benchmark datasets. Compared to some classical and advanced deep learning methods, numerical simulations under different evaluation criteria show that the proposed method has good performance. Last but not least, some advanced CNNs are combined with the proposed BAM for better performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 134,699
2003.02640 | Learning the sense of touch in simulation: a sim-to-real strategy for
vision-based tactile sensing | Data-driven approaches to tactile sensing aim to overcome the complexity of accurately modeling contact with soft materials. However, their widespread adoption is impaired by concerns about data efficiency and the capability to generalize when applied to various tasks. This paper focuses on both these aspects with regard to a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface. Accurate models for the soft materials and the camera projection, derived via state-of-the-art techniques in the respective domains, are employed to generate a dataset in simulation. A strategy is proposed to train a tailored deep neural network entirely from the simulation data. The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data, while showing promising generalization capabilities to unseen contact conditions. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 166,993 |
2210.03368 | Decentralized event-triggered estimation of nonlinear systems | We investigate the scenario where a perturbed nonlinear system transmits its output measurements to a remote observer via a packet-based communication network. The sensors are grouped into N nodes and each of these nodes decides when its measured data is transmitted over the network independently. The objective is to design both the observer and the local transmission policies in order to obtain accurate state estimates, while only sporadically using the communication network. In particular, given a general nonlinear observer designed in continuous-time satisfying an input-to-state stability property, we explain how to systematically design a dynamic event-triggering rule for each sensor node that avoids the use of a copy of the observer, thereby keeping local calculation simple. We prove the practical convergence property of the estimation error to the origin and we show that there exists a uniform strictly positive minimum inter-event time for each local triggering rule under mild conditions on the plant. The efficiency of the proposed techniques is illustrated on a numerical case study of a flexible robotic arm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 322,007 |
2002.04427 | Data User-Based Attribute-Based Encryption | Attribute-Based Encryption (ABE) has emerged as an information-centric public-key cryptographic system which allows a data owner to share data, according to access policy, with multiple data users based on the attributes they possess, without knowing their identities. In the original ABE schemes, a central authority administrates the system and issues secret keys to data users based on their attributes and both the owner and users need to trust a specific CA. However, in certain real-world applications, the data users would not trust anyone but themselves. For such situations, we introduce a new decentralization model of ABE, termed Data User-based ABE (DU-ABE), which is managed jointly by the data users. DU-ABE is the first decentralized ABE scheme that replaces the authorities with the data users without employing any other extra entities. | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | 163,606 |
2412.11664 | C3oT: Generating Shorter Chain-of-Thought without Compromising
Effectiveness | Generating Chain-of-Thought (CoT) before deriving the answer can effectively improve the reasoning capabilities of large language models (LLMs) and significantly improve the accuracy of the generated answer. However, in most cases, the length of the generated CoT is much longer than the desired final answer, which results in additional decoding costs. Furthermore, existing research has discovered that shortening the reasoning steps in CoT, even while preserving the key information, diminishes LLMs' abilities. These phenomena make it difficult to use LLMs and CoT in many real-world applications that only require the final answer and are sensitive to latency, such as search and recommendation. To reduce the costs of model decoding and shorten the length of the generated CoT, this paper presents $\textbf{C}$onditioned $\textbf{C}$ompressed $\textbf{C}$hain-of-$\textbf{T}$hought (C3oT), a CoT compression framework that involves a compressor to compress an original longer CoT into a shorter CoT while maintaining key information and interpretability, a conditioned training method to train LLMs with both longer CoT and shorter CoT simultaneously to learn the corresponding relationships between them, and a conditioned inference method to gain the reasoning ability learned from longer CoT by generating shorter CoT. We conduct experiments over four datasets from arithmetic and commonsense scenarios, showing that the proposed method is capable of compressing the length of generated CoT by up to more than 50% without compromising its effectiveness. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 517,520 |
2005.03596 | Physics-informed neural network for ultrasound nondestructive
quantification of surface breaking cracks | We introduce an optimized physics-informed neural network (PINN) trained to solve the problem of identifying and characterizing a surface breaking crack in a metal plate. PINNs are neural networks that can combine data and physics in the learning process by adding the residuals of a system of Partial Differential Equations to the loss function. Our PINN is supervised with realistic ultrasonic surface acoustic wave data acquired at a frequency of 5 MHz. The ultrasonic surface wave data is represented as a surface deformation on the top surface of a metal plate, measured by using the method of laser vibrometry. The PINN is physically informed by the acoustic wave equation and its convergence is sped up using adaptive activation functions. The adaptive activation function uses a scalable hyperparameter in the activation function, which is optimized to achieve best performance of the network as it changes dynamically the topology of the loss function involved in the optimization process. The usage of adaptive activation function significantly improves the convergence, notably observed in the current study. We use PINNs to estimate the speed of sound of the metal plate, which we do with an error of 1\%, and then, by allowing the speed of sound to be space dependent, we identify and characterize the crack as the positions where the speed of sound has decreased. Our study also shows the effect of sub-sampling of the data on the sensitivity of sound speed estimates. More broadly, the resulting model shows a promising deep neural network model for ill-posed inverse problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,203 |
2404.12062 | MIDGET: Music Conditioned 3D Dance Generation | In this paper, we introduce a MusIc conditioned 3D Dance GEneraTion model named MIDGET, based on the Dance motion Vector Quantised Variational AutoEncoder (VQ-VAE) model and the Motion Generative Pre-Training (GPT) model, to generate vibrant and high-quality dances that match the music rhythm. To tackle challenges in the field, we introduce three new components: 1) a pre-trained memory codebook based on the Motion VQ-VAE model to store different human pose codes, 2) employing the Motion GPT model to generate pose codes with music and motion Encoders, 3) a simple framework for music feature extraction. We compare with existing state-of-the-art models and perform ablation experiments on AIST++, the largest publicly available music-dance dataset. Experiments demonstrate that our proposed framework achieves state-of-the-art performance on motion quality and its alignment with the music. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 447,723
2403.07305 | Integrated Communications and Localization for Massive MIMO LEO
Satellite Systems | Integrated communications and localization (ICAL) will play an important part in future sixth generation (6G) networks for the realization of Internet of Everything (IoE) to support both global communications and seamless localization. Massive multiple-input multiple-output (MIMO) low earth orbit (LEO) satellite systems have great potential in providing wide coverage with enhanced gains, and thus are strong candidates for realizing ubiquitous ICAL. In this paper, we develop a wideband massive MIMO LEO satellite system to simultaneously support wireless communications and localization operations in the downlink. In particular, we first characterize the signal propagation properties and derive a localization performance bound. Based on these analyses, we focus on the hybrid analog/digital precoding design to achieve high communication capability and localization precision. Numerical results demonstrate that the proposed ICAL scheme supports both the wireless communication and localization operations for typical system setups. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 436,826 |
1809.00107 | Dependency-based Hybrid Trees for Semantic Parsing | We propose a novel dependency-based hybrid tree model for semantic parsing, which converts natural language utterance into machine interpretable meaning representations. Unlike previous state-of-the-art models, the semantic information is interpreted as the latent dependency between the natural language words in our joint representation. Such dependency information can capture the interactions between the semantics and natural language words. We integrate a neural component into our model and propose an efficient dynamic-programming algorithm to perform tractable inference. Through extensive experiments on the standard multilingual GeoQuery dataset with eight languages, we demonstrate that our proposed approach is able to achieve state-of-the-art performance across several languages. Analysis also justifies the effectiveness of using our new dependency-based representation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 106,495 |
2205.05206 | Best of Both Worlds: Multi-task Audio-Visual Automatic Speech
Recognition and Active Speaker Detection | Under noisy conditions, automatic speech recognition (ASR) can greatly benefit from the addition of visual signals coming from a video of the speaker's face. However, when multiple candidate speakers are visible this traditionally requires solving a separate problem, namely active speaker detection (ASD), which entails selecting at each moment in time which of the visible faces corresponds to the audio. Recent work has shown that we can solve both problems simultaneously by employing an attention mechanism over the competing video tracks of the speakers' faces, at the cost of sacrificing some accuracy on active speaker detection. This work closes this gap in active speaker detection accuracy by presenting a single model that can be jointly trained with a multi-task loss. By combining the two tasks during training we reduce the ASD classification error rate by approximately 25%, while simultaneously improving the ASR performance when compared to the multi-person baseline trained exclusively for ASR. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 295,869
2404.04169 | Do Sentence Transformers Learn Quasi-Geospatial Concepts from General
Text? | Sentence transformers are language models designed to perform semantic search. This study investigates the capacity of sentence transformers, fine-tuned on general question-answering datasets for asymmetric semantic search, to associate descriptions of human-generated routes across Great Britain with queries often used to describe hiking experiences. We find that sentence transformers have some zero-shot capabilities to understand quasi-geospatial concepts, such as route types and difficulty, suggesting their potential utility for routing recommendation systems. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 444,532 |
1811.11167 | Integrated Object Detection and Tracking with Tracklet-Conditioned
Detection | Accurate detection and tracking of objects is vital for effective video understanding. In previous work, the two tasks have been combined in a way that tracking is based heavily on detection, but the detection benefits marginally from the tracking. To increase synergy, we propose to more tightly integrate the tasks by conditioning the object detection in the current frame on tracklets computed in prior frames. With this approach, the object detection results not only have high detection responses, but also improved coherence with the existing tracklets. This greater coherence leads to estimated object trajectories that are smoother and more stable than the jittered paths obtained without tracklet-conditioned detection. Over extensive experiments, this approach is shown to achieve state-of-the-art performance in terms of both detection and tracking accuracy, as well as noticeable improvements in tracking stability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,701 |
2410.11068 | Character-aware audio-visual subtitling in context | This paper presents an improved framework for character-aware audio-visual subtitling in TV shows. Our approach integrates speech recognition, speaker diarisation, and character recognition, utilising both audio and visual cues. This holistic solution addresses what is said, when it's said, and who is speaking, providing a more comprehensive and accurate character-aware subtitling for TV shows. Our approach brings improvements on two fronts: first, we show that audio-visual synchronisation can be used to pick out the talking face amongst others present in a video clip, and assign an identity to the corresponding speech segment. This audio-visual approach improves recognition accuracy and yield over current methods. Second, we show that the speaker of short segments can be determined by using the temporal context of the dialogue within a scene. We propose an approach using local voice embeddings of the audio, and large language model reasoning on the text transcription. This overcomes a limitation of existing methods that they are unable to accurately assign speakers to short temporal segments. We validate the method on a dataset with 12 TV shows, demonstrating superior performance in speaker diarisation and character recognition accuracy compared to existing approaches. Project page : https://www.robots.ox.ac.uk/~vgg/research/llr-context/ | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 498,367 |
0908.3562 | Another Look at the Physics of Large Deviations With Application to
Rate-Distortion Theory | We revisit and extend the physical interpretation recently given to a certain identity between large-deviations rate functions (as well as applications of this identity to Information Theory), as an instance of thermal equilibrium between several physical systems that are brought into contact. Our new interpretation, of mechanical equilibrium between these systems, is shown to have several advantages relative to that of thermal equilibrium. This physical point of view also provides a trigger for the development of certain alternative representations of the rate-distortion function and channel capacity, which are new to the best knowledge of the author. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,333
2306.03778 | Streaming Speech-to-Confusion Network Speech Recognition | In interactive automatic speech recognition (ASR) systems, low-latency requirements limit the amount of search space that can be explored during decoding, particularly in end-to-end neural ASR. In this paper, we present a novel streaming ASR architecture that outputs a confusion network while maintaining limited latency, as needed for interactive applications. We show that 1-best results of our model are on par with a comparable RNN-T system, while the richer hypothesis set allows second-pass rescoring to achieve 10-20\% lower word error rate on the LibriSpeech task. We also show that our model outperforms a strong RNN-T baseline on a far-field voice assistant task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 371,473 |
2412.10540 | Higher Order Transformers: Enhancing Stock Movement Prediction On
Multimodal Time-Series Data | In this paper, we tackle the challenge of predicting stock movements in financial markets by introducing Higher Order Transformers, a novel architecture designed for processing multivariate time-series data. We extend the self-attention mechanism and the transformer architecture to a higher order, effectively capturing complex market dynamics across time and variables. To manage computational complexity, we propose a low-rank approximation of the potentially large attention tensor using tensor decomposition and employ kernel attention, reducing complexity to linear with respect to the data size. Additionally, we present an encoder-decoder model that integrates technical and fundamental analysis, utilizing multimodal signals from historical prices and related tweets. Our experiments on the Stocknet dataset demonstrate the effectiveness of our method, highlighting its potential for enhancing stock movement prediction in financial markets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,987 |
2208.10300 | Efficient Utility Function Learning for Multi-Objective Parameter
Optimization with Prior Knowledge | The current state-of-the-art in multi-objective optimization assumes either a given utility function, learns a utility function interactively or tries to determine the complete Pareto front, requiring a post elicitation of the preferred result. However, result elicitation in real world problems is often based on implicit and explicit expert knowledge, making it difficult to define a utility function, whereas interactive learning or post elicitation requires repeated and expensive expert involvement. To mitigate this, we learn a utility function offline, using expert knowledge by means of preference learning. In contrast to other works, we do not only use (pairwise) result preferences, but also coarse information about the utility function space. This enables us to improve the utility function estimate, especially when using very few results. Additionally, we model the occurring uncertainties in the utility function learning task and propagate them through the whole optimization chain. Our method to learn a utility function eliminates the need of repeated expert involvement while still leading to high-quality results. We show the sample efficiency and quality gains of the proposed method in 4 domains, especially in cases where the surrogate utility function is not able to exactly capture the true expert utility function. We also show that to obtain good results, it is important to consider the induced uncertainties and analyze the effect of biased samples, which is a common problem in real world domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 314,012 |
2104.09411 | Understanding Chinese Video and Language via Contrastive Multimodal
Pre-Training | The pre-trained neural models have recently achieved impressive performances in understanding multimodal content. However, it is still very challenging to pre-train neural models for video and language understanding, especially for Chinese video-language data, due to the following reasons. Firstly, existing video-language pre-training algorithms mainly focus on the co-occurrence of words and video frames, but ignore other valuable semantic and structure information of video-language content, e.g., sequential order and spatiotemporal relationships. Secondly, there exist conflicts between video sentence alignment and other proxy tasks. Thirdly, there is a lack of large-scale and high-quality Chinese video-language datasets (e.g., including 10 million unique videos), which are the fundamental success conditions for pre-training techniques. In this work, we propose a novel video-language understanding framework named VICTOR, which stands for VIdeo-language understanding via Contrastive mulTimOdal pRe-training. Besides general proxy tasks such as masked language modeling, VICTOR constructs several novel proxy tasks under the contrastive learning paradigm, making the model more robust and able to capture more complex multimodal semantic and structural relationships from different perspectives. VICTOR is trained on a large-scale Chinese video-language dataset, including over 10 million complete videos with corresponding high-quality textual descriptions. We apply the pre-trained VICTOR model to a series of downstream applications and demonstrate its superior performances, comparing against state-of-the-art pre-training methods such as VideoBERT and UniVL. The codes and trained checkpoints will be publicly available to nourish further developments of the research community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 231,231
2403.19128 | OmniParser: A Unified Framework for Text Spotting, Key Information
Extraction and Table Recognition | Recently, visually-situated text parsing (VsTP) has experienced notable advancements, driven by the increasing demand for automated document understanding and the emergence of Generative Large Language Models (LLMs) capable of processing document-based questions. Various methods have been proposed to address the challenging problem of VsTP. However, due to the diversified targets and heterogeneous schemas, previous works usually design task-specific architectures and objectives for individual tasks, which inadvertently leads to modal isolation and complex workflow. In this paper, we propose a unified paradigm for parsing visually-situated text across diverse scenarios. Specifically, we devise a universal model, called OmniParser, which can simultaneously handle three typical visually-situated text parsing tasks: text spotting, key information extraction, and table recognition. In OmniParser, all tasks share the unified encoder-decoder architecture, the unified objective: point-conditioned text generation, and the unified input & output representation: prompt & structured sequences. Extensive experiments demonstrate that the proposed OmniParser achieves state-of-the-art (SOTA) or highly competitive performances on 7 datasets for the three visually-situated text parsing tasks, despite its unified, concise design. The code is available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 442,218 |
1710.11332 | Improving Social Media Text Summarization by Learning Sentence Weight
Distribution | Recently, encoder-decoder models are widely used in social media text summarization. However, these models sometimes select noise words in irrelevant sentences as part of a summary by error, thus declining the performance. In order to inhibit irrelevant sentences and focus on key information, we propose an effective approach by learning sentence weight distribution. In our model, we build a multi-layer perceptron to predict sentence weights. During training, we use the ROUGE score as an alternative to the estimated sentence weight, and try to minimize the gap between estimated weights and predicted weights. In this way, we encourage our model to focus on the key sentences, which have high relevance with the summary. Experimental results show that our approach outperforms baselines on a large-scale social media corpus. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 83,572 |
1604.03757 | Chiron: A Robust Recommendation System with Graph Regularizer | Recommendation systems have been widely used by commercial service providers for giving suggestions to users. Collaborative filtering (CF) systems, one of the most popular recommendation systems, utilize the history of behaviors of the aggregate user-base to provide individual recommendations and are effective when almost all users faithfully express their opinions. However, they are vulnerable to malicious users biasing their inputs in order to change the overall ratings of a specific group of items. CF systems largely fall into two categories - neighborhood-based and (matrix) factorization-based - and the presence of adversarial input can influence recommendations in both categories, leading to instabilities in estimation and prediction. Although the robustness of different collaborative filtering algorithms has been extensively studied, designing an efficient system that is immune to manipulation remains a significant challenge. In this work we propose a novel "hybrid" recommendation system with an adaptive graph-based user/item similarity-regularization - "Chiron". Chiron ties the performance benefits of dimensionality reduction (through factorization) with the advantage of neighborhood clustering (through regularization). We demonstrate, using extensive comparative experiments, that Chiron is resistant to manipulation by large and lethal attacks. | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | 54,562 |
1902.08627 | Bayesian Anomaly Detection and Classification | Statistical uncertainties are rarely incorporated in machine learning algorithms, especially for anomaly detection. Here we present the Bayesian Anomaly Detection And Classification (BADAC) formalism, which provides a unified statistical approach to classification and anomaly detection within a hierarchical Bayesian framework. BADAC deals with uncertainties by marginalising over the unknown, true, value of the data. Using simulated data with Gaussian noise, BADAC is shown to be superior to standard algorithms in both classification and anomaly detection performance in the presence of uncertainties, though with significantly increased computational cost. Additionally, BADAC provides well-calibrated classification probabilities, valuable for use in scientific pipelines. We show that BADAC can work in online mode and is fairly robust to model errors, which can be diagnosed through model-selection methods. In addition it can perform unsupervised new class detection and can naturally be extended to search for anomalous subsets of data. BADAC is therefore ideal where computational cost is not a limiting factor and statistical rigour is important. We discuss approximations to speed up BADAC, such as the use of Gaussian processes, and finally introduce a new metric, the Rank-Weighted Score (RWS), that is particularly suited to evaluating the ability of algorithms to detect anomalies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,228 |
2005.00229 | Deepfake Forensics Using Recurrent Neural Networks | Recently, an AI-based free software tool has made it easy to create believable face swaps in videos that leave few traces of manipulation, in what are known as "deepfake" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone, or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify whether or not a video has been subject to manipulation. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 175,174
2406.10231 | Sign Language Recognition based on YOLOv5 Algorithm for the Telugu Sign Language | Sign language recognition (SLR) technology has enormous promise to improve communication and accessibility for the hard of hearing. This paper presents a novel approach for identifying gestures in Telugu Sign Language (TSL) using the YOLOv5 object detection framework. The main goal is to create an accurate and successful method for identifying TSL gestures so that the deaf community can use SLR. A deep learning model was then created that used YOLOv5 to recognize and classify gestures. This model benefited from the YOLOv5 architecture's high accuracy, speed, and capacity to handle complex sign language features. Utilizing transfer learning approaches, the YOLOv5 model was customized to TSL gestures. To attain the best outcomes, careful parameter and hyperparameter adjustment was carried out during training. With F1-score and mean Average Precision (mAP) ratings of 90.5% and 98.1%, the YOLOv5-medium model stands out for its outstanding performance metrics, demonstrating its efficacy in Telugu sign language identification tasks. Notably, this model strikes an acceptable balance between computational complexity and training time to produce these results. Because it offers a convincing blend of accuracy and efficiency, the YOLOv5-medium model, trained for 200 epochs, emerges as the recommended choice for real-world deployment. The system's stability and generalizability across various TSL gestures and settings were evaluated through rigorous testing and validation, which yielded outstanding accuracy. This research lays the foundation for future advancements in accessible technology for linguistic communities by providing a cutting-edge application of deep learning and computer vision techniques to TSL gesture identification. It also offers insightful perspectives and novel approaches to the field of sign language recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 464,291
0803.0778 | Constant-Rank Codes | Constant-dimension codes have recently received attention due to their significance to error control in noncoherent random network coding. In this paper, we show that constant-rank codes are closely related to constant-dimension codes and we study the properties of constant-rank codes. We first introduce a relation between vectors in $\mathrm{GF}(q^m)^n$ and subspaces of $\mathrm{GF}(q)^m$ or $\mathrm{GF}(q)^n$, and use it to establish a relation between constant-rank codes and constant-dimension codes. We then derive bounds on the maximum cardinality of constant-rank codes with given rank weight and minimum rank distance. Finally, we investigate the asymptotic behavior of the maximal cardinality of constant-rank codes with given rank weight and minimum rank distance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,393 |
1711.02413 | ZipNet-GAN: Inferring Fine-grained Mobile Traffic Patterns via a Generative Adversarial Neural Network | Large-scale mobile traffic analytics is becoming essential to digital infrastructure provisioning, public transportation, events planning, and other domains. Monitoring city-wide mobile traffic is however a complex and costly process that relies on dedicated probes. Some of these probes have limited precision or coverage, others gather tens of gigabytes of logs daily, which independently offer limited insights. Extracting fine-grained patterns involves expensive spatial aggregation of measurements, storage, and post-processing. In this paper, we propose a mobile traffic super-resolution technique that overcomes these problems by inferring narrowly localised traffic consumption from coarse measurements. We draw inspiration from image processing and design a deep-learning architecture tailored to mobile networking, which combines Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models. This enables uniquely capturing spatio-temporal relations between traffic volume snapshots routinely monitored over broad coverage areas (`low-resolution') and the corresponding consumption at 0.05 km $^2$ level (`high-resolution') usually obtained after intensive computation. Experiments we conduct with a real-world data set demonstrate that the proposed ZipNet(-GAN) infers traffic consumption with remarkable accuracy and up to 100$\times$ higher granularity as compared to standard probing, while outperforming existing data interpolation techniques. To our knowledge, this is the first time super-resolution concepts are applied to large-scale mobile traffic analysis and our solution is the first to infer fine-grained urban traffic patterns from coarse aggregates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 84,060
2308.10968 | Regularization by Neural Style Transfer for MRI Field-Transfer Reconstruction with Limited Data | Recent advances in MRI reconstruction have demonstrated remarkable success through deep learning-based models. However, most existing methods rely heavily on large-scale, task-specific datasets, making reconstruction in data-limited settings a critical yet underexplored challenge. While regularization by denoising (RED) leverages denoisers as priors for reconstruction, we propose Regularization by Neural Style Transfer (RNST), a novel framework that integrates a neural style transfer (NST) engine with a denoiser to enable magnetic field-transfer reconstruction. RNST generates high-field-quality images from low-field inputs without requiring paired training data, leveraging style priors to address limited-data settings. Our experiment results demonstrate RNST's ability to reconstruct high-quality images across diverse anatomical planes (axial, coronal, sagittal) and noise levels, achieving superior clarity, contrast, and structural fidelity compared to lower-field references. Crucially, RNST maintains robustness even when style and content images lack exact alignment, broadening its applicability in clinical environments where precise reference matches are unavailable. By combining the strengths of NST and denoising, RNST offers a scalable, data-efficient solution for MRI field-transfer reconstruction, demonstrating significant potential for resource-limited settings. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 386,949
1903.12356 | A General FOFE-net Framework for Simple and Effective Question Answering over Knowledge Bases | Question answering over knowledge base (KB-QA) has recently become a popular research topic in NLP. One popular way to solve the KB-QA problem is to make use of a pipeline of several NLP modules, including entity discovery and linking (EDL) and relation detection. Recent success on the KB-QA task usually involves complex network structures with sophisticated heuristics. Inspired by a previous work that builds a strong KB-QA baseline, we propose a simple but general neural model composed of fixed-size ordinally forgetting encoding (FOFE) and deep neural networks, called FOFE-net, to solve the KB-QA problem at different stages. For evaluation, we use two popular KB-QA datasets, SimpleQuestions and WebQSP, and a newly created dataset, FreebaseQA. The experimental results show that FOFE-net performs well on the KB-QA subtasks, entity discovery and linking (EDL) and relation detection, in turn pushing the overall KB-QA system to achieve strong results on all datasets. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 125,708
2104.11904 | Spatial-Spectral Clustering with Anchor Graph for Hyperspectral Image | Hyperspectral image (HSI) clustering, which aims at dividing hyperspectral pixels into clusters, has drawn significant attention in practical applications. Recently, many graph-based clustering methods, which construct an adjacent graph to model the data relationship, have shown dominant performance. However, the high dimensionality of HSI data makes it hard to construct the pairwise adjacent graph. Besides, abundant spatial structures are often overlooked during the clustering procedure. In order to better handle the high dimensionality problem and preserve the spatial structures, this paper proposes a novel unsupervised approach called spatial-spectral clustering with anchor graph (SSCAG) for HSI data clustering. The SSCAG has the following contributions: 1) the anchor graph-based strategy is used to construct a tractable large graph for HSI data, which effectively exploits all data points and reduces the computational complexity; 2) a new similarity metric is presented to embed the spatial-spectral information into the combined adjacent graph, which can mine the intrinsic property structure of HSI data; 3) an effective neighbors assignment strategy is adopted in the optimization, which performs the singular value decomposition (SVD) on the adjacent graph to get solutions efficiently. Extensive experiments on three public HSI datasets show that the proposed SSCAG is competitive against the state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 232,056 |
2310.07744 | Terrain-adaptive Central Pattern Generators with Reinforcement Learning for Hexapod Locomotion | Inspired by biological motion generation, central pattern generators (CPGs) are frequently employed in legged robot locomotion control to produce natural gait patterns with low-dimensional control signals. However, their limited adaptability and stability over complex terrains hinder their application. To address this issue, this paper proposes a terrain-adaptive locomotion control method that incorporates a deep reinforcement learning (DRL) framework into CPG, where the CPG model is responsible for the generation of synchronized signals, providing the basic locomotion gait, while DRL is integrated to enhance the adaptability of the robot towards uneven terrains by adjusting the parameters of the CPG mapping functions. The experiments conducted on a hexapod robot in the Isaac Gym simulation environment demonstrated the superiority of the proposed method in terrain adaptability, convergence rate, and reward design complexity. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 399,107