id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2404.08511 | Leveraging Multi-AI Agents for Cross-Domain Knowledge Discovery | In the rapidly evolving field of artificial intelligence, the ability to harness and integrate knowledge across various domains stands as a paramount challenge and opportunity. This study introduces a novel approach to cross-domain knowledge discovery through the deployment of multi-AI agents, each specialized in distinct knowledge domains. These AI agents, designed to function as domain-specific experts, collaborate in a unified framework to synthesize and provide comprehensive insights that transcend the limitations of single-domain expertise. By facilitating seamless interaction among these agents, our platform aims to leverage the unique strengths and perspectives of each, thereby enhancing the process of knowledge discovery and decision-making. We present a comparative analysis of the different multi-agent workflow scenarios, evaluating their performance in terms of efficiency, accuracy, and the breadth of knowledge integration. Through a series of experiments involving complex, interdisciplinary queries, our findings demonstrate the superior capability of a domain-specific multi-AI agent system in identifying and bridging knowledge gaps. This research not only underscores the significance of collaborative AI in driving innovation but also sets the stage for future advancements in AI-driven, cross-disciplinary research and application. Our methods were evaluated on a small pilot dataset and showed the trend we expected; as we increase the amount of data on which we custom-train the agents, we expect this trend to become smoother. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 446,271 |
2409.15361 | Multitask Mayhem: Unveiling and Mitigating Safety Gaps in LLMs Fine-tuning | Recent breakthroughs in Large Language Models (LLMs) have led to their adoption across a wide range of tasks, ranging from code generation to machine translation and sentiment analysis, etc. Red teaming/Safety alignment efforts show that fine-tuning models on benign (non-harmful) data could compromise safety. However, it remains unclear to what extent this phenomenon is influenced by different variables, including fine-tuning task, model calibrations, etc. This paper explores the task-wise safety degradation due to fine-tuning on downstream tasks such as summarization, code generation, translation, and classification across various calibrations. Our results reveal that: 1) Fine-tuning LLMs for code generation and translation leads to the highest degradation in safety guardrails. 2) LLMs generally have weaker guardrails for translation and classification, with 73-92% of harmful prompts answered, across baseline and other calibrations, falling into one of two concern categories. 3) Current solutions, including guards and safety tuning datasets, lack cross-task robustness. To address these issues, we developed a new multitask safety dataset effectively reducing attack success rates across a range of tasks without compromising the model's overall helpfulness. Our work underscores the need for generalized alignment measures to ensure safer and more robust models. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 490,886 |
1401.5856 | Narrative Planning: Compilations to Classical Planning | A model of story generation recently proposed by Riedl and Young casts it as planning, with the additional condition that story characters behave intentionally. This means that characters have perceivable motivation for the actions they take. I show that this condition can be compiled away (in more ways than one) to produce a classical planning problem that can be solved by an off-the-shelf classical planner, more efficiently than by Riedl and Young's specialised planner. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 30,256 |
2405.07364 | BoQ: A Place is Worth a Bag of Learnable Queries | In visual place recognition, accurately identifying and matching images of locations under varying environmental conditions and viewpoints remains a significant challenge. In this paper, we introduce a new technique, called Bag-of-Queries (BoQ), which learns a set of global queries designed to capture universal place-specific attributes. Unlike existing methods that employ self-attention and generate the queries directly from the input features, BoQ employs distinct learnable global queries, which probe the input features via cross-attention, ensuring consistent information aggregation. In addition, our technique provides an interpretable attention mechanism and integrates with both CNN and Vision Transformer backbones. The performance of BoQ is demonstrated through extensive experiments on 14 large-scale benchmarks. It consistently outperforms current state-of-the-art techniques including NetVLAD, MixVPR and EigenPlaces. Moreover, as a global retrieval technique (one-stage), BoQ surpasses two-stage retrieval methods, such as Patch-NetVLAD, TransVPR and R2Former, all while being orders of magnitude faster and more efficient. The code and model weights are publicly available at https://github.com/amaralibey/Bag-of-Queries. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,683 |
1610.07004 | Inferring Population Preferences via Mixtures of Spatial Voting Models | Understanding political phenomena requires measuring the political preferences of society. We introduce a model based on mixtures of spatial voting models that infers the underlying distribution of political preferences of voters with only voting records of the population and political positions of candidates in an election. Beyond offering a cost-effective alternative to surveys, this method projects the political preferences of voters and candidates into a shared latent preference space. This projection allows us to directly compare the preferences of the two groups, which is desirable for political science but difficult with traditional survey methods. After validating the aggregated-level inferences of this model against results of related work and on simple prediction tasks, we apply the model to better understand the phenomenon of political polarization in the Texas, New York, and Ohio electorates. Taken at face value, inferences drawn from our model indicate that the electorates in these states may be less bimodal than the distribution of candidates, but that the electorates are comparatively more extreme in their variance. We conclude with a discussion of limitations of our method and potential future directions for research. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 62,728 |
2012.10504 | CityLearn: Standardizing Research in Multi-Agent Reinforcement Learning for Demand Response and Urban Energy Management | Rapid urbanization, increasing integration of distributed renewable energy resources, energy storage, and electric vehicles introduce new challenges for the power grid. In the US, buildings represent about 70% of the total electricity demand and demand response has the potential for reducing peaks of electricity by about 20%. Unlocking this potential requires control systems that operate on distributed systems, ideally data-driven and model-free. For this, reinforcement learning (RL) algorithms have gained increased interest in the past years. However, research in RL for demand response has been lacking the level of standardization that propelled the enormous progress in RL research in the computer science community. To remedy this, we created CityLearn, an OpenAI Gym Environment which allows researchers to implement, share, replicate, and compare their implementations of RL for demand response. Here, we discuss this environment and The CityLearn Challenge, an RL competition we organized to propel further progress in this field. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 212,358 |
1509.06470 | Understand Scene Categories by Objects: A Semantic Regularized Scene Classifier Using Convolutional Neural Networks | Scene classification is a fundamental perception task for environmental understanding in today's robotics. In this paper, we have attempted to exploit the popular machine learning technique of deep learning to enhance scene understanding, particularly in robotics applications. As scene images have larger diversity than iconic object images, it is more challenging for deep learning methods to automatically learn features from scene images with fewer samples. Inspired by human scene understanding based on object knowledge, we address the problem of scene classification by encouraging deep neural networks to incorporate object-level information. This is implemented with a regularization of semantic segmentation. With only 5 thousand training images, as opposed to 2.5 million images, we show the proposed deep architecture achieves superior scene classification results to the state-of-the-art on a publicly available SUN RGB-D dataset. In addition, performance of semantic segmentation, the regularizer, also reaches a new record with refinement derived from predicted scene labels. Finally, we apply our SUN RGB-D-trained model to images captured by a mobile robot to classify scenes at our university, demonstrating the generalization ability of the proposed algorithm. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 47,157 |
2001.08345 | Target-Embedding Autoencoders for Supervised Representation Learning | Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets---encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures---thereby underscoring the further generality of this framework beyond feedforward instantiations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 161,264 |
1205.1245 | Sparse group lasso and high dimensional multinomial classification | The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. The run-time of our sparse group lasso implementation is of the same order of magnitude as the multinomial lasso algorithm implemented in the R package glmnet. Our implementation scales well with the problem size. One of the high dimensional examples considered is a 50 class classification problem with 10k features, which amounts to estimating 500k parameters. The implementation is available as the R package msgl. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 15,817 |
2502.09560 | EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents | Leveraging Multi-modal Large Language Models (MLLMs) to create embodied agents offers a promising avenue for tackling real-world tasks. While language-centric embodied agents have garnered substantial attention, MLLM-based embodied agents remain underexplored due to the lack of comprehensive evaluation frameworks. To bridge this gap, we introduce EmbodiedBench, an extensive benchmark designed to evaluate vision-driven embodied agents. EmbodiedBench features: (1) a diverse set of 1,128 testing tasks across four environments, ranging from high-level semantic tasks (e.g., household) to low-level tasks involving atomic actions (e.g., navigation and manipulation); and (2) six meticulously curated subsets evaluating essential agent capabilities like commonsense reasoning, complex instruction understanding, spatial awareness, visual perception, and long-term planning. Through extensive experiments, we evaluated 13 leading proprietary and open-source MLLMs within EmbodiedBench. Our findings reveal that: MLLMs excel at high-level tasks but struggle with low-level manipulation, with the best model, GPT-4o, scoring only 28.9% on average. EmbodiedBench provides a multifaceted standardized evaluation platform that not only highlights existing challenges but also offers valuable insights to advance MLLM-based embodied agents. Our code is available at https://embodiedbench.github.io. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 533,487 |
2311.18670 | Local Geometry Determines Global Landscape in Low-rank Factorization for Synchronization | The orthogonal group synchronization problem, which focuses on recovering orthogonal group elements from their corrupted pairwise measurements, encompasses examples such as high-dimensional Kuramoto model on general signed networks, $\mathbb{Z}_2$-synchronization, community detection under stochastic block models, and orthogonal Procrustes problem. The semidefinite relaxation (SDR) has proven its power in solving this problem; however, its expensive computational costs impede its widespread practical applications. We consider the Burer-Monteiro factorization approach to the orthogonal group synchronization, an effective and scalable low-rank factorization to solve large scale SDPs. Despite the significant empirical successes of this factorization approach, it is still a challenging task to understand when the nonconvex optimization landscape is benign, i.e., the optimization landscape possesses only one local minimizer, which is also global. In this work, we demonstrate that if the degree of freedom within the factorization exceeds twice the condition number of the ``Laplacian" (certificate matrix) at the global minimizer, the optimization landscape is absent of spurious local minima. Our main theorem is purely algebraic and versatile, and it seamlessly applies to all the aforementioned examples: the nonconvex landscape remains benign under almost identical condition that enables the success of the SDR. Additionally, we illustrate that the Burer-Monteiro factorization is robust to ``monotone adversaries", mirroring the resilience of the SDR. In other words, introducing ``favorable" adversaries into the data will not result in the emergence of new spurious local minimizers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 411,769 |
2403.14523 | Invisible Needle Detection in Ultrasound: Leveraging Mechanism-Induced Vibration | In clinical applications that involve ultrasound-guided intervention, the visibility of the needle can be severely impeded due to steep insertion and strong distractors such as speckle noise and anatomical occlusion. To address this challenge, we propose VibNet, a learning-based framework tailored to enhance the robustness and accuracy of needle detection in ultrasound images, even when the target becomes invisible to the naked eye. Inspired by Eulerian Video Magnification techniques, we utilize an external step motor to induce low-amplitude periodic motion on the needle. These subtle vibrations offer the potential to generate robust frequency features for detecting the motion patterns around the needle. To robustly and precisely detect the needle leveraging these vibrations, VibNet integrates learning-based Short-Time-Fourier-Transform and Hough-Transform modules to achieve successive sub-goals, including motion feature extraction in the spatiotemporal space, frequency feature aggregation, and needle detection in the Hough space. Based on the results obtained on distinct ex vivo porcine and bovine tissue samples, the proposed algorithm exhibits superior detection performance with efficient computation and generalization capability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 440,116 |
2010.04862 | Remarks on Optimal Scores for Speaker Recognition | In this article, we first establish the theory of optimal scores for speaker recognition. Our analysis shows that the minimum Bayes risk (MBR) decisions for both the speaker identification and speaker verification tasks can be based on a normalized likelihood (NL). When the underlying generative model is a linear Gaussian, the NL score is mathematically equivalent to the PLDA likelihood ratio, and the empirical scores based on cosine distance and Euclidean distance can be seen as approximations of this linear Gaussian NL score under some conditions. We discuss a number of properties of the NL score and perform a simple simulation experiment to illustrate these properties. | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 199,889 |
2302.05907 | LipLearner: Customizable Silent Speech Interactions on Mobile Devices | Silent speech interface is a promising technology that enables private communications in natural language. However, previous approaches only support a small and inflexible vocabulary, which leads to limited expressiveness. We leverage contrastive learning to learn efficient lipreading representations, enabling few-shot command customization with minimal user effort. Our model exhibits high robustness to different lighting, posture, and gesture conditions on an in-the-wild dataset. For 25-command classification, an F1-score of 0.8947 is achievable using only one shot, and its performance can be further boosted by adaptively learning from more data. This generalizability allowed us to develop a mobile silent speech interface empowered with on-device fine-tuning and visual keyword spotting. A user study demonstrated that with LipLearner, users could define their own commands with high reliability guaranteed by an online incremental learning scheme. Subjective feedback indicated that our system provides essential functionalities for customizable silent speech interactions with high usability and learnability. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,217 |
2112.08986 | A Heterogeneous Graph Learning Model for Cyber-Attack Detection | A cyber-attack is a malicious attempt by experienced hackers to breach the target information system. Usually, cyber-attacks are characterized by hybrid TTPs (Tactics, Techniques, and Procedures) and long-term adversarial behaviors, making traditional intrusion detection methods ineffective. Most existing cyber-attack detection systems are implemented based on manually designed rules by referring to domain knowledge (e.g., threat models, threat intelligence). However, this process lacks intelligence and generalization ability. To address this limitation, this paper proposes an intelligent cyber-attack detection method based on provenance data. To effectively and efficiently detect cyber-attacks from a huge number of system events in the provenance data, we first model the provenance data as a heterogeneous graph to capture the rich context information of each system entity (e.g., process, file, socket, etc.), and learn a semantic vector representation for each system entity. Then, we perform online cyber-attack detection by sampling a small and compact local graph from the heterogeneous graph, and classifying the key system entities as malicious or benign. We conducted a series of experiments on two provenance datasets with real cyber-attacks. The experiment results show that the proposed method outperforms other learning-based detection models, and has competitive performance against state-of-the-art rule-based cyber-attack detection systems. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 271,997 |
2403.02419 | Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems | Many recent state-of-the-art results in language tasks were achieved using compound systems that perform multiple Language Model (LM) calls and aggregate their responses. However, there is little understanding of how the number of LM calls - e.g., when asking the LM to answer each question multiple times and taking a majority vote - affects such a compound system's performance. In this paper, we initiate the study of scaling properties of compound inference systems. We analyze, theoretically and empirically, how the number of LM calls affects the performance of Vote and Filter-Vote, two of the simplest compound system designs, which aggregate LM responses via majority voting, optionally applying LM filters. We find, surprisingly, that across multiple language tasks, the performance of both Vote and Filter-Vote can first increase but then decrease as a function of the number of LM calls. Our theoretical results suggest that this non-monotonicity is due to the diversity of query difficulties within a task: more LM calls lead to higher performance on "easy" queries, but lower performance on "hard" queries, and non-monotone behavior can emerge when a task contains both types of queries. This insight then allows us to compute, from a small number of samples, the number of LM calls that maximizes system performance, and define an analytical scaling model for both systems. Experiments show that our scaling model can accurately predict the performance of Vote and Filter-Vote systems and thus find the optimal number of LM calls to make. | false | false | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | 434,794 |
1304.1135 | Combination of Evidence Using the Principle of Minimum Information Gain | One of the most important aspects in any treatment of uncertain information is the rule of combination for updating the degrees of uncertainty. The theory of belief functions uses the Dempster rule to combine two belief functions defined by independent bodies of evidence. However, with limited dependency information about the accumulated belief the Dempster rule may lead to unsatisfactory results. The present study suggests a method to determine the accumulated belief based on the premise that the information gain from the combination process should be minimum. This method provides a mechanism that is equivalent to the Bayes rule when all the conditional probabilities are available and to the Dempster rule when the normalization constant is equal to one. The proposed principle of minimum information gain is shown to be equivalent to the maximum entropy formalism, a special case of the principle of minimum cross-entropy. The application of this principle results in a monotonic increase in belief with accumulation of consistent evidence. The suggested approach may provide a more reasonable criterion for identifying conflicts among various bodies of evidence. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,488 |
2411.19278 | OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration | Depth completion (DC) aims to predict a dense depth map from an RGB image and sparse depth observations. Existing methods for DC generalize poorly on new datasets or unseen sparse depth patterns, limiting their practical applications. We propose OMNI-DC, a highly robust DC model that generalizes well across various scenarios. Our method incorporates a novel multi-resolution depth integration layer and a probability-based loss, enabling it to deal with sparse depth maps of varying densities. Moreover, we train OMNI-DC on a mixture of synthetic datasets with a scale normalization technique. To evaluate our model, we establish a new evaluation protocol named Robust-DC for zero-shot testing under various sparse depth patterns. Experimental results on Robust-DC and conventional benchmarks show that OMNI-DC significantly outperforms the previous state of the art. The checkpoints, training code, and evaluations are available at https://github.com/princeton-vl/OMNI-DC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 512,184 |
2111.03204 | Learning Model Predictive Controllers for Real-Time Ride-Hailing Vehicle Relocation and Pricing Decisions | Large-scale ride-hailing systems often combine real-time routing at the individual request level with a macroscopic Model Predictive Control (MPC) optimization for dynamic pricing and vehicle relocation. The MPC relies on a demand forecast and optimizes over a longer time horizon to compensate for the myopic nature of the routing optimization. However, the longer horizon increases computational complexity and forces the MPC to operate at coarser spatial-temporal granularity, degrading the quality of its decisions. This paper addresses these computational challenges by learning the MPC optimization. The resulting machine-learning model then serves as the optimization proxy and predicts its optimal solutions. This makes it possible to use the MPC at higher spatial-temporal fidelity, since the optimizations can be solved and learned offline. Experimental results show that the proposed approach improves quality of service on challenging instances from the New York City dataset. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 265,085 |
2101.09362 | Machine Learning Based Early Fire Detection System using a Low-Cost Drone | This paper proposes a new machine learning based system for early detection of forest fires in a low-cost and accurate manner. Accordingly, it aims to bring a new and definite perspective to visual detection in forest fires. A drone is constructed for this purpose. The microcontroller in the system has been programmed by training with deep learning methods, and the unmanned aerial vehicle has been given the ability to recognize smoke, the earliest sign of fire. The common problem in the prevalent algorithms used in fire detection is the high false alarm and overlook rates. Confirming the result obtained from the visualization with an additional supervision stage will increase the reliability of the system as well as guarantee the accuracy of the result. Due to the mobile vision ability of the unmanned aerial vehicle, the data can be controlled from any point of view clearly and continuously. System performance is validated by conducting experiments in both simulation and physical environments. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 216,574 |
2306.03664 | Experimenting with Additive Margins for Contrastive Self-Supervised Speaker Verification | Most state-of-the-art self-supervised speaker verification systems rely on a contrastive-based objective function to learn speaker representations from unlabeled speech data. We explore different ways to improve the performance of these methods by: (1) revisiting how positive and negative pairs are sampled through a "symmetric" formulation of the contrastive loss; (2) introducing margins similar to AM-Softmax and AAM-Softmax that have been widely adopted in the supervised setting. We demonstrate the effectiveness of the symmetric contrastive loss which provides more supervision for the self-supervised task. Moreover, we show that Additive Margin and Additive Angular Margin allow reducing the overall number of false negatives and false positives by improving speaker separability. Finally, by combining both techniques and training a larger model we achieve 7.50% EER and 0.5804 minDCF on the VoxCeleb1 test set, which outperforms other contrastive self-supervised methods on speaker verification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,435 |
1909.01432 | Adversarial Robustness of Similarity-Based Link Prediction | Link prediction is one of the fundamental problems in social network analysis. A common set of techniques for link prediction rely on similarity metrics which use the topology of the observed subnetwork to quantify the likelihood of unobserved links. Recently, similarity metrics for link prediction have been shown to be vulnerable to attacks whereby observations about the network are adversarially modified to hide target links. We propose a novel approach for increasing robustness of similarity-based link prediction by endowing the analyst with a restricted set of reliable queries which accurately measure the existence of queried links. The analyst aims to robustly predict a collection of possible links by optimally allocating the reliable queries. We formalize the analyst problem as a Bayesian Stackelberg game in which they first choose the reliable queries, followed by an adversary who deletes a subset of links among the remaining (unreliable) queries by the analyst. The analyst in our model is uncertain about the particular target link the adversary attempts to hide, whereas the adversary has full information about the analyst and the network. Focusing on similarity metrics using only local information, we show that the problem is NP-Hard for both players, and devise two principled and efficient approaches for solving it approximately. Extensive experiments with real and synthetic networks demonstrate the effectiveness of our approach. | false | false | false | true | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 143,895 |
2401.15061 | Digital-analog hybrid matrix multiplication processor for optical neural networks | The computational demands of modern AI have spurred interest in optical neural networks (ONNs) which offer the potential benefits of increased speed and lower power consumption. However, current ONNs face various challenges, most significantly a limited calculation precision (typically around 4 bits) and the requirement for high-resolution signal format converters (digital-to-analogue conversions (DACs) and analogue-to-digital conversions (ADCs)). These challenges are inherent to their analog computing nature and pose significant obstacles in practical implementation. Here, we propose a digital-analog hybrid optical computing architecture for ONNs, which utilizes digital optical inputs in the form of binary words. By introducing the logic levels and decisions based on thresholding, the calculation precision can be significantly enhanced. The DACs for input data can be removed and the resolution of the ADCs can be greatly reduced. This can increase the operating speed at a high calculation precision and facilitate the compatibility with microelectronics. To validate our approach, we have fabricated a proof-of-concept photonic chip and built up a hybrid optical processor (HOP) system for neural network applications. We have demonstrated an unprecedented 16-bit calculation precision for high-definition image processing, with a pixel error rate (PER) as low as $1.8\times10^{-3}$ at a signal-to-noise ratio (SNR) of 18.2 dB. We have also implemented a convolutional neural network for handwritten digit recognition that shows the same accuracy as the one achieved by a desktop computer. The concept of the digital-analog hybrid optical computing architecture offers a methodology that could potentially be applied to various ONN implementations and may intrigue new research into efficient and accurate domain-specific optical computing architectures for neural networks. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 424,307
1611.08583 | Learning from Maps: Visual Common Sense for Autonomous Driving | Today's autonomous vehicles rely extensively on high-definition 3D maps to navigate the environment. While this approach works well when these maps are completely up-to-date, safe autonomous vehicles must be able to corroborate the map's information via a real time sensor-based system. Our goal in this work is to develop a model for road layout inference given imagery from on-board cameras, without any reliance on high-definition maps. However, no sufficient dataset for training such a model exists. Here, we leverage the availability of standard navigation maps and corresponding street view images to construct an automatically labeled, large-scale dataset for this complex scene understanding problem. By matching road vectors and metadata from navigation maps with Google Street View images, we can assign ground truth road layout attributes (e.g., distance to an intersection, one-way vs. two-way street) to the images. We then train deep convolutional networks to predict these road layout attributes given a single monocular RGB image. Experimental evaluation demonstrates that our model learns to correctly infer the road attributes using only panoramas captured by car-mounted cameras as input. Additionally, our results indicate that this method may be suitable to the novel application of recommending safety improvements to infrastructure (e.g., suggesting an alternative speed limit for a street). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 64,525 |
2010.13871 | Examining the causal structures of deep neural networks using information theory | Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring "what does what" within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN's causal structure since it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the "causal plane" which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 203,271
2409.14439 | A Visualized Malware Detection Framework with CNN and Conditional GAN | Malware visualization analysis incorporating Machine Learning (ML) has been proven to be a promising solution for improving security defenses on different platforms. In this work, we propose an integrated framework for addressing common problems experienced by ML utilizers in developing malware detection systems. Namely, a pictorial presentation system with extensions is designed to preserve the identities of benign/malign samples by encoding each variable into binary digits and mapping them into black and white pixels. A conditional Generative Adversarial Network based model is adopted to produce synthetic images and mitigate issues of imbalanced classes. Detection models architected by Convolutional Neural Networks are for validating performances while training on datasets with and without artifactual samples. Results demonstrate accuracy rates of 98.51% and 97.26% for these two training scenarios. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 490,466
2402.16903 | A novel data generation scheme for surrogate modelling with deep operator networks | Operator-based neural network architectures such as DeepONets have emerged as a promising tool for the surrogate modeling of physical systems. In general, towards operator surrogate modeling, the training data is generated by solving the PDEs using techniques such as Finite Element Method (FEM). The computationally intensive nature of data generation is one of the biggest bottlenecks in deploying these surrogate models for practical applications. In this study, we propose a novel methodology to alleviate the computational burden associated with training data generation for DeepONets. Unlike existing literature, the proposed framework for data generation does not use any partial differential equation integration strategy, thereby significantly reducing the computational cost associated with generating training dataset for DeepONet. In the proposed strategy, first, the output field is generated randomly, satisfying the boundary conditions using Gaussian Process Regression (GPR). From the output field, the input source field can be calculated easily using finite difference techniques. The proposed methodology can be extended to other operator learning methods, making the approach widely applicable. To validate the proposed approach, we employ the heat equations as the model problem and develop the surrogate model for numerous boundary value problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 432,760
2406.06938 | Post-Hoc Answer Attribution for Grounded and Trustworthy Long Document Comprehension: Task, Insights, and Challenges | Attributing answer text to its source document for information-seeking questions is crucial for building trustworthy, reliable, and accountable systems. We formulate a new task of post-hoc answer attribution for long document comprehension (LDC). Owing to the lack of long-form abstractive and information-seeking LDC datasets, we refactor existing datasets to assess the strengths and weaknesses of existing retrieval-based and proposed answer decomposition and textual entailment-based optimal selection attribution systems for this task. We throw light on the limitations of existing datasets and the need for datasets to assess the actual performance of systems on this task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 462,826
1706.10020 | Preference-based performance measures for Time-Domain Global Similarity method | For Time-Domain Global Similarity (TDGS) method, which transforms the data cleaning problem into a binary classification problem about the physical similarity between channels, directly adopting common performance measures could only guarantee the performance for physical similarity. Nevertheless, practical data cleaning tasks have preferences for the correctness of original data sequences. To obtain the general expressions of performance measures based on the preferences of tasks, the mapping relations between performance of TDGS method about physical similarity and correctness of data sequences are investigated by probability theory in this paper. Performance measures for TDGS method in several common data cleaning tasks are set. Cases when these preference-based performance measures could be simplified are introduced. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 76,237
1305.3971 | Sparse Norm Filtering | Optimization-based filtering smoothes an image by minimizing a fidelity function and simultaneously preserves edges by exploiting a sparse norm penalty over gradients. It has obtained promising performance in practical problems, such as detail manipulation, HDR compression and deblurring, and thus has received increasing attention in fields of graphics, computer vision and image processing. This paper derives a new type of image filter called sparse norm filter (SNF) from optimization-based filtering. SNF has a very simple form, introduces a general class of filtering techniques, and explains several classic filters as special implementations of SNF, e.g. the averaging filter and the median filter. It has advantages of being halo free, easy to implement, and low time and memory costs (comparable to those of the bilateral filter). Thus, it is more generic than a smoothing operator and can better adapt to different tasks. We validate the proposed SNF by a wide variety of applications including edge-preserving smoothing, outlier tolerant filtering, detail manipulation, HDR compression, non-blind deconvolution, image segmentation, and colorization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 24,653
2411.10794 | Going Beyond Conventional OOD Detection | Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications. Deep learning models can often misidentify OOD samples as in-distribution (ID) samples. This vulnerability worsens in the presence of spurious correlation in the training set. Likewise, in fine-grained classification settings, detection of fine-grained OOD samples becomes inherently challenging due to their high similarity to ID samples. However, current research on OOD detection has largely ignored these challenging scenarios, focusing instead on relatively easier (conventional) cases. In this work, we present a unified Approach to Spurious, fine-grained, and Conventional OOD Detection (ASCOOD). First, we propose synthesizing virtual outliers from ID data by approximating the destruction of invariant features. We identify invariant features with the pixel attribution method using the model being learned. This approach eliminates the burden of curating external OOD datasets. Then, we simultaneously incentivize ID classification and predictive uncertainty towards the virtual outliers leveraging standardized feature representation. Our approach effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes. Extensive experiments across six datasets demonstrate the merit of ASCOOD in spurious, fine-grained, and conventional settings. The code is available at: https://github.com/sudarshanregmi/ASCOOD/ | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 508,793 |
2312.05352 | A Review of Machine Learning Methods Applied to Video Analysis Systems | The paper provides a survey of the development of machine-learning techniques for video analysis. The survey provides a summary of the most popular deep learning methods used for human activity recognition. We discuss how popular architectures perform on standard datasets and highlight the differences from real-life datasets dominated by multiple activities performed by multiple participants over long periods. For real-life datasets, we describe the use of low-parameter models (with 200X or 1,000X fewer parameters) that are trained to detect a single activity after the relevant objects have been successfully detected. Our survey then turns to a summary of machine learning methods that are specifically developed for working with a small number of labeled video samples. Our goal here is to describe modern techniques that are specifically designed so as to minimize the amount of ground truth that is needed for training and testing video analysis systems. We provide summaries of the development of self-supervised learning, semi-supervised learning, active learning, and zero-shot learning for applications in video analysis. For each method, we provide representative examples. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 414,047 |
2309.14451 | Methods of quantifying specialized knowledge and network rewiring | Technological innovations are a major driver of economic development that depend on the exchange of knowledge and ideas among those with unique but complementary specialized knowledge and knowhow. However, measurement of specialized knowledge embedded in technologists, scientists and entrepreneurs in the knowledge economy presents an empirical challenge as both the exchange of knowledge and knowledge itself remain difficult to observe. We develop novel measures of specialized knowledge using a unique dataset of longitudinal records of participation at technology-focused meetup events in two regional knowledge economies. Our measures of specialized knowledge can be further used to quantify the extent of knowledge spillover and network rewiring and uncover underlying social mechanisms that contribute to the development of increasingly complex and differentiated networks in maturing knowledge economies. We apply these methods in the context of the rapid morphogenesis of emerging regional technology economies in New York City and Los Angeles. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 394,610
1711.10918 | Joint Blind Motion Deblurring and Depth Estimation of Light Field | Removing camera motion blur from a single light field is a challenging task since it is a highly ill-posed inverse problem. The problem becomes even worse when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm to estimate all blur model variables jointly, including latent sub-aperture image, camera motion, and scene depth from the blurred 4D light field. Exploiting the multi-view nature of a light field relieves the inverse property of the optimization by utilizing strong depth cues and multi-view blur observation. The proposed joint estimation achieves high quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Intensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms the state-of-the-art light field deblurring and depth estimation methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,693
1612.01216 | Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems | Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high dimensional constrained problems, as the projection step becomes computationally prohibitive to compute. To address this problem, this paper adopts a projection-free optimization approach, a.k.a.~the Frank-Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an inexact FW algorithm. Using a diminishing step size rule and letting $t$ be the iteration number, we show that the DeFW algorithm's convergence rate is ${\cal O}(1/t)$ for convex objectives; is ${\cal O}(1/t^2)$ for strongly convex objectives with the optimal solution in the interior of the constraint set; and is ${\cal O}(1/\sqrt{t})$ towards a stationary point for smooth but non-convex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. Furthermore, we demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 65,039 |
2209.14670 | Towards Equalised Odds as Fairness Metric in Academic Performance Prediction | The literature for fairness-aware machine learning knows a plethora of different fairness notions. It is however well-known that it is impossible to satisfy all of them, as certain notions contradict each other. In this paper, we take a closer look at academic performance prediction (APP) systems and try to distil which fairness notions suit this task most. For this, we scan recent literature proposing guidelines as to which fairness notion to use and apply these guidelines onto APP. Our findings suggest equalised odds as most suitable notion for APP, based on APP's WYSIWYG worldview as well as potential long-term improvements for the population. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 320,321
1202.4743 | Real-time detection and tracking of multiple objects with partial decoding in H.264/AVC bitstream domain | In this paper, we show that we can apply probabilistic spatiotemporal macroblock filtering (PSMF) and partial decoding processes to effectively detect and track multiple objects in real time in H.264|AVC bitstreams with stationary background. Our contribution is that our method can not only show fast processing time but also handle multiple moving objects that are articulated, changing in size or internally have monotonous color, even though they contain a chaotic set of non-homogeneous motion vectors inside. In addition, our partial decoding process for H.264|AVC bitstreams enables to improve the accuracy of object trajectories and overcome long occlusion by using extracted color information. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 14,519
2111.15174 | CRIS: CLIP-Driven Referring Image Segmentation | Referring image segmentation aims to segment a referent via a natural linguistic expression. Due to the distinct data properties between text and image, it is challenging for a network to well align text and pixel-level features. Existing approaches use pretrained models to facilitate learning, yet separately transfer the language/vision knowledge from pretrained models, ignoring the multi-modal corresponding information. Inspired by the recent advance in Contrastive Language-Image Pretraining (CLIP), in this paper, we propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS). To transfer the multi-modal knowledge effectively, CRIS resorts to vision-language decoding and contrastive learning for achieving the text-to-pixel alignment. More specifically, we design a vision-language decoder to propagate fine-grained semantic information from textual representations to each pixel-level activation, which promotes consistency between the two modalities. In addition, we present text-to-pixel contrastive learning to explicitly enforce the text feature similar to the related pixel-level features and dissimilar to the irrelevances. The experimental results on three benchmark datasets demonstrate that our proposed framework significantly outperforms the state-of-the-art performance without any post-processing. The code will be released. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,844
0808.3689 | Optimal Power Allocation for Fading Channels in Cognitive Radio Networks: Ergodic Capacity and Outage Capacity | A cognitive radio network (CRN) is formed by either allowing the secondary users (SUs) in a secondary communication network (SCN) to opportunistically operate in the frequency bands originally allocated to a primary communication network (PCN) or by allowing SCN to coexist with the primary users (PUs) in PCN as long as the interference caused by SCN to each PU is properly regulated. In this paper, we consider the latter case, known as spectrum sharing, and study the optimal power allocation strategies to achieve the ergodic capacity and the outage capacity of the SU fading channel under different types of power constraints and fading channel models. In particular, besides the interference power constraint at PU, the transmit power constraint of SU is also considered. Since the transmit power and the interference power can be limited either by a peak or an average constraint, various combinations of power constraints are studied. It is shown that there is a capacity gain for SU under the average over the peak transmit/interference power constraint. It is also shown that fading for the channel between SU transmitter and PU receiver is usually a beneficial factor for enhancing the SU channel capacities. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,231
1907.07324 | Deep Learning for Pneumothorax Detection and Localization in Chest Radiographs | Pneumothorax is a critical condition that requires timely communication and immediate action. In order to prevent significant morbidity or patient death, early detection is crucial. For the task of pneumothorax detection, we study the characteristics of three different deep learning techniques: (i) convolutional neural networks, (ii) multiple-instance learning, and (iii) fully convolutional networks. We perform a five-fold cross-validation on a dataset consisting of 1003 chest X-ray images. ROC analysis yields AUCs of 0.96, 0.93, and 0.92 for the three methods, respectively. We review the classification and localization performance of these approaches as well as an ensemble of the three aforementioned techniques. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 138,846
2309.08066 | Morphologically-Aware Consensus Computation via Heuristics-based IterATive Optimization (MACCHIatO) | The extraction of consensus segmentations from several binary or probabilistic masks is important to solve various tasks such as the analysis of inter-rater variability or the fusion of several neural network outputs. One of the most widely used methods to obtain such a consensus segmentation is the STAPLE algorithm. In this paper, we first demonstrate that the output of that algorithm is heavily impacted by the background size of images and the choice of the prior. We then propose a new method to construct a binary or a probabilistic consensus segmentation based on the Fr\'{e}chet means of carefully chosen distances which makes it totally independent of the image background size. We provide a heuristic approach to optimize this criterion such that a voxel's class is fully determined by its voxel-wise distance to the different masks, the connected component it belongs to and the group of raters who segmented it. We compared extensively our method on several datasets with the STAPLE method and the naive segmentation averaging method, showing that it leads to binary consensus masks of intermediate size between Majority Voting and STAPLE and to different posterior probabilities than Mask Averaging and STAPLE methods. Our code is available at https://gitlab.inria.fr/dhamzaou/jaccardmap. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 392,021
2301.08741 | Enactive Artificial Intelligence: Subverting Gender Norms in Robot-Human Interaction | This paper introduces Enactive Artificial Intelligence (eAI) as an intersectional gender-inclusive stance towards AI. AI design is an enacted human sociocultural practice that reflects human culture and values. Unrepresentative AI design could lead to social marginalisation. Section 1, drawing from radical enactivism, outlines embodied cultural practices. Section 2 explores how intersectional gender intertwines with technoscience as a sociocultural practice. Section 3 focuses on subverting gender norms in the specific case of Robot-Human Interaction in AI. Finally, Section 4 identifies four vectors of ethics: explainability, fairness, transparency, and auditability for adopting an intersectionality-inclusive stance in developing gender-inclusive AI and subverting existing gender norms in robot design. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 341,269
1507.01290 | Narcotweets: Social Media in Wartime | This paper describes how people living in armed conflict environments use social media as a participatory news platform, in lieu of damaged state and media apparatuses. We investigate this by analyzing the microblogging practices of Mexican citizens whose everyday life is affected by the Drug War. We provide a descriptive analysis of the phenomenon, combining content and quantitative Twitter data analyses. We focus on three interrelated phenomena: general participation patterns of ordinary citizens, the emergence and role of information curators, and the tension between governmental regulation and drug cartel intimidation. This study reveals the complex tensions among citizens, media actors, and the government in light of large scale organized crime. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 44,849 |
2308.15807 | ACNPU: A 4.75TOPS/W 1080P@30FPS Super Resolution Accelerator with Decoupled Asymmetric Convolution | Deep learning-driven superresolution (SR) outperforms traditional techniques but also faces the challenge of high complexity and memory bandwidth. This challenge leads many accelerators to opt for simpler and shallow models like FSRCNN, compromising performance for real-time needs, especially for resource-limited edge devices. This paper proposes an energy-efficient SR accelerator, ACNPU, to tackle this challenge. The ACNPU enhances image quality by 0.34dB with a 27-layer model, but needs 36\% less complexity than FSRCNN, while maintaining a similar model size, with the \textit{decoupled asymmetric convolution and split-bypass structure}. The hardware-friendly 17K-parameter model enables \textit{holistic model fusion} instead of localized layer fusion to remove external DRAM access of intermediate feature maps. The on-chip memory bandwidth is further reduced with the \textit{input stationary flow} and \textit{parallel-layer execution} to reduce power consumption. Hardware is regular and easy to control to support different layers by \textit{processing elements (PEs) clusters with reconfigurable input and uniform data flow}. The implementation in the 40 nm CMOS process consumes 2333 K gate counts and 198KB SRAMs. The ACNPU achieves 31.7 FPS and 124.4 FPS for x2 and x4 scales Full-HD generation, respectively, which attains 4.75 TOPS/W energy efficiency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 388,799
1007.4418 | Distributed Source Coding of Correlated Gaussian Sources | We consider the distributed source coding system of $L$ correlated Gaussian sources $Y_i,i=1,2,...,L$ which are noisy observations of correlated Gaussian remote sources $X_k, k=1,2,...,K$. We assume that $Y^{L}={}^{\rm t}(Y_1,Y_2,$ $..., Y_L)$ is an observation of the source vector $X^K={}^{\rm t}(X_1,X_2,..., X_K)$, having the form $Y^L=AX^K+N^L$, where $A$ is a $L\times K$ matrix and $N^L={}^{\rm t}(N_1,N_2,...,N_L)$ is a vector of $L$ independent Gaussian random variables also independent of $X^K$. In this system $L$ correlated Gaussian observations are separately compressed by $L$ encoders and sent to the information processing center. We study the remote source coding problem where the decoder at the center attempts to reconstruct the remote source $X^K$. We consider three distortion criteria based on the covariance matrix of the estimation error on $X^K$. For each of those three criteria we derive explicit inner and outer bounds of the rate distortion region. Next, in the case of $K=L$ and $A=I_L$, we study the multiterminal source coding problem where the decoder wishes to reconstruct the observation $Y^L=X^L+N^L$. To investigate this problem we shall establish a result which provides a strong connection between the remote source coding problem and the multiterminal source coding problem. Using this result, we drive several new partial solutions to the multiterminal source coding problem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,117 |
2301.06452 | Optimal Coordination and Discount Allocation in Residential Renewable Energy Communities with Smart Home Appliances | This paper proposes an optimal management strategy for a Renewable Energy Community defined according to the Italian legislation. The specific case study is composed of a set of houses equipped with smart appliances that share a PV plant. The objective is to minimize the cost of electrical energy use for each member of the community, taking into account the discount achievable from government incentives with proper shaping of the community's daily consumption. Such incentives are indeed proportional to the shared energy, i.e. the portion of the renewable energy consumed at each hour by community members. The management algorithm allows an optimal coordination of the houses' power demands, according to the degree of flexibility granted by users. Moreover, a policy to fairly distribute the obtained discount is introduced. Simulation results show the potential of the approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 340,650
2108.10233 | Fusion of evidential CNN classifiers for image classification | We propose an information-fusion approach based on belief functions to combine convolutional neural networks. In this approach, several pre-trained DS-based CNN architectures extract features from input images and convert them into mass functions on different frames of discernment. A fusion module then aggregates these mass functions using Dempster's rule. An end-to-end learning procedure allows us to fine-tune the overall architecture using a learning set with soft labels, which further improves the classification performance. The effectiveness of this approach is demonstrated experimentally using three benchmark databases. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 251,837 |
1712.05497 | What Can This Robot Do? Learning from Appearance and Experiments | When presented with an unknown robot (subject) how can an autonomous agent (learner) figure out what this new robot can do? The subject's appearance can provide cues to its physical as well as cognitive capabilities. Seeing a humanoid can make one wonder if it can kick balls, climb stairs or recognize faces. What if the learner can request the subject to perform these tasks? We present an approach to make the learner build a model of the subject at a task based on the latter's appearance and refine it by experimentation. Apart from the subject's inherent capabilities, certain extrinsic factors may affect its performance at a task. Based on the subject's appearance and prior knowledge about the task a learner can identify a set of potential factors, a subset of which we assume are controllable. Our approach picks values of controllable factors to generate the most informative experiments to test the subject at. Additionally, we present a metric to determine if a factor should be incorporated in the model. We present results of our approach on modeling a humanoid robot at the task of kicking a ball. Firstly, we show that actively picking values for controllable factors, even in noisy experiments, leads to faster learning of the subject's model for the task. Secondly, starting from a minimal set of factors our metric identifies the set of relevant factors to incorporate in the model. Lastly, we show that the refined model better represents the subject's performance at the task. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 86,737 |
2412.08528 | Continual Learning for Encoder-only Language Models via a Discrete Key-Value Bottleneck | Continual learning remains challenging across various natural language understanding tasks. When models are updated with new training data, they risk catastrophic forgetting of prior knowledge. In the present work, we introduce a discrete key-value bottleneck for encoder-only language models, allowing for efficient continual learning by requiring only localized updates. Inspired by the success of a discrete key-value bottleneck in vision, we address new and NLP-specific challenges. We experiment with different bottleneck architectures to find the most suitable variants regarding language, and present a generic discrete key initialization technique for NLP that is task independent. We evaluate the discrete key-value bottleneck in four continual learning NLP scenarios and demonstrate that it alleviates catastrophic forgetting. We showcase that it offers competitive performance to other popular continual learning methods, with lower computational costs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 516,128
2406.14526 | Fantastic Copyrighted Beasts and How (Not) to Generate Them | Recent studies show that image and video generation models can be prompted to reproduce copyrighted content from their training data, raising serious legal concerns around copyright infringement. Copyrighted characters, in particular, pose a difficult challenge for image generation services, with at least one lawsuit already awarding damages based on the generation of these characters. Yet, little research has empirically examined this issue. We conduct a systematic evaluation to fill this gap. First, we build CopyCat, an evaluation suite consisting of diverse copyrighted characters and a novel evaluation pipeline. Our evaluation considers both the detection of similarity to copyrighted characters and generated image's consistency with user input. Our evaluation systematically shows that both image and video generation models can still generate characters even if characters' names are not explicitly mentioned in the prompt, sometimes with only two generic keywords (e.g., prompting with "videogame, plumber" consistently generates Nintendo's Mario character). We then introduce techniques to semi-automatically identify such keywords or descriptions that trigger character generation. Using our evaluation suite, we study runtime mitigation strategies, including both existing methods and new strategies we propose. Our findings reveal that commonly employed strategies, such as prompt rewriting in the DALL-E system, are not sufficient as standalone guardrails. These strategies must be coupled with other approaches, like negative prompting, to effectively reduce the unintended generation of copyrighted characters. Our work provides empirical grounding to the discussion of copyright mitigation strategies and offers actionable insights for model deployers actively implementing them. | false | false | false | false | true | false | true | false | false | false | false | true | false | true | false | false | false | false | 466,352
2405.05733 | Batched Stochastic Bandit for Nondegenerate Functions | This paper studies batched bandit learning problems for nondegenerate functions. We introduce an algorithm that solves the batched bandit problem for nondegenerate functions near-optimally. More specifically, we introduce an algorithm, called Geometric Narrowing (GN), whose regret bound is of order $\widetilde{{\mathcal{O}}} ( A_{+}^d \sqrt{T} )$. In addition, GN only needs $\mathcal{O} (\log \log T)$ batches to achieve this regret. We also provide lower bound analysis for this problem. More specifically, we prove that over some (compact) doubling metric space of doubling dimension $d$: 1. For any policy $\pi$, there exists a problem instance on which $\pi$ admits a regret of order ${\Omega} ( A_-^d \sqrt{T})$; 2. No policy can achieve a regret of order $ A_-^d \sqrt{T} $ over all problem instances, using less than $ \Omega ( \log \log T ) $ rounds of communications. Our lower bound analysis shows that the GN algorithm achieves near optimal regret with minimal number of batches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 453,031 |
1403.4711 | Multiagent Conflict Resolution for a Specification Network of Discrete-Event Coordinating Agents | This paper presents a novel compositional approach to distributed coordination module (CM) synthesis for multiple discrete-event agents in the formal languages and automata framework. The approach is supported by two original ideas. The first is a new formalism called the Distributed Constraint Specification Network (DCSN) that can comprehensibly describe the networking constraint relationships among distributed agents. The second is multiagent conflict resolution planning, which entails generating and using AND/OR graphs to compactly represent conflict resolution (synthesis-process) plans for a DCSN. Together with the framework of local CM design developed in the authors' earlier work, the systematic approach supports separately designing local and deconflicting CM's for individual agents in accordance to a selected conflict resolution plan. Composing the agent models and the CM's designed furnishes an overall nonblocking coordination solution that meets the set of inter-agent constraints specified in a given DCSN. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 31,671
2210.14006 | Non-binary Two-Deletion Correcting Codes and Burst-Deletion Correcting Codes | In this paper, we construct systematic $q$-ary two-deletion correcting codes and burst-deletion correcting codes, where $q\geq 2$ is an even integer. For two-deletion codes, our construction has redundancy $5\log n+O(\log q\log\log n)$ and has encoding complexity near-linear in $n$, where $n$ is the length of the message sequences. For burst-deletion codes, we first present a construction of binary codes with redundancy $\log n+9\log\log n+\gamma_t+o(\log\log n)$ bits $(\gamma_t$ is a constant that depends only on $t)$ and capable of correcting a burst of at most $t$ deletions, which improves the Lenz-Polyanskii Construction (ISIT 2020). Then we give a construction of $q$-ary codes with redundancy $\log n+(8\log q+9)\log\log n+o(\log q\log\log n)+\gamma_t$ bits and capable of correcting a burst of at most $t$ deletions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 326,395
1801.05122 | Asynchronous Bidirectional Decoding for Neural Machine Translation | The dominant neural machine translation (NMT) models apply unified attentional encoder-decoder neural networks for translation. Traditionally, the NMT decoders adopt recurrent neural networks (RNNs) to perform translation in a left-to-right manner, leaving the target-side contexts generated from right to left unexploited during translation. In this paper, we equip the conventional attentional encoder-decoder NMT framework with a backward decoder, in order to explore bidirectional decoding for NMT. Attending to the hidden state sequence produced by the encoder, our backward decoder first learns to generate the target-side hidden state sequence from right to left. Then, the forward decoder performs translation in the forward direction, while in each translation prediction timestep, it simultaneously applies two attention models to consider the source-side and reverse target-side hidden states, respectively. With this new architecture, our model is able to fully exploit source- and target-side contexts to improve translation quality altogether. Experimental results on NIST Chinese-English and WMT English-German translation tasks demonstrate that our model achieves substantial improvements over the conventional NMT by 3.14 and 1.38 BLEU points, respectively. The source code of this work can be obtained from https://github.com/DeepLearnXMU/ABDNMT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 88,397
2102.08262 | An Effort to Measure Customer Relationship Performance in Indonesia's Fintech Industry | The availability of social media simplifies the companies-customers relationship. An effort to engage customers in conversation networks using social media is called Social Customer Relationship Management (SCRM). Social Network Analysis helps to understand network characteristics and how active the conversation network on social media is. Calculating its network properties is beneficial for measuring customer relationship performance. Financial Technology (Fintech), a newly emerging industry that provides digital financial services, utilizes social media to interact with its customers. Measuring SCRM performance is needed in order to stay competitive. Therefore, we aim to explore the SCRM performance of Indonesian Fintech companies. To discover the market majority's opinion in conversation networks, we perform sentiment analysis by classifying opinions as positive or negative. As case studies, we investigate Twitter conversations about GoPay, OVO, Dana, and LinkAja during the observation period from 1st October until 1st November 2019. The results of this research are beneficial for business intelligence purposes, especially in managing relationships with customers. | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 220,395
2203.06965 | UniVIP: A Unified Framework for Self-Supervised Visual Pre-training | Self-supervised learning (SSL) holds promise in leveraging large amounts of unlabeled data. However, the success of popular SSL methods has been limited to single-centric-object images like those in ImageNet, ignoring the correlation between the scene and instances, as well as the semantic differences of instances in the scene. To address the above problems, we propose Unified Self-supervised Visual Pre-training (UniVIP), a novel self-supervised framework to learn versatile visual representations on either single-centric-object or non-iconic datasets. The framework takes into account representation learning at three levels: 1) the similarity of scene-scene, 2) the correlation of scene-instance, 3) the discrimination of instance-instance. During learning, we adopt the optimal transport algorithm to automatically measure the discrimination of instances. Extensive experiments show that UniVIP pre-trained on non-iconic COCO achieves state-of-the-art transfer performance on a variety of downstream tasks, such as image classification, semi-supervised learning, object detection and segmentation. Furthermore, our method can also exploit single-centric-object datasets such as ImageNet, outperforming BYOL by 2.5% with the same pre-training epochs in linear probing, and surpassing current self-supervised object detection methods on the COCO dataset, demonstrating its universality and potential. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 285,287
1709.01991 | Semi-Automatic Terminology Ontology Learning Based on Topic Modeling | Ontologies provide features like a common vocabulary, reusability, and machine-readable content, and they also allow for semantic search, facilitate agent interaction, and support the ordering and structuring of knowledge for Semantic Web (Web 3.0) applications. However, the challenge in ontology engineering is automatic learning, i.e., there is still a lack of a fully automatic approach for forming an ontology from a text corpus or dataset of various topics using machine learning techniques. In this paper, two topic modeling algorithms are explored, namely LSI & SVD and Mr.LDA, for learning a topic ontology. The objective is to determine the statistical relationship between documents and terms to build a topic ontology and ontology graph with minimum human intervention. Experimental analysis of building a topic ontology and semantically retrieving the corresponding topic ontology for a user's query demonstrates the effectiveness of the proposed approach. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 80,182
2408.06474 | TOGGL: Transcribing Overlapping Speech with Staggered Labeling | Transcribing the speech of multiple overlapping speakers typically requires separating the audio into multiple streams and recognizing each one independently. More recent work jointly separates and transcribes, but requires a separate decoding component for each speaker. We propose the TOGGL model to simultaneously transcribe the speech of multiple speakers. The TOGGL model uses special output tokens to attribute the speech to each speaker with only a single decoder. Our approach generalizes beyond two speakers, even when trained only on two-speaker data. We demonstrate superior performance compared to competing approaches on a conversational speech dataset. Our approach also improves performance on single-speaker audio. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 480,217 |
1908.05455 | Ergodic Rate Analysis of Cooperative Ambient Backscatter Communication | Ambient backscatter communication has shown great potential in the development of future wireless networks. It enables a backscatter transmitter (BTx) to send information directly to an adjacent receiver by modulating over ambient radio frequency (RF) carriers. In this paper, we consider a cooperative ambient backscatter communication system where a multi-antenna cooperative receiver separately decodes signals from an RF source and a BTx. Upper bounds of the ergodic rates of both links are derived. The power scaling laws are accordingly characterized for both the primary cellular transmission and the cooperative backscatter. The impact of additional backscatter link is also quantitatively analyzed. Simulation results are provided to verify the derived results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 141,725 |
1705.02976 | On the Achievable Rates of Decentralized Equalization in Massive MU-MIMO Systems | Massive multi-user (MU) multiple-input multiple-output (MIMO) promises significant gains in spectral efficiency compared to traditional, small-scale MIMO technology. Linear equalization algorithms, such as zero forcing (ZF) or minimum mean-square error (MMSE)-based methods, typically rely on centralized processing at the base station (BS), which results in (i) excessively high interconnect and chip input/output data rates, and (ii) high computational complexity. In this paper, we investigate the achievable rates of decentralized equalization that mitigates both of these issues. We consider two distinct BS architectures that partition the antenna array into clusters, each associated with independent radio-frequency chains and signal processing hardware, and the results of each cluster are fused in a feedforward network. For both architectures, we consider ZF, MMSE, and a novel, non-linear equalization algorithm that builds upon approximate message passing (AMP), and we theoretically analyze the achievable rates of these methods. Our results demonstrate that decentralized equalization with our AMP-based methods incurs no or only a negligible loss in terms of achievable rates compared to that of centralized solutions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 73,100
2002.09689 | Fair and Decentralized Exchange of Digital Goods | We construct a privacy-preserving, distributed and decentralized marketplace where parties can exchange data for tokens. In this market, buyers and sellers make transactions in a blockchain and interact with a third party, called notary, who has the ability to vouch for the authenticity and integrity of the data. We introduce a protocol for the data-token exchange where neither party gains more information than what it is paying for, and the exchange is fair: either both parties gets the other's item or neither does. No third party involvement is required after setup, and no dispute resolution is needed. | false | false | false | true | false | false | false | false | false | false | false | false | true | true | false | false | false | false | 165,149 |
2202.01300 | Causal Inference Through the Structural Causal Marginal Problem | We introduce an approach to counterfactual inference based on merging information from multiple datasets. We consider a causal reformulation of the statistical marginal problem: given a collection of marginal structural causal models (SCMs) over distinct but overlapping sets of variables, determine the set of joint SCMs that are counterfactually consistent with the marginal ones. We formalise this approach for categorical SCMs using the response function formulation and show that it reduces the space of allowed marginal and joint SCMs. Our work thus highlights a new mode of falsifiability through additional variables, in contrast to the statistical one via additional data. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,434 |
1910.01785 | Co-optimization of Speed and Gearshift Control for Battery Electric Vehicles Using Preview Information | This paper addresses the co-optimization of speed and gearshift control for battery electric vehicles using short-range traffic information. To achieve greater electric motor efficiency, a multi-speed transmission is employed, whose control involves discrete-valued gearshift signals. To overcome the computational difficulties in solving the integrated speed-and-gearshift optimal control problem that involves both continuous and discrete-valued optimization variables, we propose a hierarchical procedure to decompose the integrated hybrid problem into purely continuous and discrete sub-problems, each of which can be efficiently solved. We show, by simulations in various driving scenarios, that the co-optimization of speed and gearshift control using our proposed hierarchical procedure can achieve greater energy efficiency than other typical approaches. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 148,034
2306.14703 | Repetition and recurrence times: Dual statements and short memory conditions | By an analogy to the duality between the recurrence time and the longest match length, we introduce a quantity dual to the maximal repetition length, which we call the repetition time. Extending prior results, we sandwich the repetition time in terms of unconditional and conditional min-entropies. The condition for the upper bound resembles short memory in the sense developed in time series analysis. Our reasonings make a repeated use of dualities between so called times and so called counts that generalize the duality of the recurrence time and the longest match length. We also discuss the analogy of these results with the Wyner-Ziv/Ornstein-Weiss theorem, which sandwiches the recurrence time in terms of Shannon entropies. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 375,783
2305.18239 | A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models | Distillation from Weak Teacher (DWT) is a method of transferring knowledge from a smaller, weaker teacher model to a larger student model to improve its performance. Previous studies have shown that DWT can be effective in the vision domain and natural language processing (NLP) pre-training stage. Specifically, DWT shows promise in practical scenarios, such as enhancing new generation or larger models using pre-trained yet older or smaller models and lacking a resource budget. However, the optimal conditions for using DWT have yet to be fully investigated in NLP pre-training. Therefore, this study examines three key factors to optimize DWT, distinct from those used in the vision domain or traditional knowledge distillation. These factors are: (i) the impact of teacher model quality on DWT effectiveness, (ii) guidelines for adjusting the weighting value for DWT loss, and (iii) the impact of parameter remapping as a student model initialization technique for DWT. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 368,912
1901.00100 | A Hardware Friendly Unsupervised Memristive Neural Network with Weight Sharing Mechanism | Memristive neural networks (MNNs), which use memristors as neurons or synapses, have become a hot research topic recently. However, most memristors are not compatible with mainstream integrated circuit technology, and their stability at large scale has not been well established so far. In this paper, a hardware-friendly MNN circuit is introduced, in which the memristive characteristics are implemented by digital integrated circuits. Through this method, spike timing dependent plasticity (STDP) and unsupervised learning are realized. A weight sharing mechanism is proposed to bridge the gap between network scale and hardware resources. Experimental results show that hardware resources are significantly saved with it, while maintaining good recognition accuracy and high speed. Moreover, the tendency of resource increase is slower than the expansion of the network scale, which indicates our method's potential for realizing large-scale neuromorphic networks. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 117,690
2003.05482 | Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization | A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization. Given that exact coordinate minimization is impossible due to the unknown stochastic nature of the objective function, the crux of the proposed optimization algorithm is an optimal control of the minimization precision in each iteration. We establish the optimal precision control and the resulting order-optimal regret performance for strongly convex and separably nonsmooth functions. An interesting finding is that the optimal progression of precision across iterations is independent of the low-dimensional CM routine employed, suggesting a general framework for extending low-dimensional optimization routines to high-dimensional problems. The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization. Requiring only a sublinear order of message exchanges, it also lends itself well to distributed computing as compared with the alternative approach of coordinate gradient descent. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,862
2010.09188 | A Bayesian Approach for Characterizing and Mitigating Gate and Measurement Errors | Various noise models have been developed in quantum computing studies to describe the propagation and effect of the noise caused by imperfect implementation of hardware. Identifying parameters such as gate and readout error rates is critical to these models. We use a Bayesian inference approach to identify posterior distributions of these parameters, such that they can be characterized more elaborately. By characterizing the device errors in this way, we can further improve the accuracy of quantum error mitigation. Experiments conducted on IBM's quantum computing devices suggest that our approach provides better error mitigation performance than existing techniques used by the vendor. Also, our approach outperforms the standard Bayesian inference method in such experiments. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 201,438
2402.10441 | Barrier-Enhanced Parallel Homotopic Trajectory Optimization for Safety-Critical Autonomous Driving | Enforcing safety while preventing overly conservative behaviors is essential for autonomous vehicles to achieve high task performance. In this paper, we propose a barrier-enhanced parallel homotopic trajectory optimization (BPHTO) approach with the over-relaxed alternating direction method of multipliers (ADMM) for real-time integrated decision-making and planning. To facilitate safe interactions between the ego vehicle (EV) and surrounding vehicles, a spatiotemporal safety module exhibiting bi-convexity is developed on the basis of barrier functions. Varying barrier coefficients are adopted for different time steps in a planning horizon to account for the motion uncertainties of surrounding vehicles and mitigate conservative behaviors. Additionally, we exploit the discrete characteristics of driving maneuvers to initialize nominal behavior-oriented free-end homotopic trajectories based on reachability analysis, and each trajectory is locally constrained to a specific driving maneuver while sharing the same task objectives. By leveraging the bi-convexity of the safety module and the kinematics of the EV, we formulate the BPHTO as a bi-convex optimization problem. Then constraint transcription and the over-relaxed ADMM are employed to streamline the optimization process, such that multiple trajectories are generated in real time with feasibility guarantees. Through a series of experiments, the proposed development demonstrates improved task accuracy, stability, and consistency in various traffic scenarios using synthetic and real-world traffic datasets. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 429,964
1708.03436 | Variational Deep Semantic Hashing for Text Documents | As the amount of textual data has been rapidly increasing over the past decade, efficient similarity search methods have become a crucial component of large-scale information retrieval systems. A popular strategy is to represent original data samples by compact binary codes through hashing. A spectrum of machine learning methods have been utilized, but they often lack expressiveness and flexibility in modeling to learn effective representations. The recent advances of deep learning in a wide range of applications has demonstrated its capability to learn robust and powerful feature representations for complex data. Especially, deep generative models naturally combine the expressiveness of probabilistic generative models with the high capacity of deep neural networks, which is very suitable for text modeling. However, little work has leveraged the recent progress in deep learning for text hashing. In this paper, we propose a series of novel deep document generative models for text hashing. The first proposed model is unsupervised while the second one is supervised by utilizing document labels/tags for hashing. The third model further considers document-specific factors that affect the generation of words. The probabilistic generative formulation of the proposed models provides a principled framework for model extension, uncertainty estimation, simulation, and interpretability. Based on variational inference and reparameterization, the proposed models can be interpreted as encoder-decoder deep neural networks and thus they are capable of learning complex nonlinear distributed representations of the original documents. We conduct a comprehensive set of experiments on four public testbeds. The experimental results have demonstrated the effectiveness of the proposed supervised learning models for text hashing. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 78,776
2301.07947 | Point Cloud Data Simulation and Modelling with Aize Workspace | This work takes a look at data models often used in digital twins and presents preliminary results specifically from surface reconstruction and semantic segmentation models trained using simulated data. This work is expected to serve as groundwork for future endeavours in data contextualisation inside a digital twin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 341,054
1709.05533 | Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps | Visual robot navigation within large-scale, semi-structured environments deals with various challenges such as computation intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with uncertainties that are present in the context of real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing a map to the robot which is tailored for path planning use. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real world datasets demonstrate that we achieve similar performance as RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to prove its advantages. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 80,900 |
1906.01542 | Natural Vocabulary Emerges from Free-Form Annotations | We propose an approach for annotating object classes using free-form text written by undirected and untrained annotators. Free-form labeling is natural for annotators, they intuitively provide very specific and exhaustive labels, and no training stage is necessary. We first collect 729 labels on 15k images using 124 different annotators. Then we automatically enrich the structure of these free-form annotations by discovering a natural vocabulary of 4020 classes within them. This vocabulary represents the natural distribution of objects well and is learned directly from data, instead of being an educated guess done before collecting any labels. Hence, the natural vocabulary emerges from a large mass of free-form annotations. To do so, we (i) map the raw input strings to entities in an ontology of physical objects (which gives them an unambiguous meaning); and (ii) leverage inter-annotator co-occurrences, as well as biases and knowledge specific to individual annotators. Finally, we also automatically extract natural vocabularies of reduced size that have high object coverage while remaining specific. These reduced vocabularies represent the natural distribution of objects much better than commonly used predefined vocabularies. Moreover, they feature more uniform sample distribution over classes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 133,744 |
2209.09177 | Learning-based Uncertainty-aware Navigation in 3D Off-Road Terrains | This paper presents a safe, efficient, and agile ground vehicle navigation algorithm for 3D off-road terrain environments. Off-road navigation is subject to uncertain vehicle-terrain interactions caused by different terrain conditions on top of 3D terrain topology. Existing works are limited to overly simplified vehicle-terrain models. The proposed algorithm learns the terrain-induced uncertainties from driving data and encodes the learned uncertainty distribution into the traversability cost for path evaluation. The navigation path is then designed to optimize the uncertainty-aware traversability cost, resulting in a safe and agile vehicle maneuver. Assuring real-time execution, the algorithm is further implemented within parallel computation architecture running on Graphics Processing Units (GPU). | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 318,412
2410.13807 | ConsisSR: Delving Deep into Consistency in Diffusion-based Image Super-Resolution | Real-world image super-resolution (Real-ISR) aims at restoring high-quality (HQ) images from low-quality (LQ) inputs corrupted by unknown and complex degradations. In particular, pretrained text-to-image (T2I) diffusion models provide strong generative priors to reconstruct credible and intricate details. However, T2I generation focuses on semantic consistency while Real-ISR emphasizes pixel-level reconstruction, which hinders existing methods from fully exploiting diffusion priors. To address this challenge, we introduce ConsisSR to handle both semantic and pixel-level consistency. Specifically, compared to coarse-grained text prompts, we exploit the more powerful CLIP image embedding and effectively leverage both modalities through our Hybrid Prompt Adapter (HPA) for semantic guidance. Secondly, we introduce Time-aware Latent Augmentation (TALA) to mitigate the inherent gap between T2I generation and Real-ISR consistency requirements. By randomly mixing LQ and HQ latent inputs, our model not only handles timestep-specific diffusion noise but also refines the accumulated latent representations. Last but not least, our GAN-Embedding strategy employs the pretrained Real-ESRGAN model to refine the diffusion start point. This accelerates the inference process to 10 steps while preserving sampling quality, in a training-free manner. Our method demonstrates state-of-the-art performance among both full-scale and accelerated models. The code will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 499,701
1408.6828 | Cyclic dominance in evolutionary games: A review | Rock is wrapped by paper, paper is cut by scissors, and scissors are crushed by rock. This simple game is popular among children and adults to decide on trivial disputes that have no obvious winner, but cyclic dominance is also at the heart of predator-prey interactions, the mating strategy of side-blotched lizards, the overgrowth of marine sessile organisms, and the competition in microbial populations. Cyclical interactions also emerge spontaneously in evolutionary games entailing volunteering, reward, punishment, and in fact are common when the competing strategies are three or more regardless of the particularities of the game. Here we review recent advances on the rock-paper-scissors and related evolutionary games, focusing in particular on pattern formation, the impact of mobility, and the spontaneous emergence of cyclic dominance. We also review mean-field and zero-dimensional rock-paper-scissors models and the application of the complex Ginzburg-Landau equation, and we highlight the importance and usefulness of statistical physics for the successful study of large-scale ecological systems. Directions for future research, related for example to dynamical effects of coevolutionary rules and invasion reversals due to multi-point interactions, are outlined as well. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 35,666 |
2202.09269 | Quantification of Actual Road User Behavior on the Basis of Given Traffic Rules | Driving on roads is restricted by various traffic rules, aiming to ensure safety for all traffic participants. However, human road users usually do not adhere to these rules strictly, resulting in varying degrees of rule conformity. Such deviations from given rules are key components of today's road traffic. In autonomous driving, robotic agents can disturb traffic flow, when rule deviations are not taken into account. In this paper, we present an approach to derive the distribution of degrees of rule conformity from human driving data. We demonstrate our method with the Waymo Open Motion dataset and Safety Distance and Speed Limit rules. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 281,136
2310.14910 | Linear matrix inequality based Type-III compensator synthesis for DC-DC converters | Boost, buck-boost, and fly-back DC-DC converters which are utilized in power lines of any electric vehicles, solar energy, and power factor correction applications require control systems to regulate the output voltage under mismatched disturbances i.e. load current and input voltage. In continuous current mode operation, the converters, however, are bandwidth-limited control systems due to their non-minimum phase nature. Disturbance rejection performance of such bandwidth-limited control system is an open problem especially where input voltage and load current disturbances cannot be measured. A third-order integral-lead (Type-III) compensator with a disturbance observer (DOB) can suppress the disturbances and unmodeled dynamics of the converters. However, synthesizing such a fixed-order control system under performance constraints is generally challenging. This paper proposes a simultaneous design of a Type-III compensator and a fixed order DOB based on Hinf control approach using convex optimization. The optimization problem is formulated in a convex-concave procedure by including the estimated disturbance and sensor noise functions. We proposed a two-stage iterative algorithm to solve the problem in a convex optimization framework. Convex programming can therefore be used to synthesize an optimal fixed-order control system by removing the non-convex constraints on the parameter space. The approach leads to an easily resolvable control algorithm with linear matrix inequality constraints over parameterized controller parameters due to the convexity of the problem. The proposed control system is implemented on a 200W DC-DC multi-phase interleaved boost converter prototype using a TMS320F28335 digital signal processor. The performance of the approach is compared with the well-known K-factor design approach for the Type-III compensators. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 402,074
2011.04219 | Mitigating Bias in Set Selection with Noisy Protected Attributes | Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools are not discriminatory on the basis of protected attributes such as gender or race. Currently, fair subset selection algorithms assume that the protected attributes are known as part of the dataset. However, protected attributes may be noisy due to errors during data collection or if they are imputed (as is often the case in real-world settings). While a wide body of work addresses the effect of noise on the performance of machine learning algorithms, its effect on fairness remains largely unexamined. We find that in the presence of noisy protected attributes, in attempting to increase fairness without considering noise, one can, in fact, decrease the fairness of the result! Towards addressing this, we consider an existing noise model in which there is probabilistic information about the protected attributes (e.g., [58, 34, 20, 46]), and ask is fair selection possible under noisy conditions? We formulate a ``denoised'' selection problem which functions for a large class of fairness metrics; given the desired fairness goal, the solution to the denoised problem violates the goal by at most a small multiplicative amount with high probability. Although this denoised problem turns out to be NP-hard, we give a linear-programming based approximation algorithm for it. We evaluate this approach on both synthetic and real-world datasets. Our empirical results show that this approach can produce subsets which significantly improve the fairness metrics despite the presence of noisy protected attributes, and, compared to prior noise-oblivious approaches, has better Pareto-tradeoffs between utility and fairness. | false | false | false | false | false | true | true | false | false | false | false | false | false | true | false | false | false | true | 205,509
2309.09446 | Scalable Label-efficient Footpath Network Generation Using Remote Sensing Data and Self-supervised Learning | Footpath mapping, modeling, and analysis can provide important geospatial insights to many fields of study, including transport, health, environment and urban planning. The availability of robust Geographic Information System (GIS) layers can benefit the management of infrastructure inventories, especially at local government level with urban planners responsible for the deployment and maintenance of such infrastructure. However, many cities still lack real-time information on the location, connectivity, and width of footpaths, and/or employ costly and manual survey means to gather this information. This work designs and implements an automatic pipeline for generating footpath networks based on remote sensing images using machine learning models. The annotation of segmentation tasks, especially labeling remote sensing images with specialized requirements, is very expensive, so we aim to introduce a pipeline requiring less labeled data. Considering supervised methods require large amounts of training data, we use a self-supervised method for feature representation learning to reduce annotation requirements. Then the pre-trained model is used as the encoder of the U-Net for footpath segmentation. Based on the generated masks, the footpath polygons are extracted and converted to footpath networks which can be loaded and visualized by geographic information systems conveniently. Validation results indicate considerable consistency when compared to manually collected GIS layers. The footpath network generation pipeline proposed in this work is low-cost and extensible, and it can be applied where remote sensing images are available. Github: https://github.com/WennyXY/FootpathSeg. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 392,612
2107.08145 | Refactoring the MPS/University of Chicago Radiative MHD (MURaM) Model for GPU/CPU Performance Portability Using OpenACC Directives | The MURaM (Max Planck University of Chicago Radiative MHD) code is a solar atmosphere radiative MHD model that has been broadly applied to solar phenomena ranging from quiet to active sun, including eruptive events such as flares and coronal mass ejections. The treatment of physics is sufficiently realistic to allow for the synthesis of emission from visible light to extreme UV and X-rays, which is critical for a detailed comparison with available and future multi-wavelength observations. This component relies critically on the radiation transport solver (RTS) of MURaM; the most computationally intensive component of the code. The benefits of accelerating RTS are multiple fold: A faster RTS allows for the regular use of the more expensive multi-band radiation transport needed for comparison with observations, and this will pave the way for the acceleration of ongoing improvements in RTS that are critical for simulations of the solar chromosphere. We present challenges and strategies to accelerate a multi-physics, multi-band MURaM using a directive-based programming model, OpenACC in order to maintain a single source code across CPUs and GPUs. Results for a $288^3$ test problem show that MURaM with the optimized RTS routine achieves 1.73x speedup using a single NVIDIA V100 GPU over a fully subscribed 40-core Intel Skylake CPU node and with respect to the number of simulation points (in millions) per second, a single NVIDIA V100 GPU is equivalent to 69 Skylake cores. We also measure parallel performance on up to 96 GPUs and present weak and strong scaling results. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 246,631
2208.13836 | PGNAA Spectral Classification of Metal with Density Estimations | For environmental, sustainable economic and political reasons, recycling processes are becoming increasingly important, aiming at a much higher use of secondary raw materials. Currently, for the copper and aluminium industries, no method for the non-destructive online analysis of heterogeneous materials is available. The Prompt Gamma Neutron Activation Analysis (PGNAA) has the potential to overcome this challenge. A difficulty when using PGNAA for online classification arises from the small amount of noisy data, due to short-term measurements. In this case, classical evaluation methods using detailed peak by peak analysis fail. Therefore, we propose to view spectral data as probability distributions. Then, we can classify material using maximum log-likelihood with respect to kernel density estimation and use discrete sampling to optimize hyperparameters. For measurements of pure aluminium alloys we achieve near perfect classification in under 0.25 seconds. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 315,149
2408.09613 | How Do Social Bots Participate in Misinformation Spread? A Comprehensive Dataset and Analysis | Information spreads faster through social media platforms than traditional media, thus becoming an ideal medium to spread misinformation. Meanwhile, automated accounts, known as social bots, contribute more to the misinformation dissemination. In this paper, we explore the interplay between social bots and misinformation on the Sina Weibo platform. We propose a comprehensive and large-scale misinformation dataset, containing 11,393 pieces of misinformation and 16,416 pieces of unbiased real information with multiple modality information, with 952,955 related users. We propose a scalable weakly-supervised method to annotate social bots, obtaining 68,040 social bots and 411,635 genuine accounts. To the best of our knowledge, this dataset is the largest dataset containing misinformation and social bots. We conduct comprehensive experiments and analysis on this dataset. Results show that social bots play a central role in misinformation dissemination, participating in news discussions to amplify echo chambers, manipulate public sentiment, and reverse public stances. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 481,510
2310.09276 | Transformer-based Multimodal Change Detection with Multitask Consistency Constraints | Change detection plays a fundamental role in Earth observation for analyzing temporal iterations over time. However, recent studies have largely neglected the utilization of multimodal data that presents significant practical and technical advantages compared to single-modal approaches. This research focuses on leveraging pre-event digital surface model (DSM) data and post-event digital aerial images captured at different times for detecting change beyond 2D. We observe that the current change detection methods struggle with the multitask conflicts between semantic and height change detection tasks. To address this challenge, we propose an efficient Transformer-based network that learns shared representation between cross-dimensional inputs through cross-attention. It adopts a consistency constraint to establish the multimodal relationship. Initially, pseudo-changes are derived by employing height change thresholding. Subsequently, the $L_2$ distance between semantic and pseudo-changes within their overlapping regions is minimized. This explicitly endows the height change detection (regression task) and semantic change detection (classification task) with representation consistency. A DSM-to-image multimodal dataset encompassing three cities in the Netherlands was constructed. It lays a new foundation for beyond-2D change detection from cross-dimensional inputs. Compared to five state-of-the-art change detection methods, our model demonstrates consistent multitask superiority in terms of semantic and height change detection. Furthermore, the consistency strategy can be seamlessly adapted to the other methods, yielding promising improvements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 399,718
2403.16871 | Conformal Off-Policy Prediction for Multi-Agent Systems | Off-Policy Prediction (OPP), i.e., predicting the outcomes of a target policy using only data collected under a nominal (behavioural) policy, is a paramount problem in data-driven analysis of safety-critical systems where the deployment of a new policy may be unsafe. To achieve dependable off-policy predictions, recent work on Conformal Off-Policy Prediction (COPP) leverage the conformal prediction framework to derive prediction regions with probabilistic guarantees under the target process. Existing COPP methods can account for the distribution shifts induced by policy switching, but are limited to single-agent systems and scalar outcomes (e.g., rewards). In this work, we introduce MA-COPP, the first conformal prediction method to solve OPP problems involving multi-agent systems, deriving joint prediction regions for all agents' trajectories when one or more ego agents change their policies. Unlike the single-agent scenario, this setting introduces higher complexity as the distribution shifts affect predictions for all agents, not just the ego agents, and the prediction task involves full multi-dimensional trajectories, not just reward values. A key contribution of MA-COPP is to avoid enumeration or exhaustive search of the output space of agent trajectories, which is instead required by existing COPP methods to construct the prediction region. We achieve this by showing that an over-approximation of the true joint prediction region (JPR) can be constructed, without enumeration, from the maximum density ratio of the JPR trajectories. We evaluate the effectiveness of MA-COPP in multi-agent systems from the PettingZoo library and the F1TENTH autonomous racing environment, achieving nominal coverage in higher dimensions and various shift settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 441,225 |
2201.08744 | Impacts of Students Academic Performance Trajectories on Final Academic Success | Many studies in the field of education analytics have identified student grade point averages (GPA) as an important indicator and predictor of students' final academic outcomes (graduate or halt). And while semester-to-semester fluctuations in GPA are considered normal, significant changes in academic performance may warrant more thorough investigation and consideration, particularly with regards to final academic outcomes. However, such an approach is challenging due to the difficulties of representing complex academic trajectories over an academic career. In this study, we apply a Hidden Markov Model (HMM) to provide a standard and intuitive classification over students' academic-performance levels, which leads to a compact representation of academic-performance trajectories. Next, we explore the relationship between different academic-performance trajectories and their correspondence to final academic success. Based on student transcript data from University of Central Florida, our proposed HMM is trained using sequences of students' course grades for each semester. Through the HMM, our analysis follows the expected finding that higher academic performance levels correlate with lower halt rates. However, in this paper, we identify that there exist many scenarios in which both improving or worsening academic-performance trajectories actually correlate to higher graduation rates. This counter-intuitive finding is made possible through the proposed and developed HMM model. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 276,438
1807.09623 | Repartitioning of the ComplexWebQuestions Dataset | Recently, Talmor and Berant (2018) introduced ComplexWebQuestions - a dataset focused on answering complex questions by decomposing them into a sequence of simpler questions and extracting the answer from retrieved web snippets. In their work the authors used a pre-trained reading comprehension (RC) model (Salant and Berant, 2018) to extract the answer from the web snippets. In this short note we show that training a RC model directly on the training data of ComplexWebQuestions reveals a leakage from the training set to the test set that allows to obtain unreasonably high performance. As a solution, we construct a new partitioning of ComplexWebQuestions that does not suffer from this leakage and publicly release it. We also perform an empirical evaluation on these two datasets and show that training a RC model on the training data substantially improves state-of-the-art performance. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 103,763 |
2501.10190 | Temporal Causal Reasoning with (Non-Recursive) Structural Equation Models | Structural Equation Models (SEM) are the standard approach to representing causal dependencies between variables in causal models. In this paper we propose a new interpretation of SEMs when reasoning about Actual Causality, in which SEMs are viewed as mechanisms transforming the dynamics of exogenous variables into the dynamics of endogenous variables. This allows us to combine counterfactual causal reasoning with existing temporal logic formalisms, and to introduce a temporal logic, CPLTL, for causal reasoning about such structures. We show that the standard restriction to so-called \textit{recursive} models (with no cycles in the dependency graph) is not necessary in our approach, allowing us to reason about mutually dependent processes and feedback loops. Finally, we introduce new notions of model equivalence for temporal causal models, and show that CPLTL has an efficient model-checking procedure. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 525,431
2303.01067 | AI and the FCI: Can ChatGPT Project an Understanding of Introductory Physics? | ChatGPT is a groundbreaking ``chatbot''--an AI interface built on a large language model that was trained on an enormous corpus of human text to emulate human conversation. Beyond its ability to converse in a plausible way, it has attracted attention for its ability to competently answer questions from the bar exam and from MBA coursework, and to provide useful assistance in writing computer code. These apparent abilities have prompted discussion of ChatGPT as both a threat to the integrity of higher education and conversely as a powerful teaching tool. In this work we present a preliminary analysis of how two versions of ChatGPT (ChatGPT3.5 and ChatGPT4) fare in the field of first-semester university physics, using a modified version of the Force Concept Inventory (FCI) to assess whether it can give correct responses to conceptual physics questions about kinematics and Newtonian dynamics. We demonstrate that, by some measures, ChatGPT3.5 can match or exceed the median performance of a university student who has completed one semester of college physics, though its performance is notably uneven and the results are nuanced. By these same measures, we find that ChatGPT4's performance is approaching the point of being indistinguishable from that of an expert physicist when it comes to introductory mechanics topics. After the completion of our work we became aware of Ref [1], which preceded us to publication and which completes an extensive analysis of the abilities of ChatGPT3.5 in a physics class, including a different modified version of the FCI. We view this work as confirming that portion of their results, and extending the analysis to ChatGPT4, which shows rapid and notable improvement in most, but not all respects. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 348,819
2305.12389 | SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot Cross-lingual Information Extraction | Zero-shot cross-lingual information extraction (IE) aims at constructing an IE model for some low-resource target languages, given annotations exclusively in some rich-resource languages. Recent studies based on language-universal features have shown their effectiveness and are attracting increasing attention. However, prior work has neither explored the potential of establishing interactions between language-universal features and contextual representations nor incorporated features that can effectively model constituent span attributes and relationships between multiple spans. In this study, a syntax-augmented hierarchical interactive encoder (SHINE) is proposed to transfer cross-lingual IE knowledge. The proposed encoder is capable of interactively capturing complementary information between features and contextual information, to derive language-agnostic representations for various IE tasks. Concretely, a multi-level interaction network is designed to hierarchically interact the complementary information to strengthen domain adaptability. Besides, in addition to the well-studied syntax features of part-of-speech and dependency relation, a new syntax feature of constituency structure is introduced to model the constituent span information which is crucial for IE. Experiments across seven languages on three IE tasks and four benchmarks verify the effectiveness and generalization ability of the proposed method. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 365,973
2009.09559 | Clinical trial of an AI-augmented intervention for HIV prevention in youth experiencing homelessness | Youth experiencing homelessness (YEH) are subject to substantially greater risk of HIV infection, compounded both by their lack of access to stable housing and the disproportionate representation of youth of marginalized racial, ethnic, and gender identity groups among YEH. A key goal for health equity is to improve adoption of protective behaviors in this population. One promising strategy for intervention is to recruit peer leaders from the population of YEH to promote behaviors such as condom usage and regular HIV testing to their social contacts. This raises a computational question: which youth should be selected as peer leaders to maximize the overall impact of the intervention? We developed an artificial intelligence system to optimize such social network interventions in a community health setting. We conducted a clinical trial enrolling 713 YEH at drop-in centers in a large US city. The clinical trial compared interventions planned with the algorithm to those where the highest-degree nodes in the youths' social network were recruited as peer leaders (the standard method in public health) and to an observation-only control group. Results from the clinical trial show that youth in the AI group experience statistically significant reductions in key risk behaviors for HIV transmission, while those in the other groups do not. This provides, to our knowledge, the first empirical validation of the usage of AI methods to optimize social network interventions for health. We conclude by discussing lessons learned over the course of the project which may inform future attempts to use AI in community-level interventions. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 196,616
1912.06752 | Parameter-Conditioned Sequential Generative Modeling of Fluid Flows | The computational cost associated with simulating fluid flows can make it infeasible to run many simulations across multiple flow conditions. Building upon concepts from generative modeling, we introduce a new method for learning neural network models capable of performing efficient parameterized simulations of fluid flows. Evaluated on their ability to simulate both two-dimensional and three-dimensional fluid flows, trained models are shown to capture local and global properties of the flow fields at a wide array of flow conditions. Furthermore, flow simulations generated by the trained models are shown to be orders of magnitude faster than the corresponding computational fluid dynamics simulations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,415 |
1912.03877 | Bi-Semantic Reconstructing Generative Network for Zero-shot Learning | Many recent methods of zero-shot learning (ZSL) attempt to utilize a generative model to generate the unseen visual samples from semantic descriptions and random noise. Therefore, the ZSL problem becomes a traditional supervised classification problem. However, most of the existing methods based on the generative model only focus on the quality of synthesized samples at the training stage, and ignore the importance of the zero-shot recognition stage. In this paper, we consider both of the above points and propose a novel approach. Specifically, we select the Generative Adversarial Network (GAN) as our generative model. In order to improve the quality of synthesized samples, considering the internal relation of the semantic description in the semantic space as well as the fact that the seen and unseen visual information belong to different domains, we propose a bi-semantic reconstructing (BSR) component which contains two different semantic reconstructing regressors to guide the training of the GAN. Since the semantic descriptions are available during the training stage, to further improve the ability of the classifier, we combine the visual samples and semantic descriptions to train a classifier. At the recognition stage, we naturally utilize the BSR component to transfer the visual features and semantic descriptions, and concatenate them for classification. Experimental results show that our method outperforms the state of the art on several ZSL benchmark datasets with significant improvements. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 156,715 |
2409.12446 | Neural Networks Generalize on Low Complexity Data | We show that feedforward neural networks with ReLU activation generalize on low complexity data, suitably defined. Given i.i.d. data generated from a simple programming language, the minimum description length (MDL) feedforward neural network which interpolates the data generalizes with high probability. We define this simple programming language, along with a notion of description length of such networks. We provide several examples on basic computational tasks, such as checking primality of a natural number, and more. For primality testing, our theorem shows the following. Suppose that we draw an i.i.d. sample of $\Theta(N^{\delta}\ln N)$ numbers uniformly at random from $1$ to $N$, where $\delta\in (0,1)$. For each number $x_i$, let $y_i = 1$ if $x_i$ is a prime and $0$ if it is not. Then with high probability, the MDL network fitted to this data accurately answers whether a newly drawn number between $1$ and $N$ is a prime or not, with test error $\leq O(N^{-\delta})$. Note that the network is not designed to detect primes; minimum description learning discovers a network which does so. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 489,594 |
2209.08887 | Attentive Symmetric Autoencoder for Brain MRI Segmentation | Self-supervised learning methods based on image patch reconstruction have witnessed great success in training auto-encoders, whose pre-trained weights can be transferred to fine-tune other downstream tasks of image understanding. However, existing methods seldom study the various importance of reconstructed patches and the symmetry of anatomical structures, when they are applied to 3D medical images. In this paper we propose a novel Attentive Symmetric Auto-encoder (ASA) based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks. We conjecture that forcing the auto-encoder to recover informative image regions can harvest more discriminative representations, than to recover smooth image patches. Then we adopt a gradient based metric to estimate the importance of each image patch. In the pre-training stage, the proposed auto-encoder pays more attention to reconstruct the informative patches according to the gradient metrics. Moreover, we resort to the prior of brain structures and develop a Symmetric Position Encoding (SPE) method to better exploit the correlations between long-range but spatially symmetric regions to obtain effective features. Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models on three brain MRI segmentation benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 318,302 |
1907.06625 | Multi-scale Graph-based Grading for Alzheimer's Disease Prediction | The prediction of subjects with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD) is clinically relevant, and may above all have a significant impact on accelerating the development of new treatments. In this paper, we present a new MRI-based biomarker that enables us to predict conversion of MCI subjects to AD accurately. In order to better capture the AD signature, we introduce two main contributions. First, we present a new graph-based grading framework to combine inter-subject similarity features and intra-subject variability features. This framework involves patch-based grading of anatomical structures and graph-based modeling of structure alteration relationships. Second, we propose an innovative multiscale brain analysis to capture alterations caused by AD at different anatomical levels. Based on a cascade of classifiers, this multiscale approach enables the analysis of alterations of whole brain structures and hippocampus subfields at the same time. During our experiments using the ADNI-1 dataset, the proposed multiscale graph-based grading method obtained an area under the curve (AUC) of 81% to predict conversion of MCI subjects to AD within three years. Moreover, when combined with cognitive scores, the proposed method obtained an AUC of 85%. These results are competitive in comparison to state-of-the-art methods evaluated on the same dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 138,669 |
2204.10216 | Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics | How reliably an automatic summarization evaluation metric replicates human judgments of summary quality is quantified by system-level correlations. We identify two ways in which the definition of the system-level correlation is inconsistent with how metrics are used to evaluate systems in practice and propose changes to rectify this disconnect. First, we calculate the system score for an automatic metric using the full test set instead of the subset of summaries judged by humans, which is currently standard practice. We demonstrate how this small change leads to more precise estimates of system-level correlations. Second, we propose to calculate correlations only on pairs of systems that are separated by small differences in automatic scores which are commonly observed in practice. This allows us to demonstrate that our best estimate of the correlation of ROUGE to human judgments is near 0 in realistic scenarios. The results from the analyses point to the need to collect more high-quality human judgments and to improve automatic metrics when differences in system scores are small. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 292,711 |
1904.08499 | Co-regularized Multi-view Sparse Reconstruction Embedding for Dimension Reduction | With the development of information technology, we have witnessed an age of data explosion which produces a large variety of data filled with redundant information. Because dimension reduction is an essential tool which embeds high-dimensional data into a lower-dimensional subspace to avoid redundant information, it has attracted interest from researchers all over the world. However, faced with features from multiple views, it is difficult for most dimension reduction methods to fully comprehend multi-view features and integrate compatible and complementary information from these features to construct a low-dimensional subspace directly. Furthermore, most multi-view dimension reduction methods cannot handle features from nonlinear spaces with high dimensions. Therefore, how to construct a multi-view dimension reduction method which can deal with multi-view features from a high-dimensional nonlinear space is of vital importance but challenging. In order to address this problem, we propose a novel method named Co-regularized Multi-view Sparse Reconstruction Embedding (CMSRE) in this paper. By exploiting correlations of sparse reconstructions from multiple views, CMSRE is able to learn local sparse structures of nonlinear manifolds from multiple views and construct meaningful low-dimensional representations for them. Due to the proposed co-regularized scheme, correlations of sparse reconstructions from multiple views are preserved by CMSRE as much as possible. Furthermore, sparse representation produces more meaningful correlations between features from each single view, which helps CMSRE gain better performance. Various evaluations based on the applications of document classification, face recognition and image retrieval demonstrate the effectiveness of the proposed approach on multi-view dimension reduction. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,085 |
2009.05121 | Patient Cohort Retrieval using Transformer Language Models | We apply deep learning-based language models to the task of patient cohort retrieval (CR) with the aim to assess their efficacy. The task of CR requires the extraction of relevant documents from the electronic health records (EHRs) on the basis of a given query. Given the recent advancements in the field of document retrieval, we map the task of CR to a document retrieval task and apply various deep neural models implemented for the general domain tasks. In this paper, we propose a framework for retrieving patient cohorts using neural language models without the need of explicit feature engineering and domain expertise. We find that a majority of our models outperform the BM25 baseline method on various evaluation metrics. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 195,225 |
2407.14270 | T-Hop: A framework for studying the importance of path information in molecular graphs for chemical property prediction | This paper studies the usefulness of incorporating path information in predicting chemical properties from molecular graphs, in the domain of QSAR (Quantitative Structure-Activity Relationship). Towards this, we developed a GNN-style model which can be toggled to operate in one of two modes: a non-degenerate mode which incorporates path information, and a degenerate mode which leaves out path information. Thus, by comparing the performance of the non-degenerate mode versus the degenerate mode on relevant QSAR datasets, we were able to directly assess the significance of path information on those datasets. Our results corroborate previous works, by suggesting that the usefulness of path information is dataset-dependent. Unlike previous studies however, we took the very first steps towards building a model that could predict upfront whether or not path information would be useful for a given dataset at hand. Moreover, we also found that, despite its simplicity, the degenerate mode of our model yielded rather surprising results, which outperformed more sophisticated SOTA models in certain cases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 474,722 |
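The rows above follow the column schema listed at the top of the page: id, title, abstract, eighteen boolean category flags (cs.HC through Other), then __index_level_0__. A minimal sketch of parsing one such row with the Python standard library — the `parse_row` helper is illustrative, not part of the dataset, and the naive pipe-splitting assumes (as in the rows shown) that no abstract itself contains a `|` character:

```python
# Column order of the boolean category flags, as given in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def parse_row(line: str) -> dict:
    """Split an 'id | title | abstract | flags... | index |' row into a dict."""
    # Drop the trailing pipe, split on the delimiter, strip cell whitespace.
    fields = [f.strip() for f in line.strip().strip("|").split("|")]
    record = {
        "id": fields[0],
        "title": fields[1],
        "abstract": fields[2],
        # __index_level_0__ is rendered with thousands separators, e.g. 157,415.
        "index": int(fields[-1].replace(",", "")),
    }
    flags = fields[3:3 + len(CATEGORY_COLUMNS)]
    record["categories"] = [
        name for name, flag in zip(CATEGORY_COLUMNS, flags) if flag == "true"
    ]
    return record

# One of the rows above, with the abstract abbreviated for brevity.
row = ("1912.06752 | Parameter-Conditioned Sequential Generative Modeling of "
       "Fluid Flows | The computational cost ... | false | false | false | false "
       "| false | false | true | false | false | false | false | false | false "
       "| false | false | false | false | false | 157,415 |")
print(parse_row(row)["categories"])  # ['cs.LG']
```

Note that this treats the flags purely positionally; a row missing its id cell (such as the SHINE row above) would shift every field and needs repair before parsing.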