Columns (name: type, value range):
id: string, length 9–16
title: string, length 4–278
abstract: string, length 3–4.08k
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (one category flag per column)
__index_level_0__: int64, 0–541k
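The 18 boolean columns form a multi-hot category encoding: a record's labels are simply the columns whose flag is true. A minimal sketch of decoding them, assuming each row has been loaded as a plain dict (the `row` literal below is a hypothetical record, not taken from the data):

```python
# Column names copied from the schema above, in column order.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(row):
    """Return the category labels whose flag is true in `row`.

    Flags that are absent from the dict are treated as false.
    """
    return [c for c in CATEGORY_COLUMNS if row.get(c)]

# Hypothetical row: only the cs.CV flag is set.
row = {"id": "2112.07270", "cs.CV": True}
print(decode_labels(row))  # -> ['cs.CV']
```

Records with several true flags (multi-label papers) decode to a list of all their categories, in schema column order.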
2112.07270
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering
Answering semantically complicated questions about an image is challenging in the Visual Question Answering (VQA) task. Although the image can be well represented by deep learning, the question is often simply embedded and its meaning is not well captured. Moreover, the visual and textual features exhibit a cross-modality gap, making it difficult to align and exploit the information from both modalities. In this paper, we focus on these two problems and propose a Graph Matching Attention (GMA) network. First, it not only builds a graph for the image but also constructs a graph for the question in terms of both syntactic and embedding information. Next, we explore the intra-modality relationships with a dual-stage graph encoder and then present a bilateral cross-modality graph matching attention to infer the relationships between the image and the question. The updated cross-modality features are then sent to the answer prediction module for final answer prediction. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset. Ablation studies verify the effectiveness of each module in our GMA network.
categories: cs.CV
__index_level_0__: 271,432
2010.13588
Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale
Automatic evaluation of language generation systems is a well-studied problem in Natural Language Processing. While novel metrics are proposed every year, a few popular metrics remain as the de facto metrics to evaluate tasks such as image captioning and machine translation, despite their known limitations. This is partly due to ease of use, and partly because researchers expect to see them and know how to interpret them. In this paper, we urge the community to consider more carefully how models are automatically evaluated, by demonstrating important failure cases on multiple datasets, language pairs and tasks. Our experiments show that metrics (i) usually prefer system outputs to human-authored texts, (ii) can be insensitive to correct translations of rare words, (iii) can yield surprisingly high scores when given a single sentence as system output for the entire test set.
categories: cs.CL
__index_level_0__: 203,187
2206.12802
Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
A common method in training neural networks is to initialize all the weights to be independent Gaussian vectors. We observe that by instead initializing the weights into independent pairs, where each pair consists of two identical Gaussian vectors, we can significantly improve the convergence analysis. While a similar technique has been studied for random inputs [Daniely, NeurIPS 2020], it has not been analyzed with arbitrary inputs. Using this technique, we show how to significantly reduce the number of neurons required for two-layer ReLU networks, both in the under-parameterized setting with logistic loss, from roughly $\gamma^{-8}$ [Ji and Telgarsky, ICLR 2020] to $\gamma^{-2}$, where $\gamma$ denotes the separation margin with a Neural Tangent Kernel, as well as in the over-parameterized setting with squared loss, from roughly $n^4$ [Song and Yang, 2019] to $n^2$, implicitly also improving the recent running time bound of [Brand, Peng, Song and Weinstein, ITCS 2021]. For the under-parameterized setting we also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
categories: cs.LG, Other
__index_level_0__: 304,742
2206.03292
Learning-Based Motion Planning with Mixture Density Networks
The trade-off between computation time and path optimality is a key consideration in motion planning algorithms. While classical sampling-based algorithms fall short of computational efficiency in high-dimensional planning, learning-based methods have shown great potential in achieving time-efficient and optimal motion planning. State-of-the-art (SOTA) learning-based motion planning algorithms utilize paths generated by sampling-based methods as expert supervision data and train networks via regression techniques. However, these methods often overlook the important multimodal property of the optimal paths in the training set, making them incapable of finding good paths in some scenarios. In this paper, we propose a Multimodal Neuron Planner (MNP) based on mixture density networks that explicitly takes into account the multimodality of the training data and simultaneously achieves time efficiency and path optimality. For environments represented by a point cloud, MNP first efficiently compresses the point cloud into a latent vector by encoding networks that are suitable for processing point clouds. We then design multimodal planning networks that enable MNP to learn and predict multiple optimal solutions. Simulation results show that our method outperforms the SOTA learning-based method MPNet and the advanced sampling-based methods IRRT* and BIT*.
categories: cs.RO
__index_level_0__: 301,218
2304.00001
Determination of cutting positions of honeycomb blocks using computer vision
The article discusses a method for automating the process of cutting a honeycomb block, and specifically obtaining points and cutting angles for the required faces. The following requirements are taken into account in the calculations: the allowable location of the cut plane is 0.4 of the length of the cell face, the cut plane must be perpendicular to the cell wall. The algorithm itself consists of two main stages: determining the honeycomb structure and searching for cut points. In the absence of significant defects in honeycomb blocks (deformation of the cell profile and a dent on the edges of the cells), the structure determination algorithm works without significant inaccuracies. The results of the cut point search algorithm can be considered satisfactory.
categories: cs.CV
__index_level_0__: 355,519
2307.16780
Ranking-based Argumentation Semantics Applied to Logical Argumentation (full version)
In formal argumentation, a distinction can be made between extension-based semantics, where sets of arguments are either (jointly) accepted or not, and ranking-based semantics, where grades of acceptability are assigned to arguments. Another important distinction is that between abstract approaches, which abstract away from the content of arguments, and structured approaches, which specify a method of constructing argument graphs on the basis of a knowledge base. While ranking-based semantics have been extensively applied to abstract argumentation, little work has been done on ranking-based semantics for structured argumentation. In this paper, we make a systematic investigation into the behaviour of ranking-based semantics applied to existing formalisms for structured argumentation. We show that a wide class of ranking-based semantics gives rise to so-called culpability measures and is relatively robust to specific choices in argument construction methods.
categories: cs.AI
__index_level_0__: 382,732
2310.18515
Learning to design protein-protein interactions with enhanced generalization
Discovering mutations enhancing protein-protein interactions (PPIs) is critical for advancing biomedical research and developing improved therapeutics. While machine learning approaches have substantially advanced the field, they often struggle to generalize beyond training data in practical scenarios. The contributions of this work are three-fold. First, we construct PPIRef, the largest non-redundant dataset of 3D protein-protein interactions, enabling effective large-scale learning. Second, we leverage the PPIRef dataset to pre-train PPIformer, a new SE(3)-equivariant model generalizing across diverse protein-binder variants. We fine-tune PPIformer to predict effects of mutations on protein-protein interactions via a thermodynamically motivated adjustment of the pre-training loss function. Finally, we demonstrate the enhanced generalization of our new PPIformer approach by outperforming other state-of-the-art methods on new, non-leaking splits of standard labeled PPI mutational data and independent case studies optimizing a human antibody against SARS-CoV-2 and increasing the thrombolytic activity of staphylokinase.
categories: cs.LG
__index_level_0__: 403,574
2112.02219
Transferring Unconditional to Conditional GANs with Hyper-Modulation
GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required for the training of high-quality GANs are enormous, and the study of transfer learning of these models is therefore an urgent topic. Many of the available high-quality pretrained GANs are unconditional (like StyleGAN). For many applications, however, conditional GANs are preferable, because they provide more control over the generation process, despite often suffering more training difficulties. Therefore, in this paper, we focus on transferring from high-quality pretrained unconditional GANs to conditional GANs. This requires architectural adaptation of the pretrained GAN to perform the conditioning. To this end, we propose hyper-modulated generative networks that allow for shared and complementary supervision. To prevent the additional weights of the hypernetwork from overfitting, with subsequent mode collapse on small target domains, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the transfer, we apply contrastive learning in the discriminator, which effectively works on very limited batch sizes. In extensive experiments, we validate the efficiency of the hypernetworks, self-initialization and contrastive loss for knowledge transfer on standard benchmarks.
categories: cs.CV
__index_level_0__: 269,764
1910.04334
Optimal few-weight codes from simplicial complexes
Recently, infinite families of binary minimal and optimal linear codes were constructed from simplicial complexes by Hyun {\em et al}. Inspired by their work, we present two new constructions of codes over the ring $\Bbb F_2+u\Bbb F_2$ by employing simplicial complexes. When the simplicial complexes are all generated by a maximal element, we determine the Lee weight distributions of two classes of the codes over $\Bbb F_2+u\Bbb F_2$. Our results show that the codes have few Lee weights. Via the Gray map, we obtain an infinite family of binary codes meeting the Griesmer bound and a class of binary distance-optimal codes.
categories: cs.IT
__index_level_0__: 148,737
1806.01830
Relational Deep Reinforcement Learning
We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.
categories: cs.LG
__index_level_0__: 99,649
2409.10922
Anti-ESIA: Analyzing and Mitigating Impacts of Electromagnetic Signal Injection Attacks
Cameras are integral components of many critical intelligent systems. However, a growing threat, known as Electromagnetic Signal Injection Attacks (ESIA), poses a significant risk to these systems, where ESIA enables attackers to remotely manipulate images captured by cameras, potentially leading to malicious actions and catastrophic consequences. Despite the severity of this threat, the underlying reasons for ESIA's effectiveness remain poorly understood, and effective countermeasures are lacking. This paper aims to address these gaps by investigating ESIA from two distinct aspects: pixel loss and color strips. By analyzing these aspects separately on image classification tasks, we gain a deeper understanding of how ESIA can compromise intelligent systems. Additionally, we explore a lightweight solution to mitigate the effects of ESIA while acknowledging its limitations. Our findings provide valuable insights for future research and development in the field of camera security and intelligent systems.
categories: cs.CV, cs.CR
__index_level_0__: 488,932
2408.16879
MSLIQA: Enhancing Learning Representations for Image Quality Assessment through Multi-Scale Learning
No-Reference Image Quality Assessment (NR-IQA) remains a challenging task due to the diversity of distortions and the lack of large annotated datasets. Many studies have attempted to tackle these challenges by developing more accurate NR-IQA models, often employing complex and computationally expensive networks, or by bridging the domain gap between various distortions to enhance performance on test datasets. In our work, we improve the performance of a generic lightweight NR-IQA model by introducing a novel augmentation strategy that boosts its performance by almost 28\%. This augmentation strategy enables the network to better discriminate between different distortions in various parts of the image by zooming in and out. Additionally, the inclusion of test-time augmentation further enhances performance, making our lightweight network's results comparable to the current state-of-the-art models, simply through the use of augmentations.
categories: cs.CV, Other
__index_level_0__: 484,484
1309.1864
Timing estimation in distributed sensor and control systems with central processing
We consider the problem of estimating timing of measurements and actuation in distributed sensor and control systems with central processing. The focus is on direct timing estimation for scenarios where clock synchronization is not feasible or desirable. Models of the timing and of central and peripheral time stamps are motivated and derived from underlying clock and communication delay definitions and models. Heuristics for constructing a system time are presented, and it is outlined how the joint timing and plant state estimation can be handled. For a simple set of underlying clock and communication delay models, inclusion of peripheral unit time stamps is shown to reduce jitter, and it is argued that in general it will give significant jitter reduction. Finally, a numerical example is given of a contemporary system design.
categories: cs.SY
__index_level_0__: 26,904
1212.0763
Dynamic recommender system: using cluster-based biases to improve the accuracy of the predictions
It is today accepted that matrix factorization models allow a high quality of rating prediction in recommender systems. However, a major drawback of matrix factorization is its static nature, which results in a progressive decline in the accuracy of the predictions after each factorization. This is due to the fact that newly obtained ratings are not taken into account until a new factorization is computed, which cannot be done very often because of the high cost of matrix factorization. In this paper, aiming at improving the accuracy of recommender systems, we propose a cluster-based matrix factorization technique that enables online integration of new ratings. Thus, we significantly enhance the obtained predictions between two matrix factorizations. We use finer-grained user biases by clustering similar items into groups and allocating in these groups a bias to each user. Our experiments on large datasets demonstrate the efficiency of our approach.
categories: cs.IR, cs.LG, cs.DB
__index_level_0__: 20,120
2104.11914
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
The latest Deep Learning (DL) models for detection and classification have achieved an unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience. In contrast, symbolic AI systems that convert concepts into rules or symbols -- such as knowledge graphs -- are easier to explain. However, they present lower generalisation and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is by leveraging the best of both streams without obviating domain expert knowledge. We tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment of machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process to serve as a sound basis for explainability. The X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: 1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional CNN that makes use of symbolic representations, and 2) SHAP-Backprop, an explainable AI-informed training procedure that guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology using the MonuMAI dataset for monument facade image classification, and demonstrate that our approach improves explainability and performance.
categories: cs.AI, cs.LG, cs.CV, Other
__index_level_0__: 232,059
2108.03673
RECALL: Replay-based Continual Learning in Semantic Segmentation
Deep networks achieve outstanding results in semantic segmentation; however, they need to be trained in a single shot with a large amount of data. Continual learning settings, where new classes are learned in incremental steps and previous training data is no longer available, are challenging due to the catastrophic forgetting phenomenon. Existing approaches typically fail when several incremental steps are performed or in the presence of a distribution shift of the background class. We tackle these issues by recreating no longer available data for the old classes and outlining a content inpainting scheme on the background class. We propose two sources for replay data. The first resorts to a generative adversarial network to sample from the class space of past learning steps. The second relies on web-crawled data to retrieve images containing examples of old classes from online databases. In both scenarios, no samples of past steps are stored, thus avoiding privacy concerns. Replay data are then blended with new samples during the incremental steps. Our approach, RECALL, outperforms state-of-the-art methods.
categories: cs.CV
__index_level_0__: 249,745
2502.11559
Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models
Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack adaptability to evolving societal norms. Instruction-based approaches offer flexibility but often compromise task performance. To address these limitations, we propose $\textit{FaIRMaker}$, an automated and model-independent framework that employs an $\textbf{auto-search and refinement}$ paradigm to adaptively generate Fairwords, which act as instructions integrated into input queries to reduce gender bias and enhance response quality. Extensive experiments demonstrate that $\textit{FaIRMaker}$ automatically searches for and dynamically refines Fairwords, effectively mitigating gender bias while preserving task integrity and ensuring compatibility with both API-based and open-source LLMs.
categories: cs.AI, cs.CL
__index_level_0__: 534,439
2310.17949
Instance Segmentation under Occlusions via Location-aware Copy-Paste Data Augmentation
Occlusion is a long-standing problem in computer vision, particularly in instance segmentation. ACM MMSports 2023 DeepSportRadar has introduced a dataset that focuses on segmenting human subjects within a basketball context and a specialized evaluation metric for occlusion scenarios. Given the modest size of the dataset and the highly deformable nature of the objects to be segmented, this challenge demands the application of robust data augmentation techniques and wisely-chosen deep learning architectures. Our work (ranked 1st in the competition) first proposes a novel data augmentation technique, capable of generating more training samples with wider distribution. Then, we adopt a new architecture - Hybrid Task Cascade (HTC) framework with CBNetV2 as backbone and MaskIoU head to improve segmentation performance. Furthermore, we employ a Stochastic Weight Averaging (SWA) training strategy to improve the model's generalization. As a result, we achieve a remarkable occlusion score (OM) of 0.533 on the challenge dataset, securing the top-1 position on the leaderboard. Source code is available at this https://github.com/nguyendinhson-kaist/MMSports23-Seg-AutoID.
categories: cs.CV
__index_level_0__: 403,350
1708.01910
Empathy in Bimatrix Games
Although the definition of what empathetic preferences exactly are is still evolving, there is a general consensus in the psychology, science and engineering communities that the evolution toward players' behaviors in interactive decision-making problems will be accompanied by the exploitation of their empathy, sympathy, compassion, antipathy, spitefulness, selfishness, altruism, and self-abnegating states in the payoffs. In this article, we study one-shot bimatrix games from a psychological game theory viewpoint. A new empathetic payoff model is calculated to fit empirical observations, and both pure and mixed equilibria are investigated. For a realized empathy structure, the bimatrix game is categorized into one of four generic classes of games. A number of interesting results are derived. A notable level of involvement can be observed in the empathetic one-shot game compared to the non-empathetic one, and this holds even for games with dominated strategies. Partial altruism can help in breaking symmetry, in reducing payoff inequality, and in selecting social welfare and more efficient outcomes. By contrast, partial spite and self-abnegation may worsen payoff equity. Empathetic evolutionary game dynamics are introduced to capture the resulting empathetic evolutionarily stable strategies under a wide range of revision protocols, including Brown-von Neumann-Nash, Smith, imitation, replicator, and hybrid dynamics. Finally, mutual support and the Berge solution are investigated and their connection with empathetic preferences is established. We show that pure altruism is logically inconsistent; only by balancing it with some partial selfishness does it create a consistent psychology.
categories: cs.LG, Other
__index_level_0__: 78,480
2411.13140
Robust Convergency Indicator using High-dimension PID Controller in the presence of disturbance
The PID controller currently occupies a prominent position as the most prevalent control architecture and has achieved groundbreaking success across extensive applications. However, online regulation of its parameters remains a formidable challenge. The majority of existing theories hinge on a linear time-invariant system structure and contemplate only Single-Input, Single-Output (SISO) scenarios. Limited research has been conducted on the intricate PID control problem within high-dimensional, Multi-Input, Multi-Output (MIMO) nonlinear systems that incorporate disturbances. This research, providing insights into the velocity form of the nonlinear system, aims to bolster the controller's robustness. It establishes a quantitative metric to assess the robustness of high-dimensional PID controllers, elucidates the pivotal theory regarding the impact of robustness on exponential convergence of the error, and introduces a localized compensation strategy to optimize the robustness indicator. Guided by these theoretical insights, we develop a robust high-dimensional PID (RH-PID) controller without the crutch of oversimplifying assumptions. Experimental results demonstrate the controller's commendable exponential stabilization efficacy, and the controller exhibits exceptional robustness under the robust indicator's guidance. Notably, the robust convergence indicator can also effectively evaluate comprehensive performance.
categories: cs.SY
__index_level_0__: 509,689
2001.02337
Multi-Agent Deep Reinforcement Learning for Cooperative Connected Vehicles
Millimeter-wave (mmWave) base stations can offer abundant high-capacity channel resources to connected vehicles so that their quality-of-service (QoS) in terms of downlink throughput can be highly improved. The mmWave base station can operate among existing base stations (e.g., macro-cell base stations) on non-overlapping channels, and the vehicles can decide which base station to associate with and which channel to utilize in the heterogeneous network. Furthermore, because of the non-omnidirectional property of mmWave communication, the vehicles must decide how to align the beam direction toward the mmWave base station in order to associate with it. However, this joint problem requires high computational cost; it is NP-hard and has combinatorial features. In this paper, we solve the problem in a 3-tier heterogeneous vehicular network (HetVNet) with multi-agent deep reinforcement learning (DRL) in a way that maximizes the expected total reward (i.e., downlink throughput) of the vehicles. The multi-agent deep deterministic policy gradient (MADDPG) approach is introduced to achieve an optimal policy in the continuous action domain.
categories: cs.LG
__index_level_0__: 159,710
2105.15010
Query Attack by Multi-Identity Surrogates
Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks, while the existing black-box attacks require extensive queries on the victim DNN to achieve high success rates. For query-efficiency, surrogate models of the victim are used to generate transferable Adversarial Examples (AEs) because of their Gradient Similarity (GS), i.e., the surrogates' attack gradients are similar to the victim's. However, it is generally neglected to exploit their similarity on outputs, namely the Prediction Similarity (PS), to filter out inefficient queries by surrogates without querying the victim. To jointly utilize and also optimize surrogates' GS and PS, we develop QueryNet, a unified attack framework that can significantly reduce queries. QueryNet creatively attacks by multi-identity surrogates, i.e., it crafts several AEs for one sample by different surrogates, and also uses surrogates to decide on the most promising AE for the query. After that, the victim's query feedback is accumulated to optimize not only the surrogates' parameters but also their architectures, enhancing both the GS and the PS. Although QueryNet has no access to pre-trained surrogates' priors, it reduces queries by about an order of magnitude on average compared to alternatives within an acceptable time, according to our comprehensive experiments: 11 victims (including two commercial models) on MNIST/CIFAR10/ImageNet, allowing only 8-bit image queries, and no access to the victim's training data. The code is available at https://github.com/Sizhe-Chen/QueryNet.
categories: cs.LG, cs.CR
__index_level_0__: 237,882
2306.15115
Energy Sufficiency in Unknown Environments via Control Barrier Functions
Maintaining energy sufficiency of a battery-powered robot system is essential for long-term missions. This capability should be flexible enough to deal with different types of environments and a wide range of missions, while constantly guaranteeing that the robot does not run out of energy. In this work we present a framework based on Control Barrier Functions (CBFs) that provides an energy sufficiency layer that can be applied on top of any path planner and provides guarantees on the robot's energy consumption during mission execution. In practice, we smooth the output of a generic path planner using double sigmoid functions and then use CBFs to ensure energy sufficiency along the smoothed path, for robots described by single integrator and unicycle kinematics. We present results using a physics-based robot simulator, as well as with real robots with a full localization and mapping stack, to show the validity of our approach.
categories: cs.RO, cs.SY
__index_level_0__: 375,913
2012.01128
Continuous Subject-in-the-Loop Integration: Centering AI on Marginalized Communities
Despite its utopian promises as a disruptive equalizer, AI - like most tools deployed under the guise of neutrality - has tended to simply reinforce existing social structures. To counter this trend, radical AI calls for centering on the marginalized. We argue that gaps in key infrastructure are preventing the widespread adoption of radical AI, and propose a guiding principle for both identifying these infrastructure gaps and evaluating whether proposals for new infrastructure effectively center marginalized voices.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
209,331
1711.01471
Robust Convergence of Power Flow using Tx Stepping Method with Equivalent Circuit Formulation
Robust solving of critical large power flow cases (with 50k or greater buses) forms the backbone of planning and operation of any large connected power grid. At present, reliable convergence with applications of existing power flow tools to large power systems is contingent upon a good initial guess for the system state. To enable robust convergence for large scale systems starting with an arbitrary initial guess, we extend our equivalent circuit formulation for power flow analysis to include a novel continuation method based on transmission line (Tx) stepping. While various continuation methods have been proposed for use with the traditional PQV power flow formulation, these methods have either failed to completely solve the problem or have resulted in convergence to a low voltage solution. The proposed Tx Stepping method in this paper demonstrates robust convergence to the high voltage solution from an arbitrary initial guess. Example systems, including 75k+ bus test cases representing different loading and operating conditions for the Eastern Interconnection of the U.S. power grid, are solved from arbitrary initial guesses.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
83,893
2302.13693
Learning Topology-Specific Experts for Molecular Property Prediction
Recently, graph neural networks (GNNs) have been successfully applied to predicting molecular properties, which is one of the most classical cheminformatics tasks with various applications. Despite their effectiveness, we empirically observe that training a single GNN model for diverse molecules with distinct structural patterns limits its prediction performance. In this paper, motivated by this observation, we propose TopExpert to leverage topology-specific prediction models (referred to as experts), each of which is responsible for each molecular group sharing similar topological semantics. That is, each expert learns topology-specific discriminative features while being trained with its corresponding topological group. To tackle the key challenge of grouping molecules by their topological patterns, we introduce a clustering-based gating module that assigns an input molecule into one of the clusters and further optimizes the gating module with two different types of self-supervision: topological semantics induced by GNNs and molecular scaffolds, respectively. Extensive experiments demonstrate that TopExpert has boosted the performance for molecular property prediction and also achieved better generalization for new molecules with unseen scaffolds than baselines. The code is available at https://github.com/kimsu55/ToxExpert.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
348,031
1609.03106
On Some Universally Good Fractional Repetition Codes
Data storage in Distributed Storage Systems (DSSs) is a multidimensional optimization problem. Using network coding, one wants to provide reliability, scalability, security, reduced storage overhead, reduced bandwidth for repair and minimal disk I/O etc. in such systems. Regenerating codes have been used to optimize some of these parameters, where a file can be reconstructed by contacting any k nodes in the system and in case of node failure it can be repaired by using any d nodes in the system. This was further generalized to Fractional repetition (FR) codes (a smart replication of encoded packets) on n nodes which also provides optimized disk I/O and where a node failure can be repaired by contacting some specific set of nodes in the system. Several constructions of FR codes using graphs and combinatorial designs are known. In particular, some constructions of codes for heterogeneous DSSs are given using partial regular graphs (where the number of packets on each node is different) and ring construction. In this work, we show that the codes constructed using the partial regular graph are universally good codes. Further, we found several universally good codes using ring construction and t-construction.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
60,826
2304.13596
Video Frame Interpolation with Densely Queried Bilateral Correlation
Video Frame Interpolation (VFI) aims to synthesize non-existent intermediate frames between existent frames. Flow-based VFI algorithms estimate intermediate motion fields to warp the existent frames. Real-world motions' complexity and the reference frame's absence make motion estimation challenging. Many state-of-the-art approaches explicitly model the correlations between two neighboring frames for more accurate motion estimation. In common approaches, the receptive field of correlation modeling at higher resolution depends on the motion fields estimated beforehand. Such receptive field dependency makes common motion estimation approaches poor at coping with small and fast-moving objects. To better model correlations and to produce more accurate motion fields, we propose the Densely Queried Bilateral Correlation (DQBC) that gets rid of the receptive field dependency problem and thus is more friendly to small and fast-moving objects. The motion fields generated with the help of DQBC are further refined and up-sampled with context features. After the motion fields are fixed, a CNN-based SynthNet synthesizes the final interpolated frame. Experiments show that our approach enjoys higher accuracy and less inference time than the state-of-the-art. Source code is available at https://github.com/kinoud/DQBC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
360,635
2204.08645
Artificial Intelligence for Imaging Cherenkov Detectors at the EIC
Imaging Cherenkov detectors form the backbone of particle identification (PID) at the future Electron Ion Collider (EIC). Currently all the designs for the first EIC detector proposal use a dual Ring Imaging CHerenkov (dRICH) detector in the hadron endcap, a Detector for Internally Reflected Cherenkov (DIRC) light in the barrel, and a modular RICH (mRICH) in the electron endcap. These detectors involve optical processes with many photons that need to be tracked through complex surfaces at the simulation level, while for reconstruction they rely on pattern recognition of ring images. This proceeding summarizes ongoing efforts and possible applications of AI for imaging Cherenkov detectors at EIC. In particular we will provide the example of the dRICH for the AI-assisted design and of the DIRC for simulation and particle identification from complex patterns and discuss possible advantages of using AI.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
292,160
2305.16943
DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models
Existing NAS methods suffer from an excessive amount of time for repetitive sampling and training of many task-irrelevant architectures. To tackle such limitations of existing NAS methods, we propose a paradigm shift from NAS to a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG. Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them. Moreover, with the guidance of parameterized predictors, DiffusionNAG can flexibly generate task-optimal architectures with the desired properties for diverse tasks, by sampling from a region that is more likely to satisfy the properties. This conditional NAG scheme is significantly more efficient than previous NAS schemes which sample the architectures and filter them using the property predictors. We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS. DiffusionNAG achieves superior performance with speedups of up to 35 times when compared to the baselines on Transferable NAS benchmarks. Furthermore, when integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset. Code is available at https://github.com/CownowAn/DiffusionNAG.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
368,334
2105.10609
Time-Gated Photon Counting Receivers for Optical Wireless Communication
Photon counting detectors such as single-photon avalanche diode (SPAD) arrays are commonly considered for reliable optical wireless communication at power limited regimes. However, SPAD-based receivers suffer from significant dead time induced intersymbol interference (ISI) especially when the incident photon rate is relatively high and the dead time is comparable or even larger than the symbol duration, i.e., sub-dead-time regime. In this work, we propose a novel time-gated SPAD receiver to mitigate such ISI effects and improve the communication performance. When operated in the gated mode, the SPAD can be activated and deactivated in well-defined time intervals. We investigate the statistics of the detected photon count for the proposed time-gated SPAD receiver. It is demonstrated that the gate-ON time interval can be optimized to achieve the best bit error rate (BER) performance. Our extensive performance analysis illustrates the superiority of the time-gated SPAD receiver over the traditional free-running receiver in terms of the BER performance and the tolerance to background light.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
236,448
1812.04352
Layer-Parallel Training of Deep Residual Neural Networks
Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers by a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that the new approach achieves similar training performance to traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
116,195
2110.13502
Shared Independent Component Analysis for Multi-Subject Neuroimaging
We consider shared response modeling, a multi-view learning problem where one wants to identify common components from multiple datasets or views. We introduce Shared Independent Component Analysis (ShICA) that models each view as a linear transform of shared independent components contaminated by additive Gaussian noise. We show that this model is identifiable if the components are either non-Gaussian or have enough diversity in noise variances. We then show that in some cases multi-set canonical correlation analysis can recover the correct unmixing matrices, but that even a small amount of sampling noise makes Multiset CCA fail. To solve this problem, we propose to use joint diagonalization after Multiset CCA, leading to a new approach called ShICA-J. We show via simulations that ShICA-J leads to improved results while being very fast to fit. While ShICA-J is based on second-order statistics, we further propose to leverage non-Gaussianity of the components using a maximum-likelihood method, ShICA-ML, that is both more accurate and more costly. Further, ShICA comes with a principled method for shared components estimation. Finally, we provide empirical evidence on fMRI and MEG datasets that ShICA yields more accurate estimation of the components than alternatives.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,207
2101.02327
A design of human-like robust AI machines in object identification
This is a perspective paper inspired by the study of the Turing Test proposed by A.M. Turing (23 June 1912 - 7 June 1954) in 1950. Following one important implication of the Turing Test for enabling a machine with a human-like behavior or performance, we define human-like robustness (HLR) for AI machines. The new definition aims to enforce AI machines with HLR, including evaluating them in terms of HLR. We discuss only one specific task, object identification, because it is the most common task for every person in daily life. Similar to the perspective, or design, position taken by Turing, we provide a solution for how to achieve HLR AI machines without constructing them and conducting real experiments. The solution should consist of three important features in the machines. The first feature of HLR machines is to utilize common sense from humans for realizing a causal inference. The second feature is to make a decision from a semantic space to provide interpretations of the decision. The third feature is to include a "human-in-the-loop" setting for advancing HLR machines. We show an "identification game" using the proposed design of HLR machines. The present paper shows an attempt to learn and explore further from the Turing Test towards the design of human-like AI machines.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
214,583
1010.0237
Using Stochastic Models to Describe and Predict Social Dynamics of Web Users
Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both hosts of social media content and its consumers. Accurate and timely prediction would enable hosts to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions between content quality and how the social media site chooses to highlight content. Moreover, most social media sites also selectively present content that has been highly rated by similar users, whose similarity is indicated implicitly by their behavior or explicitly by links in a social network. While these factors make it difficult to predict popularity \emph{a priori}, we show that stochastic models of user behavior on these sites allows predicting popularity based on early user reactions to new content. By incorporating the various mechanisms through which web sites display content, such models improve on predictions based on simply extrapolating from the early votes. Using data from one such site, the news aggregator Digg, we show how a stochastic model of user behavior distinguishes the effect of the increased visibility due to the network from how interested users are in the content. We find a wide range of interest, identifying stories primarily of interest to users in the network (``niche interests'') from those of more general interest to the user community. This distinction is useful for predicting a story's eventual popularity from users' early reactions to the story.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
7,752
1312.5106
Codes between MBR and MSR Points with Exact Repair Property
In this paper distributed storage systems with exact repair are studied. A construction for regenerating codes between the minimum storage regenerating (MSR) and the minimum bandwidth regenerating (MBR) points is given. To the best of the author's knowledge, no previous construction of exact-regenerating codes between the MBR and MSR points has been done except in the work by Tian et al. In contrast to their work, the methods used here are elementary. It is shown that when the parameters $n$, $k$, and $d$ are close to each other, the given construction is close to optimal compared to the known functional repair capacity. This is done by showing that when the distances between the parameters $n$, $k$, and $d$ are fixed but the actual values approach infinity, the ratio of the performance of the constructed exact-repair codes to the known capacity of functional-repair codes approaches one. Also a simple variation of the constructed codes with almost the same performance is given.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
29,202
1607.04673
Unifying Registration based Tracking: A Case Study with Structural Similarity
This paper adapts a popular image quality measure called structural similarity for high precision registration based tracking while also introducing a simpler and faster variant of the same. Further, these are evaluated comprehensively against existing measures using a unified approach to study registration based trackers that decomposes them into three constituent sub modules - appearance model, state space model and search method. Several popular trackers in the literature are broken down using this method, so that their contributions are shown to be limited to only one or two of these submodules. An open source tracking framework is made available that follows this decomposition closely through extensive use of generic programming. It is used to perform all experiments on four publicly available datasets so the results are easily reproducible. This framework provides a convenient interface to plug in a new method for any sub module and combine it with existing methods for the other two. It can also serve as a fast and flexible solution for practical tracking needs due to its highly efficient implementation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,644
2311.00116
BERTwich: Extending BERT's Capabilities to Model Dialectal and Noisy Text
Real-world NLP applications often deal with nonstandard text (e.g., dialectal, informal, or misspelled text). However, language models like BERT deteriorate in the face of dialect variation or noise. How do we push BERT's modeling capabilities to encompass nonstandard text? Fine-tuning helps, but it is designed for specializing a model to a task and does not seem to bring about the deeper, more pervasive changes needed to adapt a model to nonstandard language. In this paper, we introduce the novel idea of sandwiching BERT's encoder stack between additional encoder layers trained to perform masked language modeling on noisy text. We find that our approach, paired with recent work on including character-level noise in fine-tuning data, can promote zero-shot transfer to dialectal text, as well as reduce the distance in the embedding space between words and their noisy counterparts.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
404,513
2402.10229
Mixture-Models: a one-stop Python Library for Model-based Clustering using various Mixture Models
\texttt{Mixture-Models} is an open-source Python library for fitting Gaussian Mixture Models (GMM) and their variants, such as Parsimonious GMMs, Mixture of Factor Analyzers, MClust models, Mixture of Student's t distributions, etc. It streamlines the implementation and analysis of these models using various first/second order optimization routines such as Gradient Descent and Newton-CG through automatic differentiation (AD) tools. This helps in extending these models to high-dimensional data, which is a first among Python libraries. The library provides user-friendly model evaluation tools, such as BIC, AIC, and log-likelihood estimation. The source code is licensed under the MIT license and can be accessed at \url{https://github.com/kasakh/Mixture-Models}. The package is highly extensible, allowing users to incorporate new distributions and optimization techniques with ease. We conduct a large scale simulation to compare the performance of various gradient based approaches against Expectation Maximization on a wide range of settings and identify the corresponding best suited approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
429,881
2006.03265
Understanding Self-Attention of Self-Supervised Audio Transformers
Self-supervised Audio Transformers (SAT) have enabled great success in many downstream speech applications like ASR, but how they work has not been widely explored yet. In this work, we present multiple strategies for the analysis of attention mechanisms in SAT. We categorize attentions into explainable categories, where we discover each category possesses its own unique functionality. We provide a visualization tool for understanding multi-head self-attention, importance ranking strategies for identifying critical attention, and attention refinement techniques to improve model performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
180,273
2502.14018
I Want 'Em All (At Once) -- Ultrametric Cluster Hierarchies
Hierarchical clustering is a powerful tool for exploratory data analysis, organizing data into a tree of clusterings from which a partition can be chosen. This paper generalizes these ideas by proving that, for any reasonable hierarchy, one can optimally solve any center-based clustering objective over it (such as $k$-means). Moreover, these solutions can be found exceedingly quickly and are themselves necessarily hierarchical. Thus, given a cluster tree, we show that one can quickly access a plethora of new, equally meaningful hierarchies. Just as in standard hierarchical clustering, one can then choose any desired partition from these new hierarchies. We conclude by verifying the utility of our proposed techniques across datasets, hierarchies, and partitioning schemes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
535,623
2311.04519
Synergy Among Flexible Demands: Forming a Coalition to Earn More from Reserve Market
We address potential synergy among flexible demands and how they may earn more collectively than individually by forming a coalition and bidding to the reserve market. We consider frequency-supporting ancillary service markets, particularly the manual Frequency Restoration Reserve (mFRR) market. The coalition of flexible demands provides more reliable mFRR services and, in comparison to individual demands, is penalized less for potential failures and paid more for successful activations. This synergy effect is quantified as a function of the number of homogeneous assets in the coalition. A subsequent payment allocation mechanism using Shapley values is proposed to distribute the total earnings of the coalition among demands, while incentivizing them to remain in the coalition. For our numerical study, we use real price data from the Danish mFRR market in 2022.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
406,255
1712.02898
Representations of Sound in Deep Learning of Audio Features from Music
The work of a single musician, group or composer can vary widely in terms of musical style. Indeed, different stylistic elements, from performance medium and rhythm to harmony and texture, are typically exploited and developed across an artist's lifetime. Yet, there is often a discernable character to the work of, for instance, individual composers at the perceptual level - an experienced listener can often pick up on subtle clues in the music to identify the composer or performer. Here we suggest that a convolutional network may learn these subtle clues or features given an appropriate representation of the music. In this paper, we apply a deep convolutional neural network to a large audio dataset and empirically evaluate its performance on audio classification tasks. Our trained network demonstrates accurate performance on such classification tasks when presented with 5 s examples of music obtained by simple transformations of the raw audio waveform. A particularly interesting example is the spectral representation of music obtained by application of a logarithmically spaced filter bank, mirroring the early stages of auditory signal transduction in mammals. The most successful representation of music to facilitate discrimination was obtained via a random matrix transform (RMT). Networks based on logarithmic filter banks and RMT were able to correctly guess the one composer out of 31 possibilities in 68 and 84 percent of cases respectively.
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
86,362
2412.10517
Uncertainty propagation of stochastic hybrid systems: a case study for types of jump
Stochastic hybrid systems are dynamic systems that undergo both random continuous-time flows and random discrete jumps. Depending on how randomness is introduced into the continuous dynamics, discrete transitions, or both, stochastic hybrid systems exhibit distinct characteristics. This paper investigates the role of uncertainties in the interplay between continuous flows and discrete jumps by studying probability density propagation. Specifically, we formulate stochastic Koopman/Frobenius-Perron operators for three types of one-dimensional stochastic hybrid systems to uncover their unique dynamic characteristics and verify them using Monte Carlo simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
516,974
2206.01008
Approximate Network Motif Mining Via Graph Learning
Frequent and structurally related subgraphs, also known as network motifs, are valuable features of many graph datasets. However, the high computational complexity of identifying motif sets in arbitrary datasets (motif mining) has limited their use in many real-world datasets. By automatically leveraging statistical properties of datasets, machine learning approaches have shown promise in several tasks with combinatorial complexity and are therefore a promising candidate for network motif mining. In this work we seek to facilitate the development of machine learning approaches aimed at motif mining. We propose a formulation of the motif mining problem as a node labelling task. In addition, we build benchmark datasets and evaluation metrics which test the ability of models to capture different aspects of motif discovery such as motif number, size, topology, and scarcity. Next, we propose MotiFiesta, a first attempt at solving this problem in a fully differentiable manner with promising results on challenging baselines. Finally, we demonstrate through MotiFiesta that this learning setting can be applied simultaneously to general-purpose data mining and interpretable feature extraction for graph classification tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
300,326
1109.1133
Color Texture Classification Approach Based on Combination of Primitive Pattern Units and Statistical Features
Texture classification has been one of the problems receiving much attention from image processing scientists since the late 80s. Consequently, many different methods have since been proposed to solve this problem. In most of these methods the researchers attempted to describe and discriminate textures based on linear and non-linear patterns. The linear and non-linear patterns in any window are based on the formation of grain components in a particular order. A grain component is a primitive unit of morphology, and the most meaningful information often appears in the form of its occurrences. The approach proposed in this paper analyzes the texture based on its grain components, then builds a grain-component histogram and extracts statistical features from it to classify the textures. Finally, to increase the accuracy of classification, the proposed approach is extended to color images to utilize its ability to analyze each RGB channel individually. Although this approach is a general one and could be used in different applications, the method has been tested on stone textures and the results demonstrate the quality of the approach.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
11,996
2311.02787
Make a Donut: Hierarchical EMD-Space Planning for Zero-Shot Deformable Manipulation with Tools
Deformable object manipulation stands as one of the most captivating yet formidable challenges in robotics. While previous techniques have predominantly relied on learning latent dynamics through demonstrations, typically represented as either particles or images, there exists a pertinent limitation: acquiring suitable demonstrations, especially for long-horizon tasks, can be elusive. Moreover, basing learning entirely on demonstrations can hamper the model's ability to generalize beyond the demonstrated tasks. In this work, we introduce a demonstration-free hierarchical planning approach capable of tackling intricate long-horizon tasks without necessitating any training. We employ large language models (LLMs) to articulate a high-level, stage-by-stage plan corresponding to a specified task. For every individual stage, the LLM provides both the tool's name and the Python code to craft intermediate subgoal point clouds. With the tool and subgoal for a particular stage at our disposal, we present a granular closed-loop model predictive control strategy. This leverages Differentiable Physics with Point-to-Point correspondence (DiffPhysics-P2P) loss in the earth mover distance (EMD) space, applied iteratively. Experimental findings affirm that our technique surpasses multiple benchmarks in dough manipulation, spanning both short and long horizons. Remarkably, our model demonstrates robust generalization capabilities to novel and previously unencountered complex tasks without any preliminary demonstrations. We further substantiate our approach with experimental trials on real-world robotic platforms. Our project page: https://qq456cvb.github.io/projects/donut.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
405,580
2011.02592
AML-SVM: Adaptive Multilevel Learning with Support Vector Machines
The support vector machine (SVM) is one of the most widely used and practical optimization-based classification models in machine learning because of its interpretability and flexibility to produce high quality results. However, big data imposes a certain difficulty on the most sophisticated but relatively slow versions of SVM, namely, the nonlinear SVM. The complexity of nonlinear SVM solvers and the number of elements in the kernel matrix increase quadratically with the number of samples in the training data. Therefore, both runtime and memory requirements are negatively affected. Moreover, parameter fitting has extra kernel parameters to tune, which exacerbates the runtime even further. This paper proposes an adaptive multilevel learning framework for the nonlinear SVM, which addresses these challenges, improves the classification quality across the refinement process, and leverages multi-threaded parallel processing for better performance. The integration of parameter fitting in the hierarchical learning framework and an adaptive process to stop unnecessary computation significantly reduce the running time while increasing the overall performance. The experimental results demonstrate reduced variance on prediction over validation and test data across levels in the hierarchy, and significant speedup compared to state-of-the-art nonlinear SVM libraries without a decrease in the classification quality. The code is accessible at https://github.com/esadr/amlsvm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
204,969
1911.05647
Long-range Event-level Prediction and Response Simulation for Urban Crime and Global Terrorism with Granger Networks
Large-scale trends in urban crime and global terrorism are well-predicted by socio-economic drivers, but focused, event-level predictions have had limited success. Standard machine learning approaches are promising, but lack interpretability, are generally interpolative, and are ineffective for precise future interventions with costly and wasteful false positives. Here, we introduce Granger Network inference as a new forecasting approach for individual infractions with demonstrated performance far surpassing past results, yet transparent enough to validate and extend social theory. Considering the problem of predicting crime in the City of Chicago, we achieve an average AUC of ~90% for events predicted a week in advance within spatial tiles approximately $1000$ ft across. Instead of pre-supposing that crimes unfold across contiguous spaces akin to diffusive systems, we learn the local transport rules from data. As our key insights, we uncover indications of suburban bias -- how law-enforcement response is modulated by socio-economic contexts with disproportionately negative impacts in the inner city -- and how the dynamics of violent and property crimes co-evolve and constrain each other -- lending quantitative support to controversial pro-active policing policies. To demonstrate broad applicability to spatio-temporal phenomena, we analyze terror attacks in the Middle East in the recent past, and achieve an AUC of ~80% for predictions made a week in advance, within spatial tiles measuring approximately 120 miles across. We conclude that while crime operates near an equilibrium, quickly dissipating perturbations, terrorism does not. Indeed, terrorism aims to destabilize social order, as shown by its dynamics being susceptible to run-away increases in event rates under small perturbations.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
153,328
2208.02046
Texture features in medical image analysis: a survey
Texture is defined as a spatial structure of pixel intensities in an image that is repeated periodically across the whole image or its regions, and conveys the concept of the image. Texture, color and shape are the three main components used by the human visual system to recognize image contents. In this paper, first of all, efficient and up-to-date texture analysis operators are surveyed in detail. Next, some state-of-the-art methods that use texture analysis in medical applications and disease diagnosis are surveyed. Finally, different approaches are compared in terms of accuracy, dataset, application, etc. Results demonstrate that texture features, either separately or jointly with other feature sets such as deep, color or shape features, provide high accuracy in medical image classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
311,363
2303.13604
Stochastic Submodular Bandits with Delayed Composite Anonymous Bandit Feedback
This paper investigates the problem of combinatorial multiarmed bandits with stochastic submodular (in expectation) rewards and full-bandit delayed feedback, where the delayed feedback is assumed to be composite and anonymous. In other words, the delayed feedback is composed of components of rewards from past actions, with unknown division among the sub-components. Three models of delayed feedback: bounded adversarial, stochastic independent, and stochastic conditionally independent are studied, and regret bounds are derived for each of the delay models. Ignoring the problem dependent parameters, we show that regret bound for all the delay models is $\tilde{O}(T^{2/3} + T^{1/3} \nu)$ for time horizon $T$, where $\nu$ is a delay parameter defined differently in the three cases, thus demonstrating an additive term in regret with delay in all the three delay models. The considered algorithm is demonstrated to outperform other full-bandit approaches with delayed composite anonymous feedback.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
353,747
2101.05967
Responsible AI Challenges in End-to-end Machine Learning
Responsible AI is becoming critical as AI is widely used in our everyday lives. Many companies that deploy AI publicly state that when training a model, we not only need to improve its accuracy, but also need to guarantee that the model does not discriminate against users (fairness), is resilient to noisy or poisoned data (robustness), is explainable, and more. In addition, these objectives are not only relevant to model training, but to all steps of end-to-end machine learning, which include data collection, data cleaning and validation, model training, model evaluation, and model management and serving. Finally, responsible AI is conceptually challenging, and supporting all the objectives must be as easy as possible. We thus propose three key research directions towards this vision - depth, breadth, and usability - to measure progress and introduce our ongoing research. First, responsible AI must be deeply supported where multiple objectives like fairness and robustness must be handled together. To this end, we propose FR-Train, a holistic framework for fair and robust model training in the presence of data bias and poisoning. Second, responsible AI must be broadly supported, preferably in all steps of machine learning. Currently we focus on the data pre-processing steps and propose Slice Tuner, a selective data acquisition framework for training fair and accurate models, and MLClean, a data cleaning framework that also improves fairness and robustness. Finally, responsible AI must be usable where the techniques must be easy to deploy and actionable. We propose FairBatch, a batch selection approach for fairness that is effective and simple to use, and Slice Finder, a model evaluation tool that automatically finds problematic slices. We believe we scratched the surface of responsible AI for end-to-end machine learning and suggest research challenges moving forward.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
215,569
1504.07327
Toward Smart Power Grids: Communication Network Design for Power Grids Synchronization
In smart power grids, keeping the synchronicity of generators and the corresponding controls is of great importance. To do so, a simple model in terms of the swing equation is employed to represent the interactions among the dynamics of generators and feedback control. In case a communication network is available, the control can be done based on the measurements transmitted by the communication network. The stability of the system is determined by the largest eigenvalue of the weighted sum of the Laplacian matrices of the communication infrastructure and the power network. In this work, we use graph theory to model the communication network design as a graph problem. Then, Ant Colony System (ACS) is employed for the optimum design of the above graph for synchronization of power grids. Performance evaluation of the proposed method for the 39-bus New England power system versus methods such as exhaustive search and Rayleigh quotient approximation indicates the feasibility and effectiveness of our method even for large-scale smart power grids.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
42,524
1708.05172
Open storm: a complete framework for sensing and control of urban watersheds
Leveraging recent advances in technologies surrounding the Internet of Things, "smart" water systems are poised to transform water resources management by enabling ubiquitous real-time sensing and control. Recent applications have demonstrated the potential to improve flood forecasting, enhance rainwater harvesting, and prevent combined sewer overflows. However, adoption of smart water systems has been hindered by a limited number of proven case studies, along with a lack of guidance on how smart water systems should be built. To this end, we review existing solutions, and introduce open storm---an open-source, end-to-end platform for real-time monitoring and control of watersheds. Open storm includes (i) a robust hardware stack for distributed sensing and control in harsh environments (ii) a cloud services platform that enables system-level supervision and coordination of water assets, and (iii) a comprehensive, web-based "how-to" guide, available on open-storm.org, that empowers newcomers to develop and deploy their own smart water networks. We illustrate the capabilities of the open storm platform through two ongoing deployments: (i) a high-resolution flash-flood monitoring network that detects and communicates flood hazards at the level of individual roadways and (ii) a real-time stormwater control network that actively modulates discharges from stormwater facilities to improve water quality and reduce stream erosion. Through these case studies, we demonstrate the real-world potential for smart water systems to enable sustainable management of water resources.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
79,088
2202.04157
A Policy Gradient Algorithm for the Risk-Sensitive Exponential Cost MDP
We study the risk-sensitive exponential cost MDP formulation and develop a trajectory-based gradient algorithm to find the stationary point of the cost associated with a set of parameterized policies. We derive a formula that can be used to compute the policy gradient from (state, action, cost) information collected from sample paths of the MDP for each fixed parameterized policy. Unlike the traditional average-cost problem, standard stochastic approximation theory cannot be used to exploit this formula. To address the issue, we introduce a truncated and smooth version of the risk-sensitive cost and show that this new cost criterion can be used to approximate the risk-sensitive cost and its gradient uniformly under some mild assumptions. We then develop a trajectory-based gradient algorithm to minimize the smooth truncated estimation of the risk-sensitive cost and derive conditions under which a sequence of truncations can be used to solve the original, untruncated cost problem.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
279,471
2103.14620
LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification
Multi-label text classification (MLTC) is an attractive and challenging task in natural language processing (NLP). Compared with single-label text classification, MLTC has a wider range of applications in practice. In this paper, we propose a label-interpretable graph convolutional network model to solve the MLTC problem by modeling tokens and labels as nodes in a heterogeneous graph. In this way, we are able to take into account multiple relationships including token-level relationships. Besides, the model allows better interpretability for predicted labels as the token-label edges are exposed. We evaluate our method on four real-world datasets and it achieves competitive scores against selected baseline methods. Specifically, this model achieves a gain of 0.14 on the F1 score in the small label set MLTC, and 0.07 in the large label set scenario.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
226,918
2009.08917
Predicting molecular phenotypes from histopathology images: a transcriptome-wide expression-morphology analysis in breast cancer
Molecular phenotyping is central in cancer precision medicine, but remains costly and standard methods only provide a tumour average profile. Microscopic morphological patterns observable in histopathology sections from tumours are determined by the underlying molecular phenotype and associated with clinical factors. The relationship between morphology and molecular phenotype has a potential to be exploited for prediction of the molecular phenotype from the morphology visible in histopathology images. We report the first transcriptome-wide Expression-MOrphology (EMO) analysis in breast cancer, where gene-specific models were optimised and validated for prediction of mRNA expression both as a tumour average and in spatially resolved manner. Individual deep convolutional neural networks (CNNs) were optimised to predict the expression of 17,695 genes from hematoxylin and eosin (HE) stained whole slide images (WSIs). Predictions for 9,334 (52.75%) genes were significantly associated with RNA-sequencing estimates (FDR adjusted p-value < 0.05). 1,011 of the genes were brought forward for validation, with 876 (87%) and 908 (90%) successfully replicated in internal and external test data, respectively. Predicted spatial intra-tumour variabilities in expression were validated in 76 genes, out of which 59 (77.6%) had a significant association (FDR adjusted p-value < 0.05) with spatial transcriptomics estimates. These results suggest that the proposed methodology can be applied to predict both tumour average gene expression and intra-tumour spatial expression directly from morphology, thus providing a scalable approach to characterise intra-tumour heterogeneity.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,384
2406.01197
A Survey of Generative Information Retrieval
Generative Retrieval (GR) is an emerging paradigm in information retrieval that leverages generative models to directly map queries to relevant document identifiers (DocIDs) without the need for traditional query processing or document reranking. This survey provides a comprehensive overview of GR, highlighting key developments, indexing and retrieval strategies, and challenges. We discuss various document identifier strategies, including numerical and string-based identifiers, and explore different document representation methods. Our primary contribution lies in outlining future research directions that could profoundly impact the field: improving the quality of query generation, exploring learnable document identifiers, enhancing scalability, and integrating GR with multi-task learning frameworks. By examining state-of-the-art GR techniques and their applications, this survey aims to provide a foundational understanding of GR and inspire further innovations in this transformative approach to information retrieval. We also make the complementary materials such as paper collection publicly available at https://github.com/MiuLab/GenIR-Survey/
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
460,217
2407.20856
Learn by Selling: Equipping Large Language Models with Product Knowledge for Context-Driven Recommendations
The rapid evolution of large language models (LLMs) has opened up new possibilities for applications such as context-driven product recommendations. However, the effectiveness of these models in this context is heavily reliant on their comprehensive understanding of the product inventory. This paper presents a novel approach to equipping LLMs with product knowledge by training them to respond contextually to synthetic search queries that include product IDs. We delve into an extensive analysis of this method, evaluating its effectiveness, outlining its benefits, and highlighting its constraints. The paper also discusses the potential improvements and future directions for this approach, providing a comprehensive understanding of the role of LLMs in product recommendations.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
477,309
1811.09393
Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
Our work explores temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationships in the generated data are much less explored. Natural temporal changes are crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as $L^2$ over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving learning objectives and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve the long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features. Additionally, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirm the rankings computed with these metrics. Code, data, models, and results are provided at https://github.com/thunil/TecoGAN. The project page https://ge.in.tum.de/publications/2019-tecogan-chu/ contains supplemental materials.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
114,240
2010.02399
Guiding Attention for Self-Supervised Learning with Transformers
In this paper, we propose a simple and effective technique to allow for efficient self-supervised learning with bi-directional Transformers. Our approach is motivated by recent studies demonstrating that self-attention patterns in trained models contain a majority of non-linguistic regularities. We propose a computationally efficient auxiliary loss function to guide attention heads to conform to such patterns. Our method is agnostic to the actual pre-training objective and results in faster convergence of models as well as better performance on downstream tasks compared to the baselines, achieving state of the art results in low-resource settings. Surprisingly, we also find that linguistic properties of attention heads are not necessarily correlated with language modeling performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
199,002
1904.06093
STC Speaker Recognition Systems for the VOiCES From a Distance Challenge
This paper presents the Speech Technology Center (STC) speaker recognition (SR) systems submitted to the VOiCES From a Distance challenge 2019. The challenge's SR task is focused on the problem of speaker recognition in single-channel distant/far-field audio under noisy conditions. In this work we investigate different deep neural network architectures for speaker embedding extraction to solve the task. We show that deep networks with residual frame-level connections outperform shallower architectures. A simple energy-based speech activity detector (SAD) and an automatic speech recognition (ASR) based SAD are investigated in this work. We also address the problem of data preparation for training robust embedding extractors. The reverberation for the data augmentation was performed using an automatic room impulse response generator. In our systems we used a discriminatively trained cosine similarity metric learning model as the embedding backend. A score normalization procedure was applied for each individual subsystem we used. Our final submitted systems were based on the fusion of different subsystems. The results obtained on the VOiCES development and evaluation sets demonstrate the effectiveness and robustness of the proposed systems when dealing with distant/far-field audio under noisy conditions.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
127,478
1901.10017
Secure Massive MIMO Communication with Low-resolution DACs
In this paper, we investigate secure transmission in a massive multiple-input multiple-output (MIMO) system adopting low-resolution digital-to-analog converters (DACs). Artificial noise (AN) is deliberately transmitted simultaneously with the confidential signals to degrade the eavesdropper's channel quality. By applying the Bussgang theorem, a DAC quantization model is developed which facilitates the analysis of the asymptotic achievable secrecy rate. Interestingly, for a fixed power allocation factor $\phi$, low-resolution DACs typically result in a secrecy rate loss, but in certain cases they provide superior performance, e.g., at low signal-to-noise ratio (SNR). Specifically, we derive a closed-form SNR threshold which determines whether low-resolution or high-resolution DACs are preferable for improving the secrecy rate. Furthermore, a closed-form expression for the optimal $\phi$ is derived. With AN generated in the null-space of the user channel and the optimal $\phi$, low-resolution DACs inevitably cause secrecy rate loss. On the other hand, for random AN with the optimal $\phi$, the secrecy rate is hardly affected by the DAC resolution because the negative impact of the quantization noise can be compensated for by reducing the AN power. All the derived analytical results are verified by numerical simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
119,898
2008.11841
On the Optimality of Vagueness: "Around", "Between", and the Gricean Maxims
Why is ordinary language vague? We argue that in contexts in which a cooperative speaker is not perfectly informed about the world, the use of vague expressions can offer an optimal tradeoff between truthfulness (Gricean Quality) and informativeness (Gricean Quantity). Focusing on expressions of approximation such as "around", which are semantically vague, we show that they allow the speaker to convey indirect probabilistic information, in a way that can give the listener a more accurate representation of the information available to the speaker than any more precise expression would (intervals of the form "between"). That is, vague sentences can be more informative than their precise counterparts. We give a probabilistic treatment of the interpretation of "around", and offer a model for the interpretation and use of "around"-statements within the Rational Speech Act (RSA) framework. In our account the shape of the speaker's distribution matters in ways not predicted by the Lexical Uncertainty model standardly used in the RSA framework for vague predicates. We use our approach to draw further lessons concerning the semantic flexibility of vague expressions and their irreducibility to more precise meanings.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
193,394
2412.06354
GraphNeuralNetworks.jl: Deep Learning on Graphs with Julia
GraphNeuralNetworks.jl is an open-source framework for deep learning on graphs, written in the Julia programming language. It supports multiple GPU backends, generic sparse or dense graph representations, and offers convenient interfaces for manipulating standard, heterogeneous, and temporal graphs with attributes at the node, edge, and graph levels. The framework allows users to define custom graph convolutional layers using gather/scatter message-passing primitives or optimized fused operations. It also includes several popular layers, enabling efficient experimentation with complex deep architectures. The package is available on GitHub: \url{https://github.com/JuliaGraphs/GraphNeuralNetworks.jl}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
515,217
2401.01466
Human Leading or Following Preferences: Effects on Human Perception of the Robot and the Human-Robot Collaboration
Achieving effective and seamless human-robot collaboration requires two key outcomes: enhanced team performance and fostering a positive human perception of both the robot and the collaboration. This paper investigates the capability of the proposed task planning framework to realize these objectives by integrating human leading/following preferences and performance into its task allocation and scheduling processes. We designed a collaborative scenario wherein the robot autonomously collaborates with participants. The outcomes of the user study indicate that the proactive task planning framework successfully attains the aforementioned goals. We also explore the impact of participants' leadership and followership styles on their collaboration. The results reveal intriguing relationships between these factors which warrant further investigation in future studies.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
419,363
2311.14182
Gradient-based bilevel optimization for multi-penalty Ridge regression through matrix differential calculus
Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can be easily selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with l2-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored for sparse model learning problems aiming at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves to be more efficient in terms of computational time compared to automatic differentiation, especially when handling a large number of input variables. Application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
410,024
2203.09598
DP-KB: Data Programming with Knowledge Bases Improves Transformer Fine Tuning for Answer Sentence Selection
While transformers demonstrate impressive performance on many knowledge intensive (KI) tasks, their ability to serve as implicit knowledge bases (KBs) remains limited, as shown on several slot-filling, question-answering (QA), fact verification, and entity-linking tasks. In this paper, we implement an efficient, data-programming technique that enriches training data with KB-derived context and improves transformer utilization of encoded knowledge when fine-tuning for a particular QA task, namely answer sentence selection (AS2). Our method outperforms the state-of-the-art transformer approach on WikiQA and TrecQA, two widely studied AS2 benchmarks, increasing by 2.0% p@1, 1.3% MAP, 1.1% MRR, and 4.4% p@1, 0.9% MAP, 2.4% MRR, respectively. To demonstrate our improvements in an industry setting, we additionally evaluate our approach on a proprietary dataset of Alexa QA pairs, and show an increase of 2.3% F1 and 2.0% MAP. We additionally find that these improvements remain even when KB context is omitted at inference time, allowing for the use of our models within existing transformer workflows without additional latency or deployment costs.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
286,206
2307.10110
AC Power Cycling Test Setup and Condition Monitoring Tools for SiC-Based Traction Inverters
AC power cycling tests allow the most realistic reliability assessment by applying close to real stress to the device or module under test to meet functional safety standards, which is highly critical for traction applications. This paper presents a comprehensive guideline and shares critical know-how to develop a 120 kVA AC power cycling test setup for high-power Silicon Carbide (SiC) modules. As of today, traction applications cannot generate an early warning signal for drivers to replace critical components on time. For this purpose, the suitable precursors for all dominant failure mechanisms are discussed, and the corresponding condition monitoring tools are proposed to monitor the device aging on power converters. These condition monitoring tools are integrated into the built-in desaturation protection circuit of the electric vehicle (EV) gate driver for low-cost, practical implementation. The on-resistance of all twelve switches is monitored online as a temperature-sensitive electrical parameter (TSEP) to measure the junction temperature of the devices. To avoid a heavy processing load in the microcontroller, the out-of-order equivalent time sampling technique is developed for data sampling, which leads to a measurement error of less than 1.5%. In addition, the design considerations regarding common mode noise and the aging effect on DESAT protection are investigated, and experimental findings are presented.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
380,418
2108.11758
Determining the origin of impulsive noise events using paired wireless sound sensors
This work investigates how to identify the source of impulsive noise events using a pair of wireless noise sensors. One sensor is placed at a known noise source, and another sensor is placed at the noise receiver. Machine learning models receive data from the two sensors and estimate whether a given noise event originates from the known noise source or another source. To avoid privacy issues, the approach uses on-edge preprocessing that converts the sound into privacy compatible spectrograms. The system was evaluated at a shooting range and explosives training facility, using data collected during noise emission testing. The combination of convolutional neural networks with cross-correlation achieved the best results. We created multiple alternative models using different spectrogram representations. The best model detected 70.8\% of the impulsive noise events and correctly predicted 90.3\% of the noise events in the optimal trade-off between recall and precision.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
252,275
2006.09853
Shallow Feature Based Dense Attention Network for Crowd Counting
While the performance of crowd counting via deep learning has improved dramatically in recent years, it remains an ingrained problem due to cluttered backgrounds and varying scales of people within an image. In this paper, we propose a Shallow feature based Dense Attention Network (SDANet) for crowd counting from still images, which diminishes the impact of backgrounds by involving a shallow feature based attention model, and meanwhile, captures multi-scale information by densely connecting hierarchical image features. Specifically, inspired by the observation that backgrounds and human crowds generally have noticeably different responses in shallow features, we decide to build our attention model upon shallow-feature maps, which results in accurate background-pixel detection. Moreover, considering that the most representative features of people across different scales can appear in different layers of a feature extraction network, to better keep them all, we propose to densely connect hierarchical image features of different layers and subsequently encode them for estimating crowd density. Experimental results on three benchmark datasets clearly demonstrate the superiority of SDANet when dealing with different scenarios. Particularly, on the challenging UCF CC 50 dataset, our method outperforms other existing methods by a large margin, as is evident from a remarkable 11.9% Mean Absolute Error (MAE) drop of our SDANet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
182,671
2412.19449
Feature Alignment-Based Knowledge Distillation for Efficient Compression of Large Language Models
This study proposes a knowledge distillation algorithm based on large language models and feature alignment, aiming to effectively transfer the knowledge of large pre-trained models into lightweight student models, thereby reducing computational costs while maintaining high model performance. Different from the traditional soft label distillation method, this method introduces a multi-layer feature alignment strategy to deeply align the intermediate features and attention mechanisms of the teacher model and the student model, maximally retaining the semantic expression ability and context modeling ability of the teacher model. In terms of method design, a multi-task loss function is constructed, including feature matching loss, attention alignment loss, and output distribution matching loss, to ensure multi-level information transfer through joint optimization. The experiments were comprehensively evaluated on the GLUE data set and various natural language processing tasks. The results show that the proposed model performs very close to the state-of-the-art GPT-4 model in terms of evaluation indicators such as perplexity, BLEU, ROUGE, and CER. At the same time, it far exceeds baseline models such as DeBERTa, XLNet, and GPT-3, showing significant performance improvements and computing efficiency advantages. Research results show that the feature alignment distillation strategy is an effective model compression method that can significantly reduce computational overhead and storage requirements while maintaining model capabilities. Future research can be further expanded in the directions of self-supervised learning, cross-modal feature alignment, and multi-task transfer learning to provide more flexible and efficient solutions for the deployment and optimization of deep learning models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
520,846
2205.13682
ANISE: Assembly-based Neural Implicit Surface rEconstruction
We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds) using a part-aware neural implicit shape representation. The shape is formulated as an assembly of neural implicit functions, each representing a different part instance. In contrast to previous approaches, the prediction of this representation proceeds in a coarse-to-fine manner. Our model first reconstructs a structural arrangement of the shape in the form of geometric transformations of its part instances. Conditioned on them, the model predicts part latent codes encoding their surface geometry. Reconstructions can be obtained in two ways: (i) by directly decoding the part latent codes to part implicit functions, then combining them into the final shape; or (ii) by using part latents to retrieve similar part instances in a part database and assembling them in a single shape. We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds. When reconstructing shapes by assembling parts retrieved from a dataset, our approach significantly outperforms traditional shape retrieval methods even when significantly restricting the database size. We present our results in well-known sparse point cloud reconstruction and single-view reconstruction benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
299,037
1506.01245
A density compensation-based path computing model for measuring semantic similarity
The shortest path between two concepts in a taxonomic ontology is commonly used to represent the semantic distance between concepts in the edge-based semantic similarity measures. In the past, edge counting was considered the default method for the path computation, which is simple, intuitive and has low computational complexity. However, a large lexical taxonomy such as WordNet has irregular densities of links between concepts due to its broad domain. The edge counting-based path computation is powerless against this non-uniformity problem. In this paper, we advocate that the path computation can be separated from the edge-based similarity measures and form various general computing models. Therefore, in order to solve the problem of non-uniform concept density in a large taxonomic ontology, we propose a new path computing model based on the compensation of the local area density of concepts, which is equal to the number of direct hyponyms of the subsumers of concepts in their shortest path. This path model treats the local area density of concepts as an extension of the edge-based path and converts the local area density divided by their depth into a compensation for the edge-based path with an adjustable parameter, an idea that has been proven to be consistent with information theory. This model is a general path computing model and can be applied in various edge-based similarity algorithms. The experimental results show that the proposed path model improves the average correlation between edge-based measures and human judgments on the Miller and Charles benchmark from less than 0.8 to more than 0.85, and has a large efficiency advantage over information content (IC) computation in a dynamic ontology, thereby successfully solving the non-uniformity problem of the taxonomic ontology.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
43,783
2407.21118
Palu: Compressing KV-Cache with Low-Rank Projection
Post-training KV-Cache compression methods typically either sample a subset of effectual tokens or quantize the data into lower numerical bit width. However, these methods cannot exploit redundancy in the hidden dimension of the KV tensors. This paper presents a hidden dimension compression approach called Palu, a KV-Cache compression framework that utilizes low-rank projection to reduce inference-time LLM memory usage. Palu decomposes the linear layers into low-rank matrices, caches compressed intermediate states, and reconstructs the full keys and values on the fly. To improve accuracy, compression rate, and efficiency, Palu further encompasses (1) a medium-grained low-rank decomposition scheme, (2) an efficient rank search algorithm, (3) low-rank-aware quantization compatibility enhancements, and (4) optimized GPU kernels with operator fusion. Extensive experiments with popular LLMs show that Palu compresses KV-Cache by 50% while maintaining strong accuracy and delivering up to 1.89x speedup on the RoPE-based attention module. When combined with quantization, Palu's inherent quantization-friendly design yields small to negligible extra accuracy degradation while saving more memory than quantization-only methods and achieving up to 2.91x speedup for the RoPE-based attention. Moreover, it maintains comparable or even better accuracy (up to 1.19 lower perplexity) compared to quantization-only methods. These results demonstrate Palu's superior capability to effectively address the efficiency and memory challenges of LLM inference posed by KV-Cache. Our code is publicly available at: https://github.com/shadowpa0327/Palu
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
477,411
2305.19255
A Stutter Seldom Comes Alone -- Cross-Corpus Stuttering Detection as a Multi-label Problem
Most stuttering detection and classification research has viewed stuttering as a multi-class classification problem or a binary detection task for each dysfluency type; however, this does not match the nature of stuttering, in which one dysfluency seldom comes alone but rather co-occurs with others. This paper explores multi-language and cross-corpus end-to-end stuttering detection as a multi-label problem using a modified wav2vec 2.0 system with an attention-based classification head and multi-task learning. We evaluate the method using combinations of three datasets containing English and German stuttered speech, one containing speech modified by fluency shaping. The experimental results and an error analysis show that multi-label stuttering detection systems trained on cross-corpus and multi-language data achieve competitive results, but performance on samples with multiple labels stays below the overall detection results.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
369,442
2106.08556
Coreference-Aware Dialogue Summarization
Summarizing conversations via neural approaches has been gaining research traction lately, yet it is still challenging to obtain practical solutions. Examples of such challenges include unstructured information exchange in dialogues, informal interactions between speakers, and dynamic role changes of speakers as the dialogue evolves. Many of such challenges result in complex coreference links. Therefore, in this work, we investigate different approaches to explicitly incorporate coreference information in neural abstractive dialogue summarization models to tackle the aforementioned challenges. Experimental results show that the proposed approaches achieve state-of-the-art performance, implying it is useful to utilize coreference information in dialogue summarization. Evaluation results on factual correctness suggest such coreference-aware models are better at tracing the information flow among interlocutors and associating accurate status/actions with the corresponding interlocutors and person mentions.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
241,333
2012.00316
Fast and Robust Bin-picking System for Densely Piled Industrial Objects
Object grasping, also known as bin-picking, is one of the most common tasks faced by industrial robots. While much work has been done on related topics, grasping randomly piled objects still remains a challenge because much of the existing work either lacks robustness or costs too many resources. In this paper, we develop a fast and robust bin-picking system for grasping densely piled objects adaptively and safely. The proposed system starts with point cloud segmentation using an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm, which is improved by combining the region growing algorithm and using an Octree to speed up the calculation. The system then uses principal component analysis (PCA) for coarse registration and iterative closest point (ICP) for fine registration. We propose a grasp risk score (GRS) to evaluate each object by the collision probability, the stability of the object, and the stability of the whole pile. Through real tests with the Anno robot, our method is verified to be advanced in speed and robustness.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
209,094
1301.0570
Reduction of Maximum Entropy Models to Hidden Markov Models
We show that maximum entropy (maxent) models can be modeled with certain kinds of HMMs, allowing us to construct maxent models with hidden variables, hidden state sequences, or other characteristics. The models can be trained using the forward-backward algorithm. While the results are primarily of theoretical interest, unifying apparently unrelated concepts, we also give experimental results for a maxent model with a hidden variable on a word disambiguation task; the model outperforms standard techniques.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
20,752
2010.12012
Deep neural networks for collaborative learning analytics: Evaluating team collaborations using student gaze point prediction
Automatic assessment and evaluation of team performance during collaborative tasks is key to learning analytics and computer-supported cooperative work research. There is a growing interest in the use of gaze-oriented cues for evaluating the collaboration and cooperativeness of teams. However, collecting gaze data using eye-trackers is not always feasible due to time and cost constraints. In this paper, we introduce an automated team assessment tool based on gaze points and joint visual attention (JVA) information extracted by computer vision solutions. We then evaluate team collaborations in an undergraduate anatomy learning activity (N=60, 30 teams) as a test user study. The results indicate that higher JVA was positively associated with student learning outcomes (r(30)=0.50, p<0.005). Moreover, teams who participated in the two experimental groups, and used interactive 3-D anatomy models, had higher JVA (F(1,28)=6.65, p<0.05) and better knowledge retention (F(1,28)=7.56, p<0.05) than those in the control group. Also, no significant difference was observed based on JVA for different gender compositions of teams. The findings from this work offer implications for learning sciences and collaborative computing by providing a novel mutual attention-based measure to objectively evaluate team collaboration dynamics.
true
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
202,518
2212.12055
DRL-based Energy-Efficient Baseband Function Deployments for Service-Oriented Open RAN
Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia with decentralized baseband functions across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across different locations and timeframes, necessitates the implementation of robust function management policies to minimize network energy consumption. Most recently developed strategies neglected the activation time and the required energy for the server activation process, while this process could offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, has been developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger network of 14-MEC, it maintains a 38% energy-saving advantage and ensures real-time response capabilities. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
337,950
2109.10664
A deep neural network for multi-species fish detection using multiple acoustic cameras
Underwater acoustic cameras are high-potential devices for many applications in ecology, notably for fisheries management and monitoring. However, how to extract high-value information from such data without a time-consuming reading of the entire dataset by an operator is still a challenge. Moreover, the analysis of acoustic imaging, due to its low signal-to-noise ratio, is a perfect training ground for experimenting with new approaches, especially concerning Deep Learning techniques. We hereby present a novel approach that takes advantage of both CNN (Convolutional Neural Network) and classical CV (Computer Vision) techniques, able to detect a generic class "fish" in acoustic video streams. The pipeline pre-treats the acoustic images to extract two features, in order to localise the signals and improve the detection performance. To ensure the performance from an ecological point of view, we also propose a two-step validation: one to validate the results of the trainings and one to test the method in a real-world scenario. The YOLOv3-based model was trained with data of fish from multiple species recorded by the two common acoustic cameras, DIDSON and ARIS, including species of high ecological interest, such as Atlantic salmon and European eels. The model we developed provides satisfying results, detecting almost 80% of fish and minimizing the false positive rate; however, the model is much less efficient for eel detections on ARIS videos. The first CNN pipeline for fish monitoring exploiting video data from two models of acoustic cameras satisfies most of the required features. Many challenges are still present, such as the automation of fish species identification through a multiclass model. However, the results point to a new solution for dealing with complex data, such as sonar data, which can also be reapplied in other cases where the signal-to-noise ratio is a challenge.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
256,702
2306.16914
Computationally Assisted Quality Control for Public Health Data Streams
Irregularities in public health data streams (like COVID-19 Cases) hamper data-driven decision-making for public health stakeholders. A real-time, computer-generated list of the most important, outlying data points from thousands of daily-updated public health data streams could assist an expert reviewer in identifying these irregularities. However, existing outlier detection frameworks perform poorly on this task because they do not account for the data volume or for the statistical properties of public health streams. Accordingly, we developed FlaSH (Flagging Streams in public Health), a practical outlier detection framework for public health data users that uses simple, scalable models to capture these statistical properties explicitly. In an experiment where human experts evaluate FlaSH and existing methods (including deep learning approaches), FlaSH scales to the data volume of this task, matches or exceeds these other methods in mean accuracy, and identifies the outlier points that users empirically rate as more helpful. Based on these results, FlaSH has been deployed on data streams used by public health stakeholders.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
376,528
1804.02205
Automatic Prediction of Building Age from Photographs
We present a first method for the automated age estimation of buildings from unconstrained photographs. To this end, we propose a two-stage approach that firstly learns characteristic visual patterns for different building epochs at patch-level and then globally aggregates patch-level age estimates over the building. We compile evaluation datasets from different sources and perform a detailed evaluation of our approach, its sensitivity to parameters, and the capabilities of the employed deep networks to learn characteristic visual age-related patterns. Results show that our approach is able to estimate building age at a surprisingly high level that even outperforms human evaluators and thereby sets a new performance baseline. This work represents a first step towards the automated assessment of building parameters for automated price prediction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,363
2405.18444
Discovering deposition process regimes: leveraging unsupervised learning for process insights, surrogate modeling, and sensitivity analysis
This work introduces a comprehensive approach utilizing data-driven methods to elucidate the deposition process regimes in Chemical Vapor Deposition (CVD) reactors and the interplay of physical mechanisms that dominate in each of them. Through this work, we address three key objectives. Firstly, our methodology relies on process outcomes, derived by a detailed CFD model, to identify clusters of "outcomes" corresponding to distinct process regimes, wherein the relative influence of input variables undergoes notable shifts. This phenomenon is experimentally validated through Arrhenius plot analysis, affirming the efficacy of our approach. Secondly, we demonstrate the development of an efficient surrogate model, based on Polynomial Chaos Expansion (PCE), that maintains accuracy, facilitating streamlined computational analyses. Finally, as a result of PCE, sensitivity analysis is made possible by means of Sobol' indices, which quantify the impact of process inputs across identified regimes. The insights gained from our analysis contribute to the formulation of hypotheses regarding phenomena occurring beyond the transition regime. Notably, the significance of temperature even in the diffusion-limited regime, as evidenced by the Arrhenius plot, suggests activation of gas phase reactions at elevated temperatures. Importantly, our proposed methods yield insights that align with experimental observations and theoretical principles, aiding decision-making in process design and optimization. By circumventing the need for costly and time-consuming experiments, our approach offers a pragmatic pathway towards enhanced process efficiency. Moreover, this study underscores the potential of data-driven computational methods for innovating reactor design paradigms.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
458,433
2401.11250
AFS-BM: Enhancing Model Performance through Adaptive Feature Selection with Binary Masking
We study the problem of feature selection in a general machine learning (ML) context, which is one of the most critical subjects in the field. Although many feature selection methods exist, these methods face challenges such as scalability, managing high-dimensional data, dealing with correlated features, adapting to variable feature importance, and integrating domain knowledge. To this end, we introduce "Adaptive Feature Selection with Binary Masking" (AFS-BM), which remedies these problems. AFS-BM achieves this by joint optimization for simultaneous feature selection and model training. In particular, we perform joint optimization and binary masking to continuously adapt the set of features and model parameters during the training process. This approach leads to significant improvements in model accuracy and a reduction in computational requirements. We provide an extensive set of experiments where we compare AFS-BM with the established feature selection methods using well-known datasets from real-life competitions. Our results show that AFS-BM makes significant improvements in terms of accuracy and requires significantly less computational complexity. This is due to AFS-BM's ability to dynamically adjust to the changing importance of features during the training process, which is an important contribution to the field. We openly share our code for the replicability of our results and to facilitate further research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
422,935
1906.02573
Estimation and Tracking of a Moving Target by Unmanned Aerial Vehicles
An image-based control strategy along with estimation of target motion is developed to track dynamic targets without motion constraints. To the best of our knowledge, this is the first work that utilizes a bounding box as image features for tracking control and estimation of a dynamic target without motion constraints. The features generated from a You-Only-Look-Once (YOLO) deep neural network can relax the assumption of continuous availability of the feature points in most literature and minimize the gap for applications. The challenges are that the motion pattern of the target is unknown and modeling its dynamics is infeasible. To resolve these issues, the dynamics of the target is modeled by a constant-velocity model and is employed as a process model in the unscented Kalman filter (UKF), but the process noise is uncertain and sensitive to system instability. To ensure convergence of the estimation error, the noise covariance matrix is estimated according to history data within a moving window. The estimated motion from the UKF is implemented as a feedforward term in the developed controller, so that tracking performance is enhanced. Simulations are demonstrated to verify the efficacy of the developed estimator and controller.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
134,112
2305.13091
Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization
With the recent undeniable advancement in reasoning abilities in large language models (LLMs) like ChatGPT and GPT-4, there is a growing trend for using LLMs on various tasks. One area where LLMs can be employed is as an alternative evaluation metric for complex generative tasks, which generally demands expensive human judges to complement the traditional automatic metrics for various evaluation dimensions such as fluency and consistency. In this work, we conduct extensive analysis to investigate the stability and reliability of LLMs as automatic evaluators for abstractive summarization. We found that while ChatGPT and GPT-4 outperform the commonly used automatic metrics, they are not ready as human replacements due to significant limitations. That is, LLM evaluators rate each candidate system inconsistently and are dimension-dependent. They also struggle to compare candidates with close performance and become more unreliable with higher-quality summaries by obtaining a lower correlation with humans. In other words, with better abstractive summarization systems being introduced at a fast pace, LLMs may result in misleading and unreliable evaluations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
366,344
2002.03519
Self-Attentive Associative Memory
Heretofore, neural networks with external memory have been restricted to a single memory with lossy representations of memory interactions. A rich representation of relationships between memory pieces urges a high-order and segregated relational memory. In this paper, we propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory). The idea is implemented through a novel Self-attentive Associative Memory (SAM) operator. Founded upon the outer product, SAM forms a set of associative memories that represent the hypothetical high-order relationships between arbitrary pairs of memory elements, through which a relational memory is constructed from an item memory. The two memories are wired into a single sequential model capable of both memorization and relational reasoning. We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks, from challenging synthetic problems to practical testbeds such as geometry, graph, reinforcement learning, and question answering.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,296
1102.3204
One Packet Suffices - Highly Efficient Packetized Network Coding With Finite Memory
Random Linear Network Coding (RLNC) has emerged as a powerful tool for robust high-throughput multicast. Projection analysis - a recently introduced technique - shows that the distributed packetized RLNC protocol achieves (order) optimal and perfectly pipelined information dissemination in many settings. In the original approach to RLNC, intermediate nodes code together all available information. This requires intermediate nodes to keep considerable data available for coding. Moreover, it results in a coding complexity that grows linearly with the size of this data. While this has been identified as a problem, approaches that combine queuing theory and network coding have heretofore not provided a succinct representation of the memory needs of network coding at intermediate nodes. This paper shows the surprising result that, in all settings with a continuous stream of data, network coding continues to perform optimally even if only one packet per node is kept in active memory and used for computations. This leads to an extremely simple RLNC protocol variant with drastically reduced requirements on computational and memory resources. By extending the projection analysis, we show that in all settings in which the RLNC protocol was proven to be optimal its finite memory variant performs equally well. In the same way as the original projection analysis, our technique applies in a wide variety of network models, including highly dynamic topologies that can change completely at any time in an adversarial fashion.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
9,219
2208.10218
A Virtual 2D Tactile Array for Soft Actuators Using Acoustic Sensing
We create a virtual 2D tactile array for soft pneumatic actuators using embedded audio components. We detect contact-specific changes in sound modulation to infer tactile information. We evaluate different sound representations and learning methods to detect even small contact variations. We demonstrate the acoustic tactile sensor array by the example of a PneuFlex actuator and use a Braille display to individually control the contact of 29x4 pins with the actuator's 90x10 mm palmar surface. Evaluating the spatial resolution, the acoustic sensor localizes edges in x- and y-direction with a root-mean-square regression error of 1.67 mm and 0.0 mm, respectively. Even light contacts of a single Braille pin with a lifting force of 0.17 N are measured with high accuracy. Finally, we demonstrate the sensor's sensitivity to complex contact shapes by successfully reading the 26 letters of the Braille alphabet from a single display cell with a classification rate of 88%.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
313,969
1803.05131
Feature extraction without learning in an analog Spatial Pooler memristive-CMOS circuit design of Hierarchical Temporal Memory
Hierarchical Temporal Memory (HTM) is a neuromorphic algorithm that emulates sparsity, hierarchy and modularity resembling the working principles of neocortex. Feature encoding is an important step to create sparse binary patterns. This sparsity is introduced by the binary weights and random weight assignment in the initialization stage of the HTM. We propose an alternative deterministic method for the HTM initialization stage, which connects the HTM weights to the input data and preserves the natural sparsity of the input information. Further, we introduce the hardware implementation of the deterministic approach and compare it to the traditional HTM and existing hardware implementation. We test the proposed approach on the face recognition problem and show that it outperforms the conventional HTM approach.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
92,582
2210.12209
Motion Policy Networks
Collision-free motion generation in unknown environments is a core building block for robot manipulation. Generating such motions is challenging due to multiple objectives; not only should the solutions be optimal, the motion generator itself must be fast enough for real-time performance and reliable enough for practical deployment. A wide variety of methods have been proposed ranging from local controllers to global planners, often being combined to offset their shortcomings. We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from just a single depth camera observation. M$\pi$Nets are trained on over 3 million motion planning problems in over 500,000 environments. Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes. They are 46% better than prior neural planners and more robust than local control policies. Despite being only trained in simulation, M$\pi$Nets transfer well to the real robot with noisy partial point clouds. Code and data are publicly available at https://mpinets.github.io.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
325,631
2111.06009
Low Complexity Channel Estimation for OTFS Modulation with Fractional Delay and Doppler
We consider the problem of accurate channel estimation for OTFS based systems with few transmit/receive antennas, where additional sparsity due to large number of antennas is not a possibility. For such systems the sparsity of the effective delay-Doppler (DD) domain channel is adversely affected in the presence of channel path delay and Doppler shifts which are non-integer multiples of the delay and Doppler domain resolution. The sparsity is also adversely affected when practical transmit and receive pulses are used. In this paper we propose a Modified Maximum Likelihood Channel Estimation (M-MLE) method for OTFS based systems which exploits the fine delay and Doppler domain resolution of the OTFS modulated signal to decouple the joint estimation of the channel parameters (i.e., channel gain, delay and Doppler shift) of all channel paths into separate estimation of the channel parameters for each path. We further observe that with fine delay and Doppler domain resolution, the received DD domain signal along a particular channel path can be written as a product of a delay domain term and a Doppler domain term, where the delay domain term is primarily dependent on the delay of this path and the Doppler domain term is primarily dependent on the Doppler shift of this path. This allows us to propose another method, termed the two-step method (TSE), where the joint two-dimensional estimation of the delay and Doppler shift of a particular path in the M-MLE method is further decoupled into two separate one-dimensional estimations for the delay and for the Doppler shift of that path. Simulations reveal that the proposed methods (M-MLE and TSE) achieve better channel estimation accuracy at lower complexity when compared to other known methods for accurate OTFS channel estimation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
265,952
2207.07225
Origin of life from a maker's perspective -- focus on protocellular compartments in bottom-up synthetic biology
The origin of life is shrouded in mystery, with few surviving clues, obscured by evolutionary competition. Previous reviews have touched on the complementary approaches of top-down and bottom-up synthetic biology to augment our understanding of living systems. Here we point out the synergies between these fields, especially between bottom-up synthetic biology and origin of life research. We explore recent progress made in artificial cell compartmentation in line with the crowded cell, its metabolism, as well as cycles of growth and division, and how those efforts are starting to be combined. Though the complexity of current life is among its most striking characteristics, none of life's essential features require it, and they are unlikely to have emerged thus complex from the beginning. Rather than recovering the one true origin lost in time, current research converges towards reproducing the emergence of minimal life, by teasing out how complexity and evolution may arise from a set of essential components.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
308,138
2207.14114
Classification of FIB/SEM-tomography images for highly porous multiphase materials using random forest classifiers
FIB/SEM tomography represents an indispensable tool for the characterization of three-dimensional nanostructures in battery research and many other fields. However, contrast and 3D classification/reconstruction problems occur in many cases, which strongly limits the applicability of the technique especially on porous materials, like those used for electrode materials in batteries or fuel cells. Distinguishing the different components like active Li storage particles and carbon/binder materials is difficult and often prevents a reliable quantitative analysis of image data, or may even lead to wrong conclusions about structure-property relationships. In this contribution, we present a novel approach for data classification in three-dimensional image data obtained by FIB/SEM tomography and its applications to NMC battery electrode materials. We use two different image signals, namely the signal of the angled SE2 chamber detector and the Inlens detector signal, combine both signals and train a random forest, i.e. a particular machine learning algorithm. We demonstrate that this approach can overcome current limitations of existing techniques suitable for multi-phase measurements and that it allows for quantitative data reconstruction even where current state-of-the-art techniques fail, or demand large training sets. This approach may serve as a guideline for future research using FIB/SEM tomography.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
310,484
1309.4034
The Weighted Sum Rate Maximization in MIMO Interference Networks: The Minimax Lagrangian Duality and Algorithm
We take a new perspective on the weighted sum-rate maximization in multiple-input multiple-output (MIMO) interference networks, by formulating an equivalent max-min problem. This seemingly trivial reformulation has significant implications: the Lagrangian duality of the equivalent max-min problem provides an elegant way to establish the sum-rate duality between an interference network and its reciprocal when such a duality exists, and more importantly, suggests a novel iterative minimax algorithm for the weighted sum-rate maximization. Moreover, the design and convergence proof of the algorithm use only general convex analysis. They apply and extend to any max-min problems with similar structure, and thus provide a general class of algorithms for such optimization problems. This paper presents a promising step and lends hope for establishing a general framework based on the minimax Lagrangian duality for characterizing the weighted sum-rate and developing efficient algorithms for general MIMO interference networks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
27,072
2406.05855
Self-Distilled Disentangled Learning for Counterfactual Prediction
The advancements in disentangled representation learning significantly enhance the accuracy of counterfactual predictions by granting precise control over instrumental variables, confounders, and adjustable variables. An appealing method for achieving the independent separation of these factors is mutual information minimization, a task that presents challenges in numerous machine learning scenarios, especially within high-dimensional spaces. To circumvent this challenge, we propose the Self-Distilled Disentanglement framework, referred to as $SD^2$. Grounded in information theory, it ensures theoretically sound independent disentangled representations without intricate mutual information estimator designs for high-dimensional representations. Our comprehensive experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach in facilitating counterfactual inference in the presence of both observed and unobserved confounders.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
462,323
2210.14162
Commonsense Knowledge from Scene Graphs for Textual Environments
Text-based games are becoming commonly used in reinforcement learning as real-world simulation environments. They are usually imperfect information games, and their interactions are only in the textual modality. To tackle these games, it is effective to complement the missing information by providing knowledge outside the game, such as human common sense. However, such knowledge has only been available from textual information in previous works. In this paper, we investigate the advantage of employing commonsense reasoning obtained from visual datasets such as scene graph datasets. In general, images convey more comprehensive information compared with text for humans. This property makes it possible to extract commonsense relationship knowledge that is more useful for acting effectively in a game. We compare the statistics of spatial relationships available in Visual Genome (a scene graph dataset) and ConceptNet (a text-based knowledge base) to analyze the effectiveness of introducing scene graph datasets. We also conducted experiments on a text-based game task that requires commonsense reasoning. Our experimental results demonstrated that our proposed methods achieve higher or competitive performance compared with existing state-of-the-art methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
326,445
2410.06024
Jet Expansions of Residual Computation
We introduce a framework for expanding residual computational graphs using jets, operators that generalize truncated Taylor series. Our method provides a systematic approach to disentangle contributions of different computational paths to model predictions. In contrast to existing techniques such as distillation, probing, or early decoding, our expansions rely solely on the model itself and require no data, training, or sampling from the model. We demonstrate how our framework grounds and subsumes logit lens, reveals a (super-)exponential path structure in the recursive residual depth and opens up several applications. These include sketching a transformer large language model with $n$-gram statistics extracted from its computations, and indexing the models' levels of toxicity knowledge. Our approach enables data-free analysis of residual computation for model interpretability, development, and evaluation.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
496,019