Dataset schema:

| Column | Dtype | Values |
|---|---|---|
| id | string | 9–16 chars |
| title | string | 4–278 chars |
| abstract | string | 3–4.08k chars |
| cs.HC | bool | 2 classes |
| cs.CE | bool | 2 classes |
| cs.SD | bool | 2 classes |
| cs.SI | bool | 2 classes |
| cs.AI | bool | 2 classes |
| cs.IR | bool | 2 classes |
| cs.LG | bool | 2 classes |
| cs.RO | bool | 2 classes |
| cs.CL | bool | 2 classes |
| cs.IT | bool | 2 classes |
| cs.SY | bool | 2 classes |
| cs.CV | bool | 2 classes |
| cs.CR | bool | 2 classes |
| cs.CY | bool | 2 classes |
| cs.MA | bool | 2 classes |
| cs.NE | bool | 2 classes |
| cs.DB | bool | 2 classes |
| Other | bool | 2 classes |
| __index_level_0__ | int64 | 0–541k |
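The eighteen boolean columns form a multi-hot label vector per row, in the column order listed above. A minimal sketch (plain Python, using a hypothetical row dict rather than an actual dataset loader) shows how to decode that vector into a list of category names:

```python
# Order of the boolean label columns, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(row: dict) -> list:
    """Return the category names whose boolean flag is set in this row."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Hypothetical row mirroring the first record shown in this preview.
row = {col: False for col in LABEL_COLUMNS}
row.update({"id": "2104.11320", "cs.LG": True, "Other": True})
print(decode_labels(row))  # → ['cs.LG', 'Other']
```

The same helper works unchanged on rows loaded via `datasets` or pandas, since both expose each record as a mapping from column name to value.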
2104.11320
Federated Double Deep Q-learning for Joint Delay and Energy Minimization in IoT networks
In this paper, we propose a federated deep reinforcement learning framework to solve a multi-objective optimization problem, where we consider minimizing the expected long-term task completion delay and energy consumption of IoT devices. This is done by optimizing offloading decisions, computation resource allocation, and transmit power allocation. Since the formulated problem is a mixed-integer non-linear programming (MINLP) problem, we first cast our problem as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using double deep Q-network (DDQN), where the actions are offloading decisions. The immediate cost of each agent is calculated by solving either the transmit power optimization or the local computation resource optimization, based on the selected offloading decisions (actions). Then, to enhance the learning speed of IoT devices (agents), we incorporate federated learning (FDL) at the end of each episode. FDL enhances the scalability of the proposed DRL framework, creates a context for cooperation between agents, and minimizes their privacy concerns. Our numerical results demonstrate the efficacy of our proposed federated DDQN framework in terms of learning speed compared to federated deep Q-network (DQN) and non-federated DDQN algorithms. In addition, we investigate the impact of batch size, number of network layers, and DDQN target network update frequency on the learning speed of the FDL.
Labels: cs.LG, Other (__index_level_0__: 231,876)
1511.03125
Virtual-MIMO-Boosted Information Propagation on Highways
In vehicular communications, traffic-related information should be spread over the network as quickly as possible to maintain a safer transportation system. This motivates us to develop more efficient information propagation schemes. In this paper, we propose a novel virtual-MIMO-enabled information dissemination scheme, in which the vehicles opportunistically form virtual antenna arrays to boost the transmission range and therefore accelerate information propagation along the highway. We model the information propagation process as a renewal reward process and investigate in detail the \emph{Information Propagation Speed} (IPS) of the proposed scheme. The corresponding closed-form IPS is derived, which shows that the IPS increases cubically with the vehicle density but will ultimately converge to a constant upper bound. Moreover, increased mobility also facilitates the information spreading by offering more communication opportunities. However, the limited network density essentially determines the bottleneck in information spreading. Extensive simulations are carried out to verify our analysis. We also show that the proposed scheme exhibits a significant IPS gain over its conventional counterpart.
Labels: cs.IT (__index_level_0__: 48,720)
1812.06300
Analysis of the $(\mu/\mu_I,\lambda)$-$\sigma$-Self-Adaptation Evolution Strategy with Repair by Projection Applied to a Conically Constrained Problem
A theoretical performance analysis of the $(\mu/\mu_I,\lambda)$-$\sigma$-Self-Adaptation Evolution Strategy ($\sigma$SA-ES) is presented considering a conically constrained problem. Infeasible offspring are repaired using projection onto the boundary of the feasibility region. Closed-form approximations are used for the one-generation progress of the evolution strategy. Approximate deterministic evolution equations are formulated for analyzing the strategy's dynamics. By iterating the evolution equations with the approximate one-generation expressions, the evolution strategy's dynamics can be predicted. The derived theoretical results are compared to experiments for assessing the approximation quality. It is shown that in the steady state the $(\mu/\mu_I,\lambda)$-$\sigma$SA-ES exhibits a performance as if the ES were optimizing a sphere model. Unlike the non-recombinative $(1,\lambda)$-ES, the parental steady state behavior does not evolve on the cone boundary but stays away from the boundary to a certain extent.
Labels: cs.NE (__index_level_0__: 116,583)
1811.03581
Decidability in Robot Manipulation Planning
Consider the problem of planning collision-free motion of $n$ objects in the plane movable through contact with a robot that can autonomously translate in the plane and that can move a maximum of $m \leq n$ objects simultaneously. This represents the abstract formulation of a manipulation planning problem that is proven to be decidable in this paper. The tools used for proving decidability of this simplified manipulation planning problem are, in fact, general enough to handle the decidability problem for the wider class of systems characterized by a stratified configuration space. These include, for example, problems of legged and multi-contact locomotion and bi-manual manipulation. In addition, the described approach does not restrict the dynamics of the manipulation system to be considered.
Labels: cs.RO (__index_level_0__: 112,878)
cs/0611054
How Random is a Coin Toss? Bayesian Inference and the Symbolic Dynamics of Deterministic Chaos
Symbolic dynamics has proven to be an invaluable tool in analyzing the mechanisms that lead to unpredictability and random behavior in nonlinear dynamical systems. Surprisingly, a discrete partition of continuous state space can produce a coarse-grained description of the behavior that accurately describes the invariant properties of an underlying chaotic attractor. In particular, measures of the rate of information production--the topological and metric entropy rates--can be estimated from the outputs of Markov or generating partitions. Here we develop Bayesian inference for k-th order Markov chains as a method for finding generating partitions and estimating entropy rates from finite samples of discretized data produced by coarse-grained dynamical systems.
Labels: cs.LG, cs.IT (__index_level_0__: 539,871)
1506.04834
Tree-structured composition in neural networks without tree-structured architectures
Tree-structured neural networks encode a particular tree geometry for a sentence in the network design. However, these models have at best only slightly outperformed simpler sequence-based models. We hypothesize that neural sequence models like LSTMs are in fact able to discover and implicitly use recursive compositional structure, at least for tasks with clear cues to that structure in the data. We demonstrate this possibility using an artificial data task for which recursive compositional structure is crucial, and find an LSTM-based sequence model can indeed learn to exploit the underlying tree structure. However, its performance consistently lags behind that of tree models, even on large training sets, suggesting that tree-structured models are more effective at exploiting recursive structure.
Labels: cs.LG, cs.CL (__index_level_0__: 44,220)
1412.1185
The Entropy of Attention and Popularity in YouTube Videos
The vast majority of YouTube videos never become popular, languishing in obscurity with few views, no likes, and no comments. We use information theoretical measures based on entropy to examine how time series distributions of common measures of popularity in videos from YouTube's "Trending videos" and "Most recent" video feeds relate to the theoretical concept of attention. While most of the videos in the "Most recent" feed are never popular, some 20% of them have distributions of attention metrics and measures of entropy that are similar to distributions for "Trending videos". We analyze how the 20% of "Most recent" videos that become somewhat popular differ from the 80% that do not, then compare these popular "Most recent" videos to different subsets of "Trending videos" to try to characterize and compare the attention each receives.
Labels: cs.SI, cs.CY (__index_level_0__: 38,079)
2307.07920
A structural study of Big Tech firm-switching of inventors in the post-recession era
Complex systems research and network science have recently been used to provide novel insights into economic phenomena such as patenting behavior and innovation in firms. Several studies have found that increased mobility of inventors, manifested through firm switching or transitioning, is associated with increased overall productivity. This paper proposes a novel structural study of such transitioning inventors, and the role they play in patent co-authorship networks, in a cohort of highly innovative and economically influential companies such as the five Big Tech firms (Apple, Microsoft, Google, Amazon and Meta) in the post-recession period (2010-2022). We formulate and empirically investigate three research questions using Big Tech patent data. Our results show that transitioning inventors tend to have higher degree centrality than the average Big Tech inventor, and that their removal can lead to greater network fragmentation than would be expected by chance. The rate of transition over the 12-year period of study was found to be highest between 2015-2017, suggesting that the Big Tech innovation ecosystem underwent non-trivial shifts during this time. Finally, transition was associated with higher estimated impact of co-authored patents post-transition.
Labels: cs.SI (__index_level_0__: 379,590)
2410.20541
Data-driven Analysis of T-Product-based Dynamical Systems
A wide variety of data can be represented using third-order tensors, spanning applications in chemometrics, psychometrics, and image processing. However, traditional data-driven frameworks are not naturally equipped to process tensors without first unfolding or flattening the data, which can result in a loss of crucial higher-order structural information. In this article, we introduce a novel framework for the data-driven analysis of T-product-based dynamical systems (TPDSs), where the system evolution is governed by the T-product between a third-order dynamic tensor and a third-order state tensor. In particular, we examine the data informativity of TPDSs concerning system identification, stability, controllability, and stabilizability and illustrate significant computational improvements over traditional approaches by leveraging the unique properties of the T-product. The effectiveness of our framework is demonstrated through numerical examples.
Labels: cs.SY (__index_level_0__: 502,857)
2304.04300
Class-Imbalanced Learning on Graphs: A Survey
The rapid advancement in data-driven research has increased the demand for effective graph data analysis. However, real-world data often exhibits class imbalance, leading to poor performance of machine learning models. To overcome this challenge, class-imbalanced learning on graphs (CILG) has emerged as a promising solution that combines the strengths of graph representation learning and class-imbalanced learning. In recent years, significant progress has been made in CILG. Anticipating that such a trend will continue, this survey aims to offer a comprehensive understanding of the current state-of-the-art in CILG and provide insights for future research directions. Concerning the former, we introduce the first taxonomy of existing work and its connection to existing imbalanced learning literature. Concerning the latter, we critically analyze recent work in CILG and discuss urgent lines of inquiry within the topic. Moreover, we provide a continuously maintained reading list of papers and code at https://github.com/yihongma/CILG-Papers.
Labels: cs.AI, cs.LG (__index_level_0__: 357,168)
2007.02509
On the weight and density bounds of polynomial threshold functions
In this report, we show that any n-variable Boolean function can be represented as a polynomial threshold function (PTF) with at most $0.75 \times 2^n$ non-zero integer coefficients, and we give an upper bound on the absolute value of these coefficients. To our knowledge, this provides the best known bound on both the PTF density (number of monomials) and weight (sum of the coefficient magnitudes) of general Boolean functions. The special case of Bent functions is also analyzed, and it is shown that any n-variable Bent function can be represented with integer coefficients less than $2^n$ while also obeying the aforementioned density bound. Finally, sparse Boolean functions, which are almost constant except for $m \ll 2^n$ variable assignments, are shown to have small-weight PTFs with density at most $m+2^{n-1}$.
Labels: cs.LG, Other (__index_level_0__: 185,770)
2008.03781
SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor!
Information on social media comprises various modalities such as text, images, and audio. The NLP and Computer Vision communities often leverage only one prominent modality in isolation to study social media. However, the computational processing of Internet memes needs a hybrid approach. The growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter further suggests that we cannot ignore such multimodal content anymore. To the best of our knowledge, meme emotion analysis has received little attention. The objective of this proposal is to bring the attention of the research community towards the automatic processing of Internet memes. The Memotion analysis task released approximately 10K annotated memes with human-annotated labels, namely sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivation), and their corresponding intensity. The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying intensity of meme emotion. The best performances achieved were F1 (macro average) scores of 0.35, 0.51 and 0.32, respectively for each of the three subtasks.
Labels: cs.CV (__index_level_0__: 191,026)
2103.06498
3D Human Pose, Shape and Texture from Low-Resolution Images and Videos
3D human pose and shape estimation from monocular images has been an active research area in computer vision. Existing deep learning methods for this task rely on high-resolution input, which however, is not always available in many scenarios such as video surveillance and sports broadcasting. Two common approaches to deal with low-resolution images are applying super-resolution techniques to the input, which may result in unpleasant artifacts, or simply training one model for each resolution, which is impractical in many realistic applications. To address the above issues, this paper proposes a novel algorithm called RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme. The proposed method is able to learn 3D body pose and shape across different resolutions with one single model. The self-supervision loss enforces scale-consistency of the output, and the contrastive learning scheme enforces scale-consistency of the deep features. We show that both these new losses provide robustness when learning in a weakly-supervised manner. Moreover, we extend the RSC-Net to handle low-resolution videos and apply it to reconstruct textured 3D pedestrians from low-resolution input. Extensive experiments demonstrate that the RSC-Net can achieve consistently better results than the state-of-the-art methods for challenging low-resolution images.
Labels: cs.AI, cs.CV (__index_level_0__: 224,320)
2203.04476
Part-level Action Parsing via a Pose-guided Coarse-to-Fine Framework
Action recognition from videos, i.e., classifying a video into one of the pre-defined action types, has been a popular topic in the communities of artificial intelligence, multimedia, and signal processing. However, existing methods usually consider an input video as a whole and learn models, e.g., Convolutional Neural Networks (CNNs), with coarse video-level class labels. These methods can only output an action class for the video, but cannot provide fine-grained and explainable cues to answer why the video shows a specific action. Therefore, researchers start to focus on a new task, Part-level Action Parsing (PAP), which aims to not only predict the video-level action but also recognize the frame-level fine-grained actions or interactions of body parts for each person in the video. To this end, we propose a coarse-to-fine framework for this challenging task. In particular, our framework first predicts the video-level class of the input video, then localizes the body parts and predicts the part-level action. Moreover, to balance the accuracy and computation in part-level action parsing, we propose to recognize the part-level actions by segment-level features. Furthermore, to overcome the ambiguity of body parts, we propose a pose-guided positional embedding method to accurately localize body parts. Through comprehensive experiments on a large-scale dataset, i.e., Kinetics-TPS, our framework achieves state-of-the-art performance and outperforms existing methods over a 31.10% ROC score.
Labels: cs.AI, cs.CV (__index_level_0__: 284,487)
1712.04965
Model Predictive Control for Autonomous Driving Based on Time Scaled Collision Cone
In this paper, we present a Model Predictive Control (MPC) framework based on path velocity decomposition paradigm for autonomous driving. The optimization underlying the MPC has a two layer structure wherein first, an appropriate path is computed for the vehicle followed by the computation of optimal forward velocity along it. The very nature of the proposed path velocity decomposition allows for seamless compatibility between the two layers of the optimization. A key feature of the proposed work is that it offloads most of the responsibility of collision avoidance to velocity optimization layer for which computationally efficient formulations can be derived. In particular, we extend our previously developed concept of time scaled collision cone (TSCC) constraints and formulate the forward velocity optimization layer as a convex quadratic programming problem. We perform validation on autonomous driving scenarios wherein proposed MPC repeatedly solves both the optimization layers in receding horizon manner to compute lane change, overtaking and merging maneuvers among multiple dynamic obstacles.
Labels: cs.RO (__index_level_0__: 86,675)
1507.01384
The method of artificial systems
This document is written with the intention to describe in detail a method and means by which a computer program can reason about the world and in so doing, increase its analogue to a living system. As the literature is rife and it is apparent we, as scientists and engineers, have not found the solution, this document will attempt the solution by grounding its intellectual arguments within tenets of human cognition in Western philosophy. The result will be a characteristic description of a method to describe an artificial system analogous to that performed for a human. The approach was the substance of my Master's thesis, explored more deeply during the course of my postdoc research. It focuses primarily on context awareness and choice set within a boundary of available epistemology, which serves to describe it. Expanded upon, such a description strives to discover agreement with Kant's critique of reason to understand how it could be applied to define the architecture of its design. The intention has never been to mimic human or biological systems, rather, to understand the profoundly fundamental rules, when leveraged correctly, results in an artificial consciousness as noumenon while in keeping with the perception of it as phenomenon.
Labels: cs.AI (__index_level_0__: 44,864)
2311.05651
On Mergable Coresets for Polytope Distance
We show that a constant-size constant-error coreset for polytope distance is simple to maintain under merges of coresets. However, increasing the size cannot improve the error bound significantly beyond that constant.
Labels: cs.LG, Other (__index_level_0__: 406,656)
2112.05612
Decentralized Spectrum Access System: Vision, Challenges, and a Blockchain Solution
Spectrum access system (SAS) is widely considered the de facto solution to coordinating dynamic spectrum sharing (DSS) and protecting incumbent users. The current SAS paradigm prescribed by the FCC for the CBRS band and standardized by the WInnForum follows a centralized service model in that a spectrum user subscribes to a SAS server for spectrum allocation service. This model, however, neither tolerates SAS server failures (crash or Byzantine) nor resists dishonest SAS administrators, leading to serious concerns on SAS system reliability and trustworthiness. This is especially concerning for the evolving DSS landscape where an increasing number of SAS service providers and heterogeneous user requirements are coming up. To address these challenges, we propose a novel blockchain-based decentralized SAS architecture called BD-SAS that provides SAS services securely and efficiently, without relying on the trust of each individual SAS server for the overall system trustworthiness. In BD-SAS, a global blockchain (G-Chain) is used for spectrum regulatory compliance while smart contract-enabled local blockchains (L-Chains) are instantiated in individual spectrum zones for automating spectrum access assignment per user request. We hope our vision of a decentralized SAS, the BD-SAS architecture, and discussion on future challenges can open up a new direction towards reliable spectrum management in a decentralized manner.
Labels: cs.SY, Other (__index_level_0__: 270,896)
2208.01191
Implicit Two-Tower Policies
We present a new class of structured reinforcement learning policy-architectures, Implicit Two-Tower (ITT) policies, where the actions are chosen based on the attention scores of their learnable latent representations with those of the input states. By explicitly disentangling action from state processing in the policy stack, we achieve two main goals: substantial computational gains and better performance. Our architectures are compatible with both discrete and continuous action spaces. By conducting tests on 15 environments from OpenAI Gym and DeepMind Control Suite, we show that ITT-architectures are particularly suited for blackbox/evolutionary optimization and the corresponding policy training algorithms outperform their vanilla unstructured implicit counterparts as well as commonly used explicit policies. We complement our analysis by showing how techniques such as hashing and lazy tower updates, critically relying on the two-tower structure of ITTs, can be applied to obtain additional computational improvements.
Labels: cs.AI, cs.LG, cs.NE (__index_level_0__: 311,095)
2102.00523
Co-Seg: An Image Segmentation Framework Against Label Corruption
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training. Neural networks can gradually overfit corrupted labels if directly trained on noisy datasets, leading to severe performance degradation at test time. In this paper, we propose a novel deep learning framework, namely Co-Seg, to collaboratively train segmentation networks on datasets which include low-quality noisy labels. Our approach first trains two networks simultaneously to sift through all samples and obtain a subset with reliable labels. Then, an efficient yet easily-implemented label correction strategy is applied to enrich the reliable subset. Finally, using the updated dataset, we retrain the segmentation network to finalize its parameters. Experiments in two noisy labels scenarios demonstrate that our proposed model can achieve results comparable to those obtained from supervised learning trained on the noise-free labels. In addition, our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.
Labels: cs.LG, cs.CV (__index_level_0__: 217,815)
2109.00675
FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo Federated Learning
Homomorphic encryption (HE) is a promising privacy-preserving technique for cross-silo federated learning (FL), where organizations perform collaborative model training on decentralized data. Despite the strong privacy guarantee, general HE schemes result in significant computation and communication overhead. Prior works employ batch encryption to address this problem, but it is still suboptimal in mitigating communication overhead and is incompatible with sparsification techniques. In this paper, we propose FLASHE, an HE scheme tailored for cross-silo FL. To capture the minimum requirements of security and functionality, FLASHE drops the asymmetric-key design and only involves modular addition operations with random numbers. Depending on whether to accommodate sparsification techniques, FLASHE is optimized in computation efficiency with different approaches. We have implemented FLASHE as a pluggable module atop FATE, an industrial platform for cross-silo FL. Compared to plaintext training, FLASHE slightly increases the training time by $\leq6\%$, with no communication overhead.
Labels: cs.LG, cs.CR, Other (__index_level_0__: 253,189)
1805.08551
Robust Model Predictive Control for Autonomous Vehicles/Self Driving Cars
A robust Model Predictive Control (MPC) approach for controlling front steering of an autonomous vehicle is presented in this paper. We present various approaches to increase the robustness of model predictive control by using weight tuning, a successive on-line linearization of a nonlinear vehicle model to track position error and successive on-line linearization to track velocity error. Results of the effectiveness of each method in terms of accuracy and computational load are discussed.
Labels: cs.RO, cs.SY (__index_level_0__: 98,171)
1402.3511
A Clockwork RNN
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
Labels: cs.LG, cs.NE (__index_level_0__: 30,881)
2210.08287
Linear Scalarization for Byzantine-robust learning on non-IID data
In this work, we study the problem of Byzantine-robust learning when data among clients is heterogeneous. We focus on poisoning attacks targeting the convergence of SGD. Although this problem has received great attention, the main Byzantine defenses rely on the IID assumption, causing them to fail when the data distribution is non-IID, even with no attack. We propose the use of Linear Scalarization (LS) as an enhancing method to enable current defenses to circumvent Byzantine attacks in the non-IID setting. The LS method is based on the incorporation of a trade-off vector that penalizes the suspected malicious clients. Empirical analysis corroborates that the proposed LS variants are viable in the IID setting. For mild to strong non-IID data splits, LS is either comparable to or outperforms current approaches under state-of-the-art Byzantine attack scenarios.
Labels: cs.LG (__index_level_0__: 324,078)
2307.01158
Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning
The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings. We present a method of grounding semantically meaningful, human-interpretable beliefs within policies modeled by deep networks. We then consider the task of 2nd-order belief prediction. We propose that the ability of each agent to predict the beliefs of the other agents can be used as an intrinsic reward signal for multi-agent reinforcement learning. Finally, we present preliminary empirical results in a mixed cooperative-competitive environment.
Labels: cs.AI, cs.LG, cs.MA (__index_level_0__: 377,250)
2312.03131
Heterogeneous radio access with multiple latency targets
Since the advent of ultra-reliable and low-latency communications (URLLC), the requirements of low-latency applications tend to be completely characterized by a single pre-defined latency-reliability target. That is, operation is optimal whenever the pre-defined latency threshold is met but the system is assumed to be in error when the latency threshold is violated. This vision is severely limited and does not capture the real requirements of most applications, where multiple latency thresholds can be defined, together with incentives or rewards associated with meeting each of them. Such formulation is a generalization of the single-threshold case popularized by URLLC and, in the asymptotic case, approximates to defining a cost for each point in the support of the latency distribution. In this paper, we explore the implications of defining multiple latency targets on the design of access protocols and on the optimization of repetition-based access strategies in orthogonal and non-orthogonal multiple access scenarios with users that present heterogeneous traffic characteristics and requirements. We observe that the access strategies of the users can be effectively adapted to the requirements of the application by carefully defining the latency targets and the associated rewards.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
413,145
2310.10765
BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys
Rapid progress has been made in instruction-learning for image editing with natural-language instruction, as exemplified by InstructPix2Pix. In biomedicine, such methods can be applied to counterfactual image generation, which helps differentiate causal structure from spurious correlation and facilitate robust image interpretation for disease progression modeling. However, generic image-editing models are ill-suited for the biomedical domain, and counterfactual biomedical image generation is largely underexplored. In this paper, we present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning from multimodal patient journeys. Given a patient with two biomedical images taken at different time points, we use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression. The resulting triples (prior image, progression description, new image) are then used to train a latent diffusion model for counterfactual biomedical image generation. Given the relative scarcity of image time series data, we introduce a two-stage curriculum that first pretrains the denoising network using the much more abundant single image-report pairs (with dummy prior image), and then continues training using the counterfactual triples. Experiments using the standard MIMIC-CXR dataset demonstrate the promise of our method. In a comprehensive battery of tests on counterfactual medical image generation, BiomedJourney substantially outperforms prior state-of-the-art methods in instruction image editing and medical image generation such as InstructPix2Pix and RoentGen. To facilitate future study in counterfactual medical generation, we plan to release our instruction-learning code and pretrained models.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
400,376
2005.06653
Structured Query-Based Image Retrieval Using Scene Graphs
A structured query can capture the complexity of object interactions (e.g. 'woman rides motorcycle') unlike single objects (e.g. 'woman' or 'motorcycle'). Retrieval using structured queries therefore is much more useful than single object retrieval, but a much more challenging problem. In this paper we present a method which uses scene graph embeddings as the basis for an approach to image retrieval. We examine how visual relationships, derived from scene graphs, can be used as structured queries. The visual relationships are directed subgraphs of the scene graph with a subject and object as nodes connected by a predicate relationship. Notably, we are able to achieve high recall even on low to medium frequency objects found in the long-tailed COCO-Stuff dataset, and find that adding a visual relationship-inspired loss boosts our recall by 10% in the best case.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
177,071
2202.08926
On Guiding Visual Attention with Language Specification
While real world challenges typically define visual categories with language words or phrases, most visual classification methods define categories with numerical indices. However, the language specification of the classes provides an especially useful prior for biased and noisy datasets, where it can help disambiguate what features are task-relevant. Recently, large-scale multimodal models have been shown to recognize a wide variety of high-level concepts from a language specification even without additional image training data, but they are often unable to distinguish classes for more fine-grained tasks. CNNs, in contrast, can extract subtle image features that are required for fine-grained discrimination, but will overfit to any bias or noise in datasets. Our insight is to use high-level language specification as advice for constraining the classification evidence to task-relevant features, instead of distractors. To do this, we ground task-relevant words or phrases with attention maps from a pretrained large-scale model. We then use this grounding to supervise a classifier's spatial attention away from distracting context. We show that supervising spatial attention in this way improves performance on classification tasks with biased and noisy data, including about 3-15% worst-group accuracy improvements and 41-45% relative improvements on fairness metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
281,027
2011.01788
Loss Bounds for Approximate Influence-Based Abstraction
Sequential decision making techniques hold great promise to improve the performance of many real-world systems, but computational complexity hampers their principled application. Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them. While computing exact representations of such influence might be intractable, learning approximate representations offers a promising approach to enable scalable solutions. This paper investigates the performance of such approaches from a theoretical perspective. The primary contribution is the derivation of sufficient conditions on approximate influence representations that can guarantee solutions with small value loss. In particular we show that neural networks trained with cross entropy are well suited to learn approximate influence representations. Moreover, we provide a sample based formulation of the bounds, which reduces the gap to applications. Finally, driven by our theoretical insights, we propose approximation error estimators, which are empirically shown to correlate well with the value loss.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
204,706
2408.15857
What is YOLOv8: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector
This study presents a detailed analysis of the YOLOv8 object detection model, focusing on its architecture, training techniques, and performance improvements over previous iterations like YOLOv5. Key innovations, including the CSPNet backbone for enhanced feature extraction, the FPN+PAN neck for superior multi-scale object detection, and the transition to an anchor-free approach, are thoroughly examined. The paper reviews YOLOv8's performance across benchmarks like Microsoft COCO and Roboflow 100, highlighting its high accuracy and real-time capabilities across diverse hardware platforms. Additionally, the study explores YOLOv8's developer-friendly enhancements, such as its unified Python package and CLI, which streamline model training and deployment. Overall, this research positions YOLOv8 as a state-of-the-art solution in the evolving object detection field.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
484,100
2311.11210
HiH: A Multi-modal Hierarchy in Hierarchy Network for Unconstrained Gait Recognition
Gait recognition has achieved promising advances in controlled settings, yet it significantly struggles in unconstrained environments due to challenges such as view changes, occlusions, and varying walking speeds. Additionally, efforts to fuse multiple modalities often face limited improvements because of cross-modality incompatibility, particularly in outdoor scenarios. To address these issues, we present a multi-modal Hierarchy in Hierarchy network (HiH) that integrates silhouette and pose sequences for robust gait recognition. HiH features a main branch that utilizes Hierarchical Gait Decomposer (HGD) modules for depth-wise and intra-module hierarchical examination of general gait patterns from silhouette data. This approach captures motion hierarchies from overall body dynamics to detailed limb movements, facilitating the representation of gait attributes across multiple spatial resolutions. Complementing this, an auxiliary branch, based on 2D joint sequences, enriches the spatial and temporal aspects of gait analysis. It employs a Deformable Spatial Enhancement (DSE) module for pose-guided spatial attention and a Deformable Temporal Alignment (DTA) module for aligning motion dynamics through learned temporal offsets. Extensive evaluations across diverse indoor and outdoor datasets demonstrate HiH's state-of-the-art performance, affirming a well-balanced trade-off between accuracy and efficiency.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
408,852
2409.19600
An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes
Partial Label Learning (PLL) is a typical weakly supervised learning task, which assumes each training instance is annotated with a set of candidate labels containing the ground-truth label. Recent PLL methods adopt identification-based disambiguation to alleviate the influence of false positive labels and achieve promising performance. However, they require all classes in the test set to have appeared in the training set, ignoring the fact that new classes will keep emerging in real applications. To address this issue, in this paper, we focus on the problem of Partial Label Learning with Augmented Class (PLLAC), where one or more augmented classes are not visible in the training stage but appear in the inference stage. Specifically, we propose an unbiased risk estimator with theoretical guarantees for PLLAC, which estimates the distribution of augmented classes by differentiating the distribution of known classes from unlabeled data and can be equipped with arbitrary PLL loss functions. Besides, we provide a theoretical analysis of the estimation error bound of the estimator, which guarantees the convergence of the empirical risk minimizer to the true risk minimizer as the number of training data tends to infinity. Furthermore, we add a risk-penalty regularization term in the optimization objective to alleviate the influence of the over-fitting issue caused by negative empirical risk. Extensive experiments on benchmark, UCI and real-world datasets demonstrate the effectiveness of the proposed approach.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
492,762
2205.02277
Improved error bounds for the distance distribution of Reed-Solomon codes
We use the generating function approach to derive simple expressions for the factorial moments of the distance distribution over Reed-Solomon codes. We obtain better upper bounds for the error term of a counting formula given by Li and Wan, which gives nontrivial estimates on the number of polynomials over finite fields with prescribed leading coefficients and a given number of linear factors. This improvement leads to new results on the classification of deep holes of Reed-Solomon codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
294,887
2407.06324
B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory
We describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a-priori unknown bound while making efficient use of finite resources for inference. Current architectures use such resources to represent data either eidetically over a finite span ("context" in Transformers), or fading over an infinite span (in State Space Models, or SSMs). Recent hybrid architectures have combined eidetic and fading memory, but with limitations that do not allow the designer or the learning process to seamlessly modulate the two, nor to extend the eidetic memory span. We leverage ideas from Stochastic Realization Theory to develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within an elementary composable module. The overall architecture can be used to implement models that can access short-term eidetic memory "in-context," permanent structural memory "in-weights," fading memory "in-state," and long-term eidetic memory "in-storage" by natively incorporating retrieval from an asynchronously updated memory. We show that Transformers, existing SSMs such as Mamba, and hybrid architectures such as Jamba are special cases of B'MOJO and describe a basic implementation, to be open sourced, that can be stacked and scaled efficiently in hardware. We test B'MOJO on transductive inference tasks, such as associative recall, where it outperforms existing SSMs and Hybrid models; as a baseline, we test ordinary language modeling where B'MOJO achieves perplexity comparable to similarly-sized Transformers and SSMs up to 1.4B parameters, while being up to 10% faster to train. Finally, we show that B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens, four-fold the length of the longest sequences seen during training.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
471,358
2109.10052
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner. To demonstrate how our methods can be used to analyze emotion and stereotype shifts due to linguistic experience, we use fine-tuning on news sources as a case study. Our experiments expose how attitudes towards different social groups vary across models and how quickly emotions and stereotypes can shift at the fine-tuning stage.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
256,492
1806.08764
Learning Traffic Flow Dynamics using Random Fields
This paper presents a mesoscopic traffic flow model that explicitly describes the spatio-temporal evolution of the probability distributions of vehicle trajectories. The dynamics are represented by a sequence of factor graphs, which enable learning of traffic dynamics from limited Lagrangian measurements using an efficient message passing technique. The approach ensures that estimated speeds and traffic densities are non-negative with probability one. The estimation technique is tested using vehicle trajectory datasets generated using an independent microscopic traffic simulator and is shown to efficiently reproduce traffic conditions with probe vehicle penetration levels as low as 10\%. The proposed algorithm is also compared with state-of-the-art traffic state estimation techniques developed for the same purpose and it is shown that the proposed approach can outperform the state-of-the-art techniques in terms of reconstruction accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
101,216
1705.00349
Network Inspection for Detecting Strategic Attacks
This article studies a problem of strategic network inspection, in which a defender (agency) is tasked with detecting the presence of multiple attacks in the network. An inspection strategy entails monitoring the network components, possibly in a randomized manner, using a given number of detectors. We formulate the network inspection problem $(\mathcal{P})$ as a large-scale bilevel optimization problem, in which the defender seeks to determine an inspection strategy with minimum number of detectors that ensures a target expected detection rate under worst-case attacks. We show that optimal solutions of $(\mathcal{P})$ can be obtained from the equilibria of a large-scale zero-sum game. Our equilibrium analysis involves both game-theoretic and combinatorial arguments, and leads to a computationally tractable approach to solve $(\mathcal{P})$. Firstly, we construct an approximate solution by utilizing solutions of minimum set cover (MSC) and maximum set packing (MSP) problems, and evaluate its detection performance. In fact, this construction generalizes some of the known results in network security games. Secondly, we leverage properties of the optimal detection rate to iteratively refine our MSC/MSP-based solution through a column generation procedure. Computational results on benchmark water networks demonstrate the scalability, performance, and operational feasibility of our approach. The results indicate that utilities can achieve a high level of protection in large-scale networks by strategically positioning a small number of detectors.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
72,663
2202.04488
CRAT-Pred: Vehicle Trajectory Prediction with Crystal Graph Convolutional Neural Networks and Multi-Head Self-Attention
Predicting the motion of surrounding vehicles is essential for autonomous vehicles, as it governs their own motion plan. Current state-of-the-art vehicle prediction models heavily rely on map information. In reality, however, this information is not always available. We therefore propose CRAT-Pred, a multi-modal and non-rasterization-based trajectory prediction model, specifically designed to effectively model social interactions between vehicles, without relying on map information. CRAT-Pred applies a graph convolution method originating from the field of material science to vehicle prediction, allowing it to efficiently leverage edge features, and combines it with multi-head self-attention. Compared to other map-free approaches, the model achieves state-of-the-art performance with a significantly lower number of model parameters. In addition to that, we quantitatively show that the self-attention mechanism is able to learn social interactions between vehicles, with the weights representing a measurable interaction score. The source code is publicly available.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
279,566
2203.10249
Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension
Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances, which are either scattered around or implicitly implied in different turns of conversations. Therefore, dialogue comprehension requires diverse capabilities such as paraphrasing, summarizing, and commonsense reasoning. Towards the objective of pre-training a zero-shot dialogue comprehension model, we develop a novel narrative-guided pre-training strategy that learns by narrating the key information from a dialogue input. However, the dialogue-narrative parallel corpus for such a pre-training strategy is currently unavailable. For this reason, we first construct a dialogue-narrative parallel corpus by automatically aligning movie subtitles and their synopses. We then pre-train a BART model on the data and evaluate its performance on four dialogue-based tasks that require comprehension. Experimental results show that our model not only achieves superior zero-shot performance but also exhibits stronger fine-grained dialogue comprehension capabilities. The data and code are available at https://github.com/zhaochaocs/Diana
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
286,459
2010.00638
Tabular GANs for uneven distribution
GANs are well known for their success in realistic image generation; however, they can be applied to tabular data generation as well. We review and examine some recent papers about tabular GANs in action. We generate data to bring the training distribution closer to the test distribution. We then compare the performance of a model trained on the initial training dataset with that of a model trained on the training set augmented with GAN-generated data, and we also train the model on training samples selected by adversarial training. We show that using a GAN might be an option in the case of uneven data distribution between train and test data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
198,343
2106.15083
ElephantBook: A Semi-Automated Human-in-the-Loop System for Elephant Re-Identification
African elephants are vital to their ecosystems, but their populations are threatened by a rise in human-elephant conflict and poaching. Monitoring population dynamics is essential in conservation efforts; however, tracking elephants is a difficult task, usually relying on the invasive and sometimes dangerous placement of GPS collars. Although there have been many recent successes in the use of computer vision techniques for automated identification of other species, identification of elephants is extremely difficult and typically requires expertise as well as familiarity with elephants in the population. We have built and deployed a web-based platform and database for human-in-the-loop re-identification of elephants combining manual attribute labeling and state-of-the-art computer vision algorithms, known as ElephantBook. Our system is currently in use at the Mara Elephant Project, helping monitor the protected and at-risk population of elephants in the Greater Maasai Mara ecosystem. ElephantBook makes elephant re-identification usable by non-experts and scalable for use by multiple conservation NGOs.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
243,603
2101.05913
Supervised Transfer Learning at Scale for Medical Imaging
Transfer learning is a standard technique to improve performance on tasks with limited data. However, for medical imaging, the value of transfer learning is less clear. This is likely due to the large domain mismatch between the usual natural-image pre-training (e.g. ImageNet) and medical images. However, recent advances in transfer learning have shown substantial improvements from scale. We investigate whether modern methods can change the fortune of transfer learning for medical imaging. For this, we study the class of large-scale pre-trained networks presented by Kolesnikov et al. on three diverse imaging tasks: chest radiography, mammography, and dermatology. We study both transfer performance and critical properties for the deployment in the medical domain, including: out-of-distribution generalization, data-efficiency, sub-group fairness, and uncertainty estimation. Interestingly, we find that for some of these properties transfer from natural to medical images is indeed extremely effective, but only when performed at sufficient scale.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
215,547
2103.11136
Comprehensive Analysis of Continuously Variable Series Reactor Using G-C Framework
The Continuously Variable Series Reactor (CVSR) has the ability to regulate the reactance of an ac circuit using the magnetizing characteristics of its ferromagnetic core, shared by an ac and a dc winding, to control power flow, damp oscillations and limit fault currents. In order to understand and utilize a CVSR in the power grid, it is essential to know all of its operational characteristics. The gyrator-capacitor approach has been applied to model the electromagnetic coupling between the two circuits, the controlled ac circuit and the control dc circuit of the device. In this paper, we investigate the CVSR's behavior in terms of the induced voltage across the dc winding, the flux density within the core's branches, and the power exchange between the two circuits during normal operation and fault conditions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
225,683
2107.00114
QuickFlex: a Fast Algorithm for Flexible Region Construction for the TSO-DSO Coordination
Most of the new technological changes in power systems are expected to take place in distribution grids. The enormous potential for distribution flexibility could meet the transmission system's needs, changing the paradigm of generator-centric energy and ancillary services provided to a demand-centric one, by placing more importance on smaller resources, such as flexible demands and electric vehicles. For unlocking such capabilities, it is essential to understand the aggregated flexibility that can be harvested from the large population of new technologies located in distribution grids. Distribution grids, therefore, could provide aggregated flexibility at the transmission level. To date, most computational methods for estimating the aggregated flexibility at the interface between distribution grids and transmission grids have the drawback of requiring significant computational time, which hinders their applicability. This paper presents a new algorithm, coined QuickFlex, for constructing the flexibility domain of distribution grids. Contrary to previous methods, the accuracy of the flexibility domain can be selected a priori. Our method requires few iterations for constructing the flexibility region, and the number of iterations needed is largely independent of the distribution grid's size and number of flexible elements. Numerical experiments are performed on four grids ranging from 5 nodes to 123 nodes. It is shown that QuickFlex outperforms existing proposals in the literature in both speed and accuracy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
244,042
2209.14364
Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning
In recent years, the geospatial industry has been developing at a steady pace. This growth implies the addition of satellite constellations that produce a copious supply of satellite imagery and other Remote Sensing data on a daily basis. Sometimes this information, even when publicly available, sits unaccounted for due to its sheer size. Processing such large amounts of data with the help of human labour or by using traditional automation methods is not always a viable solution from the standpoint of both time and other resources. Within the present work, we propose an approach for creating a multi-modal and spatio-temporal dataset comprised of publicly available Remote Sensing data and test its feasibility using state-of-the-art Machine Learning (ML) techniques, namely Convolutional Neural Network (CNN) models that are capable of separating the different classes of vegetation present in the proposed dataset. The popularity and success of similar methods in the context of Geographical Information Systems (GIS) and Computer Vision (CV) more generally indicate that such methods should be taken into consideration and further analysed and developed.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
320,217
1902.00541
The Efficacy of SHIELD under Different Threat Models
In this appraisal paper, we evaluate the efficacy of SHIELD, a compression-based defense framework for countering adversarial attacks on image classification models, which was published at KDD 2018. Here, we consider alternative threat models not studied in the original work, where we assume that an adaptive adversary is aware of the ensemble defense approach, the defensive pre-processing, and the architecture and weights of the models used in the ensemble. We define scenarios with varying levels of threat and empirically analyze the proposed defense by varying the degree of information available to the attacker, spanning from a full white-box attack to the gray-box threat model described in the original work. To evaluate the robustness of the defense against an adaptive attacker, we consider the targeted-attack success rate of the Projected Gradient Descent (PGD) attack, which is a strong gradient-based adversarial attack proposed in adversarial machine learning research. We also experiment with training the SHIELD ensemble from scratch, which is different from re-training using a pre-trained model as done in the original work. We find that the targeted PGD attack has a success rate of 64.3% against the original SHIELD ensemble in the full white box scenario, but this drops to 48.9% if the models used in the ensemble are trained from scratch instead of being retrained. Our experiments further reveal that an ensemble whose models are re-trained indeed have higher correlation in the cosine similarity space, and models that are trained from scratch are less vulnerable to targeted attacks in the white-box and gray-box scenarios.
false
false
false
false
true
false
true
false
false
false
false
true
true
false
false
false
false
false
120,424
2311.02369
TACNET: Temporal Audio Source Counting Network
In this paper, we introduce the Temporal Audio Source Counting Network (TaCNet), an innovative architecture that addresses limitations in audio source counting tasks. TaCNet operates directly on raw audio inputs, eliminating complex preprocessing steps and simplifying the workflow. Notably, it excels in real-time speaker counting, even with truncated input windows. Our extensive evaluation, conducted using the LibriCount dataset, underscores TaCNet's exceptional performance, positioning it as a state-of-the-art solution for audio source counting tasks. With an average accuracy of 74.18 percent over 11 classes, TaCNet demonstrates its effectiveness across diverse scenarios, including applications involving Chinese and Persian languages. This cross-lingual adaptability highlights its versatility and potential impact.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
405,413
2412.17845
Polymer/paper-based double touch mode capacitive pressure sensing element for wireless control of robotic arm
In this work, a large area, low cost and flexible polymer/paper-based double touch mode capacitive pressure sensor is demonstrated. Garage fabrication processes are used which only require cutting, taping and assembly of aluminum (Al) coated polyimide (PI) foil, PI tape and double-sided scotch tape. The presented pressure sensor operates in different pressure regions i.e. normal (0 to 7.5 kPa), transition (7.5 to 14.24 kPa), linear (14.24 to 54.9 kPa) and saturation (above 54.9 kPa). The advantages of the demonstrated double touch mode capacitive pressure sensors are low temperature drift, long linear range, high pressure sensitivity, precise pressure measurement and large die area. The linear output along with a high sensitivity range (14.24 to 54.9 kPa pressure range) of the sensor are utilized to wirelessly control the movement of a robotic arm with precise rotation and tilt movement capabilities.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
520,146
1806.00548
A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models
We consider the problem of including additional knowledge in estimating sparse Gaussian graphical models (sGGMs) from aggregated samples, arising often in bioinformatics and neuroimaging applications. Previous joint sGGM estimators either fail to use existing knowledge or cannot scale-up to many tasks (large $K$) under a high-dimensional (large $p$) situation. In this paper, we propose a novel \underline{J}oint \underline{E}lementary \underline{E}stimator incorporating additional \underline{K}nowledge (JEEK) to infer multiple related sparse Gaussian Graphical models from large-scale heterogeneous data. Using domain knowledge as weights, we design a novel hybrid norm as the minimization objective to enforce the superposition of two weighted sparsity constraints, one on the shared interactions and the other on the task-specific structural patterns. This enables JEEK to elegantly consider various forms of existing knowledge based on the domain at hand and avoid the need to design knowledge-specific optimization. JEEK is solved through a fast and entry-wise parallelizable solution that largely improves the computational efficiency of the state-of-the-art $O(p^5K^4)$ to $O(p^2K^4)$. We conduct a rigorous statistical analysis showing that JEEK achieves the same convergence rate $O(\log(Kp)/n_{tot})$ as the state-of-the-art estimators that are much harder to compute. Empirically, on multiple synthetic datasets and two real-world datasets, JEEK significantly outperforms the state-of-the-art in speed while achieving the same level of prediction accuracy. Available as R tool @ http://jointnets.org/
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
99,329
1511.04066
Properly Learning Poisson Binomial Distributions in Almost Polynomial Time
We give an algorithm for properly learning Poisson binomial distributions. A Poisson binomial distribution (PBD) of order $n$ is the discrete probability distribution of the sum of $n$ mutually independent Bernoulli random variables. Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{p}$, our algorithm runs in time $(1/\epsilon)^{O(\log \log (1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{p}$ in total variation distance. The previously best known running time for properly learning PBDs was $(1/\epsilon)^{O(\log(1/\epsilon))}$. As one of our main contributions, we provide a novel structural characterization of PBDs. We prove that, for all $\epsilon >0,$ there exists an explicit collection $\cal{M}$ of $(1/\epsilon)^{O(\log \log (1/\epsilon))}$ vectors of multiplicities, such that for any PBD $\mathbf{p}$ there exists a PBD $\mathbf{q}$ with $O(\log(1/\epsilon))$ distinct parameters whose multiplicities are given by some element of ${\cal M}$, such that $\mathbf{q}$ is $\epsilon$-close to $\mathbf{p}$. Our proof combines tools from Fourier analysis and algebraic geometry. Our approach to the proper learning problem is as follows: Starting with an accurate non-proper hypothesis, we fit a PBD to this hypothesis. More specifically, we essentially start with the hypothesis computed by the computationally efficient non-proper learning algorithm in our recent work~\cite{DKS15}. Our aforementioned structural characterization allows us to reduce the corresponding fitting problem to a collection of $(1/\epsilon)^{O(\log \log(1/\epsilon))}$ systems of low-degree polynomial inequalities. We show that each such system can be solved in time $(1/\epsilon)^{O(\log \log(1/\epsilon))}$, which yields the overall running time of our algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
48,838
2007.01290
Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach
Structural equation models (SEMs) are widely used in sciences, ranging from economics to psychology, to uncover causal relationships underlying a complex system under consideration and estimate structural parameters of interest. We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation. We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using stochastic gradient descent. We consider both 2-layer and multi-layer NNs with ReLU activation functions and prove global convergence in an overparametrized regime, where the number of neurons is diverging. The results are established using techniques from online learning and local linearization of NNs, and improve on the current state of the art in several aspects. For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
185,385
1703.03714
Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue
Our overall program objective is to provide more natural ways for soldiers to interact and communicate with robots, much like how soldiers communicate with other soldiers today. We describe how the Wizard-of-Oz (WOz) method can be applied to multimodal human-robot dialogue in a collaborative exploration task. While the WOz method can help design robot behaviors, traditional approaches place the burden of decisions on a single wizard. In this work, we consider two wizards to stand in for robot navigation and dialogue management software components. The scenario used to elicit data is one in which a human-robot team is tasked with exploring an unknown environment: a human gives verbal instructions from a remote location and the robot follows them, clarifying possible misunderstandings as needed via dialogue. We found the division of labor between wizards to be workable, which holds promise for future software development.
true
false
false
false
true
false
false
true
true
false
false
false
false
false
false
false
false
false
69,770
1810.04714
Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation
We propose the BinaryGAN, a novel generative adversarial network (GAN) that uses binary neurons at the output layer of the generator. We employ the sigmoid-adjusted straight-through estimators to estimate the gradients for the binary neurons and train the whole network by end-to-end backpropagation. The proposed model is able to directly generate binary-valued predictions at test time. We implement such a model to generate binarized MNIST digits and experimentally compare the performance for different types of binary neurons, GAN objectives and network architectures. Although the results are still preliminary, we show that it is possible to train a GAN that has binary neurons and that the use of gradient estimators can be a promising direction for modeling discrete distributions with GANs. For reproducibility, the source code is available at https://github.com/salu133445/binarygan .
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
110,091
1811.03305
BAR: Bayesian Activity Recognition using variational inference
Uncertainty estimation in deep neural networks is essential for designing reliable and robust AI systems. Applications such as video surveillance for identifying suspicious activities are designed with deep neural networks (DNNs), but DNNs do not provide uncertainty estimates. Capturing reliable uncertainty estimates in safety- and security-critical applications will help to establish trust in the AI system. Our contribution is to apply a Bayesian deep learning framework to a visual activity recognition application and quantify model uncertainty along with principled confidence. We utilize the stochastic variational inference technique while training the Bayesian DNNs to infer the approximate posterior distribution around model parameters and perform Monte Carlo sampling on the posterior of model parameters to obtain the predictive distribution. We show that Bayesian inference applied to DNNs provides reliable confidence measures for the visual activity recognition task as compared to conventional DNNs. We also show that our method improves the visual activity recognition precision-recall AUC by 6.2% compared to a non-Bayesian baseline. We evaluate our models on the Moments-In-Time (MiT) activity recognition dataset by selecting a subset of in- and out-of-distribution video samples.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
112,808
2011.03164
Learning Power Control for Cellular Systems with Heterogeneous Graph Neural Network
Optimizing power control in multi-cell cellular networks with deep learning enables such a non-convex problem to be implemented in real-time. When channels are time-varying, the deep neural networks (DNNs) need to be re-trained frequently, which calls for low training complexity. To reduce the number of training samples and the size of DNN required to achieve good performance, a promising approach is to embed the DNNs with a priori knowledge. Since cellular networks can be modelled as a graph, it is natural to employ graph neural networks (GNNs) for learning, which exhibit permutation invariance (PI) and equivalence (PE) properties. Unlike the homogeneous GNNs that have been used for wireless problems, whose outputs are invariant or equivalent to arbitrary permutations of vertices, heterogeneous GNNs (HetGNNs), which are more appropriate to model cellular networks, are only invariant or equivalent to some permutations. If the PI or PE properties of the HetGNN do not match the property of the task to be learned, the performance degrades dramatically. In this paper, we show that the power control policy has a combination of different PI and PE properties, and existing HetGNNs do not satisfy these properties. We then design a parameter sharing scheme for HetGNNs such that the learned relationship satisfies the desired properties. Simulation results show that the sample complexity and the size of the designed GNN for learning the optimal power control policy in multi-user multi-cell networks are much lower than those of existing DNNs, when achieving the same sum rate loss from the numerically obtained solutions.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
205,156
1803.05588
Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment
Facial action unit (AU) detection and face alignment are two highly correlated tasks since facial landmarks can provide precise AU locations to facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works often treat face alignment as a preprocessing step and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned first, and high-level features of face alignment are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively. Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on the BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms the state-of-the-art methods for AU detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
92,663
2103.10390
Challenges of 3D Surface Reconstruction in Capsule Endoscopy
Essential for improving the accuracy and reliability of bowel cancer screening, three-dimensional (3D) surface reconstruction using capsule endoscopy (CE) images remains challenging due to CE hardware and software limitations. This report generally focuses on challenges associated with 3D visualization and specifically investigates the impact of the indeterminate selection of the angle of the line of sight on 3D surfaces. Furthermore, it demonstrates that impact through 3D surfaces viewed at the same azimuth angles and different elevation angles of the line of sight. The report concludes that 3D printing of reconstructed 3D surfaces can potentially overcome line of sight indeterminate selection and 2D screen visual restriction-related errors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
225,440
2104.09798
CoDR: Computation and Data Reuse Aware CNN Accelerator
Computation and data reuse are critical for resource-limited Convolutional Neural Network (CNN) accelerators. This paper presents Universal Computation Reuse to exploit weight sparsity, repetition, and similarity simultaneously in a convolutional layer. Moreover, CoDR decreases the cost of weight memory access by proposing a customized Run-Length Encoding scheme, and the number of memory accesses to the intermediate results by introducing an input and output stationary dataflow. Compared to two recent compressed CNN accelerators with the same area of 2.85 mm^2, CoDR decreases SRAM access by 5.08x and 7.99x, and consumes 3.76x and 6.84x less energy.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
true
false
true
231,360
2309.08030
AV2Wav: Diffusion-Based Re-synthesis from Continuous Self-supervised Features for Audio-Visual Speech Enhancement
Speech enhancement systems are typically trained using pairs of clean and noisy speech. In audio-visual speech enhancement (AVSE), there is not as much ground-truth clean data available; most audio-visual datasets are collected in real-world environments with background noise and reverberation, hampering the development of AVSE. In this work, we introduce AV2Wav, a resynthesis-based audio-visual speech enhancement approach that can generate clean speech despite the challenges of real-world training data. We obtain a subset of nearly clean speech from an audio-visual corpus using a neural quality estimator, and then train a diffusion model on this subset to generate waveforms conditioned on continuous speech representations from AV-HuBERT with noise-robust training. We use continuous rather than discrete representations to retain prosody and speaker information. With this vocoding task alone, the model can perform speech enhancement better than a masking-based baseline. We further fine-tune the diffusion model on clean/noisy utterance pairs to improve the performance. Our approach outperforms a masking-based baseline in terms of both automatic metrics and a human listening test and is close in quality to the target speech in the listening test. Audio samples can be found at https://home.ttic.edu/~jcchou/demo/avse/avse_demo.html.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
392,004
2112.12616
Deep Filtering with DNN, CNN and RNN
This paper is about a deep learning approach for linear and nonlinear filtering. The idea is to train a neural network with Monte Carlo samples generated from a nominal dynamic model. Then the network weights are applied to Monte Carlo samples from an actual dynamic model. A main focus of this paper is on deep filters with three major neural network architectures (DNN, CNN, RNN). Our deep filter compares favorably to the traditional Kalman filter in linear cases and outperforms the extended Kalman filter in nonlinear cases. Then a switching model with jumps is studied to show the adaptiveness and power of our deep filtering. Among the three major NNs, the CNN outperforms the others on average, while the RNN does not seem to be suitable for the filtering problem. One advantage of the deep filter is its robustness when the nominal model and actual model differ. Another advantage of deep filtering is that real data can be used directly to train the deep neural network. Therefore, model calibration can be bypassed altogether.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
273,016
1402.2071
Attribute Dependencies for Data with Grades
This paper examines attribute dependencies in data that involve grades, such as a grade to which an object is red or a grade to which two objects are similar. We thus extend the classical agenda by allowing graded, or fuzzy, attributes instead of Boolean attributes in case of attribute implications, and allowing approximate match based on degrees of similarity instead of exact match in case of functional dependencies. In a sense, we move from bivalence, inherently present in the now-available theories of dependencies, to a more flexible setting that involves grades. Such a shift has far-reaching consequences. We argue that a reasonable theory of dependencies may be developed by making use of mathematical fuzzy logic. Namely, the theory of dependencies is then based on a solid logic calculus the same way the classical dependencies are based on classical logic. For instance, rather than handling degrees of similarity in an ad hoc manner, we consistently treat them as truth values, the same way as true (match) and false (mismatch) are treated in classical theories. In addition, several notions intuitively embraced in the presence of grades, such as a degree of validity of a particular dependence or a degree of entailment, naturally emerge and receive a conceptually clean treatment in the presented approach. In the paper, we discuss motivations, provide basic notions of syntax and semantics, and develop basic results which include entailment of dependencies, associated closure structures, a logic of dependencies with two versions of completeness theorem, results and algorithms regarding complete non-redundant sets of dependencies, relationship to and a possible reductionist interface to classical dependencies, and relationship to functional dependencies over domains with similarity.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
30,750
2202.04499
Lightweight Jet Reconstruction and Identification as an Object Detection Task
We apply object detection techniques based on deep convolutional blocks to end-to-end jet identification and reconstruction tasks encountered at the CERN Large Hadron Collider (LHC). Collision events produced at the LHC and represented as an image composed of calorimeter and tracker cells are given as an input to a Single Shot Detection network. The algorithm, named PFJet-SSD, performs simultaneous localization, classification and regression tasks to cluster jets and reconstruct their features. This all-in-one single feed-forward pass gives advantages in terms of execution time and an improved accuracy w.r.t. traditional rule-based methods. A further gain is obtained from network slimming, homogeneous quantization, and optimized runtime for meeting memory and latency constraints of a typical real-time processing environment. We experiment with 8-bit and ternary quantization, benchmarking their accuracy and inference latency against a single-precision floating-point baseline. We show that the ternary network closely matches the performance of its full-precision equivalent and outperforms the state-of-the-art rule-based algorithm. Finally, we report the inference latency on different hardware platforms and discuss future applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
279,570
1704.05973
Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks for Early Rumor Detection
The proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of diffusion is known as \textit{early rumor detection}, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and high textual duplication over time. Thus, identifying trending rumors demands an efficient yet flexible model that is able to capture long-range dependencies among postings and produce distinct representations for accurate early detection. However, it is challenging to apply conventional classification algorithms to early rumor detection since they rely on hand-crafted features, which require intensive manual effort in the case of large amounts of posts. This paper presents a deep attention model on the basis of recurrent neural networks (RNN) to \textit{selectively} learn temporal hidden representations of sequential posts for identifying rumors. The proposed model embeds soft attention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep attention based RNN model outperforms state-of-the-art methods that rely on hand-crafted features; (2) the introduction of the soft attention mechanism can effectively distill the parts relevant to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
72,102
1311.3198
Sound, Complete and Minimal UCQ-Rewriting for Existential Rules
We address the issue of Ontology-Based Data Access, with ontologies represented in the framework of existential rules, also known as Datalog+/-. A well-known approach involves rewriting the query using ontological knowledge. We focus here on the basic rewriting technique which consists of rewriting the initial query into a union of conjunctive queries. First, we study a generic breadth-first rewriting algorithm, which takes as input any rewriting operator, and define properties of rewriting operators that ensure the correctness of the algorithm. Then, we focus on piece-unifiers, which provide a rewriting operator with the desired properties. Finally, we propose an implementation of this framework and report some experiments.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
28,390
2501.11870
Coarse-to-Fine Lightweight Meta-Embedding for ID-Based Recommendation
The state-of-the-art recommendation systems have shifted the attention to efficient recommendation, e.g., on-device recommendation, under memory constraints. To this end, existing methods have either focused on lightweight embeddings for both users and items, or built on-device systems that exploit compact embeddings to enhance reusability and reduce space complexity. However, they focus solely on the coarse granularity of embeddings while overlooking fine-grained semantic nuances, which degrades the efficacy of meta-embeddings in capturing the intricate relationships between users and items, consequently resulting in suboptimal recommendations. In this paper, we aim to study how meta-embeddings can efficiently learn semantics at varied granularities, and how fine-grained meta-embeddings can strengthen the representation of coarse-grained meta-embeddings. To answer these questions, we develop a novel graph neural network (GNN) based recommender where each user and item serves as a node, linked directly to coarse-grained virtual nodes and indirectly to fine-grained virtual nodes, ensuring different-grained semantic learning, while disclosing: 1) in contrast to coarse-grained semantics, fine-grained semantics are well captured through sparse meta-embeddings, which adaptively 2) balance embedding uniqueness and the memory constraint. Additionally, the initialization method is built upon SparsePCA, along with a soft thresholding activation function to induce sparseness in the meta-embeddings. We propose a weight bridging update strategy that focuses on matching each coarse-grained meta-embedding with several fine-grained meta-embeddings based on the users'/items' semantics. Extensive experiments substantiate our method's superiority over existing baselines. Our code is available at https://github.com/htyjers/C2F-MetaEmbed.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
526,071
2409.16938
Generative Object Insertion in Gaussian Splatting with a Multi-View Diffusion Model
Generating and inserting new objects into 3D content is a compelling approach for achieving versatile scene recreation. Existing methods, which rely on SDS optimization or single-view inpainting, often struggle to produce high-quality results. To address this, we propose a novel method for object insertion in 3D content represented by Gaussian Splatting. Our approach introduces a multi-view diffusion model, dubbed MVInpainter, which is built upon a pre-trained stable video diffusion model to facilitate view-consistent object inpainting. Within MVInpainter, we incorporate a ControlNet-based conditional injection module to enable controlled and more predictable multi-view generation. After generating the multi-view inpainted results, we further propose a mask-aware 3D reconstruction technique to refine Gaussian Splatting reconstruction from these sparse inpainted views. By leveraging these techniques, our approach yields diverse results, ensures view-consistent and harmonious insertions, and produces better object quality. Extensive experiments demonstrate that our approach outperforms existing methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
491,582
1811.11127
Unprocessing Images for Learned Raw Denoising
Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real data requires careful consideration of the noise properties of image sensors, the other aspects of a camera's image processing pipeline (gain, color correction, tone mapping, etc) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to "unprocess" images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By processing and unprocessing model outputs and training data in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9x-18x faster than the previous state of the art on the Darmstadt Noise Dataset, and generalizes to sensors outside of that dataset as well.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
114,688
2405.19864
Out-of-distribution Reject Option Method for Dataset Shift Problem in Early Disease Onset Prediction
Machine learning is increasingly used to predict lifestyle-related disease onset using health and medical data. However, the prediction effectiveness is hindered by dataset shift, which involves discrepancies in data distribution between the training and testing datasets, misclassifying out-of-distribution (OOD) data. To diminish dataset shift effects, this paper proposes the out-of-distribution reject option for prediction (ODROP), which integrates OOD detection models to preclude OOD data from the prediction phase. We investigated the efficacy of five OOD detection methods (variational autoencoder, neural network ensemble std, neural network ensemble epistemic, neural network energy, and neural network Gaussian mixture based energy measurement) across two datasets, the Hirosaki and Wakayama health checkup data, in the context of three disease onset prediction tasks: diabetes, dyslipidemia, and hypertension. To evaluate the ODROP method, we trained disease onset prediction models and OOD detection models on Hirosaki data and used AUROC-rejection curve plots from Wakayama data. The variational autoencoder method showed superior stability and magnitude of improvement in Area Under the Receiver Operating Characteristic curve (AUROC) in five cases: AUROC in the Wakayama data was improved from 0.80 to 0.90 at a 31.1% rejection rate for diabetes onset and from 0.70 to 0.76 at a 34% rejection rate for dyslipidemia. We categorized dataset shifts into two types using SHAP clustering - those that considerably affect predictions and those that do not. We expect that this classification will help standardize measuring instruments. This study is the first to apply OOD detection to actual health and medical data, demonstrating its potential to substantially improve the accuracy and reliability of disease prediction models amidst dataset shift.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
459,099
1510.07905
Defect Detection Techniques for Airbag Production Sewing Stages
Airbags are subject to strict quality control in order to ensure passengers safety. The quality of fabric and sewing thread influence the final product and therefore, sewing defects must be early and accurately detected, in order to remove the item from production. Airbag seams assembly can take various forms, using linear and circle primitives, with threads of different colors and length densities, creating lockstitch or double threads chainstitch. The paper presents a framework for the automatic detection of defects occurring during the airbag sewing stage. Types of defects as skipped stitch, missed stitch or superimposed seam for lockstitch and two threads chainstitch are detected and marked. Using image processing methods, the proposed framework follows the seams path and determines if a color pattern of the considered stitches is valid.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
48,242
2205.12012
Analysing the Greek Parliament Records with Emotion Classification
In this project, we tackle emotion classification for the Greek language, presenting and releasing a new dataset in Greek. We fine-tune and assess Transformer-based masked language models that were pre-trained on monolingual and multilingual resources, and we present the results per emotion and by aggregating at the sentiment and subjectivity level. The potential of the presented resources is investigated by detecting and studying the emotion of `disgust' in the Greek Parliament records. We: (a) locate the months with the highest values from 1989 to present, (b) rank the Greek political parties based on the presence of this emotion in their speeches, and (c) study the emotional context shift of words used to stigmatise people.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
298,374
1512.08571
Structured Pruning of Deep Convolutional Neural Networks
Real time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks, which are channel wise, kernel wise and intra kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, parallel computing environments and hardware based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by computing the misclassification rate with corresponding connectivity pattern. The pruned network is re-trained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra kernel strided sparsity with a simple constraint can significantly reduce the size of kernel and feature map matrices. The pruned network is finally fixed point optimized with reduced word length precision. This results in significant reduction in the total storage size providing advantages for on-chip memory based implementations of deep neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
50,528
1802.03638
Beyond Markov Logic: Efficient Mining of Prediction Rules in Large Graphs
Graph representations of large knowledge bases may comprise billions of edges. Usually built upon human-generated ontologies, several knowledge bases do not feature declared ontological rules and are far from being complete. Current rule mining approaches rely on schemata or store the graph in-memory, which can be unfeasible for large graphs. In this paper, we introduce HornConcerto, an algorithm to discover Horn clauses in large graphs without the need of a schema. Using a standard fact-based confidence score, we can mine close Horn rules having an arbitrary body size. We show that our method can outperform existing approaches in terms of runtime and memory consumption and mine high-quality rules for the link prediction task, achieving state-of-the-art results on a widely-used benchmark. Moreover, we find that rules alone can perform inference significantly faster than embedding-based methods and achieve accuracies on link prediction comparable to resource-demanding approaches such as Markov Logic Networks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
90,027
2405.09197
Parallel and Proximal Constrained Linear-Quadratic Methods for Real-Time Nonlinear MPC
Recent strides in nonlinear model predictive control (NMPC) underscore a dependence on numerical advancements to efficiently and accurately solve large-scale problems. Given the substantial number of variables characterizing typical whole-body optimal control (OC) problems - often numbering in the thousands - exploiting the sparse structure of the numerical problem becomes crucial to meet computational demands, typically in the range of a few milliseconds. Addressing the linear-quadratic regulator (LQR) problem is a fundamental building block for computing Newton or Sequential Quadratic Programming (SQP) steps in direct optimal control methods. This paper concentrates on equality-constrained problems featuring implicit system dynamics and dual regularization, a characteristic of advanced interior-point or augmented Lagrangian solvers. Here, we introduce a parallel algorithm for solving an LQR problem with dual regularization. Leveraging a rewriting of the LQR recursion through block elimination, we first enhance the efficiency of the serial algorithm and then generalize it to handle parametric problems. This extension enables us to split decision variables and solve multiple subproblems concurrently. Our algorithm is implemented in our nonlinear numerical optimal control library ALIGATOR. It showcases improved performance over previous serial formulations and we validate its efficacy by deploying it in the model predictive control of a real quadruped robot.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
454,325
2207.14500
A Transfer Learning-Based Approach to Marine Vessel Re-Identification
Marine vessel re-identification technology is an important component of intelligent shipping systems and an important part of the visual perception tasks required for marine surveillance. However, unlike the situation on land, the maritime environment is complex and variable with fewer samples, and it is more difficult to perform vessel re-identification at sea. Therefore, this paper proposes a transfer dynamic alignment algorithm and simulates the swaying situation of vessels at sea, using a well-camouflaged and similar warship as the test target to improve the recognition difficulty and thus cope with the impact caused by complex sea conditions, and discusses the effect of different types of vessels as transfer objects. The experimental results show that the improved algorithm improves the mean average precision (mAP) by 10.2% and the first hit rate (Rank1) by 4.9% on average.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
310,609
2002.02220
Toward good families of codes from towers of surfaces
We introduce in this article a new method to estimate the minimum distance of codes from algebraic surfaces. This lower bound is generic, i.e. can be applied to any surface, and turns out to be ``liftable'' under finite morphisms, paving the way toward the construction of good codes from towers of surfaces. In the same direction, we establish a criterion for a surface with a fixed finite set of closed points $\mathcal P$ to have an infinite tower of $\ell$--\'etale covers in which $\mathcal P$ splits totally. We conclude by stating several open problems. In particular, we relate the existence of asymptotically good codes from general type surfaces with a very ample canonical class to the behaviour of their number of rational points with respect to their $K^2$ and coherent Euler characteristic.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
162,867
2307.06125
Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation
Existing object-search approaches enable robots to search through free pathways, however, robots operating in unstructured human-centered environments frequently also have to manipulate the environment to their needs. In this work, we introduce a novel interactive multi-object search task in which a robot has to open doors to navigate rooms and search inside cabinets and drawers to find target objects. These new challenges require combining manipulation and navigation skills in unexplored environments. We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills. To achieve this, we design an abstract high-level action space around a semantic map memory and leverage the explored environment as instance navigation points. We perform extensive experiments in simulation and the real world that demonstrate that, with accurate perception, the decision making of HIMOS effectively transfers to new environments in a zero-shot manner. It shows robustness to unseen subpolicies, failures in their execution, and different robot kinematics. These capabilities open the door to a wide range of downstream tasks across embodied AI and real-world use cases.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
378,976
2012.12403
Performance Analysis of Adaptive Dynamic Tube MPC
Model predictive control (MPC) is an effective method for control of constrained systems but is susceptible to the external disturbances and modeling error often encountered in real-world applications. To address these issues, techniques such as Tube MPC (TMPC) utilize an ancillary offline-generated robust controller to ensure that the system remains within an invariant set, referred to as a tube, around an online-generated trajectory. However, TMPC is unable to modify its tube and ancillary controller in response to changing state-dependent uncertainty, often resulting in overly-conservative solutions. Dynamic Tube MPC (DTMPC) addresses these problems by simultaneously optimizing the desired trajectory and tube geometry online. Building upon this framework, Adaptive DTMPC (ADTMPC) produces better model approximations by reducing model uncertainty, resulting in more accurate control policies. This work presents an experimental analysis and performance evaluation of TMPC, DTMPC, and ADTMPC for an uncertain nonlinear system. In particular, DTMPC is shown to outperform TMPC because it is able to dynamically adjust to changing environments, limiting aggressive control and conservative behavior to only the cases when the constraints and uncertainty require it. Applied to a pendulum testbed, this enables DTMPC to use up to 30% less control effort while achieving up to 80% higher speeds. This performance is further improved by ADTMPC, which reduces the feedback control effort by up to another 35%, while delivering up to 34% better trajectory tracking. This analysis establishes that the DTMPC and ADTMPC frameworks yield significantly more effective robust control policies for systems with changing uncertainty, goals, and operating conditions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
212,916
2309.04960
SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks
Computed Tomography (CT) is a medical imaging modality that can generate more informative 3D images than 2D X-rays. However, this advantage comes at the expense of more radiation exposure, higher costs, and longer acquisition time. Hence, the reconstruction of 3D CT images using a limited number of 2D X-rays has gained significant importance as an economical alternative. Nevertheless, existing methods primarily prioritize minimizing pixel/voxel-level intensity discrepancies, often neglecting the preservation of textural details in the synthesized images. This oversight directly impacts the quality of the reconstructed images and thus affects the clinical diagnosis. To address the deficits, this paper presents a new self-driven generative adversarial network model (SdCT-GAN), which is motivated to pay more attention to image details by introducing a novel auto-encoder structure in the discriminator. In addition, a Sobel Gradient Guider (SGG) idea is applied throughout the model, where the edge information from the 2D X-ray image at the input can be integrated. Moreover, LPIPS (Learned Perceptual Image Patch Similarity) evaluation metric is adopted that can quantitatively evaluate the fine contours and textures of reconstructed images better than the existing ones. Finally, the qualitative and quantitative results of the empirical studies justify the power of the proposed model compared to mainstream state-of-the-art baselines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
390,909
2108.06078
Piecewise Linear De-skewing for LiDAR Inertial Odometry
Light detection and ranging (LiDAR) on a moving agent could suffer from motion distortion due to simultaneous rotation of the LiDAR and fast movement of the agent. An accurate piecewise linear de-skewing algorithm is proposed to correct the motion distortions for LiDAR inertial odometry (LIO) using high-frequency motion information provided by an Inertial Measurement Unit (IMU). Experimental results show that the proposed algorithm can be adopted to improve the performance of existing LIO algorithms, especially in cases of fast movement.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
250,495
1512.00932
The Indian Spontaneous Expression Database for Emotion Recognition
Automatic recognition of spontaneous facial expressions is a major challenge in the field of affective computing. Head rotation, face pose, illumination variation, occlusion etc. are the attributes that increase the complexity of recognition of spontaneous expressions in practical applications. Effective recognition of expressions depends significantly on the quality of the database used. Most well-known facial expression databases consist of posed expressions. However, currently there is a huge demand for spontaneous expression databases for the pragmatic implementation of the facial expression recognition algorithms. In this paper, we propose and establish a new facial expression database containing spontaneous expressions of both male and female participants of Indian origin. The database consists of 428 segmented video clips of the spontaneous facial expressions of 50 participants. In our experiment, emotions were induced among the participants by using emotional videos and simultaneously their self-ratings were collected for each experienced emotion. Facial expression clips were annotated carefully by four trained decoders, which were further validated by the nature of stimuli used and self-report of emotions. An extensive analysis was carried out on the database using several machine learning algorithms and the results are provided for future reference. Such a spontaneous database will help in the development and validation of algorithms for recognition of spontaneous expressions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
49,757
1910.10831
Variational Predictive Information Bottleneck
In classic papers, Zellner demonstrated that Bayesian inference could be derived as the solution to an information theoretic functional. Below we derive a generalized form of this functional as a variational lower bound of a predictive information bottleneck objective. This generalized functional encompasses most modern inference procedures and suggests novel ones.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
150,595
2501.11351
Automatic Labelling & Semantic Segmentation with 4D Radar Tensors
In this paper, an automatic labelling process is presented for automotive datasets, leveraging on complementary information from LiDAR and camera. The generated labels are then used as ground truth with the corresponding 4D radar data as inputs to a proposed semantic segmentation network, to associate a class label to each spatial voxel. Promising results are shown by applying both approaches to the publicly shared RaDelft dataset, with the proposed network achieving over 65% of the LiDAR detection performance, improving 13.2% in vehicle detection probability, and reducing 0.54 m in terms of Chamfer distance, compared to variants inspired from the literature.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,895
2010.02012
Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset
Similarity analysis is one of the crucial steps in most fMRI studies. Representational Similarity Analysis (RSA) can measure similarities of neural signatures generated by different cognitive states. This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of RSA that is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects, and high-dimensionality -- such as whole-brain images. Unlike the previous methods, DRSL is not limited by a linear transformation or a restricted fixed nonlinear kernel function -- such as Gaussian kernel. DRSL utilizes a multi-layer neural network for mapping neural responses to linear space, where this network can implement a customized nonlinear transformation for each subject separately. Furthermore, utilizing a gradient-based optimization in DRSL can significantly reduce runtime of analysis on large datasets because it uses a batch of samples in each iteration rather than all neural responses to find an optimal solution. Empirical studies on multi-subject fMRI datasets with various tasks -- including visual stimuli, decision making, flavor, and working memory -- confirm that the proposed method achieves superior performance to other state-of-the-art RSA algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
198,879
2404.03414
Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., <1B) language model (LM) for guiding a black-box large (i.e., >10B) LM in reasoning tasks. Specifically, the lightweight LM first generates a rationale for each input instance. The frozen large LM is then prompted to predict a task output based on the rationale generated by the lightweight LM. Our approach is resource-efficient in the sense that it only requires training the lightweight LM. We optimize the model through 1) knowledge distillation and 2) reinforcement learning from rationale-oriented and task-oriented reward signals. We assess our method with multi-hop extractive question answering (QA) benchmarks, HotpotQA, and 2WikiMultiHopQA. Experimental results show that our approach outperforms all baselines regarding answer prediction accuracy. We also find that reinforcement learning helps the model to produce higher-quality rationales with improved QA performance.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
444,246
2311.11533
Event Camera Data Dense Pre-training
This paper introduces a self-supervised learning framework designed for pre-training neural networks tailored to dense prediction tasks using event camera data. Our approach utilizes solely event data for training. Transferring achievements from dense RGB pre-training directly to event camera data yields subpar performance. This is attributed to the spatial sparsity inherent in an event image (converted from event data), where many pixels do not contain information. To mitigate this sparsity issue, we encode an event image into event patch features, automatically mine contextual similarity relationships among patches, group the patch features into distinctive contexts, and enforce context-to-context similarities to learn discriminative event features. For training our framework, we curate a synthetic event camera dataset featuring diverse scene and motion patterns. Transfer learning performance on downstream dense prediction tasks illustrates the superiority of our method over state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
408,980
2408.01293
Underwater Object Detection Enhancement via Channel Stabilization
The complex marine environment exacerbates the challenges of object detection manifold. Marine trash endangers the aquatic ecosystem, presenting a persistent challenge. Accurate detection of marine deposits is crucial for mitigating this harm. Our work addresses underwater object detection by enhancing image quality and evaluating detection methods. We use Detectron2's backbone with various base models and configurations for this task. We propose a novel channel stabilization technique alongside a simplified image enhancement model to reduce haze and color cast in training images, improving multi-scale object detection. Following image processing, we test different Detectron2 backbones for optimal detection accuracy. Additionally, we apply a sharpening filter with augmentation techniques to highlight object profiles for easier recognition. Results are demonstrated on the TrashCan Dataset, both instance and material versions. The best-performing backbone method incorporates our channel stabilization and augmentation techniques. We also compare our Detectron2 detection results with the Deformable Transformer. In the instance version of TrashCan 1.0, our method achieves a 9.53% absolute increase in average precision for small objects and a 7% absolute gain in bounding box detection compared to the baseline. The code will be available at: https://github.com/aliman80/Underwater-Object-Detection-via-Channel-Stablization
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
478,174
1907.12122
It's All About The Scale -- Efficient Text Detection Using Adaptive Scaling
"Text can appear anywhere". This property requires us to carefully process all the pixels in an image in order to accurately localize all text instances. In particular, for the more difficult task of localizing small text regions, many methods use an enlarged image or even several rescaled ones as their input. This significantly increases the processing time of the entire image and needlessly enlarges background regions. If we were to have a prior telling us the coarse location of text instances in the image and their approximate scale, we could have adaptively chosen which regions to process and how to rescale them, thus significantly reducing the processing time. To estimate this prior we propose a segmentation-based network with an additional "scale predictor", an output channel that predicts the scale of each text segment. The network is applied on a scaled down image to efficiently approximate the desired prior, without processing all the pixels of the original image. The approximated prior is then used to create a compact image containing only text regions, resized to a canonical scale, which is fed again to the segmentation network for fine-grained detection. We show that our approach offers a powerful alternative to fixed scaling schemes, achieving an equivalent accuracy to larger input scales while processing far fewer pixels. Qualitative and quantitative results are presented on the ICDAR15 and ICDAR17 MLT benchmarks to validate our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
140,039
2111.11862
Inferring User Facial Affect in Work-like Settings
Unlike the six basic emotions of happiness, sadness, fear, anger, disgust and surprise, modelling and predicting dimensional affect in terms of valence (positivity - negativity) and arousal (intensity) has proven to be more flexible, applicable and useful for naturalistic and real-world settings. In this paper, we aim to infer user facial affect when the user is engaged in multiple work-like tasks under varying difficulty levels (baseline, easy, hard and stressful conditions), including (i) an office-like setting where they undertake a task that is less physically demanding but requires greater mental strain; (ii) an assembly-line-like setting that requires the usage of fine motor skills; and (iii) an office-like setting representing teleworking and teleconferencing. In line with this aim, we first design a study with different conditions and gather multimodal data from 12 subjects. We then perform several experiments with various machine learning models and find that: (i) the display and prediction of facial affect vary from non-working to working settings; (ii) prediction capability can be boosted by using datasets captured in a work-like context; and (iii) segment-level (spectral representation) information is crucial in improving the facial affect prediction.
true
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
267,800
2306.05390
HQ-50K: A Large-scale, High-quality Dataset for Image Restoration
This paper introduces a new large-scale image restoration dataset, called HQ-50K, which contains 50,000 high-quality images with rich texture details and semantic diversity. We analyze existing image restoration datasets from five different perspectives, including data scale, resolution, compression rates, texture details, and semantic coverage. However, we find that all of these datasets are deficient in some aspects. In contrast, HQ-50K considers all of these five aspects during the data curation process and meets all requirements. We also present a new Degradation-Aware Mixture of Expert (DAMoE) model, which enables a single model to handle multiple corruption types and unknown levels. Our extensive experiments demonstrate that HQ-50K consistently improves the performance on various image restoration tasks, such as super-resolution, denoising, dejpeg, and deraining. Furthermore, our proposed DAMoE, trained on HQ-50K, outperforms existing state-of-the-art unified models designed for multiple restoration tasks and levels. The dataset and code are available at https://github.com/littleYaang/HQ-50K.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
372,174
1207.2714
Clustering based approach extracting collocations
The following study presents a collocation extraction approach based on clustering technique. This study uses a combination of several classical measures which cover all aspects of a given corpus then it suggests separating bigrams found in the corpus in several disjoint groups according to the probability of presence of collocations. This will allow excluding groups where the presence of collocations is very unlikely and thus reducing in a meaningful way the search space.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
17,412
2305.14361
Criticality Analysis: Bio-inspired Nonlinear Data Representation
The representation of arbitrary data in a biological system is one of the most elusive elements of biological information processing. The often logarithmic nature of information in amplitude and frequency presented to biosystems prevents simple encapsulation of the information contained in the input. Criticality Analysis (CA) is a bio-inspired method of information representation within a controlled self-organised critical system that allows scale-free representation. This is based on the concept of a reservoir of dynamic behaviour in which self-similar data will create dynamic nonlinear representations. This unique projection of data preserves the similarity of data within a multidimensional neighbourhood. The input can be reduced dimensionally to a projection output that retains the features of the overall data, yet has much simpler dynamic response. The method depends only on the rate control of chaos applied to the underlying controlled models, that allows the encoding of arbitrary data, and promises optimal encoding of data given biological relevant networks of oscillators. The CA method allows for a biologically relevant encoding mechanism of arbitrary input to biosystems, creating a suitable model for information processing in varying complexity of organisms and scale-free data representation for machine learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
366,991
2105.01714
Drifting Features: Detection and evaluation in the context of automatic RRLs identification in VVV
As most of the modern astronomical sky surveys produce data faster than humans can analyze it, Machine Learning (ML) has become a central tool in Astronomy. Modern ML methods can be characterized as highly resistant to some experimental errors. However, small changes on the data over long distances or long periods of time, which cannot be easily detected by statistical methods, can be harmful to these methods. We develop a new strategy to cope with this problem, also using ML methods in an innovative way, to identify these potentially harmful features. We introduce and discuss the notion of Drifting Features, related with small changes in the properties as measured in the data features. We use the identification of RRLs in VVV based on an earlier work and introduce a method for detecting Drifting Features. Our method forces a classifier to learn the tile of origin of diverse sources (mostly stellar 'point sources'), and select the features more relevant to the task of finding candidates to Drifting Features. We show that this method can efficiently identify a reduced set of features that contains useful information about the tile of origin of the sources. For our particular example of detecting RRLs in VVV, we find that Drifting Features are mostly related to color indices. On the other hand, we show that, even if we have a clear set of Drifting Features in our problem, they are mostly insensitive to the identification of RRLs. Drifting Features can be efficiently identified using ML methods. However, in our example, removing Drifting Features does not improve the identification of RRLs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
233,597
1805.03545
Solving Sudoku with Ant Colony Optimisation
In this paper we present a new Ant Colony Optimisation-based algorithm for Sudoku, which out-performs existing methods on large instances. Our method includes a novel anti-stagnation operator, which we call Best Value Evaporation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
97,069
2305.13119
Ambiguity Meets Uncertainty: Investigating Uncertainty Estimation for Word Sense Disambiguation
Word sense disambiguation (WSD), which aims to determine an appropriate sense for a target word given its context, is crucial for natural language understanding. Existing supervised methods treat WSD as a classification task and have achieved remarkable performance. However, they ignore uncertainty estimation (UE) in the real-world setting, where the data is always noisy and out of distribution. This paper extensively studies UE on the benchmark designed for WSD. Specifically, we first compare four uncertainty scores for a state-of-the-art WSD model and verify that the conventional predictive probabilities obtained at the end of the model are inadequate to quantify uncertainty. Then, we examine the capability of capturing data and model uncertainties by the model with the selected UE score on well-designed test scenarios and discover that the model reflects data uncertainty satisfactorily but underestimates model uncertainty. Furthermore, we explore numerous lexical properties that intrinsically affect data uncertainty and provide a detailed analysis of four critical aspects: the syntactic category, morphology, sense granularity, and semantic relations. The code is available at https://github.com/RyanLiut/WSD-UE.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
366,357
2111.04798
TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary Data
Machine learning practitioners often have access to a spectrum of data: labeled data for the target task (which is often limited), unlabeled data, and auxiliary data, the many available labeled datasets for other tasks. We describe TAGLETS, a system built to study techniques for automatically exploiting all three types of data and creating high-quality, servable classifiers. The key components of TAGLETS are: (1) auxiliary data organized according to a knowledge graph, (2) modules encapsulating different methods for exploiting auxiliary and unlabeled data, and (3) a distillation stage in which the ensembled modules are combined into a servable model. We compare TAGLETS with state-of-the-art transfer learning and semi-supervised learning methods on four image classification tasks. Our study covers a range of settings, varying the amount of labeled data and the semantic relatedness of the auxiliary data to the target task. We find that the intelligent incorporation of auxiliary and unlabeled data into multiple learning techniques enables TAGLETS to match-and most often significantly surpass-these alternatives. TAGLETS is available as an open-source system at github.com/BatsResearch/taglets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
265,593
2010.11981
A novel auction system for selecting advertisements in Real-Time bidding
Real-Time Bidding is a new Internet advertising system that has become very popular in recent years. This system works like a global auction where advertisers bid to display their impressions in the publishers' ad slots. The most popular system to select which advertiser wins each auction is the Generalized second-price auction in which the advertiser that offers the most wins the bet and is charged with the price of the second largest bet. In this paper, we propose an alternative betting system with a new approach that not only considers the economic aspect but also other relevant factors for the functioning of the advertising system. The factors that we consider are, among others, the benefit that can be given to each advertiser, the probability of conversion from the advertisement, the probability that the visit is fraudulent, how balanced are the networks participating in RTB and if the advertisers are not paying over the market price. In addition, we propose a methodology based on genetic algorithms to optimize the selection of each advertiser. We also conducted some experiments to compare the performance of the proposed model with the famous Generalized Second-Price method. We think that this new approach, which considers more relevant aspects besides the price, offers greater benefits for RTB networks in the medium and long-term.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
202,500
1908.02947
Graph Node Embeddings using Domain-Aware Biased Random Walks
The recent proliferation of publicly available graph-structured data has sparked an interest in machine learning algorithms for graph data. Since most traditional machine learning algorithms assume data to be tabular, embedding algorithms for mapping graph data to real-valued vector spaces has become an active area of research. Existing graph embedding approaches are based purely on structural information and ignore any semantic information from the underlying domain. In this paper, we demonstrate that semantic information can play a useful role in computing graph embeddings. Specifically, we present a framework for devising embedding strategies aware of domain-specific interpretations of graph nodes and edges, and use knowledge of downstream machine learning tasks to identify relevant graph substructures. Using two real-life domains, we show that our framework yields embeddings that are simple to implement and yet achieve equal or greater accuracy in machine learning tasks compared to domain independent approaches.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
141,123
1708.08994
Clustering Patients with Tensor Decomposition
In this paper we present a method for the unsupervised clustering of high-dimensional binary data, with a special focus on electronic healthcare records. We present a robust and efficient heuristic for this problem based on tensor decomposition, and explain why this approach is preferable to more commonly used distance-based methods for tasks such as clustering patient records. We run the algorithm on two datasets of healthcare records, obtaining clinically meaningful results.
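To make the idea concrete, here is a minimal CP (CANDECOMP/PARAFAC) decomposition via alternating least squares, followed by assigning each patient to the component with the largest loading. This is a generic textbook sketch, not the robust heuristic the paper proposes; the tensor layout (patients × features × time, say) and the argmax cluster assignment are our assumptions.

```python
import numpy as np

def khatri_rao(a, b):
    # Column-wise Kronecker product: (I*J) x R
    return np.einsum("ir,jr->ijr", a, b).reshape(-1, a.shape[1])

def cp_als(x, rank, n_iter=50, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating
    least squares. Returns factor matrices (a, b, c) with
    x[i,j,k] ~ sum_r a[i,r] * b[j,r] * c[k,r]."""
    rng = np.random.default_rng(seed)
    i, j, k = x.shape
    a = rng.standard_normal((i, rank))
    b = rng.standard_normal((j, rank))
    c = rng.standard_normal((k, rank))
    x1 = x.reshape(i, -1)                      # mode-1 unfolding
    x2 = np.moveaxis(x, 1, 0).reshape(j, -1)   # mode-2 unfolding
    x3 = np.moveaxis(x, 2, 0).reshape(k, -1)   # mode-3 unfolding
    for _ in range(n_iter):
        a = x1 @ khatri_rao(b, c) @ np.linalg.pinv((b.T @ b) * (c.T @ c))
        b = x2 @ khatri_rao(a, c) @ np.linalg.pinv((a.T @ a) * (c.T @ c))
        c = x3 @ khatri_rao(a, b) @ np.linalg.pinv((a.T @ a) * (b.T @ b))
    return a, b, c

# Toy low-rank tensor standing in for a patients x features x time array.
rng = np.random.default_rng(1)
x = np.einsum("ir,jr,kr->ijk", rng.standard_normal((5, 2)),
              rng.standard_normal((4, 2)), rng.standard_normal((3, 2)))
a, b, c = cp_als(x, rank=2)
# Cluster patients by their dominant component in the patient factor.
labels = np.argmax(np.abs(a), axis=1)
```

Each column of the patient factor `a` acts as a latent phenotype, so the argmax gives a hard clustering; a real pipeline would also normalize factors and validate the rank.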
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
79,711
2502.11565
STARS-Enabled Full-Duplex Two-Way mMIMO System Under Spatially-Correlated Channels
\underline{S}imultaneous \underline{t}ransmitting \underline{a}nd \underline{r}eflecting \underline{s}urface (STARS)-assisted systems have emerged to fill the coverage gap of conventional reflecting-only surfaces by providing $360^{\circ}$ wireless coverage. In parallel, full-duplex (FD) communication offers a higher achievable rate through efficient spectrum utilization compared to the half-duplex (HD) counterpart. Moreover, two-way/bi-directional communication in an FD system can further enhance the system's spectral efficiency. Hence, in this paper, we propose a STARS-enabled massive MIMO deployment in an FD two-way communication network for highly efficient spectrum utilization, while covering the dead zones around the STARS. This model enables simultaneous information exchange between multiple nodes, while \emph{potentially} doubling the spectral efficiency (SE). By invoking the use-and-then-forget (UaTF) combining scheme, we derive a closed-form expression for an achievable SE at each user of the system considering both uplink and downlink communications based on statistical channel state information (CSI), while also accounting for imperfect CSI and correlated fading conditions. Moreover, we formulate an optimization problem to obtain an optimal passive beamforming matrix design at the STARS that maximizes the sum achievable SE. The considered problem is non-convex, and we propose a provably-convergent low-complexity algorithm, termed \underline{pro}jected \underline{gr}adient \underline{a}scent \underline{m}ethod (ProGrAM), to obtain a stationary solution. Extensive numerical results are provided to establish the performance superiority of the FD STARS-enabled system over the HD STARS-enabled and FD conventional RIS (cRIS)-enabled counterparts, and also to show the effect of different parameters of interest on the system performance.
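The optimization tool named in the abstract, projected gradient ascent, has a simple generic form: take a gradient step, then project back onto the feasible set (for passive beamforming, unit-modulus phase-shift entries). The sketch below applies it to a toy objective $f(\mathbf{v}) = |\mathbf{c}^T\mathbf{v}|^2$ with a Wirtinger gradient; the objective, dimensions, and step size are our assumptions, not the paper's sum-SE objective or ProGrAM itself.

```python
import numpy as np

def project_unit_modulus(v):
    """Project complex entries onto the unit circle (passive element constraint)."""
    return v / np.maximum(np.abs(v), 1e-12)

def projected_gradient_ascent(grad, v0, step=0.05, n_iter=200):
    """Generic projected gradient ascent over unit-modulus vectors."""
    v = project_unit_modulus(v0)
    for _ in range(n_iter):
        v = project_unit_modulus(v + step * grad(v))
    return v

# Toy objective f(v) = |c^T v|^2, a stand-in for a sum-SE surrogate.
rng = np.random.default_rng(1)
n = 8
c = (rng.standard_normal(n) + 1j * rng.standard_normal(n))
f = lambda v: abs(c @ v) ** 2
grad = lambda v: np.conj(c) * (c @ v)   # Wirtinger gradient d f / d conj(v)
v0 = np.ones(n, dtype=complex)
v_opt = projected_gradient_ascent(grad, v0)
```

For this toy objective the optimum aligns each phase against $\arg c_i$; in the paper's setting the gradient of the closed-form sum-SE expression would replace the toy `grad`.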
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
534,444