Dataset schema (one record = id, title, abstract, 18 boolean category flags, index):

id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each), in that order per record
__index_level_0__: int64 (range 0 to 541k)
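As an illustrative sketch of consuming records with this schema (the field names follow the schema above, but the record dict shape and the helper `active_labels` are assumptions for illustration, not part of the dataset), the active category labels can be recovered from a record's boolean columns like so:

```python
# Order of the 18 boolean label fields, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(record: dict) -> list[str]:
    """Return the category names whose boolean flag is set in a record."""
    return [col for col in LABEL_COLUMNS if record.get(col)]

# Hypothetical record shaped like the first row below (text values abridged).
example = {
    "id": "2412.09385",
    "title": "AI Predicts AGI: ...",
    "abstract": "...",
    **{col: False for col in LABEL_COLUMNS},
    "cs.AI": True,  # explicit key overrides the False from the unpacking above
    "__index_level_0__": 516460,
}
```

A record may carry several true flags at once (the rows below include multi-label examples), so the helper returns a list rather than a single category.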
2412.09385
AI Predicts AGI: Leveraging AGI Forecasting and Peer Review to Explore LLMs' Complex Reasoning Capabilities
We tasked 16 state-of-the-art large language models (LLMs) with estimating the likelihood of Artificial General Intelligence (AGI) emerging by 2030. To assess the quality of these forecasts, we implemented an automated peer review process (LLM-PR). The LLMs' estimates varied widely, ranging from 3% (Reka-Core) to 47.6% (GPT-4o), with a median of 12.5%. These estimates closely align with a recent expert survey that projected a 10% likelihood of AGI by 2027, underscoring the relevance of LLMs in forecasting complex, speculative scenarios. The LLM-PR process demonstrated strong reliability, evidenced by a high Intraclass Correlation Coefficient (ICC = 0.79), reflecting notable consistency in scoring across the models. Among the models, Pplx-70b-online emerged as the top performer, while Gemini-1.5-pro-api ranked the lowest. A cross-comparison with external benchmarks, such as LMSYS Chatbot Arena, revealed that LLM rankings remained consistent across different evaluation methods, suggesting that existing benchmarks may not encapsulate some of the skills relevant for AGI prediction. We further explored the use of weighting schemes based on external benchmarks, optimizing the alignment of LLMs' predictions with human expert forecasts. This analysis led to the development of a new, 'AGI benchmark' designed to highlight performance differences in AGI-related tasks. Our findings offer insights into LLMs' capabilities in speculative, interdisciplinary forecasting tasks and emphasize the growing need for innovative evaluation frameworks for assessing AI performance in complex, uncertain real-world scenarios.
labels: cs.AI (all other category flags false)
__index_level_0__: 516,460
2307.10034
Validation of Modern JSON Schema: Formalization and Complexity
JSON Schema is the de-facto standard schema language for JSON data. The language went through many minor revisions, but the most recent versions of the language added two novel features, dynamic references and annotation-dependent validation, that change the evaluation model. Modern JSON Schema is the name used to indicate all versions from Draft 2019-09, which are characterized by these new features, while Classical JSON Schema is used to indicate the previous versions. These new "modern" features make the schema language quite difficult to understand, and have generated many discussions about the correct interpretation of their official specifications; for this reason we undertook the task of their formalization. During this process, we also analyzed the complexity of data validation in Modern JSON Schema, with the idea of confirming the PTIME complexity of Classical JSON Schema validation, and we were surprised to discover a completely different truth: data validation, that is expected to be an extremely efficient process, acquires, with Modern JSON Schema features, a PSPACE complexity. In this paper, we give the first formal description of Modern JSON Schema, which we consider a central contribution of the work that we present here. We then prove that its data validation problem is PSPACE-complete. We prove that the origin of the problem lies in dynamic references, and not in annotation-dependent validation. We study the schema and data complexities, showing that the problem is PSPACE-complete with respect to the schema size even with a fixed instance, but is in PTIME when the schema is fixed and only the instance size is allowed to vary. Finally, we run experiments that show that there are families of schemas where the difference in asymptotic complexity between dynamic and static references is extremely visible, even with small schemas.
labels: cs.DB, Other (all other category flags false)
__index_level_0__: 380,392
2103.07197
Latent Space Explorations of Singing Voice Synthesis using DDSP
Machine learning based singing voice models require large datasets and lengthy training times. In this work we present a lightweight architecture, based on the Differentiable Digital Signal Processing (DDSP) library, that is able to output song-like utterances conditioned only on pitch and amplitude, after twelve hours of training using small datasets of unprocessed audio. The results are promising, as both the melody and the singer's voice are recognizable. In addition, we present two zero-configuration tools to train new models and experiment with them. Currently we are exploring the latent space representation, which is included in the DDSP library, but not in the original DDSP examples. Our results indicate that the latent space improves both the identification of the singer as well as the comprehension of the lyrics. Our code is available at https://github.com/juanalonso/DDSP-singing-experiments with links to the zero-configuration notebooks, and our sound examples are at https://juanalonso.github.io/DDSP-singing-experiments/ .
labels: cs.SD, cs.LG, Other (all other category flags false)
__index_level_0__: 224,526
2102.03984
One-shot Face Reenactment Using Appearance Adaptive Normalization
The paper proposes a novel generative adversarial network for one-shot face reenactment, which can animate a single face image to a different pose-and-expression (provided by a driving image) while keeping its original appearance. The core of our network is a novel mechanism called appearance adaptive normalization, which can effectively integrate the appearance information from the input image into our face generator by modulating the feature maps of the generator using the learned adaptive parameters. Furthermore, we specially design a local net to reenact the local facial components (i.e., eyes, nose and mouth) first, which is a much easier task for the network to learn and can in turn provide explicit anchors to guide our face generator to learn the global appearance and pose-and-expression. Extensive quantitative and qualitative experiments demonstrate the significant efficacy of our model compared with prior one-shot methods.
labels: cs.CV (all other category flags false)
__index_level_0__: 218,945
1312.4283
On Load Shedding in Complex Event Processing
Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to "classical" stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.
labels: cs.DB (all other category flags false)
__index_level_0__: 29,119
1812.03849
Weakly Supervised Dense Event Captioning in Videos
Dense event captioning aims to detect and describe all events of interest contained in a video. Despite the advanced development in this area, existing methods tackle this task by making use of dense temporal annotations, which is dramatically source-consuming. This paper formulates a new problem: weakly supervised dense event captioning, which does not require temporal segment annotations for model training. Our solution is based on the one-to-one correspondence assumption, each caption describes one temporal segment, and each temporal segment has one caption, which holds in current benchmark datasets and most real-world cases. We decompose the problem into a pair of dual problems: event captioning and sentence localization and present a cycle system to train our model. Extensive experimental results are provided to demonstrate the ability of our model on both dense event captioning and sentence localization in videos.
labels: cs.CV (all other category flags false)
__index_level_0__: 116,094
2403.04205
OGMP: Oracle Guided Multi-mode Policies for Agile and Versatile Robot Control
The efficacy of reinforcement learning for robot control relies on the tailored integration of task-specific priors and heuristics for effective exploration, which challenges their straightforward application to complex tasks and necessitates a unified approach. In this work, we define a general class for priors called oracles that generate state references when queried in a closed-loop manner during training. By bounding the permissible state around the oracle's ansatz, we propose a task-agnostic oracle-guided policy optimization. To enhance modularity, we introduce task-vital modes, showing that a policy mastering a compact set of modes and transitions can handle infinite-horizon tasks. For instance, to perform parkour on an infinitely long track, the policy must learn to jump, leap, pace, and transition between these modes effectively. We validate this approach in challenging bipedal control tasks: parkour and diving using a 16 DoF dynamic bipedal robot, HECTOR. Our method results in a single policy per task, solving parkour across diverse tracks and omnidirectional diving from varied heights up to 2m in simulation, showcasing versatile agility. We demonstrate successful sim-to-real transfer of parkour, including leaping over gaps up to 105 % of the leg length, jumping over blocks up to 20 % of the robot's nominal height, and pacing at speeds of up to 0.6 m/s, along with effective transitions between these modes in the real robot.
labels: cs.RO (all other category flags false)
__index_level_0__: 435,508
1908.00409
Deduction Theorem: The Problematic Nature of Common Practice in Game Theory
We consider the Deduction Theorem used in the literature of game theory to run a purported proof by contradiction. In the context of game theory, it is stated that if we have a proof of $\phi \vdash \varphi$, then we also have a proof of $\phi \Rightarrow \varphi$. Hence, the proof of $\phi \Rightarrow \varphi$ is deduced from a previously known statement. However, we argue that one has to manage to establish that a proof exists for the clauses $\phi$ and $\varphi$, i.e., they are known true statements in order to show that $\phi \vdash \varphi$ is provable, and that therefore $\phi \Rightarrow \varphi$ is provable as well. Thus, we are not allowed to assume that the clause $\phi$ or $\varphi$ is a true statement. This leads immediately to a wrong conclusion. Apart from this, we stress to other facts why the Deduction Theorem is not applicable to run a proof by contradiction. Finally, we present an example from industrial cooperation where the Deduction Theorem is not correctly applied with the consequence that the obtained result contradicts the well-known aggregation issue.
labels: cs.AI (all other category flags false)
__index_level_0__: 140,508
2407.19402
NVC-1B: A Large Neural Video Coding Model
The emerging large models have achieved notable progress in the fields of natural language processing and computer vision. However, large models for neural video coding are still unexplored. In this paper, we try to explore how to build a large neural video coding model. Based on a small baseline model, we gradually scale up the model sizes of its different coding parts, including the motion encoder-decoder, motion entropy model, contextual encoder-decoder, contextual entropy model, and temporal context mining module, and analyze the influence of model sizes on video compression performance. Then, we explore to use different architectures, including CNN, mixed CNN-Transformer, and Transformer architectures, to implement the neural video coding model and analyze the influence of model architectures on video compression performance. Based on our exploration results, we design the first neural video coding model with more than 1 billion parameters -- NVC-1B. Experimental results show that our proposed large model achieves a significant video compression performance improvement over the small baseline model, and represents the state-of-the-art compression efficiency. We anticipate large models may bring up the video coding technologies to the next level.
labels: cs.CV (all other category flags false)
__index_level_0__: 476,769
2404.08006
Learning Efficient and Fair Policies for Uncertainty-Aware Collaborative Human-Robot Order Picking
In collaborative human-robot order picking systems, human pickers and Autonomous Mobile Robots (AMRs) travel independently through a warehouse and meet at pick locations where pickers load items onto the AMRs. In this paper, we consider an optimization problem in such systems where we allocate pickers to AMRs in a stochastic environment. We propose a novel multi-objective Deep Reinforcement Learning (DRL) approach to learn effective allocation policies to maximize pick efficiency while also aiming to improve workload fairness amongst human pickers. In our approach, we model the warehouse states using a graph, and define a neural network architecture that captures regional information and effectively extracts representations related to efficiency and workload. We develop a discrete-event simulation model, which we use to train and evaluate the proposed DRL approach. In the experiments, we demonstrate that our approach can find non-dominated policy sets that outline good trade-offs between fairness and efficiency objectives. The trained policies outperform the benchmarks in terms of both efficiency and fairness. Moreover, they show good transferability properties when tested on scenarios with different warehouse sizes. The implementation of the simulation model, proposed approach, and experiments are published.
labels: cs.AI, cs.LG, cs.RO (all other category flags false)
__index_level_0__: 446,074
1812.00884
Cluster-Based Learning from Weakly Labeled Bags in Digital Pathology
To alleviate the burden of gathering detailed expert annotations when training deep neural networks, we propose a weakly supervised learning approach to recognize metastases in microscopic images of breast lymph nodes. We describe an alternative training loss which clusters weakly labeled bags in latent space to inform relevance of patch-instances during training of a convolutional neural network. We evaluate our method on the Camelyon dataset which contains high-resolution digital slides of breast lymph nodes, where labels are provided at the image-level and only subsets of patches are made available during training.
labels: cs.LG, cs.CV (all other category flags false)
__index_level_0__: 115,371
1805.02830
Several Tunable GMM Kernels
While tree methods have been popular in practice, researchers and practitioners are also looking for simple algorithms which can reach similar accuracy of trees. In 2010, (Ping Li UAI'10) developed the method of "abc-robust-logitboost" and compared it with other supervised learning methods on datasets used by the deep learning literature. In this study, we propose a series of "tunable GMM kernels" which are simple and perform largely comparably to tree methods on the same datasets. Note that "abc-robust-logitboost" substantially improved the original "GDBT" in that (a) it developed a tree-split formula based on second-order information of the derivatives of the loss function; (b) it developed a new set of derivatives for multi-class classification formulation. In the prior study in 2017, the "generalized min-max" (GMM) kernel was shown to have good performance compared to the "radial-basis function" (RBF) kernel. However, as demonstrated in this paper, the original GMM kernel is often not as competitive as tree methods on the datasets used in the deep learning literature. Since the original GMM kernel has no parameters, we propose tunable GMM kernels by adding tuning parameters in various ways. Three basic (i.e., with only one parameter) GMM kernels are the "$e$GMM kernel", "$p$GMM kernel", and "$\gamma$GMM kernel", respectively. Extensive experiments show that they are able to produce good results for a large number of classification tasks. Furthermore, the basic kernels can be combined to boost the performance.
labels: cs.LG (all other category flags false)
__index_level_0__: 96,924
2405.15475
Efficient Degradation-aware Any Image Restoration
Reconstructing missing details from degraded low-quality inputs poses a significant challenge. Recent progress in image restoration has demonstrated the efficacy of learning large models capable of addressing various degradations simultaneously. Nonetheless, these approaches introduce considerable computational overhead and complex learning paradigms, limiting their practical utility. In response, we propose \textit{DaAIR}, an efficient All-in-One image restorer employing a Degradation-aware Learner (DaLe) in the low-rank regime to collaboratively mine shared aspects and subtle nuances across diverse degradations, generating a degradation-aware embedding. By dynamically allocating model capacity to input degradations, we realize an efficient restorer integrating holistic and specific learning within a unified model. Furthermore, DaAIR introduces a cost-efficient parameter update mechanism that enhances degradation awareness while maintaining computational efficiency. Extensive comparisons across five image degradations demonstrate that our DaAIR outperforms both state-of-the-art All-in-One models and degradation-specific counterparts, affirming our efficacy and practicality. The source will be publicly made available at https://eduardzamfir.github.io/daair/
labels: cs.CV (all other category flags false)
__index_level_0__: 456,954
2409.06888
A Quality Diversity Method to Automatically Generate Multi-Agent Path Finding Benchmark Maps
We use the Quality Diversity (QD) algorithm with Neural Cellular Automata (NCA) to generate benchmark maps for Multi-Agent Path Finding (MAPF) algorithms. Previously, MAPF algorithms are tested using fixed, human-designed benchmark maps. However, such fixed benchmark maps have several problems. First, these maps may not cover all the potential failure scenarios for the algorithms. Second, when comparing different algorithms, fixed benchmark maps may introduce bias leading to unfair comparisons between algorithms. Third, since researchers test new algorithms on a small set of fixed benchmark maps, the design of the algorithms may overfit to the small set of maps. In this work, we take advantage of the QD algorithm to (1) generate maps with patterns to comprehensively understand the performance of MAPF algorithms, (2) be able to make fair comparisons between two MAPF algorithms, providing further information on the selection between two algorithms and on the design of the algorithms. Empirically, we employ this technique to generate diverse benchmark maps to evaluate and compare the behavior of different types of MAPF algorithms, including search-based, priority-based, rule-based, and learning-based algorithms. Through both single-algorithm experiments and comparisons between algorithms, we identify patterns where each algorithm excels and detect disparities in runtime or success rates between different algorithms.
labels: cs.MA (all other category flags false)
__index_level_0__: 487,308
2406.14483
Valid Error Bars for Neural Weather Models using Conformal Prediction
Neural weather models have shown immense potential as inexpensive and accurate alternatives to physics-based models. However, most models trained to perform weather forecasting do not quantify the uncertainty associated with their forecasts. This limits the trust in the model and the usefulness of the forecasts. In this work we construct and formalise a conformal prediction framework as a post-processing method for estimating this uncertainty. The method is model-agnostic and gives calibrated error bounds for all variables, lead times and spatial locations. No modifications are required to the model and the computational cost is negligible compared to model training. We demonstrate the usefulness of the conformal prediction framework on a limited area neural weather model for the Nordic region. We further explore the advantages of the framework for deterministic and probabilistic models.
labels: cs.LG (all other category flags false)
__index_level_0__: 466,332
2006.02703
Fast Unbalanced Optimal Transport on a Tree
This study examines the time complexities of the unbalanced optimal transport problems from an algorithmic perspective for the first time. We reveal which problems in unbalanced optimal transport can/cannot be solved efficiently. Specifically, we prove that the Kantorovich Rubinstein distance and optimal partial transport in the Euclidean metric cannot be computed in strongly subquadratic time under the strong exponential time hypothesis. Then, we propose an algorithm that solves a more general unbalanced optimal transport problem exactly in quasi-linear time on a tree metric. The proposed algorithm processes a tree with one million nodes in less than one second. Our analysis forms a foundation for the theoretical study of unbalanced optimal transport algorithms and opens the door to the applications of unbalanced optimal transport to million-scale datasets.
labels: cs.LG (all other category flags false)
__index_level_0__: 180,119
2501.00048
Stroke Prediction using Clinical and Social Features in Machine Learning
Every year in the United States, 800,000 individuals suffer a stroke - one person every 40 seconds, with a death occurring every four minutes. While individual factors vary, certain predictors are more prevalent in determining stroke risk. As strokes are the second leading cause of death and disability worldwide, predicting stroke likelihood based on lifestyle factors is crucial. Showing individuals their stroke risk could motivate lifestyle changes, and machine learning offers solutions to this prediction challenge. Neural networks excel at predicting outcomes based on training features like lifestyle factors, however, they're not the only option. Logistic regression models can also effectively compute the likelihood of binary outcomes based on independent variables, making them well-suited for stroke prediction. This analysis will compare both neural networks (dense and convolutional) and logistic regression models for stroke prediction, examining their pros, cons, and differences to develop the most effective predictor that minimizes false negatives.
labels: cs.AI, cs.LG (all other category flags false)
__index_level_0__: 521,506
2111.09917
Interactive Set Discovery
We study the problem of set discovery where given a few example tuples of a desired set, we want to find the set in a collection of sets. A challenge is that the example tuples may not uniquely identify a set, and a large number of candidate sets may be returned. Our focus is on interactive exploration to set discovery where additional example tuples from the candidate sets are shown and the user either accepts or rejects them as members of the target set. The goal is to find the target set with the least number of user interactions. The problem can be cast as an optimization problem where we want to find a decision tree that can guide the search to the target set with the least number of questions to be answered by the user. We propose a general algorithm that is capable of reaching an optimal solution and two variations of it that strike a balance between the quality of a solution and the running time. We also propose a novel pruning strategy that safely reduces the search space without introducing false negatives. We evaluate the efficiency and the effectiveness of our algorithms through an extensive experimental study using both real and synthetic datasets and comparing them to previous approaches in the literature. We show that our pruning strategy reduces the running time of the search algorithms by 2-5 orders of magnitude.
labels: cs.DB (all other category flags false)
__index_level_0__: 267,147
2308.03573
When Federated Learning meets Watermarking: A Comprehensive Overview of Techniques for Intellectual Property Protection
Federated Learning (FL) is a technique that allows multiple participants to collaboratively train a Deep Neural Network (DNN) without the need of centralizing their data. Among other advantages, it comes with privacy-preserving properties making it attractive for application in sensitive contexts, such as health care or the military. Although the data are not explicitly exchanged, the training procedure requires sharing information about participants' models. This makes the individual models vulnerable to theft or unauthorized distribution by malicious actors. To address the issue of ownership rights protection in the context of Machine Learning (ML), DNN Watermarking methods have been developed during the last five years. Most existing works have focused on watermarking in a centralized manner, but only a few methods have been designed for FL and its unique constraints. In this paper, we provide an overview of recent advancements in Federated Learning watermarking, shedding light on the new challenges and opportunities that arise in this field.
labels: cs.LG, cs.CR (all other category flags false)
__index_level_0__: 384,085
2407.02336
Mining Constraints from Reference Process Models for Detecting Best-Practice Violations in Event Log
Detecting undesired process behavior is one of the main tasks of process mining and various conformance-checking techniques have been developed to this end. These techniques typically require a normative process model as input, specifically designed for the processes to be analyzed. Such models are rarely available, though, and their creation involves considerable manual effort. However, reference process models serve as best-practice templates for organizational processes in a plethora of domains, containing valuable knowledge about general behavioral relations in well-engineered processes. These general models can thus mitigate the need for dedicated models by providing a basis to check for undesired behavior. Still, finding a perfectly matching reference model for a real-life event log is unrealistic because organizational needs can vary, despite similarities in process execution. Furthermore, event logs may encompass behavior related to different reference models, making traditional conformance checking impractical as it requires aligning process executions to individual models. To still use reference models for conformance checking, we propose a framework for mining declarative best-practice constraints from a reference model collection, automatically selecting constraints that are relevant for a given event log, and checking for best-practice violations. We demonstrate the capability of our framework to detect best-practice violations through an evaluation based on real-world process model collections and event logs.
labels: cs.DB, Other (all other category flags false)
__index_level_0__: 469,683
2412.14220
Distilled Pooling Transformer Encoder for Efficient Realistic Image Dehazing
This paper proposes a lightweight neural network designed for realistic image dehazing, utilizing a Distilled Pooling Transformer Encoder, named DPTE-Net. Recently, while vision transformers (ViTs) have achieved great success in various vision tasks, their self-attention (SA) module's complexity scales quadratically with image resolution, hindering their applicability on resource-constrained devices. To overcome this, the proposed DPTE-Net substitutes traditional SA modules with efficient pooling mechanisms, significantly reducing computational demands while preserving ViTs' learning capabilities. To further enhance semantic feature learning, a distillation-based training process is implemented which transfers rich knowledge from a larger teacher network to DPTE-Net. Additionally, DPTE-Net is trained within a generative adversarial network (GAN) framework, leveraging the strong generalization of GAN in image restoration, and employs a transmission-aware loss function to dynamically adapt to varying haze densities. Experimental results on various benchmark datasets have shown that the proposed DPTE-Net can achieve competitive dehazing performance when compared to state-of-the-art methods while maintaining low computational complexity, making it a promising solution for resource-limited applications. The code of this work is available at https://github.com/tranleanh/dpte-net.
labels: cs.CV (all other category flags false)
__index_level_0__: 518,626
2307.01559
Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard. Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, can not be trusted. To tackle this concern, we propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone. Compared to a State-of-the-Art visual pose estimation network that entirely runs onboard, a larger network executed in a distributed way improves the $R^2$ score by +0.19; in case of attack, our approach detects it within 2s with 95% probability.
labels: cs.LG, cs.RO, cs.CR (all other category flags false)
__index_level_0__: 377,396
2407.15616
Sustainable broadcasting in Blockchain Network with Reinforcement Learning
Recent estimates put the carbon footprint of Bitcoin and Ethereum at an average of 64 and 26 million tonnes of CO2 per year, respectively. To address this growing problem, several possible approaches have been proposed in the literature: creating alternative blockchain consensus mechanisms, applying redundancy reduction techniques, utilizing renewable energy sources, and employing energy-efficient devices, etc. In this paper, we follow the second avenue and propose an efficient approach based on reinforcement learning that improves the block broadcasting scheme in blockchain networks. The analysis and experimental results confirmed that the proposed improvement of the block propagation scheme could cleverly handle network dynamics and achieve better results than the default approach. Additionally, our technical integration of the simulator and developed RL environment can be used as a complete solution for further study of new schemes and protocols that use RL or other ML techniques.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
475,257
1806.10069
Deep $k$-Means: Jointly clustering with $k$-Means and learning representations
We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so when the two tasks are performed jointly. We propose here such an approach for $k$-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
101,481
1705.04524
Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks
Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics. As a result, these models suffer from accuracy decay over a long time and thus require frequent calibration. In this work, we address this issue by formulating BP estimation as a sequence prediction problem in which both the input and target are temporal sequences. We propose a novel deep recurrent neural network (RNN) consisting of multilayered Long Short-Term Memory (LSTM) networks, which are incorporated with (1) a bidirectional structure to access larger-scale context information of input sequence, and (2) residual connections to allow gradients in deep RNN to propagate more effectively. The proposed deep RNN model was tested on a static BP dataset, and it achieved root mean square error (RMSE) of 3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction respectively, surpassing the accuracy of traditional BP prediction models. On a multi-day BP dataset, the deep RNN achieved RMSE of 3.84, 5.25, 5.80 and 5.81 mmHg for the 1st day, 2nd day, 4th day and 6th month after the 1st day SBP prediction, and 1.80, 4.78, 5.0, 5.21 mmHg for corresponding DBP prediction, respectively, which outperforms all previous models with notable improvement. The experimental results suggest that modeling the temporal dependencies in BP dynamics significantly improves the long-term BP prediction accuracy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
73,350
2307.09312
Multi-Modal Discussion Transformer: Integrating Text, Images and Graph Transformers to Detect Hate Speech on Social Media
We present the Multi-Modal Discussion Transformer (mDT), a novel method for detecting hate speech in online social networks such as Reddit discussions. In contrast to traditional comment-only methods, our approach to labelling a comment as hate speech involves a holistic analysis of text and images grounded in the discussion context. This is done by leveraging graph transformers to capture the contextual relationships in the discussion surrounding a comment and grounding the interwoven fusion layers that combine text and image embeddings instead of processing modalities separately. To evaluate our work, we present a new dataset, HatefulDiscussions, comprising complete multi-modal discussions from multiple online communities on Reddit. We compare the performance of our model to baselines that only process individual comments and conduct extensive ablation studies.
false
false
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
true
380,130
1711.11536
Predicting Severe Sepsis Using Text from the Electronic Health Record
Employing a machine learning approach we predict, up to 24 hours prior, a diagnosis of severe sepsis. Strongly predictive models are possible that use only text reports from the Electronic Health Record (EHR), and omit structured numerical data. Unstructured text alone gives slightly better performance than structured data alone, and the combination further improves performance. We also discuss advantages of using unstructured EHR text for modeling, as compared to structured EHR data.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
85,798
2107.01595
Learning in nonatomic games, Part I: Finite action spaces and population games
We examine the long-run behavior of a wide range of dynamics for learning in nonatomic games, in both discrete and continuous time. The class of dynamics under consideration includes fictitious play and its regularized variants, the best-reply dynamics (again, possibly regularized), as well as the dynamics of dual averaging / "follow the regularized leader" (which themselves include as special cases the replicator dynamics and Friedman's projection dynamics). Our analysis concerns both the actual trajectory of play and its time-average, and we cover potential and monotone games, as well as games with an evolutionarily stable state (global or otherwise). We focus exclusively on games with finite action spaces; nonatomic games with continuous action spaces are treated in detail in Part II of this paper.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
244,533
2001.03017
Shallow Encoder Deep Decoder (SEDD) Networks for Image Encryption and Decryption
This paper explores a new framework for lossy image encryption and decryption using a simple shallow encoder neural network E for encryption, and a complex deep decoder neural network D for decryption. E is kept simple so that encoding can be done on low-power and portable devices and can in principle be any nonlinear function which outputs an encoded vector. D is trained to decode the encodings using the dataset of image - encoded vector pairs obtained from E, and this happens independently of E. As the encodings come from E, which, despite being a simple neural network, still has thousands of random parameters, they would be practically impossible to crack without D. This approach differs from autoencoders as D is trained completely independently of E, although the structure may seem similar. Therefore, this paper also explores empirically if a deep neural network can learn to reconstruct the original data in any useful form given the output of a neural network or any other nonlinear function, which can have very useful applications in Cryptanalysis. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the decoded images from D along with some limitations.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
159,864
1805.10421
Enhanced-alignment Measure for Binary Foreground Map Evaluation
Existing binary foreground map (FM) measures address various types of errors in either pixel-wise or structural ways. These measures consider pixel-level match or image-level information independently, while cognitive vision studies have shown that human vision is highly sensitive to both global information and local details in scenes. In this paper, we take a detailed look at current binary FM evaluation measures and propose a novel and effective E-measure (Enhanced-alignment measure). Our measure combines local pixel values with the image-level mean value in one term, jointly capturing image-level statistics and local pixel matching information. We demonstrate the superiority of our measure over the available measures on 4 popular datasets via 5 meta-measures, including ranking models for applications, demoting generic, random Gaussian noise maps, ground-truth switch, as well as human judgments. We find large improvements in almost all the meta-measures. For instance, in terms of application ranking, we observe improvements ranging from 9.08% to 19.65% compared with other popular measures.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
98,676
2203.16214
Adaptive Divergence-based Non-negative Latent Factor Analysis
High-Dimensional and Incomplete (HDI) data are frequently found in various industrial applications with complex interactions among numerous nodes, which are commonly non-negative for representing the inherent non-negativity of node interactions. A Non-negative Latent Factor (NLF) model is able to extract intrinsic features from such data efficiently. However, existing NLF models all adopt a static divergence metric like Euclidean distance or $\alpha$-$\beta$ divergence to build their learning objectives, which greatly restricts their scalability of accurately representing HDI data from different domains. Aiming at addressing this issue, this study presents an Adaptive Divergence-based Non-negative Latent Factor (ADNLF) model with three-fold ideas: a) generalizing the objective function with the $\alpha$-$\beta$ divergence to expand its potential of representing various HDI data; b) adopting a non-negative bridging function to connect the optimization variables with output latent factors for fulfilling the non-negativity constraints constantly; and c) making the divergence parameters adaptive through particle swarm optimization, thereby facilitating adaptive divergence in the learning objective to achieve high scalability. Empirical studies are conducted on four HDI datasets from real applications, whose results demonstrate that in comparison with state-of-the-art NLF models, an ADNLF model achieves significantly higher estimation accuracy for missing data of an HDI dataset with high computational efficiency.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
288,696
2106.03455
Knowledge-aware Deep Framework for Collaborative Skin Lesion Segmentation and Melanoma Recognition
Deep learning techniques have shown their superior performance in dermatologist clinical inspection. Nevertheless, melanoma diagnosis is still a challenging task due to the difficulty of incorporating the useful dermatologist clinical knowledge into the learning process. In this paper, we propose a novel knowledge-aware deep framework that incorporates some clinical knowledge into collaborative learning of two important melanoma diagnosis tasks, i.e., skin lesion segmentation and melanoma recognition. Specifically, to exploit the knowledge of morphological expressions of the lesion region and also the periphery region for melanoma identification, a lesion-based pooling and shape extraction (LPSE) scheme is designed, which transfers the structure information obtained from skin lesion segmentation into melanoma recognition. Meanwhile, to pass the skin lesion diagnosis knowledge from melanoma recognition to skin lesion segmentation, an effective diagnosis guided feature fusion (DGFF) strategy is designed. Moreover, we propose a recursive mutual learning mechanism that further promotes the inter-task cooperation, and thus iteratively improves the joint learning capability of the model for both skin lesion segmentation and melanoma recognition. Experimental results on two publicly available skin lesion datasets show the effectiveness of the proposed method for melanoma analysis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
239,325
0912.2316
Heart Rate Variability Analysis Using Threshold of Wavelet Package Coefficients
In this paper, a new efficient feature extraction method based on the adaptive threshold of wavelet package coefficients is presented. This paper especially deals with the assessment of autonomic nervous system using the background variation of the signal Heart Rate Variability HRV extracted from the wavelet package coefficients. The application of a wavelet package transform allows us to obtain a time-frequency representation of the signal, which provides better insight in the frequency distribution of the signal with time. A 6 level decomposition of HRV was achieved with db4 as mother wavelet, and the above two bands LF and HF were combined in 12 specialized frequencies sub-bands obtained in wavelet package transform. Features extracted from these coefficients can efficiently represent the characteristics of the original signal. ANOVA statistical test is used for the evaluation of proposed algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
5,143
1605.02066
Shape from Mixed Polarization
Shape from Polarization (SfP) estimates surface normals using photos captured at different polarizer rotations. Fundamentally, the SfP model assumes that light is reflected either diffusely or specularly. However, this model is not valid for many real-world surfaces exhibiting a mixture of diffuse and specular properties. To address this challenge, previous methods have used a sequential solution: first, use an existing algorithm to separate the scene into diffuse and specular components, then apply the appropriate SfP model. In this paper, we propose a new method that jointly uses viewpoint and polarization data to holistically separate diffuse and specular components, recover refractive index, and ultimately recover 3D shape. By involving the physics of polarization in the separation process, we demonstrate competitive results with a benchmark method, while recovering additional information (e.g. refractive index).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
55,570
2103.14823
Co-Imitation Learning without Expert Demonstration
Imitation learning is a primary approach to improve the efficiency of reinforcement learning by exploiting the expert demonstrations. However, in many real scenarios, obtaining expert demonstrations could be extremely expensive or even impossible. To overcome this challenge, in this paper, we propose a novel learning framework called Co-Imitation Learning (CoIL) to exploit the past good experiences of the agents themselves without expert demonstration. Specifically, we train two different agents via letting each of them alternately explore the environment and exploit the peer agent's experience. While the experiences could be valuable or misleading, we propose to estimate the potential utility of each piece of experience with the expected gain of the value function. Thus the agents can selectively imitate from each other by emphasizing the more useful experiences while filtering out noisy ones. Experimental results on various tasks show significant superiority of the proposed Co-Imitation Learning framework, validating that the agents can benefit from each other without external supervision.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
226,982
2410.13318
Computational Approaches to Arabic-English Code-Switching
Natural Language Processing (NLP) is a vital computational method for addressing language processing, analysis, and generation. NLP tasks form the core of many daily applications, from automatic text correction to speech recognition. While significant research has focused on NLP tasks for the English language, less attention has been given to Modern Standard Arabic and Dialectal Arabic. Globalization has also contributed to the rise of Code-Switching (CS), where speakers mix languages within conversations and even within individual words (intra-word CS). This is especially common in Arab countries, where people often switch between dialects or between dialects and a foreign language they master. CS between Arabic and English is frequent in Egypt, especially on social media. Consequently, a significant amount of code-switched content can be found online. Such code-switched data needs to be investigated and analyzed for several NLP tasks to tackle the challenges of this multilingual phenomenon and Arabic language challenges. No work has been done before for several integral NLP tasks on Arabic-English CS data. In this work, we focus on the Named Entity Recognition (NER) task and other tasks that help propose a solution for the NER task on CS data, e.g., Language Identification. This work addresses this gap by proposing and applying state-of-the-art techniques for Modern Standard Arabic and Arabic-English NER. We have created the first annotated CS Arabic-English corpus for the NER task. Also, we apply two enhancement techniques to improve the NER tagger on CS data using CS contextual embeddings and data augmentation techniques. All methods showed improvements in the performance of the NER taggers on CS data. Finally, we propose several intra-word language identification approaches to determine the language type of a mixed text and identify whether it is a named entity or not.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
499,481
2201.06027
A Reliable Reinforcement Learning for Resource Allocation in Uplink NOMA-URLLC Networks
In this paper, we propose a deep state-action-reward-state-action (SARSA) $\lambda$ learning approach for optimising the uplink resource allocation in non-orthogonal multiple access (NOMA) aided ultra-reliable low-latency communication (URLLC). To reduce the mean decoding error probability in time-varying network environments, this work designs a reliable learning algorithm for providing a long-term resource allocation, where the reward feedback is based on the instantaneous network performance. With the aid of the proposed algorithm, this paper addresses three main challenges of the reliable resource sharing in NOMA-URLLC networks: 1) user clustering; 2) instantaneous feedback system; and 3) optimal resource allocation. All of these designs interact with the considered communication environment. Lastly, we compare the performance of the proposed algorithm with conventional Q-learning and SARSA Q-learning algorithms. The simulation outcomes show that: 1) Compared with the traditional Q-learning algorithms, the proposed solution converges within 200 episodes while providing a long-term mean error as low as $10^{-2}$; 2) NOMA assisted URLLC outperforms traditional OMA systems in terms of decoding error probabilities; and 3) The proposed feedback system is efficient for the long-term learning process.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
275,592
2410.03223
Consultation on Industrial Machine Faults with Large language Models
Industrial machine fault diagnosis is a critical component of operational efficiency and safety in manufacturing environments. Traditional methods rely heavily on expert knowledge and specific machine learning models, which can be limited in their adaptability and require extensive labeled data. This paper introduces a novel approach leveraging Large Language Models (LLMs), specifically through a structured multi-round prompting technique, to improve fault diagnosis accuracy. By dynamically crafting prompts, our method enhances the model's ability to synthesize information from diverse data sources, leading to improved contextual understanding and actionable recommendations. Experimental results demonstrate that our approach outperforms baseline models, achieving an accuracy of 91% in diagnosing various fault types. The findings underscore the potential of LLMs in revolutionizing industrial fault consultation practices, paving the way for more effective maintenance strategies in complex environments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
494,686
2406.08300
From Chaos to Clarity: 3DGS in the Dark
Novel view synthesis from raw images provides superior high dynamic range (HDR) information compared to reconstructions from low dynamic range RGB images. However, the inherent noise in unprocessed raw images compromises the accuracy of 3D scene representation. Our study reveals that 3D Gaussian Splatting (3DGS) is particularly susceptible to this noise, leading to numerous elongated Gaussian shapes that overfit the noise, thereby significantly degrading reconstruction quality and reducing inference speed, especially in scenarios with limited views. To address these issues, we introduce a novel self-supervised learning framework designed to reconstruct HDR 3DGS from a limited number of noisy raw images. This framework enhances 3DGS by integrating a noise extractor and employing a noise-robust reconstruction loss that leverages a noise distribution prior. Experimental results show that our method outperforms LDR/HDR 3DGS and previous state-of-the-art (SOTA) self-supervised and supervised pre-trained models in both reconstruction quality and inference speed on the RawNeRF dataset across a broad range of training views. Code can be found in \url{https://lizhihao6.github.io/Raw3DGS}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
463,425
2307.13552
On Solving the Rubik's Cube with Domain-Independent Planners Using Standard Representations
Rubik's Cube (RC) is a well-known and computationally challenging puzzle that has motivated AI researchers to explore efficient alternative representations and problem-solving methods. The ideal situation for planning here is that a problem be solved optimally and efficiently represented in a standard notation using a general-purpose solver and heuristics. The fastest solver today for RC is DeepCubeA with a custom representation, and another approach is with Scorpion planner with State-Action-Space+ (SAS+) representation. In this paper, we present the first RC representation in the popular PDDL language so that the domain becomes more accessible to PDDL planners, competitions, and knowledge engineering tools, and is more human-readable. We then bridge across existing approaches and compare performance. We find that in one comparable experiment, DeepCubeA (trained with 12 RC actions) solves all problems with varying complexities, albeit only 78.5% are optimal plans. For the same problem set, Scorpion with SAS+ representation and pattern database heuristics solves 61.50% problems optimally, while FastDownward with PDDL representation and FF heuristic solves 56.50% problems, out of which 79.64% of the plans generated were optimal. Our study provides valuable insights into the trade-offs between representational choice and plan optimality that can help researchers design future strategies for challenging domains combining general-purpose solving methods (planning, reinforcement learning), heuristics, and representations (standard or custom).
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
381,627
2002.04732
Generalized Bayesian Cram\'{e}r-Rao Inequality via Information Geometry of Relative $\alpha$-Entropy
The relative $\alpha$-entropy is the R\'enyi analog of relative entropy and arises prominently in information-theoretic problems. Recent information geometric investigations on this quantity have enabled the generalization of the Cram\'{e}r-Rao inequality, which provides a lower bound for the variance of an estimator of an escort of the underlying parametric probability distribution. However, this framework remains unexamined in the Bayesian framework. In this paper, we propose a general Riemannian metric based on relative $\alpha$-entropy to obtain a generalized Bayesian Cram\'{e}r-Rao inequality. This establishes a lower bound for the variance of an unbiased estimator for the $\alpha$-escort distribution starting from an unbiased estimator for the underlying distribution. We show that in the limiting case when the entropy order approaches unity, this framework reduces to the conventional Bayesian Cram\'{e}r-Rao inequality. Further, in the absence of priors, the same framework yields the deterministic Cram\'{e}r-Rao inequality.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
163,680
2412.04097
D-LORD for Motion Stylization
This paper introduces a novel framework named D-LORD (Double Latent Optimization for Representation Disentanglement), which is designed for motion stylization (motion style transfer and motion retargeting). The primary objective of this framework is to separate the class and content information from a given motion sequence using a data-driven latent optimization approach. Here, class refers to person-specific style, such as a particular emotion or an individual's identity, while content relates to the style-agnostic aspect of an action, such as walking or jumping, as universally understood concepts. The key advantage of D-LORD is its ability to perform style transfer without needing paired motion data. Instead, it utilizes class and content labels during the latent optimization process. By disentangling the representation, the framework enables the transformation of one motion sequence's style to another's using Adaptive Instance Normalization. The proposed D-LORD framework is designed with a focus on generalization, allowing it to handle different class and content labels for various applications. Additionally, it can generate diverse motion sequences when specific class and content labels are provided. The framework's efficacy is demonstrated through experimentation on three datasets: the CMU XIA dataset for motion style transfer, the MHAD dataset, and the RRIS Ability dataset for motion retargeting. Notably, this paper presents the first generalized framework for motion style transfer and motion retargeting, showcasing its potential contributions in this area.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
514,259
2011.10799
Deep Smartphone Sensors-WiFi Fusion for Indoor Positioning and Tracking
We address the indoor localization problem, where the goal is to predict user's trajectory from the data collected by their smartphone, using inertial sensors such as accelerometer, gyroscope and magnetometer, as well as other environment and network sensors such as barometer and WiFi. Our system implements a deep learning based pedestrian dead reckoning (deep PDR) model that provides a high-rate estimation of the relative position of the user. Using Kalman Filter, we correct the PDR's drift using WiFi that provides a prediction of the user's absolute position each time a WiFi scan is received. Finally, we adjust Kalman Filter results with a map-free projection method that takes into account the physical constraints of the environment (corridors, doors, etc.) and projects the prediction on the possible walkable paths. We test our pipeline on IPIN'19 Indoor Localization challenge dataset and demonstrate that it improves the winner's results by 20\% using the challenge evaluation protocol.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
true
207,625
1912.07024
Multi-Object Rearrangement with Monte Carlo Tree Search:A Case Study on Planar Nonprehensile Sorting
In this work, we address a planar non-prehensile sorting task. Here, a robot needs to push many densely packed objects belonging to different classes into a configuration where these classes are clearly separated from each other. To achieve this, we propose to employ Monte Carlo tree search equipped with a task-specific heuristic function. We evaluate the algorithm on various simulated and real-world sorting tasks. We observe that the algorithm is capable of reliably sorting large numbers of convex and non-convex objects, as well as convex objects in the presence of immovable obstacles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
157,497
2303.15023
ScarceNet: Animal Pose Estimation with Scarce Annotations
Animal pose estimation is an important but under-explored task due to the lack of labeled data. In this paper, we tackle the task of animal pose estimation with scarce annotations, where only a small set of labeled data and unlabeled images are available. At the core of the solution to this problem setting is the use of the unlabeled data to compensate for the lack of well-labeled animal pose data. To this end, we propose the ScarceNet, a pseudo label-based approach to generate artificial labels for the unlabeled images. The pseudo labels, which are generated with a model trained with the small set of labeled images, are generally noisy and can hurt the performance when directly used for training. To solve this problem, we first use a small-loss trick to select reliable pseudo labels. Although effective, the selection process is improvident since numerous high-loss samples are left unused. We further propose to identify reusable samples from the high-loss samples based on an agreement check. Pseudo labels are re-generated to provide supervision for those reusable samples. Lastly, we introduce a student-teacher framework to enforce a consistency constraint since there are still samples that are neither reliable nor reusable. By combining the reliable pseudo label selection with the reusable sample re-labeling and the consistency constraint, we can make full use of the unlabeled data. We evaluate our approach on the challenging AP-10K dataset, where our approach outperforms existing semi-supervised approaches by a large margin. We also test on the TigDog dataset, where our approach can achieve better performance than domain adaptation based approaches when only very few annotations are available. Our code is available at the project website.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
354,331
2401.06010
Attention to detail: inter-resolution knowledge distillation
The development of computer vision solutions for gigapixel images in digital pathology is hampered by significant computational limitations due to the large size of whole slide images. In particular, digitizing biopsies at high resolution is a time-consuming process, yet it is necessary because reducing image detail worsens results. To alleviate this issue, recent literature has proposed using knowledge distillation to enhance the model performance at reduced image resolutions. In particular, soft labels and features extracted at the highest magnification level are distilled into a model that takes lower-magnification images as input. However, this approach fails to transfer knowledge about the most discriminative image regions in the classification process, which may be lost when the resolution is decreased. In this work, we propose to distill this information by incorporating attention maps during training. In particular, our formulation leverages saliency maps of the target class via grad-CAMs, which guide the lower-resolution Student model to match the Teacher distribution by minimizing the l2 distance between them. Comprehensive experiments on prostate histology image grading demonstrate that the proposed approach substantially improves the model performance across different image resolutions compared to previous literature.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,989
2401.03495
Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions
Due to the inherent flexibility of prompting, foundation models have emerged as the predominant force in the fields of natural language processing and computer vision. The recent introduction of the Segment Anything Model (SAM) signifies a noteworthy expansion of the prompt-driven paradigm into the domain of image segmentation, thereby introducing a plethora of previously unexplored capabilities. However, the viability of its application to medical image segmentation remains uncertain, given the substantial distinctions between natural and medical images. In this work, we provide a comprehensive overview of recent endeavors aimed at extending the efficacy of SAM to medical image segmentation tasks, encompassing both empirical benchmarking and methodological adaptations. Additionally, we explore potential avenues for future research directions in SAM's role within medical image segmentation. While direct application of SAM to medical image segmentation does not yield satisfactory performance on multi-modal and multi-target medical datasets so far, numerous insights gleaned from these efforts serve as valuable guidance for shaping the trajectory of foundational models in the realm of medical image analysis. To support ongoing research endeavors, we maintain an active repository that contains an up-to-date paper list and a succinct summary of open-source projects at https://github.com/YichiZhang98/SAM4MIS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,126
1711.01866
Combined shared/dedicated resource allocation for Device-to-Device Communication
Device-to-device (D2D) communication is an effective technology for enhancing the spectral efficiency and network throughput of contemporary cellular networks. Typically, the users exploiting D2D reuse the same radio resources as common cellular users (CUEs) that communicate through a base station. This mode is known as shared mode. Another option is to dedicate a specific amount of resources exclusively to the D2D users in a so-called dedicated mode. In this paper, we propose a novel combined shared/dedicated resource allocation scheme enabling the D2D users to utilize the radio resources in both modes simultaneously. To that end, we propose a graph theory-based framework for efficient resource allocation. Within this framework, neighborhood relations between the cellular users and the D2D users, and between the individual D2D users, are derived to form graphs. Then, the graphs are decomposed into subgraphs to identify resources which can be reused by other users so that the capacity of the D2D users is maximized. The results show that the sum D2D capacity is increased by between 1.67 and 2.5 times (depending on the density of D2D users) when compared to schemes selecting only between the shared and dedicated modes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
83,967
2207.13223
XADLiME: eXplainable Alzheimer's Disease Likelihood Map Estimation via Clinically-guided Prototype Learning
Diagnosing Alzheimer's disease (AD) involves a deliberate diagnostic process owing to its innate traits of irreversibility with subtle and gradual progression. These characteristics make AD biomarker identification from structural brain imaging (e.g., structural MRI) scans quite challenging. Furthermore, AD-related changes are easily entangled with those of normal aging. We propose a novel deep-learning approach, eXplainable AD Likelihood Map Estimation (XADLiME), for AD progression modeling over 3D sMRIs using clinically-guided prototype learning. Specifically, we establish a set of topologically-aware prototypes onto the clusters of latent clinical features, uncovering an AD spectrum manifold. We then measure the similarities between latent clinical features and well-established prototypes, estimating a "pseudo" likelihood map. By considering this pseudo map as an enriched reference, we employ an estimating network to estimate the AD likelihood map over a 3D sMRI scan. Additionally, we promote the explainability of such a likelihood map by revealing a comprehensible overview from two perspectives: clinical and morphological. During inference, this estimated likelihood map serves as a substitute over unseen sMRI scans for effectively conducting the downstream task while providing thorough explainable states.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
310,228
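The prototype-similarity step above (latent features compared against a prototype set to produce a "pseudo" likelihood) can be sketched as a cosine-similarity softmax. The exact similarity measure and normalization in XADLiME are not specified here, so this cosine/softmax form is an assumption for illustration only.

```python
import numpy as np

def pseudo_likelihood(latent, prototypes, temperature=1.0):
    """Softmax over cosine similarities between a latent feature and a
    set of prototypes, yielding a distribution over the prototype set
    (a stand-in for the "pseudo" likelihood map; the temperature and
    cosine choice are assumptions)."""
    z = latent / np.linalg.norm(latent)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = (P @ z) / temperature
    e = np.exp(sims - sims.max())          # numerically stable softmax
    return e / e.sum()

# Three toy prototypes along an assumed "AD spectrum"; the latent feature
# is closest to the first one.
protos = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lik = pseudo_likelihood(np.array([0.9, 0.1]), protos)
print(lik, lik.sum())   # a distribution that peaks on the first prototype
```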
2212.04096
ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction
This work introduces alternating latent topologies (ALTO) for high-fidelity reconstruction of implicit 3D surfaces from noisy point clouds. Previous work identifies that the spatial arrangement of latent encodings is important to recover detail. One school of thought is to encode a latent vector for each point (point latents). Another school of thought is to project point latents into a grid (grid latents) which could be a voxel grid or triplane grid. Each school of thought has tradeoffs. Grid latents are coarse and lose high-frequency detail. In contrast, point latents preserve detail. However, point latents are more difficult to decode into a surface, and quality and runtime suffer. In this paper, we propose ALTO to sequentially alternate between geometric representations, before converging to an easy-to-decode latent. We find that this preserves spatial expressiveness and makes decoding lightweight. We validate ALTO on implicit 3D recovery and observe not only a performance improvement over the state-of-the-art, but a runtime improvement of 3-10$\times$. Project website at https://visual.ee.ucla.edu/alto.htm/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
335,322
2002.08041
Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation
In this paper, we study the problem of unsupervised domain adaptation that aims at obtaining a prediction model for the target domain using labeled data from the source domain and unlabeled data from the target domain. There exists an array of recent research based on the idea of extracting features that are not only invariant for both domains but also provide high discriminative power for the target domain. In this paper, we propose an idea of empowering the discriminativeness: Adding a new, artificial class and training the model on the data together with the GAN-generated samples of the new class. The trained model based on the new class samples is capable of extracting the features that are more discriminative by repositioning data of current classes in the target domain and therefore drawing the decision boundaries more effectively. Our idea is highly generic so that it is compatible with many existing methods such as DANN, VADA, and DIRT-T. We conduct various experiments for the standard data commonly used for the evaluation of unsupervised domain adaptations and demonstrate that our algorithm achieves the SOTA performance for many scenarios.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
164,643
1607.04763
Design and implementation of computational platform for social-humanoid robot Lumen as an exhibition guide in Electrical Engineering Days 2015
Social Robot Lumen is an Artificial Intelligence (AI) development project that aims to create an AI allowing a humanoid robot to communicate with human beings naturally. In this study, Lumen is developed to be a tour guide at the Electrical Engineering Days 2015 exhibition. Developing such an AI involves many modules that need to be developed separately. To ease this, we need a computational platform that serves as a basis for all developers, so that the modules can be developed in parallel. This computational platform, developed by the author, is called Lumen Server. Lumen Server has two main functions: to be a bridge between all Lumen intelligence modules and the NAO robot, and to be the communication bridge among those Lumen intelligence modules. For the second function, Lumen Server implements the AMQP protocol using RabbitMQ. In addition, the author developed a control system for robot movement called Lumen Motion, implemented by modelling the movement of the NAO robot and by creating a control system using a fuzzy logic controller. The author also developed a program that connects all Lumen intelligence modules so that Lumen can act as a tour guide; its implementation uses an FSM and event-driven programming. All the designed features were successfully implemented. This computational platform eases the future development of Lumen; future work should focus on an integration system so that Lumen can be more responsive to its environment.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
58,657
2206.01523
A High-Performance Customer Churn Prediction System based on Self-Attention
Customer churn prediction is a challenging domain of research that contributes to customer retention strategy. The predictive performance of existing machine learning models, which are often adopted by churn communities, appears to be at a bottleneck, partly due to the models' poor feature extraction capability. Therefore, a novel algorithm, a hybrid neural network with self-attention enhancement (HNNSAE), is proposed in this paper to improve the efficiency of feature screening and feature extraction, consequently improving the model's predictive performance. This model consists of three main blocks. The first block is the entity embedding layer, which is employed to process the categorical variables transformed into 0-1 codes. The second block is the feature extractor, which extracts the significant features through the multi-head self-attention mechanism. In addition, to improve the feature extraction effect, we stack a residual-connection neural network on the multi-head self-attention modules. The third block is a classifier, which is a three-layer multilayer perceptron. This work conducts experiments on a publicly available dataset of commercial bank customers. The results demonstrate that HNNSAE significantly outperforms the other Individual Machine Learning (IML), Ensemble Machine Learning (EML), and Deep Learning (DL) methods tested in this paper. Furthermore, we compare the performance of the proposed feature extractor with that of three other feature extractors and find that the proposed method significantly outperforms them. In addition, four hypotheses about model prediction performance and overfitting risk are tested on the publicly available dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
300,506
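The core of the HNNSAE feature extractor described above (self-attention modules with residual connections) can be sketched with a single attention head. The head count, stacking depth, and dimensions below are assumptions, and the entity-embedding and MLP-classifier blocks are omitted.

```python
import numpy as np

def self_attention_block(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention with a residual
    connection: a minimal stand-in for the stacked multi-head blocks."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # attention weights per row
    return X + A @ V                              # residual connection

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                   # 5 feature tokens, width 8
W = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]
out = self_attention_block(X, *W)
print(out.shape)   # shape is preserved, so blocks can be stacked
```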
1703.02239
Functions that Emerge through End-to-End Reinforcement Learning - The Direction for Artificial General Intelligence -
Recently, triggered by the impressive results in TV games and the game of Go by Google DeepMind, end-to-end reinforcement learning (RL) is attracting attention. Although it is little known, the author's group has propounded this framework for around 20 years and has already shown various functions that emerge in a neural network (NN) through RL. In this paper, they are introduced again. The "function modularization" approach is deeply, often subconsciously, entrenched. The inputs and outputs of a learning system can be raw sensor signals and motor commands. The "state space" or "action space" generally used in RL presupposes the existence of functional modules, which has limited reinforcement learning to learning only the action-planning module. In order to extend reinforcement learning to learning of the entire function over the huge degrees of freedom of a massively parallel learning system, and to explain or develop human-like intelligence, the author believes that end-to-end RL from sensors to motors using a recurrent NN (RNN) is an essential key. Especially for the higher functions, this approach is very effective because it is free from the need to decide their inputs and outputs. The functions we have confirmed to emerge through RL using a NN cover a broad range, from real robot learning with raw camera pixel inputs to the acquisition of dynamic functions in a RNN: (1) image recognition, (2) color constancy (optical illusion), (3) sensor motion (active recognition), (4) hand-eye coordination and hand reaching movement, (5) explanation of brain activities, (6) communication, (7) knowledge transfer, (8) memory, (9) selective attention, (10) prediction, and (11) exploration. End-to-end RL enables the emergence of very flexible comprehensive functions that consider many things in parallel, although it is difficult to draw the boundary of each function clearly.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
69,520
2410.08395
Nesterov acceleration in benignly non-convex landscapes
While momentum-based optimization algorithms are commonly used in the notoriously non-convex optimization problems of deep learning, their analysis has historically been restricted to the convex and strongly convex setting. In this article, we partially close this gap between theory and practice and demonstrate that virtually identical guarantees can be obtained in optimization problems with a `benign' non-convexity. We show that these weaker geometric assumptions are well justified in overparametrized deep learning, at least locally. Variations of this result are obtained for a continuous time model of Nesterov's accelerated gradient descent algorithm (NAG), the classical discrete time version of NAG, and versions of NAG with stochastic gradient estimates with purely additive noise and with noise that exhibits both additive and multiplicative scaling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
497,093
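The classical discrete-time NAG recursion analyzed in the record above can be stated in a few lines. The k/(k+3)-style momentum schedule below is the standard convex-setting choice, shown here only to make the recursion concrete; the paper's contribution concerns its guarantees under benign non-convexity.

```python
def nag(grad, x0, lr=0.1, steps=200):
    """Nesterov's accelerated gradient descent (classical discrete form):
    a look-ahead point y_k mixes the current iterate with momentum, and
    the gradient step is taken at y_k rather than at x_k."""
    x_prev = x = x0
    for k in range(1, steps + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum look-ahead
        x_prev, x = x, y - lr * grad(y)            # gradient step at y
    return x

# Minimize f(x) = x^2 from x0 = 5; the iterate approaches the minimizer 0.
print(nag(lambda x: 2.0 * x, 5.0))
```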
1907.07739
Deep Multi-View Learning via Task-Optimal CCA
Canonical Correlation Analysis (CCA) is widely used for multimodal data analysis and, more recently, for discriminative tasks such as multi-view learning; however, it makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification, regularization with a second view, and semi-supervised learning on real data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
138,948
2309.15603
Distill Knowledge in Multi-task Reinforcement Learning with Optimal-Transport Regularization
In multi-task reinforcement learning, it is possible to improve the data efficiency of training agents by transferring knowledge from other different but related tasks, because the experiences from different tasks are usually biased toward the specific task goals. Traditional methods rely on Kullback-Leibler regularization to stabilize the transfer of knowledge from one task to the others. In this work, we explore the direction of replacing the Kullback-Leibler divergence with a novel Optimal-transport-based regularization. By using the Sinkhorn mapping, we can approximate the Optimal transport distance between the state distributions of tasks. The distance is then used as an amortized reward to regularize the amount of shared information. We evaluate our framework on several multi-goal grid-based navigation tasks to validate the effectiveness of the approach. The results show that our added Optimal-transport-based rewards are able to speed up the learning process of agents and outperform several baselines on multi-task learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
395,036
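The Sinkhorn approximation of the optimal-transport distance mentioned above can be sketched as follows. Only the distance computation is shown; how it is turned into an amortized reward, and the entropic regularization strength, are assumptions outside this sketch.

```python
import numpy as np

def sinkhorn_distance(a, b, C, reg=0.1, iters=200):
    """Entropy-regularized OT distance between distributions a and b under
    cost matrix C, via alternating Sinkhorn scaling of the Gibbs kernel."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    P = u[:, None] * K * v[None, :]      # approximate transport plan
    return float(np.sum(P * C))

# Identical uniform distributions with zero cost on the diagonal:
# the plan concentrates on the diagonal and the distance is near zero.
a = b = np.ones(3) / 3
C = 1.0 - np.eye(3)
print(sinkhorn_distance(a, b, C))
```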
2109.01844
On robustness of generative representations against catastrophic forgetting
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks. Although many continual learning methods are proposed to mitigate this drawback, the main question remains unanswered: what is the root cause of catastrophic forgetting? In this work, we aim at answering this question by posing and validating a set of research hypotheses related to the specificity of representations built internally by neural models. More specifically, we design a set of empirical evaluations that compare the robustness of representations in discriminative and generative models against catastrophic forgetting. We observe that representations learned by discriminative models are more prone to catastrophic forgetting than their generative counterparts, which sheds new light on the advantages of developing generative models for continual learning. Finally, our work opens new research pathways and possibilities to adopt generative models in continual learning beyond mere replay mechanisms.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
253,554
2005.08505
Sudden Attention Shifts on Wikipedia During the COVID-19 Crisis
We study how the COVID-19 pandemic, alongside the severe mobility restrictions that ensued, has impacted information access on Wikipedia, the world's largest online encyclopedia. A longitudinal analysis that combines pageview statistics for 12 Wikipedia language editions with mobility reports published by Apple and Google reveals massive shifts in the volume and nature of information seeking patterns during the pandemic. Interestingly, while we observe a transient increase in Wikipedia's pageview volume following mobility restrictions, the nature of information sought was impacted more permanently. These changes are most pronounced for language editions associated with countries where the most severe mobility restrictions were implemented. We also find that articles belonging to different topics behaved differently; e.g., attention towards entertainment-related topics is lingering and even increasing, while the interest in health- and biology-related topics was either small or transient. Our results highlight the utility of Wikipedia for studying how the pandemic is affecting people's needs, interests, and concerns.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
177,642
2011.01776
Leveraging Activity Recognition to Enable Protective Behavior Detection in Continuous Data
Protective behavior exhibited by people with chronic pain (CP) during physical activities is the key to understanding their physical and emotional states. Existing automatic protective behavior detection (PBD) methods rely on pre-segmentation of activities predefined by users. However, in real life, people perform activities casually. Therefore, where those activities present difficulties for people with chronic pain, technology-enabled support should be delivered continuously and automatically adapted to activity type and occurrence of protective behavior. Hence, to facilitate ubiquitous CP management, it becomes critical to enable accurate PBD over continuous data. In this paper, we propose to integrate human activity recognition (HAR) with PBD via a novel hierarchical HAR-PBD architecture comprising graph-convolution and long short-term memory (GC-LSTM) networks, and alleviate class imbalances using a class-balanced focal categorical-cross-entropy (CFCC) loss. Through in-depth evaluation of the approach using a CP patients' dataset, we show that the leveraging of HAR, GC-LSTM networks, and CFCC loss leads to clear increase in PBD performance against the baseline (macro F1 score of 0.81 vs. 0.66 and precision-recall area-under-the-curve (PR-AUC) of 0.60 vs. 0.44). We conclude by discussing possible use cases of the hierarchical architecture in CP management and beyond. We also discuss current limitations and ways forward.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
204,702
2007.08341
Zero Correlation Zone Sequences With Flexible Block-Repetitive Spectral Constraints
A general construction of a set of time-domain sequences with sparse periodic correlation functions, having multiple segments of consecutive zero-values, i.e. multiple zero correlation zones (ZCZs), is presented. All such sequences have a common and block-repetitive structure of the positions of zeros in their Discrete Fourier Transform (DFT) sequences, where the exact positions of zeros in a DFT sequence do not impact the positions and sizes of ZCZs. This property offers a completely new degree of flexibility in designing signals with good correlation properties under various spectral constraints. The non-zero values of the DFT sequences are determined by the corresponding frequency-domain modulation sequences, constructed as the element-by-element product of two component sequences: a "long" one, which is common to the set of time-domain sequences, and which controls the peak-to-average power ratio (PAPR) properties of the time-domain sequences; and a "short" one, periodically extended to match the length of the "long" component sequence, which controls the non-zero crosscorrelation values of all time-domain sequences. It is shown that 0 dB PAPR of time-domain sequences can be obtained if the "long" frequency-domain component sequence is selected to be a modulatable constant amplitude zero autocorrelation (MCAZAC) sequence. A generalized and simplified unified construction of MCAZAC sequences is presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
187,596
2305.09662
Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation
Text-guided human motion generation has drawn significant interest because of its impactful applications spanning animation and robotics. Recently, application of diffusion models for motion generation has enabled improvements in the quality of generated motions. However, existing approaches are limited by their reliance on relatively small-scale motion capture data, leading to poor performance on more diverse, in-the-wild prompts. In this paper, we introduce Make-An-Animation, a text-conditioned human motion generation model which learns more diverse poses and prompts from large-scale image-text datasets, enabling significant improvement in performance over prior works. Make-An-Animation is trained in two stages. First, we train on a curated large-scale dataset of (text, static pseudo-pose) pairs extracted from image-text datasets. Second, we fine-tune on motion capture data, adding additional layers to model the temporal dimension. Unlike prior diffusion models for motion generation, Make-An-Animation uses a U-Net architecture similar to recent text-to-video generation models. Human evaluation of motion realism and alignment with input text shows that our model reaches state-of-the-art performance on text-to-motion generation.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
364,727
2108.10803
Reducing Exposure Bias in Training Recurrent Neural Network Transducers
When recurrent neural network transducers (RNNTs) are trained using the typical maximum likelihood criterion, the prediction network is trained only on ground truth label sequences. This leads to a mismatch during inference, known as exposure bias, when the model must deal with label sequences containing errors. In this paper we investigate approaches to reducing exposure bias in training to improve the generalization of RNNT models for automatic speech recognition (ASR). A label-preserving input perturbation to the prediction network is introduced. The input token sequences are perturbed using SwitchOut and scheduled sampling based on an additional token language model. Experiments conducted on the 300-hour Switchboard dataset demonstrate their effectiveness. By reducing the exposure bias, we show that we can further improve the accuracy of a high-performance RNNT ASR model and obtain state-of-the-art results on the 300-hour Switchboard dataset.
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
252,011
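The label-preserving input perturbation described above can be sketched as a per-token replacement rule. The paper samples replacements via SwitchOut and scheduled sampling driven by an additional token language model; the uniform draw from the vocabulary below is a simplifying stand-in.

```python
import random

def perturb_tokens(tokens, vocab, p=0.15, seed=0):
    """Replace each prediction-network input token with a random vocabulary
    token with probability p, leaving the training targets untouched
    (hence "label-preserving"). Uniform sampling is an assumption here."""
    rng = random.Random(seed)
    return [rng.choice(vocab) if rng.random() < p else t for t in tokens]

labels = ["hello", "world", "how", "are", "you"]
print(perturb_tokens(labels, vocab=["a", "b", "c"], p=0.5))
print(perturb_tokens(labels, vocab=["a", "b", "c"], p=0.0))  # unchanged
```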
cs/0701038
Approximate Eigenstructure of LTV Channels with Compactly Supported Spreading
In this article we obtain estimates on the approximate eigenstructure of channels with a spreading function supported only on a set of finite measure $|U|$. Because in typical applications like wireless communication the spreading function is a random process corresponding to a random Hilbert--Schmidt channel operator $\mathbf{H}$, we measure this approximation in terms of the ratio of the $p$-norm of the deviation from variants of the Weyl symbol calculus to the $a$-norm of the spreading function itself. This generalizes recent results obtained for the case $p=2$ and $a=1$. We provide a general approach to this topic and then consider operators with $|U|<\infty$ in more detail. We show the relation to pulse shaping and weighted norms of ambiguity functions. Finally we derive several necessary conditions on $|U|$ such that the approximation error is below certain levels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
540,021
2008.06081
Adversarial Training and Provable Robustness: A Tale of Two Objectives
We propose a principled framework that combines adversarial training and provable robustness verification for training certifiably robust neural networks. We formulate the training problem as a joint optimization problem with both empirical and provable robustness objectives and develop a novel gradient-descent technique that can eliminate bias in stochastic multi-gradients. We perform both a theoretical analysis of the convergence of the proposed technique and an experimental comparison with the state of the art. Results on MNIST and CIFAR-10 show that our method can consistently match or outperform prior approaches for provable l-infinity robustness. Notably, we achieve 6.60% verified test error on MNIST at epsilon = 0.3, and 66.57% on CIFAR-10 with epsilon = 8/255.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
191,691
1912.07360
Non-intrusive Load Monitoring via Multi-label Sparse Representation based Classification
This work follows the approach of multi-label classification for non-intrusive load monitoring (NILM). We modify the popular sparse representation based classification (SRC) approach (developed for single-label classification) to solve multi-label classification problems. Results on the benchmark REDD and Pecan Street datasets show significant improvement over state-of-the-art techniques with a small volume of training data.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
157,585
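The SRC idea referenced above can be sketched in two steps: sparsely code a test sample over a dictionary of training atoms, then score each label by how well its atoms alone reconstruct the sample. The greedy OMP solver, the toy dictionary, and the appliance names below are illustrative assumptions, not the paper's method (which uses an l1 formulation extended to multiple labels).

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit): pick the atom
    most correlated with the residual, refit by least squares, repeat."""
    residual, idx = y.astype(float).copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def class_residuals(D, labels, y, k=2):
    """Per-label reconstruction residuals: labels whose atoms explain y
    best (smallest residual) would be predicted."""
    x = omp(D, y, k)
    lab_arr = np.array(labels)
    return {lab: float(np.linalg.norm(y - D[:, lab_arr == lab] @ x[lab_arr == lab]))
            for lab in set(labels)}

D = np.eye(4)                                   # toy dictionary: one atom per column
labels = ["fridge", "fridge", "kettle", "kettle"]
res = class_residuals(D, labels, np.array([1.0, 0.5, 0.0, 0.0]))
print(res)   # the "fridge" atoms reconstruct y exactly; "kettle" does not
```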
2106.13054
Weighted majority tournaments and Kemeny ranking with 2-dimensional Euclidean preferences
The assumption that voters' preferences share some common structure is a standard way to circumvent NP-hardness results in social choice problems. While the Kemeny ranking problem is NP-hard in the general case, it is known to become easy if the preferences are 1-dimensional Euclidean. In this note, we prove that the Kemeny ranking problem remains NP-hard for $k$-dimensional Euclidean preferences with $k\!\ge\!2$ under norms $\ell_1$, $\ell_2$ and $\ell_\infty$, by showing that any weighted tournament (resp. weighted bipartite tournament) with weights of same parity (resp. even weights) is inducible as the weighted majority tournament of a profile of 2-Euclidean preferences under norm $\ell_2$ (resp. $\ell_1,\ell_{\infty}$), computable in polynomial time. More generally, this result regarding weighted tournaments implies, essentially, that hardness results relying on the (weighted) majority tournament that hold in the general case (e.g., NP-hardness of Slater ranking) are still true for 2-dimensional Euclidean preferences.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
242,945
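For concreteness, the Kemeny ranking problem discussed above minimizes the total number of pairwise disagreements with the voters' rankings. A brute-force solver (exponential in the number of alternatives, consistent with the NP-hardness results) makes the objective explicit:

```python
from itertools import permutations

def kemeny_ranking(profile):
    """Exact Kemeny ranking by exhaustive search over all orderings.
    A profile is a list of rankings, each a tuple of the same alternatives."""
    alts = profile[0]
    def cost(ranking):
        pos = {a: i for i, a in enumerate(ranking)}
        total = 0
        for vote in profile:
            vpos = {a: i for i, a in enumerate(vote)}
            # count pairs ordered one way by `ranking` and the other by `vote`
            total += sum(1 for a in alts for b in alts
                         if pos[a] < pos[b] and vpos[a] > vpos[b])
        return total
    return min(permutations(alts), key=cost)

votes = [("a", "b", "c"), ("a", "b", "c"), ("b", "a", "c")]
print(kemeny_ranking(votes))   # ('a', 'b', 'c')
```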
0912.5043
Bit Error Rate is Convex at High SNR
Motivated by a wide-spread use of convex optimization techniques, convexity properties of bit error rate of the maximum likelihood detector operating in the AWGN channel are studied for arbitrary constellations and bit mappings, which may also include coding under maximum-likelihood decoding. Under this generic setting, the pairwise probability of error and bit error rate are shown to be convex functions of the SNR in the high SNR regime with explicitly-determined boundary. The bit error rate is also shown to be a convex function of the noise power in the low noise/high SNR regime.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,223
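As a concrete instance of the convexity claim above, a single pairwise error term modelled as $Q(\sqrt{\alpha\gamma})$ (with $\alpha > 0$ a constellation-dependent constant and $\gamma$ the SNR) can be checked directly; this is a standard computation, not reproduced from the paper:

```latex
f(\gamma) = Q\!\left(\sqrt{\alpha\gamma}\right), \qquad
f''(\gamma) = \frac{\alpha^{2}\,(\alpha\gamma + 1)}{4\,(\alpha\gamma)^{3/2}}\,
              \varphi\!\left(\sqrt{\alpha\gamma}\right) > 0, \qquad
\varphi(x) = \tfrac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}.
```

A single such term is therefore convex in $\gamma$ for all $\gamma > 0$; the paper's contribution is that the full bit error rate, a combination of many such terms for arbitrary constellations and bit mappings, is convex above an explicitly determined SNR boundary.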
2410.15224
Robust Low-rank Tensor Train Recovery
Tensor train (TT) decomposition represents an $N$-order tensor using $O(N)$ matrices (i.e., factors) of small dimensions, achieved through products among these factors. Due to its compact representation, TT decomposition has found wide applications, including various tensor recovery problems in signal processing and quantum information. In this paper, we study the problem of reconstructing a TT format tensor from measurements that are contaminated by outliers with arbitrary values. Given the vulnerability of smooth formulations to corruptions, we use an $\ell_1$ loss function to enhance robustness against outliers. We first establish the $\ell_1/\ell_2$-restricted isometry property (RIP) for Gaussian measurement operators, demonstrating that the information in the TT format tensor can be preserved using a number of measurements that grows linearly with $N$. We also prove the sharpness property for the $\ell_1$ loss function optimized over TT format tensors. Building on the $\ell_1/\ell_2$-RIP and sharpness property, we then propose two complementary methods to recover the TT format tensor from the corrupted measurements: the projected subgradient method (PSubGM), which optimizes over the entire tensor, and the factorized Riemannian subgradient method (FRSubGM), which optimizes directly over the factors. Compared to PSubGM, the factorized approach FRSubGM significantly reduces the memory cost at the expense of a slightly slower convergence rate. Nevertheless, we show that both methods, with diminishing step sizes, converge linearly to the ground-truth tensor given an appropriate initialization, which can be obtained by a truncated spectral method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,426
cmp-lg/9707014
Towards a PURE Spoken Dialogue System for Information Access
With the rapid explosion of the World Wide Web, it is becoming increasingly possible to easily acquire a wide variety of information such as flight schedules, yellow pages, used car prices, current stock prices, entertainment event schedules, account balances, etc. It would be very useful to have spoken dialogue interfaces for such information access tasks. We identify portability, usability, robustness, and extensibility as the four primary design objectives for such systems. In other words, the objective is to develop a PURE (Portable, Usable, Robust, Extensible) system. A two-layered dialogue architecture for spoken dialogue systems is presented where the upper layer is domain-independent and the lower layer is domain-specific. We are implementing this architecture in a mixed-initiative system that accesses flight arrival/departure information from the World Wide Web.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,781
2208.08655
Generating Synthetic Clinical Data that Capture Class Imbalanced Distributions with Generative Adversarial Networks: Example using Antiretroviral Therapy for HIV
Clinical data usually cannot be freely distributed due to their highly confidential nature and this hampers the development of machine learning in the healthcare domain. One way to mitigate this problem is by generating realistic synthetic datasets using generative adversarial networks (GANs). However, GANs are known to suffer from mode collapse thus creating outputs of low diversity. This lowers the quality of the synthetic healthcare data, and may cause it to omit patients of minority demographics or neglect less common clinical practices. In this paper, we extend the classic GAN setup with an additional variational autoencoder (VAE) and include an external memory to replay latent features observed from the real samples to the GAN generator. Using antiretroviral therapy for human immunodeficiency virus (ART for HIV) as a case study, we show that our extended setup overcomes mode collapse and generates a synthetic dataset that accurately describes severely imbalanced class distributions commonly found in real-world clinical variables. In addition, we demonstrate that our synthetic dataset is associated with a very low patient disclosure risk, and that it retains a high level of utility from the ground truth dataset to support the development of downstream machine learning algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
313,430
2206.10592
Identifying Electrocardiogram Abnormalities Using a Handcrafted-Rule-Enhanced Neural Network
A large number of people suffer from life-threatening cardiac abnormalities, and electrocardiogram (ECG) analysis is beneficial to determining whether an individual is at risk of such abnormalities. Automatic ECG classification methods, especially the deep learning based ones, have been proposed to detect cardiac abnormalities using ECG records, showing good potential to improve clinical diagnosis and help early prevention of cardiovascular diseases. However, the predictions of the known neural networks still do not satisfactorily meet the needs of clinicians, and this phenomenon suggests that some information used in clinical diagnosis may not be well captured and utilized by these methods. In this paper, we introduce some rules into convolutional neural networks, which help present clinical knowledge to deep learning based ECG analysis, in order to improve automated ECG diagnosis performance. Specifically, we propose a Handcrafted-Rule-enhanced Neural Network (called HRNN) for ECG classification with standard 12-lead ECG input, which consists of a rule inference module and a deep learning module. Experiments on two large-scale public ECG datasets show that our new approach considerably outperforms existing state-of-the-art methods. Further, our proposed approach not only can improve the diagnosis performance, but also can assist in detecting mislabelled ECG samples. Our codes are available at https://github.com/alwaysbyx/ecg_processing.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
303,967
2012.01547
Meshless physics-informed deep learning method for three-dimensional solid mechanics
Deep learning and the collocation method are merged and used to solve partial differential equations describing structures' deformation. We have considered different types of materials: linear elasticity, hyperelasticity (neo-Hookean) with large deformation, and von Mises plasticity with isotropic and kinematic hardening. The performance of this deep collocation method (DCM) depends on the architecture of the neural network and the corresponding hyperparameters. The presented DCM is meshfree and avoids any spatial discretization, which is usually needed for the finite element method (FEM). We show that the DCM can capture the response qualitatively and quantitatively, without the need for any data generation using other numerical methods such as the FEM. Data generation usually is the main bottleneck in most data-driven models. The deep learning model is trained to learn the model's parameters yielding accurate approximate solutions. Once the model is properly trained, solutions can be obtained almost instantly at any point in the domain, given its spatial coordinates. Therefore, the deep collocation method is potentially a promising standalone technique to solve partial differential equations involved in the deformation of materials and structural systems as well as other physical phenomena.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,446
2005.13094
Salience and Market-aware Skill Extraction for Job Targeting
At LinkedIn, we want to create economic opportunity for everyone in the global workforce. To make this happen, LinkedIn offers a reactive Job Search system, and a proactive Jobs You May Be Interested In (JYMBII) system to match the best candidates with their dream jobs. One of the most challenging tasks for developing these systems is to properly extract important skill entities from job postings and then target members with matched attributes. In this work, we show that the commonly used text-based \emph{salience and market-agnostic} skill extraction approach is sub-optimal because it only considers skill mention and ignores the salient level of a skill and its market dynamics, i.e., the market supply and demand influence on the importance of skills. To address the above drawbacks, we present \model, our deployed \emph{salience and market-aware} skill extraction system. The proposed \model ~shows promising results in improving the online performance of job recommendation (JYMBII) ($+1.92\%$ job apply) and skill suggestions for job posters ($-37\%$ suggestion rejection rate). Lastly, we present case studies to show interesting insights that contrast the traditional skill recognition method with the proposed \model~at the occupation, industry, country, and individual skill levels. Based on the above promising results, we deployed the \model ~online to extract job targeting skills for all $20$M job postings served at LinkedIn.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
178,896
2212.07098
Interactive Sketching of Mannequin Poses
It can be easy and even fun to sketch humans in different poses. In contrast, creating those same poses on a 3D graphics "mannequin" is comparatively tedious. Yet 3D body poses are necessary for various downstream applications. We seek to preserve the convenience of 2D sketching while giving users of different skill levels the flexibility to accurately and more quickly pose\slash refine a 3D mannequin. At the core of the interactive system, we propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style. Training such a model is challenging because of artist variability, a lack of sketch training data with corresponding ground truth 3D poses, and the high dimensionality of human pose-space. Our unique approach to synthesizing vector graphics training data underpins our integrated ML-and-kinematics system. We validate the system by tightly coupling it with a user interface, and by performing a user study, in addition to quantitative comparisons.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
336,304
2403.02995
Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking which could be detrimental to business operations. Consequently, the detection of these URLs is of crucial importance; however, current Machine Learning (ML) models are susceptible to backdoor attacks. These attacks involve manipulating a small percentage of training data labels, such as Label Flipping (LF), which changes benign labels to malicious ones and vice versa. This manipulation results in misclassification and leads to incorrect model behavior. Therefore, integrating defense mechanisms into the architecture of ML models becomes an imperative consideration to fortify against potential attacks. The focus of this study is on backdoor attacks in the context of URL detection using ensemble trees. By illuminating the motivations behind such attacks, highlighting the roles of attackers, and emphasizing the critical importance of effective defense strategies, this paper contributes to the ongoing efforts to fortify ML models against adversarial threats within the ML domain in network security. We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels with the aim of mitigating backdoor attacks on ensemble tree classifiers. We conducted a case study using the Alexa and Phishing Site URL datasets and showed that LF attacks can be addressed using our proposed defense mechanism. Our experimental results prove that the LF attack achieved an Attack Success Rate (ASR) between 50-65% within 2-5%, and the innovative defense method successfully detected poisoned labels with an accuracy of up to 100%.
false
false
false
false
true
false
true
false
false
false
false
false
true
true
false
false
false
true
435,030
2304.00551
Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks
Coordination in a large number of networked robots is a challenging task, especially when robots are constantly moving around the environment and there are malicious attacks within the network. Various approaches in the literature exist for detecting malicious robots, such as message sampling or suspicious behavior analysis. However, these approaches require every robot to sample every other robot in the network, leading to a slow detection process that degrades team performance. This paper introduces a method that significantly decreases the detection time for legitimate robots to identify malicious robots in a scenario where legitimate robots are randomly moving around the environment. Our method leverages the concept of ``Dynamic Crowd Vetting" by utilizing observations from random encounters and trusted neighboring robots' opinions to quickly improve the accuracy of detecting malicious robots. The key intuition is that as long as each legitimate robot accurately estimates the legitimacy of at least some fixed subset of the team, the second-hand information they receive from trusted neighbors is enough to correct any misclassifications and provide accurate trust estimations of the rest of the team. We show that the size of this fixed subset can be characterized as a function of fundamental graph and random walk properties. Furthermore, we formally show that as the number of robots in the team increases the detection time remains constant. We develop a closed form expression for the critical number of time-steps required for our algorithm to successfully identify the true legitimacy of each robot to within a specified failure probability. Our theoretical results are validated through simulations demonstrating significant reductions in detection time when compared to previous works that do not leverage trusted neighbor information.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
355,730
2106.11189
Well-tuned Simple Nets Excel on Tabular Datasets
Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
242,308
1910.11472
Learning Domain Invariant Representations for Child-Adult Classification from Speech
Diagnostic procedures for ASD (autism spectrum disorder) involve semi-naturalistic interactions between the child and a clinician. Computational methods to analyze these sessions require an end-to-end speech and language processing pipeline that goes from raw audio to clinically-meaningful behavioral features. An important component of this pipeline is the ability to automatically detect who is speaking when, i.e., perform child-adult speaker classification. This binary classification task is often confounded due to variability associated with the participants' speech and background conditions. Further, scarcity of training data often restricts direct application of conventional deep learning methods. In this work, we address two major sources of variability - age of the child and data source collection location - using domain adversarial learning which does not require labeled target domain data. We use two methods, generative adversarial training with inverted label loss and gradient reversal layer, to learn speaker embeddings invariant to the above sources of variability, and analyze different conditions under which the proposed techniques improve over conventional learning methods. Using a large corpus of ADOS-2 (autism diagnostic observation schedule, 2nd edition) sessions, we demonstrate up to 13.45% and 6.44% relative improvements over conventional learning methods.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
150,783
2105.08253
Finding a Needle in a Haystack: Tiny Flying Object Detection in 4K Videos using a Joint Detection-and-Tracking Approach
Detecting tiny objects in a high-resolution video is challenging because the visual information is scarce and unreliable. Specifically, the challenge includes very low resolution of the objects, MPEG artifacts due to compression and a large searching area with many hard negatives. Tracking is equally difficult because of the unreliable appearance, and the unreliable motion estimation. Luckily, we found that combining these two challenging tasks yields mutual benefits. Following this idea, in this paper, we present a neural network model called the Recurrent Correlational Network, where detection and tracking are jointly performed over a multi-frame representation learned through a single, trainable, and end-to-end network. The framework exploits a convolutional long short-term memory network for learning informative appearance changes for detection, while the learned representation is shared in tracking for enhancing its performance. In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements in detection performance over deep single-frame detectors and existing motion-based detectors. Furthermore, our network performs as well as state-of-the-art generic object trackers when it was evaluated as a tracker on a bird image dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
235,704
2006.15495
Model-Driven Deep Learning for Massive MU-MIMO with Finite-Alphabet Precoding
Massive multiuser multiple-input multiple-output (MU-MIMO) has been the mainstream technology in fifth-generation wireless systems. To reduce high hardware costs and power consumption in massive MU-MIMO, low-resolution digital-to-analog converters (DACs) for each antenna and radio frequency (RF) chain are used in downlink transmission, which brings challenges for precoding design. To circumvent these obstacles, we develop a model-driven deep learning (DL) network for massive MU-MIMO with finite-alphabet precoding in this article. The architecture of the network is specially designed by unfolding an iterative algorithm. Compared with the traditional state-of-the-art techniques, the proposed DL-based precoder shows significant advantages in performance, complexity, and robustness to channel estimation error under Rayleigh fading channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
184,525
2212.05916
NETpred: Network-based modeling and prediction of multiple connected market indices
Market prediction plays a major role in supporting financial decisions. An emerging approach in this domain is to use graphical modeling and analysis for the prediction of upcoming market index fluctuations. One important question in this domain is how to construct an appropriate graphical model of the data that can be effectively used by a semi-supervised GNN to predict index fluctuations. In this paper, we introduce a framework called NETpred that generates a novel heterogeneous graph representing multiple related indices and their stocks by using several stock-stock and stock-index relation measures. It then thoroughly selects a diverse set of representative nodes that cover different parts of the state space and whose price movements are accurately predictable. By assigning initial predicted labels to such a set of nodes, NETpred makes sure that the subsequent GCN model can be successfully trained using a semi-supervised learning process. The resulting model is then used to predict the stock labels which are finally aggregated to infer the labels for all the index nodes in the graph. Our comprehensive set of experiments shows that NETpred improves the performance of the state-of-the-art baselines by 3%-5% in terms of F-score measure on different well-known data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
335,937
1406.1247
Shared Representation Learning for Heterogeneous Face Recognition
After intensive research, heterogeneous face recognition is still a challenging problem. The main difficulties are owing to the complex relationship between heterogeneous face image spaces. The heterogeneity is always tightly coupled with other variations, which makes the relationship of heterogeneous face images highly nonlinear. Many excellent methods have been proposed to model the nonlinear relationship, but they are apt to overfit to the training set, due to limited samples. Inspired by the unsupervised algorithms in deep learning, this paper proposes a novel framework for heterogeneous face recognition. We first extract Gabor features at some localized facial points, and then use Restricted Boltzmann Machines (RBMs) to learn a shared representation locally to remove the heterogeneity around each facial point. Finally, the shared representations of local RBMs are connected together and processed by PCA. Two problems (Sketch-Photo and NIR-VIS) and three databases are selected to evaluate the proposed method. For the Sketch-Photo problem, we obtain perfect results on the CUFS database. For the NIR-VIS problem, we produce new state-of-the-art performance on the CASIA HFB and NIR-VIS 2.0 databases.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
33,617
2212.04362
CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution
Learning continuous image representations is recently gaining popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images with arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from some limitations: i) it has no learnable parameters and it neglects the similarity of the visual features; ii) it has a limited receptive field and cannot ensemble relevant features in a large field which are important in an image. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate CiaoSR significantly outperforms the existing single image SR methods with the same backbone. In addition, CiaoSR also achieves the state-of-the-art performance on the arbitrary-scale SR task. The effectiveness of the method is also demonstrated on the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve the SR performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
335,421
1707.01356
On the classification of $\mathbb{Z}_4$-codes
In this note, we study the classification of $\mathbb{Z}_4$-codes. For some special cases $(k_1,k_2)$, by hand, we give a classification of $\mathbb{Z}_4$-codes of length $n$ and type $4^{k_1}2^{k_2}$ satisfying a certain condition. Our exhaustive computer search completes the classification of $\mathbb{Z}_4$-codes of lengths up to $7$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
76,525
2407.09698
RIO-CPD: A Riemannian Geometric Method for Correlation-aware Online Change Point Detection
Change point detection aims to identify abrupt shifts occurring at multiple points within a data sequence. This task becomes particularly challenging in the online setting, where different types of changes can occur, including shifts in both the marginal and joint distributions of the data. In this paper, we address these challenges by tracking the Riemannian geometry of correlation matrices, allowing Riemannian metrics to compute the geodesic distance as an accurate measure of correlation dynamics. We introduce Rio-CPD, a non-parametric, correlation-aware online change point detection framework that integrates the Riemannian geometry of the manifold of symmetric positive definite matrices with the cumulative sum (CUSUM) statistic for detecting change points. Rio-CPD employs a novel CUSUM design by computing the geodesic distance between current observations and the Fr\'echet mean of prior observations. With appropriate choices of Riemannian metrics, Rio-CPD offers a simple yet effective and computationally efficient algorithm. Experimental results on both synthetic and real-world datasets demonstrate that Rio-CPD outperforms existing methods on detection accuracy, average detection delay and efficiency.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
472,685
2003.09861
A SIDARTHE Model of COVID-19 Epidemic in Italy
In late December 2019, a novel strand of Coronavirus (SARS-CoV-2) causing a severe, potentially fatal respiratory syndrome (COVID-19) was identified in Wuhan, Hubei Province, China and is causing outbreaks in multiple world countries, soon becoming a pandemic. Italy has now become the hardest-hit country outside of Asia: on March 16, 2020, the Italian Civil Protection documented a total of 27980 confirmed cases and 2158 deaths of people tested positive for SARS-CoV-2. In the context of an emerging infectious disease outbreak, it is of paramount importance to predict the trend of the epidemic in order to plan an effective control strategy and to determine its impact. This paper proposes a new epidemic model that discriminates between infected individuals depending on whether they have been diagnosed and on the severity of their symptoms. The distinction between diagnosed and non-diagnosed is important because non-diagnosed individuals are more likely to spread the infection than diagnosed ones, since the latter are typically isolated, and can explain misperceptions of the case fatality rate and of the seriousness of the epidemic phenomenon. Being able to predict the amount of patients that will develop life-threatening symptoms is important since the disease frequently requires hospitalisation (and even Intensive Care Unit admission) and challenges the healthcare system capacity. We show how the basic reproduction number can be redefined in the new framework, thus capturing the potential for epidemic containment. Simulation results are compared with real data on the COVID-19 epidemic in Italy, to show the validity of the model and compare different possible predicted scenarios depending on the adopted countermeasures.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
169,166
2312.17235
A Simple LLM Framework for Long-Range Video Question-Answering
We present LLoVi, a language-based framework for long-range video question-answering (LVQA). Unlike prior long-range video understanding methods, which are often costly and require specialized long-range video modeling design (e.g., memory queues, state-space layers, etc.), our approach uses a frame/clip-level visual captioner (e.g., BLIP2, LaViLa, LLaVA) coupled with a Large Language Model (GPT-3.5, GPT-4) leading to a simple yet surprisingly effective LVQA framework. Specifically, we decompose short and long-range modeling aspects of LVQA into two stages. First, we use a short-term visual captioner to generate textual descriptions of short video clips (0.5-8s in length) densely sampled from a long input video. Afterward, an LLM aggregates the densely extracted short-term captions to perform long-range temporal reasoning needed to understand the whole video and answer a question. To analyze what makes our simple framework so effective, we thoroughly evaluate various components of our system. Our empirical analysis reveals that the choice of the visual captioner and LLM is critical for good LVQA performance. Furthermore, we show that a specialized prompt that asks the LLM first to summarize the noisy short-term visual captions and then answer a given input question leads to a significant LVQA performance boost. On EgoSchema, which is best known as a very long-form video question-answering benchmark, our method achieves 50.3% accuracy, outperforming the previous best-performing approach by 18.1% (absolute gain). In addition, our approach outperforms the previous state-of-the-art by 4.1% and 3.1% on NeXT-QA and IntentQA. We also extend LLoVi to grounded LVQA and show that it outperforms all prior methods on the NeXT-GQA dataset. We will release our code at https://github.com/CeeZh/LLoVi.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
418,658
2008.00821
Experimental results on palmvein-based personal recognition by multi-snapshot fusion of textural features
In this paper, we investigate multiple snapshot fusion of textural features for palmvein recognition including identification and verification. Although the literature proposed several approaches for palmvein recognition, the palmvein performance is still affected by identification and verification errors. As well-known, palmveins are usually described by line-based methods which enhance the vein flow. This is claimed to be unique from person to person. However, palmvein images are also characterized by texture that can be pointed out by textural features, which rely on recent and efficient hand-crafted algorithms such as Local Binary Patterns, Local Phase Quantization, Local Ternary Pattern, Local Directional Pattern, and Binarized Statistical Image Features (LBP, LPQ, LTP, LDP and BSIF, respectively), among others. Finally, they can be easily managed at feature-level fusion, when more than one sample can be acquired for recognition. Therefore, multi-snapshot fusion can be adopted for exploiting these features' complementarity. Our goal in this paper is to show that this is confirmed for palmvein recognition, thus allowing to achieve very high recognition rates on a well-known benchmark data set.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
190,126
cmp-lg/9706003
Three New Probabilistic Models for Dependency Parsing: An Exploration
After presenting a novel O(n^3) parsing algorithm for dependency grammar, we develop three contrasting ways to stochasticize it. We propose (a) a lexical affinity model where words struggle to modify each other, (b) a sense tagging model where words fluctuate randomly in their selectional preferences, and (c) a generative model where the speaker fleshes out each word's syntactic and conceptual structure without regard to the implications for the hearer. We also give preliminary empirical results from evaluating the three models' parsing performance on annotated Wall Street Journal training text (derived from the Penn Treebank). In these results, the generative (i.e., top-down) model performs significantly better than the others, and does about equally well at assigning part-of-speech tags.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,741
2206.14801
Meta-Learning over Time for Destination Prediction Tasks
A need to understand and predict vehicles' behavior underlies both public and private goals in the transportation domain, including urban planning and management, ride-sharing services, and intelligent transportation systems. Individuals' preferences and intended destinations vary throughout the day, week, and year: for example, bars are most popular in the evenings, and beaches are most popular in the summer. Despite this principle, we note that recent studies on a popular benchmark dataset from Porto, Portugal have found, at best, only marginal improvements in predictive performance from incorporating temporal information. We propose an approach based on hypernetworks, a variant of meta-learning ("learning to learn") in which a neural network learns to change its own weights in response to an input. In our case, the weights responsible for destination prediction vary with the metadata, in particular the time, of the input trajectory. The time-conditioned weights notably improve the model's error relative to ablation studies and comparable prior work, and we confirm our hypothesis that knowledge of time should improve prediction of a vehicle's intended destination.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
305,401
1910.02851
ER-index: a referential index for encrypted genomic databases
Huge DBMSs storing genomic information are being created and engineered for doing large-scale, comprehensive and in-depth analysis of human beings and their diseases. However, recent regulations like the GDPR require that sensitive data are stored and elaborated thanks to privacy-by-design methods and software. We designed and implemented ER-index, a new full-text index in minute space which was optimized for compressing and encrypting collections of genomic sequences, and for performing on them fast pattern-search queries. Our new index complements the E2FM-index, which was introduced to compress and encrypt collections of nucleotide sequences without relying on a reference sequence. When used on collections of highly similar sequences, the ER-index achieves compression ratios an order of magnitude smaller than those achieved with the E2FM-index, while maintaining its very good search performance. Moreover, thanks to the ER-index multi-user and multiple-keys encryption model, a single index can store the sequences related to a population of individuals so that users may perform search operations only on the sequences to which they were granted access. The ER-index C++ source code plus scripts and data to assess the tool performance are available at: https://github.com/EncryptedIndexes/erindex.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
148,355
1405.4008
CDF-Intervals: A Reliable Framework to Reason about Data with Uncertainty
This research introduces a new constraint domain for reasoning about data with uncertainty. It extends convex modeling with the notion of p-box to gain additional quantifiable information on the data whereabouts. Unlike existing approaches, the p-box envelops an unknown probability instead of approximating its representation. The p-box bounds are uniform cumulative distribution functions (cdf) in order to employ linear computations in the probabilistic domain. The reasoning by means of p-box cdf-intervals is an interval computation which is exerted on the real domain then it is projected onto the cdf domain. This operation conveys additional knowledge represented by the obtained probabilistic bounds. Empirical evaluation shows that, with minimal overhead, the output solution set realizes a full enclosure of the data along with tighter bounds on its probabilistic distributions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
33,131
2309.09492
Target-aware Bi-Transformer for Few-shot Segmentation
Traditional semantic segmentation tasks require a large number of labels and struggle to identify unlearned categories. Few-shot semantic segmentation (FSS) aims to use a limited number of labeled support images to segment objects of new classes, which is very practical in the real world. Previous research was primarily based on prototypes or correlations. Since colors, textures, and styles are similar within the same image, we argue that the query image can be regarded as its own support image. In this paper, we propose the Target-aware Bi-Transformer Network (TBTNet) to treat the support images and the query image equivalently. We also design a Target-aware Transformer Layer (TTL) to distill correlations and force the model to focus on foreground information. It treats the hypercorrelation as a feature, resulting in a significant reduction in the number of feature channels. Benefiting from this characteristic, our model is the lightest to date, with only 0.4M learnable parameters. Furthermore, TBTNet converges in only 10% to 25% of the training epochs required by traditional methods. The excellent performance on the standard FSS benchmarks PASCAL-5i and COCO-20i proves the efficiency of our method. Extensive ablation studies were also carried out to evaluate the effectiveness of the Bi-Transformer architecture and the TTL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,632
2409.13282
Velocity Field: An Informative Traveling Cost Representation for Trajectory Planning
Trajectory planning involves generating a series of space points to be followed in the near future. However, due to the complex and uncertain nature of the driving environment, it is impractical for autonomous vehicles (AVs) to exhaustively design planning rules for optimizing future trajectories. To address this issue, we propose a local map representation method called Velocity Field. This approach provides heading and velocity priors for trajectory planning tasks, simplifying the planning process in complex urban driving. The heading and velocity priors can be learned from demonstrations of human drivers using our proposed loss. Additionally, we developed an iterative sampling-based planner to train and compare the differences between local map representations. We investigated how different local map representation forms affect planning performance on a real-world dataset. Compared to learned rasterized cost maps, our method demonstrated greater reliability and computational efficiency.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
489,926
2310.00221
Beyond Random Noise: Insights on Anonymization Strategies from a Latent Bandit Study
This paper investigates the issue of privacy in a learning scenario where users share knowledge for a recommendation task. Our study contributes to the growing body of research on privacy-preserving machine learning and underscores the need for tailored privacy techniques that address specific attack patterns rather than relying on one-size-fits-all solutions. We use the latent bandit setting to evaluate the trade-off between privacy and recommender performance by employing various aggregation strategies, such as averaging, nearest neighbor, and clustering combined with noise injection. More specifically, we simulate a linkage attack scenario leveraging publicly available auxiliary information acquired by the adversary. Our results on three open real-world datasets reveal that adding noise using the Laplace mechanism to an individual user's data record is a poor choice. It provides the highest regret for any noise level, relative to de-anonymization probability and the ADS metric. Instead, one should combine noise with appropriate aggregation strategies. For example, using averages from clusters of different sizes provides flexibility not achievable by varying the amount of noise alone. Generally, no single aggregation strategy can consistently achieve the optimum regret for a given desired level of privacy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
395,877
2307.02138
Prompting Diffusion Representations for Cross-Domain Semantic Segmentation
While originally designed for image generation, diffusion models have recently been shown to provide excellent pretrained feature representations for semantic segmentation. Intrigued by this result, we set out to explore how well diffusion-pretrained representations generalize to new domains, a crucial ability for any representation. We find that diffusion-pretraining achieves extraordinary domain generalization results for semantic segmentation, outperforming both supervised and self-supervised backbone networks. Motivated by this, we investigate how to utilize the model's unique ability to take an input prompt, in order to further enhance its cross-domain performance. We introduce a scene prompt and a prompt randomization strategy to help further disentangle the domain-invariant information when training the segmentation head. Moreover, we propose a simple but highly effective approach for test-time domain adaptation, based on learning a scene prompt on the target domain in an unsupervised manner. Extensive experiments conducted on four synthetic-to-real and clear-to-adverse weather benchmarks demonstrate the effectiveness of our approaches. Without resorting to any complex techniques, such as image translation, augmentation, or rare-class sampling, we set a new state-of-the-art on all benchmarks. Our implementation will be publicly available at \url{https://github.com/ETHRuiGong/PTDiffSeg}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
377,599
1705.10754
A Low Dimensionality Representation for Language Variety Identification
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variety (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality --and increasing the big data suitability-- to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
74,466
2301.11709
Semantic Network Model for Sign Language Comprehension
In this study, the authors propose a computational cognitive model for sign language (SL) perception and comprehension, with detailed algorithmic descriptions based on cognitive functionalities in human language processing. The semantic network model (SNM), which represents semantic relations between concepts, is used as a form of knowledge representation. The proposed model is applied to the comprehension of sign language for classifier predicates. The spreading activation search method is initiated by labeling a set of source nodes (e.g. concepts in the semantic network) with weights or "activation" and then iteratively propagating or "spreading" that activation out to other nodes linked to the source nodes. The results demonstrate that the proposed search method improves the performance of sign language comprehension in the SNM.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
342,246
2309.05281
Class-Incremental Grouping Network for Continual Audio-Visual Learning
Continual learning is a challenging problem in which models need to be trained on non-stationary data across sequential tasks for class-incremental learning. While previous methods have focused on using either regularization or rehearsal-based frameworks to alleviate catastrophic forgetting in image classification, they are limited to a single modality and cannot learn compact class-aware cross-modal representations for continual audio-visual learning. To address this gap, we propose a novel class-incremental grouping network (CIGN) that can learn category-wise semantic features to achieve continual audio-visual learning. Our CIGN leverages learnable audio-visual class tokens and audio-visual grouping to continually aggregate class-aware features. Additionally, it utilizes class token distillation and continual grouping to prevent forgetting parameters learned from previous tasks, thereby improving the model's ability to capture discriminative audio-visual categories. We conduct extensive experiments on the VGGSound-Instruments, VGGSound-100, and VGG-Sound Sources benchmarks. Our experimental results demonstrate that CIGN achieves state-of-the-art audio-visual class-incremental learning performance. Code is available at https://github.com/stoneMo/CIGN.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
391,033