Dataset preview: arXiv papers with multi-label category annotations. Schema (column — dtype, observed range):

- id — string, length 9 to 16
- title — string, length 4 to 278
- abstract — string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other — bool, 2 classes each (one column per category label)
- __index_level_0__ — int64, 0 to 541k

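Given the schema above, a small sketch of how the one-column-per-category booleans can be collapsed into a label list per record (the example row is illustrative; only the column names are taken from the schema):

```python
# The 18 boolean label columns listed in the schema above, in order.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def labels_of(row: dict) -> list:
    """Return the list of label columns whose value is True for this row."""
    return [c for c in LABEL_COLUMNS if row.get(c)]

# Illustrative record mirroring the first row below (1908.01997, cs.CV only).
row = {"id": "1908.01997", "cs.CV": True,
       **{c: False for c in LABEL_COLUMNS if c != "cs.CV"}}
print(labels_of(row))  # ['cs.CV']
```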
1908.01997
Learning Cross-Modal Deep Representations for Multi-Modal MR Image Segmentation
Multi-modal magnetic resonance imaging (MRI) is essential in clinics for comprehensive diagnosis and surgical planning. Nevertheless, the segmentation of multi-modal MR images tends to be time-consuming and challenging. Convolutional neural network (CNN)-based multi-modal MR image analysis commonly proceeds with multiple down-sampling streams fused at one or several layers. Although inspiring performance has been achieved, the feature fusion is usually conducted through simple summation or concatenation without optimization. In this work, we propose a supervised image fusion method to selectively fuse the useful information from different modalities and suppress the respective noise signals. Specifically, an attention block is introduced as guidance for the information selection. From the different modalities, one modality that contributes most to the results is selected as the master modality, which supervises the information selection of the other assistant modalities. The effectiveness of the proposed method is confirmed through breast mass segmentation in MR images of two modalities and better segmentation results are achieved compared to the state-of-the-art methods.
categories: cs.CV
__index_level_0__: 140,906

1604.00117
Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding
The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words, which is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques.
categories: cs.CL
__index_level_0__: 53,977

2406.02921
Text Injection for Neural Contextual Biasing
Neural contextual biasing effectively improves automatic speech recognition (ASR) for crucial phrases within a speaker's context, particularly those that are infrequent in the training data. This work proposes contextual text injection (CTI) to enhance contextual ASR. CTI leverages not only the paired speech-text data, but also a much larger corpus of unpaired text to optimize the ASR model and its biasing component. Unpaired text is converted into speech-like representations and used to guide the model's attention towards relevant bias phrases. Moreover, we introduce a contextual text-injected (CTI) minimum word error rate (MWER) training, which minimizes the expected WER caused by contextual biasing when unpaired text is injected into the model. Experiments show that CTI with 100 billion text sentences can achieve up to 43.3% relative WER reduction from a strong neural biasing model. CTI-MWER provides a further relative improvement of 23.5%.
categories: cs.AI, cs.LG, cs.CL, cs.NE
__index_level_0__: 461,006

1905.11828
AsymDPOP: Complete Inference for Asymmetric Distributed Constraint Optimization Problems
Asymmetric distributed constraint optimization problems (ADCOPs) are an emerging model for coordinating agents with personal preferences. However, the existing inference-based complete algorithms which use local eliminations cannot be applied to ADCOPs, as the parent agents are required to transfer their private functions to their children. Rather than disclosing private functions explicitly to facilitate local eliminations, we solve the problem by enforcing delayed eliminations and propose AsymDPOP, the first inference-based complete algorithm for ADCOPs. To solve the severe scalability problems incurred by delayed eliminations, we propose to reduce the memory consumption by propagating a set of smaller utility tables instead of a joint utility table, and to reduce the computation efforts by sequential optimizations instead of joint optimizations. The empirical evaluation indicates that AsymDPOP significantly outperforms the state-of-the-arts, as well as the vanilla DPOP with PEAV formulation.
categories: cs.MA
__index_level_0__: 132,561

2008.04224
The Cone epsilon-Dominance: An Approach for Evolutionary Multiobjective Optimization
We propose the cone epsilon-dominance approach to improve convergence and diversity in multiobjective evolutionary algorithms (MOEAs). A cone-eps-MOEA is presented and compared with MOEAs based on the standard Pareto relation (NSGA-II, NSGA-II*, SPEA2, and a clustered NSGA-II) and on the epsilon-dominance (eps-MOEA). The comparison is performed both in terms of computational complexity and on four performance indicators selected to quantify the quality of the final results obtained by each algorithm: the convergence, diversity, hypervolume, and coverage of many sets metrics. Sixteen well-known benchmark problems are considered in the experimental section, including the ZDT and the DTLZ families. To evaluate the possible differences amongst the algorithms, a carefully designed experiment is performed for the four performance metrics. The results obtained suggest that the cone-eps-MOEA is capable of presenting an efficient and balanced performance over all the performance metrics considered. These results strongly support the conclusion that the cone-eps-MOEA is a competitive approach for obtaining an efficient balance between convergence and diversity to the Pareto front, and as such represents a useful tool for the solution of multiobjective optimization problems.
categories: cs.NE
__index_level_0__: 191,176

2311.16851
Edge AI for Internet of Energy: Challenges and Perspectives
The digital landscape of the Internet of Energy (IoE) is on the brink of a revolutionary transformation with the integration of edge Artificial Intelligence (AI). This comprehensive review elucidates the promise and potential that edge AI holds for reshaping the IoE ecosystem. Commencing with a meticulously curated research methodology, the article delves into the myriad of edge AI techniques specifically tailored for IoE. The myriad benefits, spanning from reduced latency and real-time analytics to the pivotal aspects of information security, scalability, and cost-efficiency, underscore the indispensability of edge AI in modern IoE frameworks. As the narrative progresses, readers are acquainted with pragmatic applications and techniques, highlighting on-device computation, secure private inference methods, and the avant-garde paradigms of AI training on the edge. A critical analysis follows, offering a deep dive into the present challenges including security concerns, computational hurdles, and standardization issues. However, as the horizon of technology ever expands, the review culminates in a forward-looking perspective, envisaging the future symbiosis of 5G networks, federated edge AI, deep reinforcement learning, and more, painting a vibrant panorama of what the future beholds. For anyone vested in the domains of IoE and AI, this review offers both a foundation and a visionary lens, bridging the present realities with future possibilities.
categories: cs.AI, cs.CY, Other
__index_level_0__: 411,070

2208.03659
Fast Online and Relational Tracking
To overcome challenges in multiple object tracking task, recent algorithms use interaction cues alongside motion and appearance features. These algorithms use graph neural networks or transformers to extract interaction features that lead to high computation costs. In this paper, a novel interaction cue based on geometric features is presented aiming to detect occlusion and re-identify lost targets with low computational cost. Moreover, in most algorithms, camera motion is considered negligible, which is a strong assumption that is not always true and leads to ID Switch or mismatching of targets. In this paper, a method for measuring camera motion and removing its effect is presented that efficiently reduces the camera motion effect on tracking. The proposed algorithm is evaluated on MOT17 and MOT20 datasets and it achieves the state-of-the-art performance of MOT17 and comparable results on MOT20. The code is also publicly available.
categories: cs.CV
__index_level_0__: 311,858

2303.07758
Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle Detectors
The global trends of urbanization and increased personal mobility force us to rethink the way we live and use urban space. The Traffic4cast competition series tackles this problem in a data-driven way, advancing the latest methods in machine learning for modeling complex spatial systems over time. In this edition, our dynamic road graph data combine information from road maps, $10^{12}$ probe data points, and stationary vehicle detectors in three cities over the span of two years. While stationary vehicle detectors are the most accurate way to capture traffic volume, they are only available in few locations. Traffic4cast 2022 explores models that have the ability to generalize loosely related temporal vertex data on just a few nodes to predict dynamic future traffic states on the edges of the entire road graph. In the core challenge, participants are invited to predict the likelihoods of three congestion classes derived from the speed levels in the GPS data for the entire road graph in three cities 15 min into the future. We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task. The data are aggregated in 15 min time bins for one hour prior to the prediction time. For the extended challenge, participants are tasked to predict the average travel times on super-segments 15 min into the future - super-segments are longer sequences of road segments in the graph. The competition results provide an important advance in the prediction of complex city-wide traffic states just from publicly available sparse vehicle data and without the need for large amounts of real-time floating vehicle data.
categories: cs.SI, cs.LG
__index_level_0__: 351,369

1712.06935
Mining Smart Card Data for Travelers' Mini Activities
In the context of public transport modeling and simulation, we address the problem of mismatch between simulated transit trips and observed ones. We point to the weakness of the current travel demand modeling process; the trips it generates are over-optimistic and do not reflect the real passenger choices. We introduce the notion of mini activities the travelers do during the trips; they can explain the deviation of simulated trips from the observed trips. We propose to mine the smart card data to extract the mini activities. We develop a technique to integrate them in the generated trips and learn such an integration from two available sources, the trip history and trip planner recommendations. For an input travel demand, we build a Markov chain over the trip collection and apply the Monte Carlo Markov Chain algorithm to integrate mini activities in such a way that the selected characteristics converge to the desired distributions. We test our method in different settings on the passenger trip collection of Nancy, France. We report experimental results demonstrating a very important mismatch reduction.
categories: cs.AI
__index_level_0__: 86,964

2011.13241
The Devil is in the Boundary: Exploiting Boundary Representation for Basis-based Instance Segmentation
Pursuing a more coherent scene understanding towards real-time vision applications, single-stage instance segmentation has recently gained popularity, achieving a simpler and more efficient design than its two-stage counterparts. Besides, its global mask representation often leads to superior accuracy to the two-stage Mask R-CNN which has been dominant thus far. Despite the promising advances in single-stage methods, finer delineation of instance boundaries still remains unexcavated. Indeed, boundary information provides a strong shape representation that can operate in synergy with the fully-convolutional mask features of the single-stage segmenter. In this work, we propose Boundary Basis based Instance Segmentation(B2Inst) to learn a global boundary representation that can complement existing global-mask-based methods that are often lacking high-frequency details. Besides, we devise a unified quality measure of both mask and boundary and introduce a network block that learns to score the per-instance predictions of itself. When applied to the strongest baselines in single-stage instance segmentation, our B2Inst leads to consistent improvements and accurately parse out the instance boundaries in a scene. Regardless of being single-stage or two-stage frameworks, we outperform the existing state-of-the-art methods on the COCO dataset with the same ResNet-50 and ResNet-101 backbones.
categories: cs.CV
__index_level_0__: 208,423

2403.09488
Rectifying Demonstration Shortcut in In-Context Learning
Large language models (LLMs) are able to solve various tasks with only a few demonstrations utilizing their in-context learning (ICL) abilities. However, LLMs often rely on their pre-trained semantic priors of demonstrations rather than on the input-label relationships to proceed with ICL prediction. In this work, we term this phenomenon as the 'Demonstration Shortcut'. While previous works have primarily focused on improving ICL prediction results for predefined tasks, we aim to rectify the Demonstration Shortcut, thereby enabling the LLM to effectively learn new input-label relationships from demonstrations. To achieve this, we introduce In-Context Calibration, a demonstration-aware calibration method. We evaluate the effectiveness of the proposed method in two settings: (1) the Original ICL Task using the standard label space and (2) the Task Learning setting, where the label space is replaced with semantically unrelated tokens. In both settings, In-Context Calibration demonstrates substantial improvements, with results generalized across three LLM families (OPT, GPT, and Llama2) under various configurations.
categories: cs.AI, cs.CL
__index_level_0__: 437,784

2406.12946
Instruction Data Generation and Unsupervised Adaptation for Speech Language Models
In this paper, we propose three methods for generating synthetic samples to train and evaluate multimodal large language models capable of processing both text and speech inputs. Addressing the scarcity of samples containing both modalities, synthetic data generation emerges as a crucial strategy to enhance the performance of such systems and facilitate the modeling of cross-modal relationships between the speech and text domains. Our process employs large language models to generate textual components and text-to-speech systems to generate speech components. The proposed methods offer a practical and effective means to expand the training dataset for these models. Experimental results show progress in achieving an integrated understanding of text and speech. We also highlight the potential of using unlabeled speech data to generate synthetic samples comparable in quality to those with available transcriptions, enabling the expansion of these models to more languages.
categories: cs.AI, cs.LG, cs.CL
__index_level_0__: 465,640

2301.05427
Building a Fuel Moisture Model for the Coupled Fire-Atmosphere Model WRF-SFIRE from Data: From Kalman Filters to Recurrent Neural Networks
The current fuel moisture content (FMC) subsystems in WRF-SFIRE and its workflow system WRFx use a time-lag differential equation model with assimilation of data from FMC sensors on Remote Automated Weather Stations (RAWS) by the extended augmented Kalman filter. But the quality of the result is constrained by the limitations of the model and of the Kalman filter. We observe that the data flow in a system consisting of a model and the Kalman filter can be interpreted to be the same as the data flow in a recurrent neural network (RNN). Thus, instead of building more sophisticated models and data assimilation methods, we want to train a RNN to approximate the dynamics of the response of the FMC sensor to a time series of environmental data. Because standard AI approaches did not converge to reasonable solutions, we pre-train the RNN with special initial weights devised to turn it into a numerical solver of the differential equation. We then allow the AI training machinery to optimize the RNN weights to fit the data better. We illustrate the method on an example of a time series of 10h-FMC from RAWS and weather data from the Real-Time Mesoscale Analysis (RTMA).
categories: cs.AI, cs.LG
__index_level_0__: 340,346

2404.09779
A replica analysis of under-bagging
Under-bagging (UB), which combines under-sampling and bagging, is a popular ensemble learning method for training classifiers on an imbalanced data. Using bagging to reduce the increased variance caused by the reduction in sample size due to under-sampling is a natural approach. However, it has recently been pointed out that in generalized linear models, naive bagging, which does not consider the class imbalance structure, and ridge regularization can produce the same results. Therefore, it is not obvious whether it is better to use UB, which requires an increased computational cost proportional to the number of under-sampled data sets, when training linear models. Given such a situation, in this study, we heuristically derive a sharp asymptotics of UB and use it to compare with several other popular methods for learning from imbalanced data, in the scenario where a linear classifier is trained from a two-component mixture data. The methods compared include the under-sampling (US) method, which trains a model using a single realization of the under-sampled data, and the simple weighting (SW) method, which trains a model with a weighted loss on the entire data. It is shown that the performance of UB is improved by increasing the size of the majority class while keeping the size of the minority fixed, even though the class imbalance can be large, especially when the size of the minority class is small. This is in contrast to US, whose performance is almost independent of the majority class size. In this sense, bagging and simple regularization differ as methods to reduce the variance increased by under-sampling. On the other hand, the performance of SW with the optimal weighting coefficients is almost equal to UB, indicating that the combination of reweighting and regularization may be similar to UB.
categories: cs.LG
__index_level_0__: 446,826

2212.11922
SupeRGB-D: Zero-shot Instance Segmentation in Cluttered Indoor Environments
Object instance segmentation is a key challenge for indoor robots navigating cluttered environments with many small objects. Limitations in 3D sensing capabilities often make it difficult to detect every possible object. While deep learning approaches may be effective for this problem, manually annotating 3D data for supervised learning is time-consuming. In this work, we explore zero-shot instance segmentation (ZSIS) from RGB-D data to identify unseen objects in a semantic category-agnostic manner. We introduce a zero-shot split for Tabletop Objects Dataset (TOD-Z) to enable this study and present a method that uses annotated objects to learn the ``objectness'' of pixels and generalize to unseen object categories in cluttered indoor environments. Our method, SupeRGB-D, groups pixels into small patches based on geometric cues and learns to merge the patches in a deep agglomerative clustering fashion. SupeRGB-D outperforms existing baselines on unseen objects while achieving similar performance on seen objects. We further show competitive results on the real dataset OCID. With its lightweight design (0.4 MB memory requirement), our method is extremely suitable for mobile and robotic applications. Additional DINO features can increase performance with a higher memory requirement. The dataset split and code are available at https://github.com/evinpinar/supergb-d.
categories: cs.CV
__index_level_0__: 337,918

2012.08733
Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification
Many unsupervised domain adaptive (UDA) person re-identification (ReID) approaches combine clustering-based pseudo-label prediction with feature fine-tuning. However, because of domain gap, the pseudo-labels are not always reliable and there are noisy/incorrect labels. This would mislead the feature representation learning and deteriorate the performance. In this paper, we propose to estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels, by suppressing the contribution of noisy samples. We build our baseline framework using the mean teacher method together with an additional contrastive loss. We have observed that a sample with a wrong pseudo-label through clustering in general has a weaker consistency between the output of the mean teacher model and the student model. Based on this finding, we propose to exploit the uncertainty (measured by consistency levels) to evaluate the reliability of the pseudo-label of a sample and incorporate the uncertainty to re-weight its contribution within various ReID losses, including the identity (ID) classification loss per sample, the triplet loss, and the contrastive loss. Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
categories: cs.AI, cs.CV
__index_level_0__: 211,851

2005.13450
Spatially Coupled Codes with Sub-Block Locality: Joint Finite Length-Asymptotic Design Approach
SC-LDPC codes with sub-block locality can be decoded locally at the level of sub-blocks that are much smaller than the full code block, thus providing fast access to the coded information. The same code can also be decoded globally using the entire code block, for increased data reliability. In this paper, we pursue the analysis and design of such codes from both finite-length and asymptotic lenses. This mixed approach has rarely been applied in designing SC codes, but it is beneficial for optimizing code graphs for local and global performance simultaneously. Our proposed framework consists of two steps: 1) designing the local code for both threshold and cycle counts, and 2) designing the coupling of local codes for best cycle count in the global design.
categories: cs.IT
__index_level_0__: 179,013

2210.05038
Fighting FIRe with FIRE: Assessing the Validity of Text-to-Video Retrieval Benchmarks
Searching troves of videos with textual descriptions is a core multimodal retrieval task. Owing to the lack of a purpose-built dataset for text-to-video retrieval, video captioning datasets have been re-purposed to evaluate models by (1) treating captions as positive matches to their respective videos and (2) assuming all other videos to be negatives. However, this methodology leads to a fundamental flaw during evaluation: since captions are marked as relevant only to their original video, many alternate videos also match the caption, which introduces false-negative caption-video pairs. We show that when these false negatives are corrected, a recent state-of-the-art model gains 25\% recall points -- a difference that threatens the validity of the benchmark itself. To diagnose and mitigate this issue, we annotate and release 683K additional caption-video pairs. Using these, we recompute effectiveness scores for three models on two standard benchmarks (MSR-VTT and MSVD). We find that (1) the recomputed metrics are up to 25\% recall points higher for the best models, (2) these benchmarks are nearing saturation for Recall@10, (3) caption length (generality) is related to the number of positives, and (4) annotation costs can be mitigated through sampling. We recommend retiring these benchmarks in their current form, and we make recommendations for future text-to-video retrieval benchmarks.
categories: cs.CL, cs.CV
__index_level_0__: 322,673

2312.01679
Adversarial Medical Image with Hierarchical Feature Hiding
Deep learning based methods for medical images can be easily compromised by adversarial examples (AEs), posing a great security flaw in clinical decision-making. It has been discovered that conventional adversarial attacks like PGD which optimize the classification logits, are easy to distinguish in the feature space, resulting in accurate reactive defenses. To better understand this phenomenon and reassess the reliability of the reactive defenses for medical AEs, we thoroughly investigate the characteristic of conventional medical AEs. Specifically, we first theoretically prove that conventional adversarial attacks change the outputs by continuously optimizing vulnerable features in a fixed direction, thereby leading to outlier representations in the feature space. Then, a stress test is conducted to reveal the vulnerability of medical images, by comparing with natural images. Interestingly, this vulnerability is a double-edged sword, which can be exploited to hide AEs. We then propose a simple-yet-effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks, which assists to hide the adversarial feature in the target feature distribution. The proposed method is evaluated on three medical datasets, both 2D and 3D, with different modalities. The experimental results demonstrate the superiority of HFC, \emph{i.e.,} it bypasses an array of state-of-the-art adversarial medical AE detectors more efficiently than competing adaptive attacks, which reveals the deficiencies of medical reactive defense and allows to develop more robust defenses in future.
categories: cs.LG, cs.CV
__index_level_0__: 412,547

2106.01809
Exploring Distantly-Labeled Rationales in Neural Network Models
Recent studies strive to incorporate various human rationales into neural networks to improve model performance, but few pay attention to the quality of the rationales. Most existing methods distribute their models' focus to distantly-labeled rationale words entirely and equally, while ignoring the potential important non-rationale words and not distinguishing the importance of different rationale words. In this paper, we propose two novel auxiliary loss functions to make better use of distantly-labeled rationales, which encourage models to maintain their focus on important words beyond labeled rationales (PINs) and alleviate redundant training on non-helpful rationales (NoIRs). Experiments on two representative classification tasks show that our proposed methods can push a classification model to effectively learn crucial clues from non-perfect rationales while maintaining the ability to spread its focus to other unlabeled important words, thus significantly outperform existing methods.
categories: cs.CL
__index_level_0__: 238,633

2004.05849
MLPSVM:A new parallel support vector machine to multi-label learning
Multi-label learning has attracted the attention of the machine learning community. The problem conversion method Binary Relevance converts a familiar single label into a multi-label algorithm. The binary relevance method is widely used because of its simple structure and efficient algorithm. But binary relevance does not consider the links between labels, making it cumbersome to handle some tasks. This paper proposes a multi-label learning algorithm that can also be used for single-label classification. It is based on standard support vector machines and changes the original single decision hyperplane into two parallel decision hyper-planes, which call multi-label parallel support vector machine (MLPSVM). At the end of the article, MLPSVM is compared with other multi-label learning algorithms. The experimental results show that the algorithm performs well on data sets.
categories: cs.LG
__index_level_0__: 172,341

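The MLPSVM abstract above mentions Binary Relevance, which decomposes a multi-label problem into one independent binary classifier per label. A minimal pure-Python sketch of that decomposition (the toy data and the nearest-centroid base classifier are illustrative assumptions, not the paper's MLPSVM):

```python
def centroid(points):
    """Component-wise mean of a non-empty list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class BinaryRelevance:
    """One independent binary classifier (here: nearest centroid) per label."""

    def fit(self, X, Y):
        # Y is a list of sets of label indices, one set per sample.
        self.n_labels = max(max(y, default=0) for y in Y) + 1
        self.models = []
        for lbl in range(self.n_labels):
            pos = [x for x, y in zip(X, Y) if lbl in y]
            neg = [x for x, y in zip(X, Y) if lbl not in y]
            self.models.append((centroid(pos), centroid(neg)))
        return self

    def predict(self, x):
        # A label fires when x is closer to its positive centroid.
        return {lbl for lbl, (p, n) in enumerate(self.models)
                if dist2(x, p) < dist2(x, n)}

# Toy data: label 0 fires on a large first coordinate, label 1 on the second.
X = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9], [1, 1]]
Y = [{0}, {0}, {1}, {1}, {0, 1}]
clf = BinaryRelevance().fit(X, Y)
print(clf.predict([0.95, 0.05]))  # {0}
```

As the MLPSVM abstract notes, this per-label decomposition ignores correlations between labels, which is the weakness the paper's parallel-hyperplane formulation targets.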
2007.06032
Probabilistic Jacobian-based Saliency Maps Attacks
Neural network classifiers (NNCs) are known to be vulnerable to malicious adversarial perturbations of inputs including those modifying a small fraction of the input features named sparse or $L_0$ attacks. Effective and fast $L_0$ attacks, such as the widely used Jacobian-based Saliency Map Attack (JSMA) are practical to fool NNCs but also to improve their robustness. In this paper, we show that penalising saliency maps of JSMA by the output probabilities and the input features of the NNC allows to obtain more powerful attack algorithms that better take into account each input's characteristics. This leads us to introduce improved versions of JSMA, named Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), and demonstrate through a variety of white-box and black-box experiments on three different datasets (MNIST, CIFAR-10 and GTSRB), that they are both significantly faster and more efficient than the original targeted and non-targeted versions of JSMA. Experiments also demonstrate, in some cases, very competitive results of our attacks in comparison with the Carlini-Wagner (CW) $L_0$ attack, while remaining, like JSMA, significantly faster (WJSMA and TJSMA are more than 50 times faster than CW $L_0$ on CIFAR-10). Therefore, our new attacks provide good trade-offs between JSMA and CW for $L_0$ real-time adversarial testing on datasets such as the ones previously cited. Codes are publicly available through the link https://github.com/probabilistic-jsmas/probabilistic-jsmas.
categories: cs.LG, cs.CV, cs.CR
__index_level_0__: 186,872

2407.09779
Layout-and-Retouch: A Dual-stage Framework for Improving Diversity in Personalized Image Generation
Personalized text-to-image (P-T2I) generation aims to create new, text-guided images featuring the personalized subject with a few reference images. However, balancing the trade-off relationship between prompt fidelity and identity preservation remains a critical challenge. To address the issue, we propose a novel P-T2I method called Layout-and-Retouch, consisting of two stages: 1) layout generation and 2) retouch. In the first stage, our step-blended inference utilizes the inherent sample diversity of vanilla T2I models to produce diversified layout images, while also enhancing prompt fidelity. In the second stage, multi-source attention swapping integrates the context image from the first stage with the reference image, leveraging the structure from the context image and extracting visual features from the reference image. This achieves high prompt fidelity while preserving identity characteristics. Through our extensive experiments, we demonstrate that our method generates a wide variety of images with diverse layouts while maintaining the unique identity features of the personalized objects, even with challenging text prompts. This versatility highlights the potential of our framework to handle complex conditions, significantly enhancing the diversity and applicability of personalized image synthesis.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
472,712
2103.02542
Modularity and Mutual Information in Networks: Two Sides of the Same Coin
Modularity, first proposed by [Newman and Girvan, 2004], is one of the most popular ways to quantify the significance of community structure in complex networks. It can serve both as a standard benchmark to compare different community detection algorithms and as an optimization objective for detecting communities itself. Previous work on modularity has developed many efficient algorithms for modularity maximization. However, few researchers have considered the interpretation of the modularity function itself. In this paper, we study modularity from an information-theoretical perspective and show that modularity and mutual information in networks are essentially the same. The main contribution is that we develop a family of generalized modularity measures, f-modularity, based on f-mutual information. f-Modularity has an information-theoretical interpretation, enjoys the desired properties of a mutual information measure, and provides an approach to estimating the mutual information between discrete random variables. At a high level, we show that the significance of community structure is equivalent to the amount of information contained in the network. The connection between f-modularity and f-mutual information bridges two important fields, complex networks and information theory, and also sheds light on the design of measures of community structure in the future.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
222,996
2412.09612
Olympus: A Universal Task Router for Computer Vision Tasks
We introduce Olympus, a new approach that transforms Multimodal Large Language Models (MLLMs) into a unified framework capable of handling a wide array of computer vision tasks. Utilizing a controller MLLM, Olympus delegates over 20 specialized tasks across images, videos, and 3D objects to dedicated modules. This instruction-based routing enables complex workflows through chained actions without the need for training heavy generative models. Olympus easily integrates with existing MLLMs, expanding their capabilities with comparable performance. Experimental results demonstrate that Olympus achieves an average routing accuracy of 94.75% across 20 tasks and precision of 91.82% in chained action scenarios, showcasing its effectiveness as a universal task router that can solve a diverse range of computer vision tasks. Project page: http://yuanze-lin.me/Olympus_page/
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
516,558
2108.00819
Active Learning in Gaussian Process State Space Model
We investigate active learning in Gaussian Process state-space models (GPSSM). Our problem is to actively steer the system through latent states by determining its inputs such that the underlying dynamics can be optimally learned by a GPSSM. In order that the most informative inputs are selected, we employ mutual information as our active learning criterion. In particular, we present two approaches for the approximation of mutual information for the GPSSM given latent states. The proposed approaches are evaluated in several physical systems where we actively learn the underlying non-linear dynamics represented by the state-space model.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
248,843
2409.12884
Hypersphere Secure Sketch Revisited: Probabilistic Linear Regression Attack on IronMask in Multiple Usage
Protection of biometric templates is a critical and urgent area of focus. IronMask demonstrates outstanding recognition performance while protecting facial templates against existing known attacks. At a high level, IronMask can be conceptualized as a fuzzy commitment scheme built directly on the hypersphere. We devise an attack on IronMask targeting the security notion of renewability. Our attack, termed the Probabilistic Linear Regression Attack, utilizes the linearity of the underlying error-correcting code. It is the first algorithm to successfully recover the original template from multiple protected templates within acceptable time and storage requirements. We implement experiments on IronMask applied to protect ArcFace that verify the validity of our attacks. Furthermore, we carry out experiments in noisy environments and confirm that our attacks are still applicable. Finally, we put forward two strategies to mitigate this type of attack.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
489,754
2501.07013
Sthymuli: a Static Educational Robot. Leveraging the Thymio II Platform
The use of robots in education represents a challenge for teachers and a fixed vision of what robots can do for students. This paper presents the development of Sthymuli, a static educational robot designed to explore new classroom interactions between robots, students and teachers. We propose the use of the Thymio II educational platform as a base, ensuring a robust benchmark for a fair comparison of the commonly available wheeled robots and our exploratory approach with Sthymuli. This paper outlines the constraints and requirements for developing such a robot, the current state of development and future work.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
524,222
2011.06192
Motion Generation Using Bilateral Control-Based Imitation Learning with Autoregressive Learning
Robots that can execute various tasks automatically on behalf of humans are becoming an increasingly important focus of research in the field of robotics. Imitation learning has been studied as an efficient and high-performance method, and imitation learning based on bilateral control has been proposed as a method that can realize fast motion. However, because this method cannot implement autoregressive learning, it may not generate desirable long-term behavior. Therefore, in this paper, we propose a method of autoregressive learning for bilateral control-based imitation learning, along with a new neural network model for implementing it. In this study, three types of experiments are conducted to verify the effectiveness of the proposed method. The performance is improved compared to conventional approaches; the proposed method has the highest rate of success. Owing to the structure and autoregressive learning of the proposed model, the proposed method can generate the desirable motion for successful tasks and has a high generalization ability to environmental changes.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
206,165
2103.06742
Visibility-aware Trajectory Optimization with Application to Aerial Tracking
The visibility of targets determines the performance and even the success rate of various applications, such as active SLAM, exploration, and target tracking. Therefore, it is crucial to take the visibility of targets into explicit account in trajectory planning. In this paper, we propose a general metric for target visibility, considering observation distance and angle as well as occlusion effects. We formulate this metric into a differentiable visibility cost function, with which spatial trajectory and yaw can be jointly optimized. Furthermore, this visibility-aware trajectory optimization handles the dynamic feasibility of position and yaw simultaneously. To validate that our method is practical and generic, we integrate it into a customized quadrotor tracking system. The experimental results show that our visibility-aware planner performs more robustly and observes targets better. To benefit related research, we release our code to the public.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
224,396
1706.07351
An approach to reachability analysis for feed-forward ReLU neural networks
We study the reachability problem for systems implemented as feed-forward neural networks whose activation function is implemented via ReLU functions. We draw a correspondence between establishing whether some arbitrary output can ever be produced by a neural system and linear problems characterising a neural system of interest. We present a methodology to solve cases of practical interest by means of a state-of-the-art linear programming solver. We evaluate the technique presented by discussing the experimental results obtained by analysing reachability properties for a number of benchmarks in the literature.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
75,829
1908.05926
Empirical Bayesian Mixture Models for Medical Image Translation
Automatically generating one medical imaging modality from another is known as medical image translation, and has numerous interesting applications. This paper presents an interpretable generative modelling approach to medical image translation. By allowing a common model for group-wise normalisation and segmentation of brain scans to handle missing data, the model allows for predicting entirely missing modalities from one, or a few, MR contrasts. Furthermore, the model can be trained on a fairly small number of subjects. The proposed model is validated on three clinically relevant scenarios. Results appear promising and show that a principled, probabilistic model of the relationship between multi-channel signal intensities can be used to infer missing modalities -- both MR contrasts and CT images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
141,863
1909.09390
SPSC: a new execution policy for exploring discrete-time stochastic simulations
In this paper, we introduce a new method called SPSC (Simulation, Partitioning, Selection, Cloning) to efficiently estimate the probability of possible solutions in stochastic simulations. This method can be applied to any type of simulation; however, it is particularly suitable for multi-agent-based simulations (MABS). Therefore, its performance is evaluated on a well-known MABS and compared to the classical approach, i.e., Monte Carlo.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
146,245
1901.04641
SISC: End-to-end Interpretable Discovery Radiomics-Driven Lung Cancer Prediction via Stacked Interpretable Sequencing Cells
Objective: Lung cancer is the leading cause of cancer-related death worldwide. Computer-aided diagnosis (CAD) systems have shown significant promise in recent years for facilitating the effective detection and classification of abnormal lung nodules in computed tomography (CT) scans. While hand-engineered radiomic features have been traditionally used for lung cancer prediction, there have been significant recent successes achieving state-of-the-art results in the area of discovery radiomics. Here, radiomic sequencers comprising highly discriminative radiomic features are discovered directly from archival medical data. However, the interpretation of predictions made using such radiomic sequencers remains a challenge. Method: A novel end-to-end interpretable discovery radiomics-driven lung cancer prediction pipeline has been designed, built, and tested. The radiomic sequencer being discovered possesses a deep architecture composed of stacked interpretable sequencing cells (SISC). Results: The SISC architecture is shown to outperform previous approaches while providing more insight into its decision-making process. Conclusion: The SISC radiomic sequencer is able to achieve state-of-the-art results in lung cancer prediction, and also offers prediction interpretability in the form of critical response maps. Significance: The critical response maps are useful not only for validating the predictions of the proposed SISC radiomic sequencer, but also for enabling improved radiologist-machine collaboration for effective diagnosis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
118,634
1407.0323
Laboratories of Oligarchy? How the Iron Law Extends to Peer Production
Peer production projects like Wikipedia have inspired voluntary associations, collectives, social movements, and scholars to embrace open online collaboration as a model of democratic organization. However, many peer production projects exhibit entrenched leadership and deep inequalities, suggesting that they may not fulfill democratic ideals. Instead, peer production projects may conform to Robert Michels' "iron law of oligarchy," which proposes that democratic membership organizations become increasingly oligarchic as they grow. Using exhaustive data of internal processes from a sample of 683 wikis, we construct empirical measures of participation and test for increases in oligarchy associated with growth. In contrast to previous studies, we find support for Michels' iron law and conclude that peer production entails oligarchic organizational forms.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
34,318
2112.04527
Building Quantum Field Theories Out of Neurons
An approach to field theory is studied in which fields are comprised of $N$ constituent random neurons. Gaussian theories arise in the infinite-$N$ limit when neurons are independently distributed, via the Central Limit Theorem, while interactions arise due to finite-$N$ effects or non-independently distributed neurons. Euclidean-invariant ensembles of neurons are engineered, with tunable two-point function, yielding families of Euclidean-invariant field theories. Some Gaussian, Euclidean invariant theories are reflection positive, which allows for analytic continuation to a Lorentz-invariant quantum field theory. Examples are presented that yield dual theories at infinite-$N$, but have different symmetries at finite-$N$. Landscapes of classical field configurations are determined by local maxima of parameter distributions. Predictions arise from mixed field-neuron correlators. Near-Gaussianity is exhibited at large-$N$, potentially explaining a feature of field theories in Nature.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
270,551
2108.11942
Machine Learning for Mediation in Armed Conflicts
Today's conflicts are becoming increasingly complex, fluid and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peacemaking, or the identification of key conflict issues and their interdependence. International peace efforts appear increasingly ill-equipped to successfully address these challenges. While technology is increasingly used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study is the first to apply state-of-the-art machine learning technologies to data from an ongoing mediation process. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning tools can effectively support international mediators by managing knowledge and offering additional conflict analysis tools to assess complex information. Apart from illustrating the potential of machine learning tools in conflict mediation, the paper also emphasises the importance of interdisciplinary and participatory research design for developing context-sensitive and targeted tools and for ensuring meaningful and responsible implementation.
false
false
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
252,341
2007.13106
Detection and Annotation of Plant Organs from Digitized Herbarium Scans using Deep Learning
As herbarium specimens are increasingly becoming digitized and accessible in online repositories, advanced computer vision techniques are being used to extract information from them. The presence of certain plant organs on herbarium sheets is useful information in various scientific contexts, and automatic recognition of these organs will help mobilize such information. In our study, we use deep learning to detect plant organs on digitized herbarium specimens with Faster R-CNN. For our experiment, we manually annotated hundreds of herbarium scans with thousands of bounding boxes for six types of plant organs and used them for training and evaluating the plant organ detection model. The model worked particularly well on leaves and stems, while flowers, though also present in large numbers on the sheets, were not recognized equally well.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
189,024
1403.0057
Real-time Topic-aware Influence Maximization Using Preprocessing
Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users are often topic-dependent and information, ideas, innovations etc. being propagated in networks (referred collectively as items in this paper) are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods for these topics to avoid redoing influence maximization for each item from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
31,261
2206.09672
Adaptive Domain Interest Network for Multi-domain Recommendation
Industrial recommender systems usually hold data from multiple business scenarios and are expected to provide recommendation services for these scenarios simultaneously. In the retrieval step, the top-K high-quality items selected from a large corpus usually need to vary across scenarios. Take the Alibaba display advertising system as an example: not only are the behavior patterns of Taobao users diverse, but the bid prices assigned by advertisers in different scenarios also vary significantly. Traditional methods either train models for each scenario separately, ignoring the cross-domain overlap of user groups and items, or simply mix all samples and maintain a shared model, which makes it difficult to capture significant diversities between scenarios. In this paper, we present the Adaptive Domain Interest network (ADI), which adaptively handles the commonalities and diversities across scenarios, making full use of multi-scenario data during training. The proposed method is then able to improve the performance of each business domain by giving various top-K candidates for different scenarios during online inference. Specifically, ADI models the commonalities and diversities of different domains by shared networks and domain-specific networks, respectively. In addition, we apply domain-specific batch normalization and design a domain interest adaptation layer for feature-level domain adaptation. A self-training strategy is also incorporated to capture label-level connections across domains. ADI has been deployed in the display advertising system of Alibaba, and obtains a 1.8% improvement in advertising revenue.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
303,656
2409.18295
Enhancing Lossy Compression Through Cross-Field Information for Scientific Applications
Lossy compression is one of the most effective methods for reducing the size of scientific data containing multiple data fields. It reduces information density through prediction or transformation techniques to compress the data. Previous approaches use local information from a single target field when predicting target data points, limiting their potential to achieve higher compression ratios. In this paper, we identified significant cross-field correlations within scientific datasets. We propose a novel hybrid prediction model that utilizes CNN to extract cross-field information and combine it with existing local field information. Our solution enhances the prediction accuracy of lossy compressors, leading to improved compression ratios without compromising data quality. We evaluate our solution on three scientific datasets, demonstrating its ability to improve compression ratios by up to 25% under specific error bounds. Additionally, our solution preserves more data details and reduces artifacts compared to baseline approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
492,178
2303.14090
Physics-informed neural networks in the recreation of hydrodynamic simulations from dark matter
Physics-informed neural networks have emerged as a coherent framework for building predictive models that combine statistical patterns with domain knowledge. The underlying notion is to enrich the optimization loss function with known relationships to constrain the space of possible solutions. Hydrodynamic simulations are a core constituent of modern cosmology, while the required computations are both expensive and time-consuming. At the same time, the comparatively fast simulation of dark matter requires fewer resources, which has led to the emergence of machine learning algorithms for baryon inpainting as an active area of research; here, recreating the scatter found in hydrodynamic simulations is an ongoing challenge. This paper presents the first application of physics-informed neural networks to baryon inpainting by combining advances in neural network architectures with physical constraints, injecting theory on baryon conversion efficiency into the model loss function. We also introduce a punitive prediction comparison based on the Kullback-Leibler divergence, which enforces scatter reproduction. By simultaneously extracting the complete set of baryonic properties for the Simba suite of cosmological simulations, our results demonstrate improved accuracy of baryonic predictions based on dark matter halo properties, successful recovery of the fundamental metallicity relation, and retrieve scatter that traces the target simulation's distribution.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
353,946
1112.5370
Enhancing Support for Knowledge Works: A relatively unexplored vista of computing research
Let us envision a new class of IT systems, the "Support Systems for Knowledge Works" or SSKW. An SSKW can be defined as a system built to provide comprehensive support to human knowledge-workers while they perform instances of complex knowledge-works of a particular type within a particular domain of professional activities. To get an idea of what an SSKW-enabled work environment can be like, let us look into a hypothetical scenario that depicts the interaction between a physician and a patient-care SSKW during the activity of diagnosing a patient.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
13,564
2210.09496
CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations
Although reinforcement learning has found widespread use in dense reward settings, training autonomous agents with sparse rewards remains challenging. To address this difficulty, prior work has shown promising results when using not only task-specific demonstrations but also task-agnostic albeit somewhat related demonstrations. In most cases, the available demonstrations are distilled into an implicit prior, commonly represented via a single deep net. Explicit priors in the form of a database that can be queried have also been shown to lead to encouraging results. To better benefit from available demonstrations, we develop a method to Combine Explicit and Implicit Priors (CEIP). CEIP exploits multiple implicit priors in the form of normalizing flows in parallel to form a single complex prior. Moreover, CEIP uses an effective explicit retrieval and push-forward mechanism to condition the implicit priors. In three challenging environments, we find the proposed CEIP method to improve upon sophisticated state-of-the-art techniques.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
324,552
2204.13973
Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception
Autonomous Vehicles (AVs) are mostly reliant on LiDAR sensors, which enable spatial perception of their surroundings and help make driving decisions. Recent works demonstrated attacks that aim to hide objects from AV perception, which can result in severe consequences. 3D shadows are regions void of measurements in 3D point clouds which arise from occlusions of objects in a scene. 3D shadows were proposed as a physical invariant valuable for detecting spoofed or fake objects. In this work, we leverage 3D shadows to locate obstacles that are hidden from object detectors. We achieve this by searching for void regions and locating the obstacles that cause these shadows. Our proposed methodology can be used to detect an object that has been hidden by an adversary, as these objects, while hidden from 3D object detectors, still induce shadow artifacts in 3D point clouds, which we use for obstacle detection. We show that using 3D shadows for obstacle detection can achieve high accuracy in matching shadows to their objects and provide precise prediction of an obstacle's distance from the ego-vehicle.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
294,009
2104.12186
Random Spreading for Unsourced MAC with Power Diversity
We propose an improvement of the random spreading approach with polar codes for unsourced multiple access. Each user encodes its message by a polar code, and the coded bits are then spread using a random spreading sequence. The proposed approach divides the active users into different groups, and employs different power levels for each group in such a way that the average power constraint is satisfied. We formulate and solve an optimization problem to determine the number of groups, and the number of users and power level of each group. Extensive simulations show that the proposed approach outperforms the existing methods, especially when the number of active users is large.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
232,139
1905.13271
Exploring Computational User Models for Agent Policy Summarization
AI agents are being developed to support high stakes decision-making processes from driving cars to prescribing drugs, making it increasingly important for human users to understand their behavior. Policy summarization methods aim to convey strengths and weaknesses of such agents by demonstrating their behavior in a subset of informative states. Some policy summarization methods extract a summary that optimizes the ability to reconstruct the agent's policy under the assumption that users will deploy inverse reinforcement learning. In this paper, we explore the use of different models for extracting summaries. We introduce an imitation learning-based approach to policy summarization; we demonstrate through computational simulations that a mismatch between the model used to extract a summary and the model used to reconstruct the policy results in worse reconstruction quality; and we demonstrate through a human-subject study that people use different models to reconstruct policies in different contexts, and that matching the summary extraction model to these can improve performance. Together, our results suggest that it is important to carefully consider user models in policy summarization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
133,050
2502.01043
Multiphysics Continuous Shape Optimization of the TAP Reactor Components
The Transatomic Power (TAP) reactor has an unusual design for a molten salt reactor technology, building upon the foundation laid by the Molten Salt Reactor Experiment (MSRE). This design introduces three key modifications to enhance efficiency and compactness: a revised fuel salt composition, an alternative moderator material, and moderator pins surrounded by the molten salt fuel. Unlike traditional solid-fueled reactors that rely on excess positive reactivity at the beginning of life, the TAP concept employs a dynamic approach. The core's design, featuring a cylindrical geometry with square assemblies of moderator rods surrounded by flowing fuel salt, provides flexibility in adjusting the moderator-to-fuel ratio during operation - using movable moderator rods - further adding criticality control capability in addition to the control rods system. Shape optimization of the core can play a crucial role in enhancing performance and efficiency. By applying multiphysics continuous shape optimization techniques to key components, such as the unit cells of the TAP reactor or its moderator assemblies, we can fine-tune the reactor's geometry to achieve optimal performance in key physics like neutronics and thermal hydraulics. We explore this aspect using the optimization module in the Multiphysics Object Oriented Simulation Environment (MOOSE) framework which allows for multiphysics continuous shape optimization. The results reported here illustrate the benefits of applying continuous shape optimization in the design of nuclear reactor components and can help in extending the TAP reactor's performance.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
529,663
2212.09049
Perfectly Covert Communication with a Reflective Panel
This work considers the problem of \emph{perfect} covert communication in wireless networks. Specifically, harnessing an Intelligent Reflecting Surface (IRS), we turn our attention to schemes that allow the transmitter to completely hide the communication, with \emph{zero energy} at the unwanted listener (Willie) and hence zero probability of detection. Applications of such schemes go beyond simple covertness, as we prevent detectability or decoding even when the codebook, timings, and channel characteristics are known to Willie. We define perfect covertness, give a necessary and sufficient condition for it in IRS-assisted communication, and define the optimization problem. For two IRS elements, we analyze the probability of finding a solution and derive its closed form. We then investigate the problem of more than two IRS elements by analyzing the probability of such a zero-detection solution. We prove that this probability converges to $1$ as the number of elements tends to infinity. We provide an iterative algorithm to find a perfectly covert solution and prove its convergence. The results are also supported by simulations, showing that a small amount of IRS elements allows for a positive rate at the legitimate user yet with zero probability of detection at an unwanted listener.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
336,979
1011.5115
Optimal Utility-Energy tradeoff in Delay Constrained Random Access Networks
Rate, energy and delay are three main parameters of interest in ad-hoc networks. In this paper, we discuss the problem of maximizing network utility and minimizing energy consumption while satisfying a given transmission delay constraint for each packet. We formulate this problem in the standard convex optimization form and subsequently discuss the tradeoff between utility, energy and delay in such a framework. Also, in order to adapt to the distributed nature of the network, a distributed algorithm where nodes decide on choosing transmission rates and probabilities based on their local information is introduced.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
8,311
1502.05871
Robust CS reconstruction based on appropriate minimization norm
A noise-robust compressive sensing algorithm is considered. This algorithm allows efficient signal reconstruction in the presence of different types of noise due to the possibility of changing the minimization norm. For instance, the commonly used l1 and l2 norms provide good results in the case of Laplace and Gaussian noise. However, when the signal is corrupted by Cauchy or Cubic Gaussian noise, these norms fail to provide accurate reconstruction. Therefore, in order to achieve accurate reconstruction, the application of the l3 minimization norm is analyzed. The efficiency of the algorithm is demonstrated through examples.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,420
2202.08504
Finding Representative Sampling Subsets in Sensor Graphs using Time Series Similarities
With the increasing use of IoT-enabled sensors, it is important to have effective methods for querying the sensors. For example, in a dense network of battery-driven temperature sensors, it is often possible to query (sample) just a subset of the sensors at any given time, since the values of the non-sampled sensors can be estimated from the sampled values. If we can divide the set of sensors into disjoint so-called representative sampling subsets that each represent the other sensors sufficiently well, we can alternate the sampling between the sampling subsets and thus, increase battery life significantly. In this paper, we formulate the problem of finding representative sampling subsets as a graph problem on a so-called sensor graph with the sensors as nodes. Our proposed solution, SubGraphSample, consists of two phases. In Phase-I, we create edges in the sensor graph based on the similarities between the time series of sensor values, analyzing six different techniques based on proven time series similarity metrics. In Phase-II, we propose two new techniques and extend four existing ones to find the maximal number of representative sampling subsets. Finally, we propose AutoSubGraphSample which auto-selects the best technique for Phase-I and Phase-II for a given dataset. Our extensive experimental evaluation shows that our approach can yield significant battery life improvements within realistic error bounds.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
280,899
2405.08392
Neuromorphic Robust Estimation of Nonlinear Dynamical Systems Applied to Satellite Rendezvous
State estimation of nonlinear dynamical systems has long aimed to balance accuracy, computational efficiency, robustness, and reliability. The rapid evolution of various industries has amplified the demand for estimation frameworks that satisfy all these factors. This study introduces a neuromorphic approach for robust filtering of nonlinear dynamical systems: SNN-EMSIF (spiking neural network-extended modified sliding innovation filter). SNN-EMSIF combines the computational efficiency and scalability of SNNs with the robustness of EMSIF, an estimation framework designed for nonlinear systems with zero-mean Gaussian noise. Notably, the weight matrices are designed according to the system model, eliminating the need for a learning process. The framework's efficacy is evaluated through comprehensive Monte Carlo simulations, comparing SNN-EMSIF with EKF and EMSIF. Additionally, it is compared with SNN-EKF in the presence of modeling uncertainties and neuron loss, using RMSEs as a metric. The results demonstrate the superior accuracy and robustness of SNN-EMSIF. Further analysis of runtimes and spiking patterns reveals an impressive reduction of 85% in emitted spikes compared to possible spikes, highlighting the computational efficiency of SNN-EMSIF. This framework offers a promising solution for robust estimation in nonlinear dynamical systems, opening new avenues for efficient and reliable estimation in various industries that can benefit from neuromorphic computing.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
454,080
2103.06124
Semantically Constrained Memory Allocation (SCMA) for Embedding in Efficient Recommendation Systems
Deep learning-based models are utilized to achieve state-of-the-art performance for recommendation systems. A key challenge for these models is to work with millions of categorical classes or tokens. The standard approach is to learn end-to-end, dense latent representations or embeddings for each token. The resulting embeddings require large amounts of memory that blow up with the number of tokens. Training and inference with these models create storage and memory bandwidth bottlenecks, leading to significant computing and energy consumption when deployed in practice. To this end, we present the problem of \textit{Memory Allocation} under budget for embeddings and propose a novel formulation of memory shared embedding, where memory is shared in proportion to the overlap in semantic information. Our formulation admits a practical and efficient randomized solution with Locality sensitive hashing based Memory Allocation (LMA). We demonstrate a significant reduction in the memory footprint while maintaining performance. In particular, our LMA embeddings achieve the same performance compared to standard embeddings with a 16$\times$ reduction in memory footprint. Moreover, LMA achieves an average improvement of over 0.003 AUC across different memory regimes over standard DLRM models on Criteo and Avazu datasets.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
224,199
2404.06325
Automatically Learning HTN Methods from Landmarks
Hierarchical Task Network (HTN) planning usually requires a domain engineer to provide manual input about how to decompose a planning problem. Even HTN-MAKER, a well-known method-learning algorithm, requires a domain engineer to annotate the tasks with information about what to learn. We introduce CURRICULAMA, an HTN method learning algorithm that completely automates the learning process. It uses landmark analysis to compose annotated tasks and leverages curriculum learning to order the learning of methods from simpler to more complex. This eliminates the need for manual input, resolving a core issue with HTN-MAKER. We prove CURRICULAMA's soundness, and show experimentally that it has a substantially similar convergence rate in learning a complete set of methods to HTN-MAKER.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
445,418
2501.09887
FLORA: Formal Language Model Enables Robust Training-free Zero-shot Object Referring Analysis
Object Referring Analysis (ORA), commonly known as referring expression comprehension, requires the identification and localization of specific objects in an image based on natural descriptions. Unlike generic object detection, ORA requires both accurate language understanding and precise visual localization, making it inherently more complex. Although recent pre-trained large visual grounding detectors have achieved significant progress, they heavily rely on extensively labeled data and time-consuming learning. To address these, we introduce a novel, training-free framework for zero-shot ORA, termed FLORA (Formal Language for Object Referring and Analysis). FLORA harnesses the inherent reasoning capabilities of large language models (LLMs) and integrates a formal language model - a logical framework that regulates language within structured, rule-based descriptions - to provide effective zero-shot ORA. More specifically, our formal language model (FLM) enables an effective, logic-driven interpretation of object descriptions without necessitating any training processes. Built upon FLM-regulated LLM outputs, we further devise a Bayesian inference framework and employ appropriate off-the-shelf interpretive models to finalize the reasoning, delivering favorable robustness against LLM hallucinations and compelling ORA performance in a training-free manner. In practice, our FLORA boosts the zero-shot performance of existing pretrained grounding detectors by up to around 45%. Our comprehensive evaluation across different challenging datasets also confirms that FLORA consistently surpasses current state-of-the-art zero-shot methods in both detection and segmentation tasks associated with zero-shot ORA. We believe our probabilistic parsing and reasoning of the LLM outputs elevate the reliability and interpretability of zero-shot ORA. We shall release codes upon publication.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,305
2211.02842
A Machine Learning-based Framework for Predictive Maintenance of Semiconductor Laser for Optical Communication
Semiconductor lasers, one of the key components for optical communication systems, have been rapidly evolving to meet the requirements of next generation optical networks with respect to high speed, low power consumption, small form factor etc. However, these demands have brought severe challenges to the semiconductor laser reliability. Therefore, a great deal of attention has been devoted to improving it and thereby ensuring reliable transmission. In this paper, a predictive maintenance framework using machine learning techniques is proposed for real-time health monitoring and prognosis of semiconductor lasers and thus enhancing their reliability. The proposed approach is composed of three stages: i) real-time performance degradation prediction, ii) degradation detection, and iii) remaining useful life (RUL) prediction. First of all, an attention based gated recurrent unit (GRU) model is adopted for real-time prediction of performance degradation. Then, a convolutional autoencoder is used to detect the degradation or abnormal behavior of a laser, given the predicted degradation performance values. Once an abnormal state is detected, a RUL prediction model based on attention-based deep learning is utilized. Afterwards, the estimated RUL serves as input for decision making and maintenance planning. The proposed framework is validated using experimental data derived from accelerated aging tests conducted for semiconductor tunable lasers. The proposed approach achieves a very good degradation performance prediction capability with a small root mean square error (RMSE) of 0.01, a good anomaly detection accuracy of 94.24% and a better RUL estimation capability compared to the existing ML-based laser RUL prediction models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
328,718
2310.05927
The evolution of cooperation in a mobile population on random networks: Network topology matters only for low-degree networks
We consider a finite structured population of mobile individuals that strategically explore a network using a Markov movement model and interact with each other via a public goods game. We extend the model of Erovenko et al. (2019) from complete, circle, and star graphs to various random networks to further investigate the effect of network topology on the evolution of cooperation. We discover that the network topology affects the outcomes of the evolutionary process only for networks of small average degree. Once the degree becomes sufficiently high, the outcomes match those for the complete graph. The actual value of the degree when this happens is much smaller than that of the complete graph, and the threshold value depends on other network characteristics.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
398,356
2010.09388
Learning Parameter Distributions to Detect Concept Drift in Data Streams
Data distributions in streaming environments are usually not stationary. In order to maintain a high predictive quality at all times, online learning models need to adapt to distributional changes, which are known as concept drift. The timely and robust identification of concept drift can be difficult, as we never have access to the true distribution of streaming data. In this work, we propose a novel framework for the detection of real concept drift, called ERICS. By treating the parameters of a predictive model as random variables, we show that concept drift corresponds to a change in the distribution of optimal parameters. To this end, we adopt common measures from information theory. The proposed framework is completely model-agnostic. By choosing an appropriate base model, ERICS is also capable of detecting concept drift at the input level, which is a significant advantage over existing approaches. An evaluation on several synthetic and real-world data sets suggests that the proposed framework identifies concept drift more effectively and precisely than various existing works.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
201,519
2202.11525
GIFT: Graph-guIded Feature Transfer for Cold-Start Video Click-Through Rate Prediction
Short video has witnessed rapid growth in the past few years in e-commerce platforms like Taobao. To ensure the freshness of the content, platforms need to release a large number of new videos every day, making conventional click-through rate (CTR) prediction methods suffer from the item cold-start problem. In this paper, we propose GIFT, an efficient Graph-guIded Feature Transfer system, to fully take advantage of the rich information of warmed-up videos to compensate for the cold-start ones. Specifically, we establish a heterogeneous graph that contains physical and semantic linkages to guide the feature transfer process from warmed-up videos to cold-start videos. The physical linkages represent explicit relationships, while the semantic linkages measure the proximity of multi-modal representations of two videos. We elaborately design the feature transfer function to make it aware of different types of transferred features (e.g., id representations and historical statistics) from different metapaths on the graph. We conduct extensive experiments on a large real-world dataset, and the results show that our GIFT system outperforms SOTA methods significantly and brings a 6.82% lift on CTR in the homepage of Taobao App.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
281,918
2002.05515
Improving Deep Learning For Airbnb Search
The application of deep learning to search ranking was one of the most impactful product improvements at Airbnb. But what comes next after you launch a deep learning model? In this paper we describe the journey beyond, discussing what we refer to as the ABCs of improving search: A for architecture, B for bias and C for cold start. For architecture, we describe a new ranking neural network, focusing on the process that evolved our existing DNN beyond a fully connected two layer network. On handling positional bias in ranking, we describe a novel approach that led to one of the most significant improvements in tackling inventory that the DNN historically found challenging. To solve cold start, we describe our perspective on the problem and changes we made to improve the treatment of new listings on the platform. We hope ranking teams transitioning to deep learning will find this a practical case study of how to iterate on DNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,919
2412.04591
MetaFormer: High-fidelity Metalens Imaging via Aberration Correcting Transformers
Metalens is an emerging optical system with an irreplaceable merit in that it can be manufactured in ultra-thin and compact sizes, which shows great promise of various applications such as medical imaging and augmented/virtual reality (AR/VR). Despite its advantage in miniaturization, its practicality is constrained by severe aberrations and distortions, which significantly degrade the image quality. Several previous arts have attempted to address different types of aberrations, yet most of them are mainly designed for the traditional bulky lens and not convincing enough to remedy harsh aberrations of the metalens. While there have existed aberration correction methods specifically for metalens, they still fall short of restoration quality. In this work, we propose MetaFormer, an aberration correction framework for metalens-captured images, harnessing Vision Transformers (ViT) that has shown remarkable restoration performance in diverse image restoration tasks. Specifically, we devise a Multiple Adaptive Filters Guidance (MAFG), where multiple Wiener filters enrich the degraded input images with various noise-detail balances, enhancing output restoration quality. In addition, we introduce a Spatial and Transposed self-Attention Fusion (STAF) module, which aggregates features from spatial self-attention and transposed self-attention modules to further ameliorate aberration correction. We conduct extensive experiments, including correcting aberrated images and videos, and clean 3D reconstruction from the degraded images. The proposed method outperforms the previous arts by a significant margin. We further fabricate a metalens and verify the practicality of MetaFormer by restoring the images captured with the manufactured metalens in the wild. Code and pre-trained models are available at https://benhenryl.github.io/MetaFormer
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
514,474
1105.0155
Optimal Decoding Algorithm for Asynchronous Physical-Layer Network Coding
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. In this paper, 1) we propose and investigate a general framework based on belief propagation (BP) that can effectively deal with symbol and phase asynchronies; 2) we show that for BPSK and QPSK modulations, our BP method can significantly reduce the SNR penalty due to asynchrony compared with prior methods; 3) we find that symbol misalignment makes the system performance less sensitive and more robust against carrier-phase offset. Observation 3) has the following practical implication. It is relatively easier to control symbol timing than carrier-phase offset. Our results indicate that if we could control the symbol offset in PNC, it would actually be advantageous to deliberately introduce symbol misalignment to desensitize the system to phase offset.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
10,196
2411.00205
Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning
Goal-conditioned reinforcement learning is a powerful way to control an AI agent's behavior at runtime. That said, popular goal representations, e.g., target states or natural language, are either limited to Markovian tasks or rely on ambiguous task semantics. We propose representing temporal goals using compositions of deterministic finite automata (cDFAs) and use cDFAs to guide RL agents. cDFAs balance the need for formal temporal semantics with ease of interpretation: if one can understand a flow chart, one can understand a cDFA. On the other hand, cDFAs form a countably infinite concept class with Boolean semantics, and subtle changes to the automaton can result in very different tasks, making them difficult to condition agent behavior on. To address this, we observe that all paths through a DFA correspond to a series of reach-avoid tasks and propose pre-training graph neural network embeddings on "reach-avoid derived" DFAs. Through empirical evaluation, we demonstrate that the proposed pre-training method enables zero-shot generalization to various cDFA task classes and accelerated policy specialization without the myopic suboptimality of hierarchical methods.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
504,483
1309.0261
Multi-Column Deep Neural Networks for Offline Handwritten Chinese Character Classification
Our Multi-Column Deep Neural Networks achieve best known recognition rates on Chinese characters from the ICDAR 2011 and 2013 offline handwriting competitions, approaching human performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
26,773
2011.10173
Exploring Global Information for Session-based Recommendation
Session-based recommendation (SBR) is a challenging task, which aims at recommending items based on anonymous behavior sequences. Most existing SBR studies model the user preferences based only on the current session while neglecting the item-transition information from the other sessions, and thus suffer from an inability to model the complicated item-transition pattern. To address these limitations, we introduce global item-transition information to strengthen the modeling of the dynamic item-transition. For fully exploiting the global item-transition information, two ways of exploring global information for SBR are studied in this work. Specifically, we first propose a basic GNN-based framework (BGNN), which solely uses session-level item-transition information on the session graph. Based on BGNN, we propose a novel approach, called Session-based Recommendation with Global Information (SRGI), which infers the user preferences via fully exploring global item-transitions over all sessions from two different perspectives: (i) Fusion-based Model (SRGI-FM), which recursively incorporates the neighbor embeddings of each node on the global graph into the learning process of session-level item representation; and (ii) Constrained-based Model (SRGI-CM), which treats the global-level item-transition information as a constraint to ensure that the learned item embeddings are consistent with the global item-transition. Extensive experiments conducted on three popular benchmark datasets demonstrate that both SRGI-FM and SRGI-CM outperform the state-of-the-art methods consistently.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
207,426
2212.14720
Learning from Data Streams: An Overview and Update
The literature on machine learning in the context of data streams is vast and growing. However, many of the defining assumptions regarding data-stream learning tasks are too strong to hold in practice, or are even contradictory such that they cannot be met in the contexts of supervised learning. Algorithms are chosen and designed based on criteria which are often not clearly stated, for problem settings not clearly defined, tested in unrealistic settings, and/or in isolation from related approaches in the wider literature. This puts into question the potential for real-world impact of many approaches conceived in such contexts, and risks propagating a misguided research focus. We propose to tackle these issues by reformulating the fundamental definitions and settings of supervised data-stream learning with regard to contemporary considerations of concept drift and temporal dependence; and we take a fresh look at what constitutes a supervised data-stream learning task, and a reconsideration of algorithms that may be applied to tackle such tasks. Through and in reflection of this formulation and overview, helped by an informal survey of industrial players dealing with real-world data streams, we provide recommendations. Our main emphasis is that learning from data streams does not impose a single-pass or online-learning approach, or any particular learning regime; and any constraints on memory and time are not specific to streaming. Meanwhile, there exist established techniques for dealing with temporal dependence and concept drift, in other areas of the literature. For the data streams community, we thus encourage a shift in research focus, from dealing with often-artificial constraints and assumptions on the learning mode, to issues such as robustness, privacy, and interpretability which are increasingly relevant to learning in data streams in academic and industrial settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,716
2004.04719
On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration
We undertake a precise study of the asymptotic and non-asymptotic properties of stochastic approximation procedures with Polyak-Ruppert averaging for solving a linear system $\bar{A} \theta = \bar{b}$. When the matrix $\bar{A}$ is Hurwitz, we prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity. The CLT characterizes the exact asymptotic covariance matrix, which is the sum of the classical Polyak-Ruppert covariance and a correction term that scales with the step size. Under assumptions on the tail of the noise distribution, we prove a non-asymptotic concentration inequality whose main term matches the covariance in CLT in any direction, up to universal constants. When the matrix $\bar{A}$ is not Hurwitz but only has non-negative real parts in its eigenvalues, we prove that the averaged LSA procedure actually achieves an $O(1/T)$ rate in mean-squared error. Our results provide a more refined understanding of linear stochastic approximation in both the asymptotic and non-asymptotic settings. We also show various applications of the main results, including the study of momentum-based stochastic gradient methods as well as temporal difference algorithms in reinforcement learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
171,972
1704.00917
Deriving Probability Density Functions from Probabilistic Functional Programs
The probability density function of a probability distribution is a fundamental concept in probability theory and a key ingredient in various widely used machine learning methods. However, the necessary framework for compiling probabilistic functional programs to density functions has only recently been developed. In this work, we present a density compiler for a probabilistic language with failure and both discrete and continuous distributions, and provide a proof of its soundness. The compiler greatly reduces the development effort of domain experts, which we demonstrate by solving inference problems from various scientific applications, such as modelling the global carbon cycle, using a standard Markov chain Monte Carlo framework.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
71,169
2110.12763
SSMF: Shifting Seasonal Matrix Factorization
Given taxi-ride counts information between departure and destination locations, how can we forecast their future demands? In general, given a data stream of events with seasonal patterns that innovate over time, how can we effectively and efficiently forecast future events? In this paper, we propose Shifting Seasonal Matrix Factorization approach, namely SSMF, that can adaptively learn multiple seasonal patterns (called regimes), as well as switching between them. Our proposed method has the following properties: (a) it accurately forecasts future events by detecting regime shifts in seasonal patterns as the data stream evolves; (b) it works in an online setting, i.e., processes each observation in constant time and memory; (c) it effectively realizes regime shifts without human intervention by using a lossless data compression scheme. We demonstrate that our algorithm outperforms state-of-the-art baseline methods by accurately forecasting upcoming events on three real-world data streams.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
262,956
2209.08604
An Interactive Knowledge-based Multi-objective Evolutionary Algorithm Framework for Practical Optimization Problems
Experienced users often have useful knowledge and intuition in solving real-world optimization problems. User knowledge can be formulated as inter-variable relationships to assist an optimization algorithm in finding good solutions faster. Such inter-variable interactions can also be automatically learned from high-performing solutions discovered at intermediate iterations in an optimization run - a process called innovization. These relations, if vetted by the users, can be enforced among newly generated solutions to steer the optimization algorithm towards practically promising regions in the search space. Challenges arise for large-scale problems where the number of such variable relationships may be high. This paper proposes an interactive knowledge-based evolutionary multi-objective optimization (IK-EMO) framework that extracts hidden variable-wise relationships as knowledge from evolving high-performing solutions, shares them with users to receive feedback, and applies them back to the optimization process to improve its effectiveness. The knowledge extraction process uses a systematic and elegant graph analysis method which scales well with number of variables. The working of the proposed IK-EMO is demonstrated on three large-scale real-world engineering design problems. The simplicity and elegance of the proposed knowledge extraction process and achievement of high-performing solutions quickly indicate the power of the proposed framework. The results presented should motivate further such interaction-based optimization studies for their routine use in practice.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
318,187
1810.10910
Une approche totalement instanci\'ee pour la planification HTN
Many planning techniques have been developed to allow autonomous systems to act and make decisions based on their perceptions of the environment. Among these techniques, HTN ({\it Hierarchical Task Network}) planning is one of the most used in practice. Unlike classical approaches to planning, HTN planning operates by decomposing tasks into sub-tasks until each sub-task can be achieved by an action. This hierarchical representation provides a richer description of planning problems, allows the plan search to be better guided, and provides more knowledge to the underlying algorithms. In this paper, we propose a new approach to HTN planning in which, as in classical planning, we instantiate all planning operators before starting the search process. This approach has proven its effectiveness in classical planning and is necessary for the development of effective heuristics and for encoding planning problems in other formalisms such as CSP or SAT. Instantiation is actually used by most modern planners but has never been applied in an HTN-based planning framework. We present in this article a generic instantiation algorithm that implements many simplification techniques, inspired by those used in classical planning, to reduce the complexity of the process. Finally, we present results obtained from experiments on a range of problems used in the international planning competitions with a modified version of the SHOP planner using fully instantiated problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
111,395
2108.03340
Tiny Neural Models for Seq2Seq
Semantic parsing models with applications in task oriented dialog systems require efficient sequence to sequence (seq2seq) architectures to be run on-device. To this end, we propose a projection based encoder-decoder model referred to as pQRNN-MAtt. Previous studies based on projection methods were restricted to encoder-only models, and we believe this is the first study extending them to seq2seq architectures. The resulting quantized models are less than 3.5MB in size and are well suited for on-device latency-critical applications. We show that on MTOP, a challenging multilingual semantic parsing dataset, the average model performance surpasses that of an LSTM-based seq2seq model that uses pre-trained embeddings, despite being 85x smaller. Furthermore, the model can be an effective student for distilling large pre-trained models such as T5/BERT.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
249,623
2008.00463
Structural Causal Models Are (Solvable by) Credal Networks
A structural causal model is made of endogenous (manifest) and exogenous (latent) variables. We show that endogenous observations induce linear constraints on the probabilities of the exogenous variables. This allows a causal model to be exactly mapped into a credal network. Causal inferences, such as interventions and counterfactuals, can consequently be obtained by standard algorithms for the updating of credal nets. These natively return sharp values in the identifiable case, while intervals corresponding to the exact bounds are produced for unidentifiable queries. A characterization of the causal models that allow the map above to be compactly derived is given, along with a discussion about the scalability for general models. This contribution should be regarded as a systematic approach to represent structural causal models by credal networks and hence to systematically compute causal inferences. A number of demonstrative examples are presented to clarify our methodology. Extensive experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
190,019
2305.04123
Transform-Equivariant Consistency Learning for Temporal Sentence Grounding
This paper addresses the temporal sentence grounding (TSG). Although existing methods have made decent achievements in this task, they not only severely rely on abundant video-query paired data for training, but also easily fall into dataset distribution bias. To alleviate these limitations, we introduce a novel Equivariant Consistency Regulation Learning (ECRL) framework to learn more discriminative query-related frame-wise representations for each video, in a self-supervised manner. Our motivation is that the temporal boundary of the query-guided activity should be consistently predicted under various video-level transformations. Concretely, we first design a series of spatio-temporal augmentations on both foreground and background video segments to generate a set of synthetic video samples. In particular, we devise a self-refine module to enhance the completeness and smoothness of the augmented video. Then, we present a novel self-supervised consistency loss (SSCL) applied on the original and augmented videos to capture their invariant query-related semantics by minimizing the KL-divergence between the sequence similarity of two videos and a prior Gaussian distribution of timestamp distance. Finally, a shared grounding head is introduced to predict the transform-equivariant query-guided segment boundaries for both the original and augmented videos. Extensive experiments on three challenging datasets (ActivityNet, TACoS, and Charades-STA) demonstrate both the effectiveness and efficiency of our proposed ECRL framework.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
362,647
2108.05276
Trading Complexity for Sparsity in Random Forest Explanations
Random forests have long been considered as powerful model ensembles in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting random forest can lead to more stable and reliable predictions than a single decision tree. This however comes at the cost of decreased interpretability: while decision trees are often easily interpretable, the predictions made by random forests are much more difficult to understand, as they involve a majority vote over hundreds of decision trees. In this paper, we examine different types of reasons that explain "why" an input instance is classified as positive or negative by a Boolean random forest. Notably, as an alternative to sufficient reasons taking the form of prime implicants of the random forest, we introduce majoritary reasons which are prime implicants of a strict majority of decision trees. For these different abductive explanations, the tractability of the generation problem (finding one reason) and the minimization problem (finding one shortest reason) are investigated. Experiments conducted on various datasets reveal the existence of a trade-off between runtime complexity and sparsity. Sufficient reasons - for which the identification problem is DP-complete - are slightly larger than majoritary reasons that can be generated using a simple linear-time greedy algorithm, and significantly larger than minimal majoritary reasons that can be approached using an anytime PartialMaxSAT algorithm.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
250,258
2403.19507
SineNet: Learning Temporal Dynamics in Time-Dependent Partial Differential Equations
We consider using deep neural networks to solve time-dependent partial differential equations (PDEs), where multi-scale processing is crucial for modeling complex, time-evolving dynamics. While the U-Net architecture with skip connections is commonly used by prior studies to enable multi-scale processing, our analysis shows that the need for features to evolve across layers results in temporally misaligned features in skip connections, which limits the model's performance. To address this limitation, we propose SineNet, consisting of multiple sequentially connected U-shaped network blocks, referred to as waves. In SineNet, high-resolution features are evolved progressively through multiple stages, thereby reducing the amount of misalignment within each stage. We furthermore analyze the role of skip connections in enabling both parallel and sequential processing of multi-scale information. Our method is rigorously tested on multiple PDE datasets, including the Navier-Stokes equations and shallow water equations, showcasing the advantages of our proposed approach over conventional U-Nets with a comparable parameter budget. We further demonstrate that increasing the number of waves in SineNet while maintaining the same number of parameters leads to a monotonically improved performance. The results highlight the effectiveness of SineNet and the potential of our approach in advancing the state-of-the-art in neural PDE solver design. Our code is available as part of AIRS (https://github.com/divelab/AIRS).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
442,375
2405.02961
JOSENet: A Joint Stream Embedding Network for Violence Detection in Surveillance Videos
The increasing proliferation of video surveillance cameras and the escalating demand for crime prevention have intensified interest in the task of violence detection within the research community. Compared to other action recognition tasks, violence detection in surveillance videos presents additional issues, such as the wide variety of real fight scenes. Unfortunately, existing datasets for violence detection are relatively small in comparison to those for other action recognition tasks. Moreover, surveillance footage often features different individuals in each video and varying backgrounds for each camera. In addition, fast detection of violent actions in real-life surveillance videos is crucial to prevent adverse outcomes, thus necessitating models that are optimized for reduced memory usage and computational costs. These challenges complicate the application of traditional action recognition methods. To tackle all these issues, we introduce JOSENet, a novel self-supervised framework that provides outstanding performance for violence detection in surveillance videos. The proposed model processes two spatiotemporal video streams, namely RGB frames and optical flows, and incorporates a new regularized self-supervised learning approach for videos. JOSENet demonstrates improved performance compared to state-of-the-art methods, while utilizing only one-fourth of the frames per video segment and operating at a reduced frame rate. The source code is available at https://github.com/ispamm/JOSENet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
451,994
2311.01907
BoschAI @ PLABA 2023: Leveraging Edit Operations in End-to-End Neural Sentence Simplification
Automatic simplification can help laypeople to comprehend complex scientific text. Language models are frequently applied to this task by translating from complex to simple language. In this paper, we describe our system based on Llama 2, which ranked first in the PLABA shared task addressing the simplification of biomedical text. We find that the large portion of shared tokens between input and output leads to weak training signals and conservatively editing models. To mitigate these issues, we propose sentence-level and token-level loss weights. They give higher weight to modified tokens, indicated by edit distance and edit operations, respectively. We conduct an empirical evaluation on the PLABA dataset and find that both approaches lead to simplifications closer to those created by human annotators (+1.8% / +3.5% SARI), simpler language (-1 / -1.1 FKGL) and more edits (1.6x / 1.8x edit distance) compared to the same model fine-tuned with standard cross entropy. We furthermore show that the hyperparameter $\lambda$ in token-level loss weights can be used to control the edit distance and the simplicity level (FKGL).
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
405,225
2405.17506
Subspace Node Pruning
Efficiency of neural network inference is undeniably important in a time where commercial use of AI models increases daily. Node pruning is the art of removing computational units such as neurons, filters, attention heads, or even entire layers to significantly reduce inference time while retaining network performance. In this work, we propose the projection of unit activations to an orthogonal subspace in which there is no redundant activity and within which we may prune nodes while simultaneously recovering the impact of lost units via linear least squares. We identify that, for effective node pruning, this subspace must be constructed using a triangular transformation matrix, a transformation which is equivalent to an unnormalized Gram-Schmidt orthogonalization. We furthermore show that the order in which units are orthogonalized can be optimised to maximally reduce node activations in our subspace and thereby form a more optimal ranking of nodes. Finally, we leverage these orthogonal subspaces to automatically determine layer-wise pruning ratios based upon the relative scale of node activations in our subspace, equivalent to cumulative variance. Our proposed method reaches state of the art when pruning ImageNet trained VGG-16 and rivals more complex state of the art methods when pruning ResNet-50 networks across a range of pruning ratios.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
457,975
2112.08791
Beamspace MIMO for Satellite Swarms
Systems of small distributed satellites in low Earth orbit (LEO) transmitting cooperatively to a multiple antenna ground station (GS) are investigated. These satellite swarms have the benefit of much higher spatial separation in the transmit antennas than traditional big satellites with antenna arrays, promising a massive increase in spectral efficiency. However, this would require instantaneous perfect channel state information (CSI) and strong cooperation between satellites. In practice, orbital velocities around 7.5 km/s lead to very short channel coherence times on the order of fractions of the inter-satellite propagation delay, invalidating these assumptions. In this paper, we propose a distributed linear precoding scheme and a GS equalizer relying on local position information. In particular, each satellite only requires information about its own position and that of the GS, while the GS has complete positional information. Due to the deterministic nature of satellite movement this information is easily obtained and no inter-satellite information exchange is required during transmission. Based on the underlying geometrical channel approximation, the optimal inter-satellite distance is obtained analytically. Numerical evaluations show that the proposed scheme is, on average, within 99.8% of the maximum achievable rate for instantaneous CSI and perfect cooperation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
271,939
2403.01318
High-Dimensional Tail Index Regression: with An Application to Text Analyses of Viral Posts in Social Media
Motivated by the empirical observation of power-law distributions in the credits (e.g., "likes") of viral social media posts, we introduce a high-dimensional tail index regression model and propose methods for estimation and inference of its parameters. First, we present a regularized estimator, establish its consistency, and derive its convergence rate. Second, we introduce a debiasing technique for the regularized estimator to facilitate inference and prove its asymptotic normality. Third, we extend our approach to handle large-scale online streaming data using stochastic gradient descent. Simulation studies corroborate our theoretical findings. We apply these methods to the text analysis of viral posts on X (formerly Twitter) related to LGBTQ+ topics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
434,359
0910.0650
Capacity Region of a State Dependent Degraded Broadcast Channel with Noncausal Transmitter CSI
This paper has been withdrawn due to a mistake in the previous version.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
4,625
2001.01049
Optimal Binary Linear Codes from Maximal Arcs
The binary Hamming codes with parameters $[2^m-1, 2^m-1-m, 3]$ are perfect. Their extended codes have parameters $[2^m, 2^m-1-m, 4]$ and are distance-optimal. The first objective of this paper is to construct a class of binary linear codes with parameters $[2^{m+s}+2^s-2^m,2^{m+s}+2^s-2^m-2m-2,4]$, which have better information rates than the class of extended binary Hamming codes, and are also distance-optimal. The second objective is to construct a class of distance-optimal binary codes with parameters $[2^m+2, 2^m-2m, 6]$. Both classes of binary linear codes have new parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
159,388
1909.10400
Robot Navigation in Crowds by Graph Convolutional Networks with Attention Learned from Human Gaze
Safe and efficient crowd navigation for mobile robot is a crucial yet challenging task. Previous work has shown the power of deep reinforcement learning frameworks to train efficient policies. However, their performance deteriorates when the crowd size grows. We suggest that this can be addressed by enabling the network to identify and pay attention to the humans in the crowd that are most critical to navigation. We propose a novel network utilizing a graph representation to learn the policy. We first train a graph convolutional network based on human gaze data that accurately predicts human attention to different agents in the crowd. Then we incorporate the learned attention into a graph-based reinforcement learning architecture. The proposed attention mechanism enables the assignment of meaningful weightings to the neighbors of the robot, and has the additional benefit of interpretability. Experiments on real-world dense pedestrian datasets with various crowd sizes demonstrate that our model outperforms state-of-the-art methods by 18.4% in task accomplishment and by 16.4% in time efficiency.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
146,534
2212.07126
Explainability of Text Processing and Retrieval Methods: A Critical Survey
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
336,313
2110.08776
Self-Supervised U-Net for Segmenting Flat and Sessile Polyps
Colorectal Cancer (CRC) poses a great risk to public health. It is the third most common cause of cancer in the US. Development of colorectal polyps is one of the earliest signs of cancer. Early detection and resection of polyps can greatly increase the survival rate to 90%. Manual inspection can cause misdetections because polyps vary in color, shape, size, texture and appearance. To this end, Computer-Aided Diagnosis systems (CADx) have been proposed that detect polyps by processing colonoscopic videos. The system acts as a secondary check to help clinicians reduce misdetections so that polyps may be resected before they transform to cancer. Despite the prominence of CADx solutions, the miss rate of polyps remains between 6% and 27%. Furthermore, sessile and flat polyps, which have a diameter less than 10 mm, are more likely to be undetected. Convolutional Neural Networks (CNN) have shown promising results in polyp segmentation. However, all of these works take a supervised approach and are limited by the size of the dataset. It was observed that smaller datasets reduce the segmentation accuracy of ResUNet++. We train a U-Net to inpaint randomly dropped-out pixels in the image as a proxy task. The dataset we use for pre-training is the Kvasir-SEG dataset. This is followed by supervised training on the limited Kvasir-Sessile dataset. Our experimental results demonstrate that with a limited annotated dataset and a larger unlabeled dataset, the self-supervised approach is a better alternative than the fully supervised approach. Specifically, our self-supervised U-Net performs better than five segmentation models which were trained in a supervised manner on the Kvasir-Sessile dataset.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
261,554
2212.04917
TRBLLmaker -- Transformer Reads Between Lyrics Lines maker
Even for us, it can be challenging to comprehend the meaning of songs. As part of this project, we explore the process of generating the meaning of songs. Despite the widespread use of text-to-text models, few attempts have been made to achieve a similar objective. Songs are primarily studied in the context of sentiment analysis. This involves identifying opinions and emotions in texts, evaluating them as positive or negative, and utilizing these evaluations to make music recommendations. In this paper, we present a generative model that offers implicit meanings for several lines of a song. Our model uses a decoder Transformer architecture GPT-2, where the input is the lyrics of a song. Furthermore, we compared the performance of this architecture with that of the encoder-decoder Transformer architecture of the T5 model. We also examined the effect of different prompt types with the option of appending additional information, such as the name of the artist and the title of the song. Moreover, we tested different decoding methods with different training parameters and evaluated our results using ROUGE. In order to build our dataset, we utilized the 'Genius' API, which allowed us to acquire the lyrics of songs and their explanations, as well as their rich metadata.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
335,620
1711.08513
Calibration for the (Computationally-Identifiable) Masses
As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern. We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of the class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group. We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
85,222
2204.04844
HFL at SemEval-2022 Task 8: A Linguistics-inspired Regression Model with Data Augmentation for Multilingual News Similarity
This paper describes our system designed for SemEval-2022 Task 8: Multilingual News Article Similarity. We proposed a linguistics-inspired model trained with a few task-specific strategies. The main techniques of our system are: 1) data augmentation, 2) multi-label loss, 3) adapted R-Drop, 4) samples reconstruction with the head-tail combination. We also present a brief analysis of some negative methods like two-tower architecture. Our system ranked 1st on the leaderboard while achieving a Pearson's Correlation Coefficient of 0.818 on the official evaluation set.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
290,810
2212.01610
Exploring Stochastic Autoregressive Image Modeling for Visual Representation
Autoregressive language modeling (ALM) has been successfully used in self-supervised pre-training in natural language processing (NLP). However, this paradigm has not achieved results comparable to other self-supervised approaches in computer vision (e.g., contrastive learning, masked image modeling). In this paper, we try to find the reason why autoregressive modeling does not work well on vision tasks. To tackle this problem, we fully analyze the limitations of visual autoregressive methods and propose a novel stochastic autoregressive image modeling (named SAIM) with two simple designs. First, we employ a stochastic permutation strategy to generate effective and robust image context, which is critical for vision tasks. Second, we create a parallel encoder-decoder training process in which the encoder serves a similar role to the standard vision transformer, focusing on learning the whole contextual information, while the decoder predicts the content of the current position, so that the encoder and decoder can reinforce each other. By introducing stochastic prediction and the parallel encoder-decoder, SAIM significantly improves the performance of autoregressive image modeling. Our method achieves the best accuracy (83.9%) on the vanilla ViT-Base model among methods using only ImageNet-1K data. Transfer performance on downstream tasks also shows that our model achieves competitive performance.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
334,501
2310.10893
Active Learning Framework for Cost-Effective TCR-Epitope Binding Affinity Prediction
T cell receptors (TCRs) are critical components of adaptive immune systems, responsible for responding to threats by recognizing epitope sequences presented on host cell surface. Computational prediction of binding affinity between TCRs and epitope sequences using machine/deep learning has attracted intense attention recently. However, its success is hindered by the lack of large collections of annotated TCR-epitope pairs. Annotating their binding affinity requires expensive and time-consuming wet-lab evaluation. To reduce annotation cost, we present ActiveTCR, a framework that incorporates active learning and TCR-epitope binding affinity prediction models. Starting with a small set of labeled training pairs, ActiveTCR iteratively searches for unlabeled TCR-epitope pairs that are ''worth'' for annotation. It aims to maximize performance gains while minimizing the cost of annotation. We compared four query strategies with a random sampling baseline and demonstrated that ActiveTCR reduces annotation costs by approximately 40%. Furthermore, we showed that providing ground truth labels of TCR-epitope pairs to query strategies can help identify and reduce more than 40% redundancy among already annotated pairs without compromising model performance, enabling users to train equally powerful prediction models with less training data. Our work is the first systematic investigation of data optimization for TCR-epitope binding affinity prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
400,422
2303.03373
Detecting Human-Object Contact in Images
Humans constantly contact objects to move and perform tasks. Thus, detecting human-object contact is important for building human-centered artificial intelligence. However, there exists no robust method to detect contact between the body and the scene from an image, and there exists no dataset to learn such a detector. We fill this gap with HOT ("Human-Object conTact"), a new dataset of human-object contacts for images. To build HOT, we use two data sources: (1) We use the PROX dataset of 3D human meshes moving in 3D scenes, and automatically annotate 2D image areas for contact via 3D mesh proximity and projection. (2) We use the V-COCO, HAKE and Watch-n-Patch datasets, and ask trained annotators to draw polygons for the 2D image areas where contact takes place. We also annotate the involved body part of the human body. We use our HOT dataset to train a new contact detector, which takes a single color image as input, and outputs 2D contact heatmaps as well as the body-part labels that are in contact. This is a new and challenging task that extends current foot-ground or hand-object contact detectors to the full generality of the whole body. The detector uses a part-attention branch to guide contact estimation through the context of the surrounding body parts and scene. We evaluate our detector extensively, and quantitative results show that our model outperforms baselines, and that all components contribute to better performance. Results on images from an online repository show reasonable detections and generalizability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
349,703
1710.08601
Homophily and minority size explain perception biases in social networks
People's perceptions about the size of minority groups in social networks can be biased, often showing systematic over- or underestimation. These social perception biases are often attributed to biased cognitive or motivational processes. Here we show that both over- and underestimation of the size of a minority group can emerge solely from structural properties of social networks. Using a generative network model, we show analytically that these biases depend on the level of homophily and its asymmetric nature, as well as on the size of the minority group. Our model predictions correspond well with empirical data from a cross-cultural survey and with numerical calculations on six real-world networks. We also show under what circumstances individuals can reduce their biases by relying on perceptions of their neighbors. This work advances our understanding of the impact of network structure on social perception biases and offers a quantitative approach for addressing related issues in society.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
83,102
2312.01345
Introducing Modelling, Analysis and Control of Three-Phase Electrical Systems Using Geometric Algebra
State-of-the-art techniques for modeling, analysis and control of three-phase electrical systems belong to the real-valued multi-input/multi-output (MIMO) domain, or to the complex-valued nonlinear single-input/single-output (SISO) domain. In order to complement both domains while simplifying complexity and offering new analysis and design perspectives, this paper introduces the application of geometric algebra (GA) principles to the modeling, analysis and control of three-phase electrical systems. The key contribution for the modeling part is the identification of the transformation that allows transferring real-valued linear MIMO systems into GA-valued linear SISO representations (with independence of having a balanced or unbalanced system). Closed-loop stability analysis in the new space is addressed by using intrinsic properties of GA. In addition, a recipe for designing stabilizing and decoupling GA-valued controllers is provided. Numerical examples illustrate key developments and experiments corroborate the main findings.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
412,414
2401.14923
Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks
Many important behavior changes are frictionful; they require individuals to expend effort over a long period with little immediate gratification. Here, an artificial intelligence (AI) agent can provide personalized interventions to help individuals stick to their goals. In these settings, the AI agent must personalize rapidly (before the individual disengages) and interpretably, to help us understand the behavioral interventions. In this paper, we introduce Behavior Model Reinforcement Learning (BMRL), a framework in which an AI agent intervenes on the parameters of a Markov Decision Process (MDP) belonging to a boundedly rational human agent. Our formulation of the human decision-maker as a planning agent allows us to attribute undesirable human policies (ones that do not lead to the goal) to their maladapted MDP parameters, such as an extremely low discount factor. Furthermore, we propose a class of tractable human models that captures fundamental behaviors in frictionful tasks. Introducing a notion of MDP equivalence specific to BMRL, we theoretically and empirically show that AI planning with our human models can lead to helpful policies on a wide range of more complex, ground-truth humans.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
424,262
2104.06470
Joint Matrix Completion and Compressed Sensing for State Estimation in Low-observable Distribution System
Limited measurement availability at the distribution grid presents challenges for state estimation and situational awareness. This paper combines the advantages of two sparsity-based state estimation approaches (matrix completion and compressive sensing) that have been proposed recently to address the challenge of unobservability. The proposed approach exploits both the low rank structure and a suitable transform domain representation to leverage the correlation structure of the spatio-temporal data matrix while incorporating the power-flow constraints of the distribution grid. Simulations are carried out on three phase unbalanced IEEE 37 test system to verify the effectiveness of the proposed approach. The performance results reveal - (1) the superiority over traditional matrix completion and (2) very low state estimation errors for high compression ratios representing very low observability.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
230,081
2310.04948
TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting
The past decade has witnessed significant advances in time series modeling with deep learning. While achieving state-of-the-art results, the best-performing architectures vary highly across applications and domains. Meanwhile, for natural language processing, the Generative Pre-trained Transformer (GPT) has demonstrated impressive performance via training one general-purpose model across various textual datasets. It is intriguing to explore whether GPT-type architectures can be effective for time series, capturing the intrinsic dynamic attributes and leading to significant accuracy improvements. In this paper, we propose a novel framework, TEMPO, that can effectively learn time series representations. We focus on utilizing two essential inductive biases of the time series task for pre-trained models: (i) decomposition of the complex interaction between trend, seasonal and residual components; and (ii) introducing the design of prompts to facilitate distribution adaptation in different types of time series. TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains. Our experiments demonstrate the superior performance of TEMPO over state-of-the-art methods on zero shot setting for a number of time series benchmark datasets. This performance gain is observed not only in scenarios involving previously unseen datasets but also in scenarios with multi-modal inputs. This compelling finding highlights TEMPO's potential to constitute a foundational model-building framework.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
397,899
2409.09704
AlpaPICO: Extraction of PICO Frames from Clinical Trial Documents Using LLMs
In recent years, there has been a surge in the publication of clinical trial reports, making it challenging to conduct systematic reviews. Automatically extracting Population, Intervention, Comparator, and Outcome (PICO) from clinical trial studies can alleviate the traditionally time-consuming process of manually scrutinizing systematic reviews. Existing approaches of PICO frame extraction involves supervised approach that relies on the existence of manually annotated data points in the form of BIO label tagging. Recent approaches, such as In-Context Learning (ICL), which has been shown to be effective for a number of downstream NLP tasks, require the use of labeled examples. In this work, we adopt ICL strategy by employing the pretrained knowledge of Large Language Models (LLMs), gathered during the pretraining phase of an LLM, to automatically extract the PICO-related terminologies from clinical trial documents in unsupervised set up to bypass the availability of large number of annotated data instances. Additionally, to showcase the highest effectiveness of LLM in oracle scenario where large number of annotated samples are available, we adopt the instruction tuning strategy by employing Low Rank Adaptation (LORA) to conduct the training of gigantic model in low resource environment for the PICO frame extraction task. Our empirical results show that our proposed ICL-based framework produces comparable results on all the version of EBM-NLP datasets and the proposed instruction tuned version of our framework produces state-of-the-art results on all the different EBM-NLP datasets. Our project is available at \url{https://github.com/shrimonmuke0202/AlpaPICO.git}.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
488,434
2309.15979
Clinical Trial Recommendations Using Semantics-Based Inductive Inference and Knowledge Graph Embeddings
Designing a new clinical trial entails many decisions, such as defining a cohort and setting the study objectives to name a few, and therefore can benefit from recommendations based on exhaustive mining of past clinical trial records. Here, we propose a novel recommendation methodology, based on neural embeddings trained on a first-of-a-kind knowledge graph of clinical trials. We addressed several important research questions in this context, including designing a knowledge graph (KG) for clinical trial data, effectiveness of various KG embedding (KGE) methods for it, a novel inductive inference using KGE, and its use in generating recommendations for clinical trial design. We used publicly available data from clinicaltrials.gov for the study. Results show that our recommendations approach achieves relevance scores of 70%-83%, measured as the text similarity to actual clinical trial elements, and the most relevant recommendation can be found near the top of list. Our study also suggests potential improvement in training KGE using node semantics.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
395,172