Dataset schema (column name, type, value range):
- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0 to 541k
Each record below lists: id, title, abstract, the eighteen category booleans in the order above, then __index_level_0__.
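The schema above describes a multi-label layout: each record carries one boolean column per arXiv category. A minimal sketch of how such records could be held and filtered in pandas, using the ids and labels of the first two sample records below (the names `rows` and `lg_papers` are illustrative, not part of the dataset):

```python
import pandas as pd

# Two sample records from the dump, keeping only a few of the
# eighteen boolean category columns for brevity.
rows = [
    {"id": "2212.01282", "cs.SD": True, "cs.LG": True, "cs.CL": True},
    {"id": "2303.07608", "cs.LG": True},
]

# Labels absent from a dict become NaN; fill them with False to
# restore the dense boolean layout the schema describes.
df = pd.DataFrame(rows).fillna(False)

# Multi-label selection: every paper tagged cs.LG.
lg_papers = df[df["cs.LG"].astype(bool)]
print(list(lg_papers["id"]))  # both sample records carry cs.LG
```

Because the category columns are independent booleans rather than a single class column, a record can match several category filters at once.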
2212.01282
CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models
Self-supervised learning (SSL) is a powerful technique for learning representations from unlabeled data. Transformer-based models such as HuBERT, which consist of a feature extractor and transformer layers, are leading the field in the speech domain. SSL models are fine-tuned on a wide range of downstream tasks, which involves re-training the majority of the model for each task. Previous studies have proposed applying adapters, small lightweight modules commonly used in Natural Language Processing (NLP), to adapt pre-trained models to new tasks. However, such efficient tuning techniques only provide adaptation at the transformer layers and fail to adapt the feature extractor. In this paper, we propose CHAPTER, an efficient tuning method specifically designed for SSL speech models that applies CNN adapters at the feature extractor. With this method, we fine-tune fewer than 5% of parameters per task compared to full fine-tuning and achieve better and more stable performance. We empirically find that adding CNN adapters to the feature extractor helps adaptation on emotion and speaker tasks. For instance, the accuracy of SID improves from 87.71 to 91.56, and the accuracy of ER improves by 5%.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
334,373
2303.07608
On the Implicit Geometry of Cross-Entropy Parameterizations for Label-Imbalanced Data
Various logit-adjusted parameterizations of the cross-entropy (CE) loss have been proposed as alternatives to weighted CE for training large models on label-imbalanced data far beyond the zero train error regime. The driving force behind those designs has been the theory of implicit bias, which for linear(ized) models, explains why they successfully induce bias on the optimization path towards solutions that favor minorities. Aiming to extend this theory to non-linear models, we investigate the implicit geometry of classifiers and embeddings that are learned by different CE parameterizations. Our main result characterizes the global minimizers of a non-convex cost-sensitive SVM classifier for the unconstrained features model, which serves as an abstraction of deep nets. We derive closed-form formulas for the angles and norms of classifiers and embeddings as a function of the number of classes, the imbalance and the minority ratios, and the loss hyperparameters. Using these, we show that logit-adjusted parameterizations can be appropriately tuned to learn symmetric geometries irrespective of the imbalance ratio. We complement our analysis with experiments and an empirical study of convergence accuracy in deep-nets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
351,316
2109.03941
Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference
While recent research on natural language inference has considerably benefited from large annotated datasets, the amount of inference-related knowledge (including commonsense) provided in the annotated data is still rather limited. There have been two lines of approaches that can be used to further address the limitation: (1) unsupervised pretraining can leverage knowledge in much larger unstructured text data; (2) structured (often human-curated) knowledge has started to be considered in neural-network-based models for NLI. An immediate question is whether these two approaches complement each other, or how to develop models that can bring together their advantages. In this paper, we propose models that leverage structured knowledge in different components of pre-trained models. Our results show that the proposed models perform better than previous BERT-based state-of-the-art models. Although our models are proposed for NLI, they can be easily extended to other sentence or sentence-pair classification problems.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
254,222
2208.04389
Share with Me: A Study on a Social Robot Collecting Mental Health Data
Social robots have been used to assist with mental well-being in various ways, such as helping children with autism improve their social skills and executive functioning, including joint attention and bodily awareness. They are also used to help older adults by reducing feelings of isolation and loneliness, as well as to support the mental well-being of teens and children. However, existing work in this sphere has only shown support for mental health through social robots that respond interactively to human activity to help people learn relevant skills. We hypothesize that humans can also benefit from social robots for mental well-being by releasing or sharing their mental health data with the robots. In this paper, we present a human-robot interaction (HRI) study to evaluate this hypothesis. During the five-day study, a total of fifty-five (n=55) participants shared their in-the-moment mood and stress levels with a social robot. A majority of the results were positive, indicating that this direction is worth pursuing and that social robots have the potential to broadly support mental well-being.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
312,095
1605.05173
Early Stopping for Interleaver Recovering of Turbo Codes
Parameter recovery of channel codes is important in applications such as cognitive radio. For a turbo code, the main task is to recover the interleaver. The existing optimal algorithm recovers the interleaver parameters incrementally, one by one, and continues to the end even if it has failed halfway. This wastes computation, and the incorrectly recovered parameters badly deteriorate turbo decoding performance. To address these drawbacks, this paper proposes an early stopping method for the algorithm. The thresholds needed by the method are set through theoretical analysis. Simulations show that the proposed method stops the algorithm promptly after it has failed, while no significant degradation in the probability of correct recovery is observed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,966
1410.7851
Efficient optimisation of structures using tabu search
This paper presents a novel approach to the optimisation of structures using a Tabu search (TS) method. TS is a metaheuristic which is used to guide local search methods towards a globally optimal solution by using flexible memory cycles of differing time spans. Results are presented for the well-established ten-bar truss problem and compared to results published in the literature. In the first example, a truss is optimised to minimise mass and the results are compared to those obtained using an alternative TS implementation. In the second example, the problem has multiple objectives that are compounded into a single objective function value using game theory. In general, the results demonstrate that the TS method is capable of solving structural optimisation problems at least as efficiently as other numerical optimisation approaches.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
37,106
2401.06923
Minimally Supervised Learning using Topological Projections in Self-Organizing Maps
Parameter prediction is essential for many applications, facilitating insightful interpretation and decision-making. However, in many real life domains, such as power systems, medicine, and engineering, it can be very expensive to acquire ground truth labels for certain datasets as they may require extensive and expensive laboratory testing. In this work, we introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs), which significantly reduces the required number of labeled data points to perform parameter prediction, effectively exploiting information contained in large unlabeled datasets. Our proposed method first trains SOMs on unlabeled data and then a minimal number of available labeled data points are assigned to key best matching units (BMU). The values estimated for newly-encountered data points are computed utilizing the average of the $n$ closest labeled data points in the SOM's U-matrix in tandem with a topological shortest path distance calculation scheme. Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques, including linear and polynomial regression, Gaussian process regression, K-nearest neighbors, as well as deep neural network models and related clustering schemes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
421,346
2409.04723
NapTune: Efficient Model Tuning for Mood Classification using Previous Night's Sleep Measures along with Wearable Time-series
Sleep is known to be a key factor in emotional regulation and overall mental health. In this study, we explore the integration of sleep measures from the previous night into wearable-based mood recognition. To this end, we propose NapTune, a novel prompt-tuning framework that utilizes sleep-related measures as additional inputs to a frozen pre-trained wearable time-series encoder by adding and training lightweight prompt parameters to each Transformer layer. Through rigorous empirical evaluation, we demonstrate that the inclusion of sleep data using NapTune not only improves mood recognition performance across different wearable time-series namely ECG, PPG, and EDA, but also makes it more sample-efficient. Our method demonstrates significant improvements over the best baselines and unimodal variants. Furthermore, we analyze the impact of adding sleep-related measures on recognizing different moods as well as the influence of individual sleep-related measures.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
486,478
1612.01215
Do What I Want, Not What I Did: Imitation of Skills by Planning Sequences of Actions
We propose a learning-from-demonstration approach for grounding actions from expert data and an algorithm for using these actions to perform a task in new environments. Our approach is based on an application of sampling-based motion planning to search through the tree of discrete, high-level actions constructed from a symbolic representation of a task. Recursive sampling-based planning is used to explore the space of possible continuous-space instantiations of these actions. We demonstrate the utility of our approach with a magnetic structure assembly task, showing that the robot can intelligently select a sequence of actions in different parts of the workspace and in the presence of obstacles. This approach can better adapt to new environments by selecting the correct high-level actions for the particular environment while taking human preferences into account.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
65,038
2306.00985
Using generative AI to investigate medical imagery models and datasets
AI models have shown promise in many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed in order to increase the trust in AI-based models, and could enable novel scientific discovery by uncovering signals in the data that are not yet known to experts. In this paper, we present a method for automatic visual explanations leveraging team-based expertise by generating hypotheses of what visual signals in the images are correlated with the task. We propose the following 4 steps: (i) Train a classifier to perform a given task (ii) Train a classifier guided StyleGAN-based image generator (StylEx) (iii) Automatically detect and visualize the top visual attributes that the classifier is sensitive towards (iv) Formulate hypotheses for the underlying mechanisms, to stimulate future research. Specifically, we present the discovered attributes to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health. We demonstrate results on eight prediction tasks across three medical imaging modalities: retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples of attributes that capture clinically known features, confounders that arise from factors beyond physiological mechanisms, and reveal a number of physiologically plausible novel attributes. Our approach has the potential to enable researchers to better understand, improve their assessment, and extract new knowledge from AI-based models. Importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real world nature of healthcare delivery and socio-cultural factors. Finally, we intend to release code to enable researchers to train their own StylEx models and analyze their predictive tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
370,247
2309.05655
Dynamic Handover: Throw and Catch with Bimanual Hands
Humans throw and catch objects all the time. However, this seemingly common skill poses many challenges for robots: they need to operate such dynamic actions at high speed, collaborate precisely, and interact with diverse objects. In this paper, we design a system with two multi-finger hands attached to robot arms to solve this problem. We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy on real robots. To overcome the Sim2Real gap, we provide multiple novel algorithm designs, including learning a trajectory prediction model for the object. Such a model gives the robot catcher a real-time estimate of where the object is heading so that it can react accordingly. We conduct our experiments with multiple objects in the real-world system and show significant improvements over multiple baselines. Our project page is available at \url{https://binghao-huang.github.io/dynamic_handover/}.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
391,155
physics/0703126
The Laplace-Jaynes approach to induction
An approach to induction is presented, based on the idea of analysing the context of a given problem into `circumstances'. This approach, fully Bayesian in form and meaning, provides a complement or in some cases an alternative to that based on de Finetti's representation theorem and on the notion of infinite exchangeability. In particular, it gives an alternative interpretation of those formulae that apparently involve `unknown probabilities' or `propensities'. Various advantages and applications of the presented approach are discussed, especially in comparison to that based on exchangeability. Generalisations are also discussed.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,833
1808.00934
Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features
We study the statistical and computational aspects of kernel principal component analysis using random Fourier features and show that under mild assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve $O(1/\epsilon^2)$ sample complexity. Furthermore, we give a memory-efficient streaming algorithm based on the classical Oja's algorithm that achieves this rate.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
104,483
2110.11001
Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
An essential factor in achieving high performance in face recognition systems is the quality of their samples. Since these systems are involved in daily life, there is a strong need to make face recognition processes understandable for humans. In this work, we introduce the concept of pixel-level face image quality, which determines the utility of pixels in a face image for recognition. We propose a training-free approach to assess the pixel-level qualities of a face image given an arbitrary face recognition network. To achieve this, a model-specific quality value of the input image is estimated and used to build a sample-specific quality regression model. Based on this model, quality-based gradients are back-propagated and converted into pixel-level quality estimates. In the experiments, we qualitatively and quantitatively investigated the meaningfulness of our proposed pixel-level qualities based on real and artificial disturbances and by comparing the explanation maps on faces non-compliant with the ICAO standards. In all scenarios, the results demonstrate that the proposed solution produces meaningful pixel-level qualities, enhancing the interpretability of the complete face image quality. The code is publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
262,329
2403.17907
Multi-Agent Resilient Consensus under Intermittent Faulty and Malicious Transmissions (Extended Version)
In this work, we consider the consensus problem in which legitimate agents share their values over an undirected communication network in the presence of malicious or faulty agents. Different from the previous works, we characterize the conditions that generalize to several scenarios such as intermittent faulty or malicious transmissions, based on trust observations. As the standard trust aggregation approach based on a constant threshold fails to distinguish intermittent malicious/faulty activity, we propose a new detection algorithm utilizing time-varying thresholds and the random trust values available to legitimate agents. Under these conditions, legitimate agents almost surely determine their trusted neighborhood correctly with geometrically decaying misclassification probabilities. We further prove that the consensus process converges almost surely even in the presence of malicious agents. We also derive the probabilistic bounds on the deviation from the nominal consensus value that would have been achieved with no malicious agents in the system. Numerical results verify the convergence among agents and exemplify the deviation under different scenarios.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
441,686
2208.12996
A Federated Learning-enabled Smart Street Light Monitoring Application: Benefits and Future Challenges
Data-enabled cities have recently been accelerated and enhanced with automated learning for improved Smart City applications. In the context of an Internet of Things (IoT) ecosystem, data communication is frequently costly, inefficient, insecure, and hard to scale. Federated Learning (FL) plays a pivotal role in providing privacy-preserving and communication-efficient Machine Learning (ML) frameworks. In this paper we evaluate the feasibility of FL in the context of a Smart City street light monitoring application. FL is evaluated against benchmarks of centralised and (fully) personalised machine learning techniques for the task of classifying lamppost operation. Incorporating FL in such a scenario shows minimal performance reduction on the classification task but large improvements in communication cost and privacy preservation. These outcomes strengthen FL's viability and potential for IoT applications.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
314,915
2501.09361
Strategic Base Representation Learning via Feature Augmentations for Few-Shot Class Incremental Learning
Few-shot class incremental learning requires the model to learn new classes while retaining knowledge of previously learned classes from a small number of training instances. Existing frameworks typically freeze the parameters of the previously learned classes during the incorporation of new classes. However, this approach often results in suboptimal class separation of previously learned classes, leading to overlap between old and new classes. Consequently, performance on old classes degrades as new classes are learned. To address these challenges, we propose a novel feature-augmentation-driven contrastive learning framework designed to enhance the separation of previously learned classes to accommodate new classes. Our approach involves augmenting feature vectors and assigning proxy labels to these vectors. This strategy expands the feature space, ensuring seamless integration of new classes within the expanded space. Additionally, we employ a self-supervised contrastive loss to improve the separation between previous classes. We validate our framework through experiments on three FSCIL benchmark datasets: CIFAR100, miniImageNet, and CUB200. The results demonstrate that our Feature Augmentation driven Contrastive Learning framework significantly outperforms other approaches, achieving state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,118
2202.08505
Schedule-based Analysis of Transmission Risk in Public Transportation Systems
Airborne diseases, including COVID-19, raise the question of transmission risk in public transportation systems. However, quantitative analysis of the effectiveness of transmission risk mitigation methods in public transportation is lacking. The paper develops a transmission risk modeling framework based on the Wells-Riley model using as inputs transit operating characteristics, schedule, Origin-Destination (OD) demand, and virus characteristics. The model is sensitive to various factors that operators can control, as well as external factors that may be subject of broader policy decisions (e.g. mask wearing). The model is utilized to assess transmission risk as a function of OD flows, planned operations, and factors such as mask-wearing, ventilation, and infection rates. Using actual data from the Massachusetts Bay Transportation Authority (MBTA) Red Line, the paper explores the transmission risk under different infection rate scenarios, both in magnitude and spatial characteristics. The paper assesses the combined impact from viral load related factors and passenger load factors. Increasing frequency can mitigate transmission risk, but cannot fully compensate for increases in infection rates. Imbalanced passenger distribution on different cars of a train is shown to increase the overall system-wide infection probability. Spatial infection rate patterns should also be taken into account during policymaking as it is shown to impact transmission risk. For lines with branches, demand distribution among the branches is important and headway allocation adjustment among branches to balance the load on trains to different branches can help reduce risk.
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
280,900
1011.6220
Multimodal Biometric Systems - Study to Improve Accuracy and Performance
Biometrics is the science and technology of measuring and analyzing biological data of the human body, extracting a feature set from the acquired data, and comparing this set against the template set in the database. Experimental studies show that unimodal biometric systems have many disadvantages regarding performance and accuracy. Multimodal biometric systems perform better than unimodal systems and are increasingly popular, even though they are more complex. We examine the accuracy and performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-The-Shelf (COTS) products. Here we discuss fingerprint and face biometric systems and the decision and fusion techniques used in these systems. We also discuss their advantage over unimodal biometric systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
8,360
2205.13255
Active Labeling: Streaming Stochastic Gradients
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to consider iteratively input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples. We illustrate our technique in depth for robust regression.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
298,865
2410.09192
Long Range Named Entity Recognition for Marathi Documents
The demand for sophisticated natural language processing (NLP) methods, particularly Named Entity Recognition (NER), has increased due to the exponential growth of Marathi-language digital content. In particular, NER is essential for recognizing distant entities and for arranging and understanding unstructured Marathi text data. With an emphasis on managing long-range entities, this paper offers a comprehensive analysis of current NER techniques designed for Marathi documents. It dives into current practices and investigates the BERT transformer model's potential for long-range Marathi NER. Along with analyzing the effectiveness of earlier methods, the paper draws comparisons with NER work in English and suggests adaptation strategies for Marathi. The paper discusses the difficulties caused by Marathi's particular linguistic traits and contextual subtleties while acknowledging NER's critical role in NLP. To conclude, this work is a major step forward in improving Marathi NER techniques, with potential wider applications across a range of NLP tasks and domains.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
497,476
2107.11046
Learning the structure of wind: A data-driven nonlocal turbulence model for the atmospheric boundary layer
We develop a novel data-driven approach to modeling the atmospheric boundary layer. This approach leads to a nonlocal, anisotropic synthetic turbulence model which we refer to as the deep rapid distortion (DRD) model. Our approach relies on an operator regression problem which characterizes the best fitting candidate in a general family of nonlocal covariance kernels parameterized in part by a neural network. This family of covariance kernels is expressed in Fourier space and is obtained from approximate solutions to the Navier--Stokes equations at very high Reynolds numbers. Each member of the family incorporates important physical properties such as mass conservation and a realistic energy cascade. The DRD model can be calibrated with noisy data from field experiments. After calibration, the model can be used to generate synthetic turbulent velocity fields. To this end, we provide a new numerical method based on domain decomposition which delivers scalable, memory-efficient turbulence generation with the DRD model as well as others. We demonstrate the robustness of our approach with both filtered and noisy data coming from the 1968 Air Force Cambridge Research Laboratory Kansas experiments. Using this data, we witness exceptional accuracy with the DRD model, especially when compared to the International Electrotechnical Commission standard.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,485
2402.03137
Sociolinguistically Informed Interpretability: A Case Study on Hinglish Emotion Classification
Emotion classification is a challenging task in NLP due to the inherent idiosyncratic and subjective nature of linguistic expression, especially with code-mixed data. Pre-trained language models (PLMs) have achieved high performance for many tasks and languages, but it remains to be seen whether these models learn and are robust to the differences in emotional expression across languages. Sociolinguistic studies have shown that Hinglish speakers switch to Hindi when expressing negative emotions and to English when expressing positive emotions. To understand if language models can learn these associations, we study the effect of language on emotion prediction across 3 PLMs on a Hinglish emotion classification dataset. Using LIME and token level language ID, we find that models do learn these associations between language choice and emotional expression. Moreover, having code-mixed data present in the pre-training can augment that learning when task-specific data is scarce. We also conclude from the misclassifications that the models may overgeneralise this heuristic to other infrequent examples where this sociolinguistic phenomenon does not apply.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
426,867
cmp-lg/9709010
Message-Passing Protocols for Real-World Parsing -- An Object-Oriented Model and its Preliminary Evaluation
We argue for a performance-based design of natural language grammars and their associated parsers in order to meet the constraints imposed by real-world NLP. Our approach incorporates declarative and procedural knowledge about language and language use within an object-oriented specification framework. We discuss several message-passing protocols for parsing and provide reasons for sacrificing completeness of the parse in favor of efficiency based on a preliminary empirical evaluation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,810
2006.14278
Graph Structural-topic Neural Network
Graph Convolutional Networks (GCNs) have achieved tremendous success by effectively gathering local features for nodes. However, GCNs commonly focus more on node features and less on graph structures within the neighborhood, especially higher-order structural patterns. Yet such local structural patterns are shown to be indicative of node properties in numerous fields. In addition, it is not just single patterns but the distribution over all these patterns that matters, because networks are complex and the neighborhood of each node consists of a mixture of various nodes and structural patterns. Correspondingly, in this paper, we propose the Graph Structural-topic Neural Network, abbreviated GraphSTONE, a GCN model that utilizes topic models of graphs, such that the structural topics capture indicative graph structures broadly from a probabilistic aspect rather than merely a few structures. Specifically, we build topic models upon graphs using anonymous walks and Graph Anchor LDA, an LDA variant that selects significant structural patterns first, so as to alleviate the complexity and generate structural topics efficiently. In addition, we design multi-view GCNs to unify node features and structural topic features and utilize structural topics to guide the aggregation. We evaluate our model through both quantitative and qualitative experiments, where our model exhibits promising performance, high efficiency, and clear interpretability.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,176
2007.00872
Design of Extra Robotic Legs for Augmenting Human Payload Capabilities by Exploiting Singularity and Torque Redistribution
We present the design of a new robotic human augmentation system that will assist the operator in carrying a heavy payload, reaching and maintaining difficult postures, and ultimately better performing their job. The Extra Robotic Legs (XRL) system is worn by the operator and consists of two articulated robotic legs that move with the operator to bear a heavy payload. The design was driven by a need to increase the effectiveness of hazardous material emergency response personnel who are encumbered by their personal protective equipment (PPE). The legs will ultimately walk, climb stairs, crouch down, and crawl with the operator while eliminating all external PPE loads on the operator. The forces involved in the most extreme loading cases were analyzed to find an effective strategy for reducing actuator loads. The analysis reveals that the maximum torque is exerted during the transition from the crawling to standing mode of motion. Peak torques are significantly reduced by leveraging redundancy in force application resulting from a closed-loop kinematic chain formed by a particular posture of the XRL. The actuators, power systems, and transmission elements were designed from the results of these analyses. Using differential mechanisms to combine the inputs of multiple actuators into a single degree of freedom, the gear reductions needed to bear the heavy loads could be kept at a minimum, enabling high bandwidth force control due to the near-direct-drive transmission. A prototype was fabricated utilizing the insights gained from these analyses and initial tests indicate the feasibility of the XRL system.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
185,250
2112.01441
A Review of SHACL: From Data Validation to Schema Reasoning for RDF Graphs
We present an introduction and a review of the Shapes Constraint Language (SHACL), the W3C recommendation language for validating RDF data. A SHACL document describes a set of constraints on RDF nodes, and a graph is valid with respect to the document if its nodes satisfy these constraints. We revisit the basic concepts of the language, its constructs and components and their interaction. We review the different formal frameworks used to study this language and the different semantics proposed. We examine a number of related problems, from containment and satisfiability to the interaction of SHACL with inference rules, and exhibit how different modellings of the language are useful for different problems. We also cover practical aspects of SHACL, discussing its implementations and state of adoption, to present a holistic review useful to practitioners and theoreticians alike.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
269,484
2312.14852
TACO: Topics in Algorithmic COde generation dataset
We introduce TACO, an open-source, large-scale code generation dataset, with a focus on the topics of algorithms, designed to provide a more challenging training dataset and evaluation benchmark in the field of code generation models. TACO includes competition-level programming questions that are more challenging, to enhance or evaluate problem understanding and reasoning abilities in real-world programming scenarios. There are 25433 and 1000 coding problems in the training and test sets, as well as up to 1.55 million diverse solution answers. Moreover, each TACO problem includes several fine-grained labels such as task topics, algorithms, programming skills, and difficulty levels, providing a more precise reference for the training and evaluation of code generation models. The dataset and evaluation scripts are available on Hugging Face Hub (https://huggingface.co/datasets/BAAI/TACO) and Github (https://github.com/FlagOpen/TACO).
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
417,769
2312.05418
Bauer's Spectral Factorization Method for Low Order Multiwavelet Filter Design
Para-Hermitian polynomial matrices obtained by matrix spectral factorization lead to functions useful in control theory systems, basis functions in numerical methods or multiscaling functions used in signal processing. We introduce a fast algorithm for matrix spectral factorization based on Bauer's method. We convert Bauer's method into a nonlinear matrix equation (NME). The NME is solved by two different numerical algorithms (Fixed Point Iteration and Newton's Method) which produce approximate scalar or matrix factors, as well as a symbolic algorithm which produces exact factors in closed form for some low-order scalar or matrix polynomials, respectively. Convergence rates of the two numerical algorithms are investigated for a number of singular and nonsingular scalar and matrix polynomials taken from different areas. In particular, one of the singular examples leads to new orthogonal multiscaling and multiwavelet filters. Since the NME can also be solved as a Generalized Discrete Time Algebraic Riccati Equation (GDARE), numerical results using built-in routines in Maple 17.0 and 6 Matlab versions are presented.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
414,075
2411.18956
Random Sampling for Diffusion-based Adversarial Purification
Denoising Diffusion Probabilistic Models (DDPMs) have gained great attention in adversarial purification. Current diffusion-based works focus on designing effective condition-guided mechanisms while ignoring a fundamental problem, i.e., the original DDPM sampling is intended for stable generation, which may not be the optimal solution for adversarial purification. Inspired by the stability of the Denoising Diffusion Implicit Model (DDIM), we propose an opposite sampling scheme called random sampling. In brief, random sampling will sample from a random noisy space during each diffusion process, while DDPM and DDIM sampling will continuously sample from the adjacent or original noisy space. Thus, random sampling obtains more randomness and achieves stronger robustness against adversarial attacks. Correspondingly, we also introduce a novel mediator conditional guidance to guarantee the consistency of the prediction under the purified image and clean image input. To expand awareness of guided diffusion purification, we conduct a detailed evaluation with different sampling methods and our random sampling achieves an impressive improvement in multiple settings. Leveraging mediator-guided random sampling, we also establish a baseline method named DiffAP, which significantly outperforms state-of-the-art (SOTA) approaches in performance and defensive stability. Remarkably, under strong attack, our DiffAP even achieves a more than 20% robustness advantage with 10$\times$ sampling acceleration.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
512,065
1809.06481
Talent Search and Recommendation Systems at LinkedIn: Practical Challenges and Lessons Learned
LinkedIn Talent Solutions business contributes to around 65% of LinkedIn's annual revenue, and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities. LinkedIn's job ecosystem has been designed as a platform to connect job providers and job seekers, and to serve as a marketplace for efficient matching between potential candidates and job openings. A key mechanism to help achieve these goals is the LinkedIn Recruiter product, which enables recruiters to search for relevant candidates and obtain candidate recommendations for their job postings. In this work, we highlight a set of unique information retrieval, system, and modeling challenges associated with talent search and recommendation systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
108,071
1711.04973
A Robust Variable Step Size Fractional Least Mean Square (RVSS-FLMS) Algorithm
In this paper, we propose an adaptive framework for the variable step size of the fractional least mean square (FLMS) algorithm. The proposed algorithm named the robust variable step size-FLMS (RVSS-FLMS), dynamically updates the step size of the FLMS to achieve high convergence rate with low steady state error. For the evaluation purpose, the problem of system identification is considered. The experiments clearly show that the proposed approach achieves better convergence rate compared to the FLMS and adaptive step-size modified FLMS (AMFLMS).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
84,470
1711.00962
Energy-Delay Efficient Power Control in Wireless Networks
This work aims at developing a power control framework to jointly optimize energy efficiency (measured in bit/Joule) and delay in wireless networks. A multi-objective approach is taken to deal with both performance metrics, while ensuring a minimum quality-of-service to each user in the network. Each user in the network is modeled as a rational agent that engages in a generalized non-cooperative game. Feasibility conditions are derived for the existence of each player's best response, and used to show that if these conditions are met, the game best response dynamics will converge to a unique Nash equilibrium. Based on these results, a convergent power control algorithm is derived, which can be implemented in a fully decentralized fashion. Next, a centralized power control algorithm is proposed, which also serves as a benchmark for the proposed decentralized solution. Due to the non-convexity of the centralized problem, the tool of maximum block improvement is used, to trade-off complexity with optimality.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
83,807
2406.01170
Zero-Shot Out-of-Distribution Detection with Outlier Label Exposure
As vision-language models like CLIP are widely applied to zero-shot tasks and gain remarkable performance on in-distribution (ID) data, detecting and rejecting out-of-distribution (OOD) inputs in the zero-shot setting have become crucial for ensuring the safety of using such models on the fly. Most existing zero-shot OOD detectors rely on ID class label-based prompts to guide CLIP in classifying ID images and rejecting OOD images. In this work we instead propose to leverage a large set of diverse auxiliary outlier class labels as pseudo OOD class text prompts to CLIP for enhancing zero-shot OOD detection, an approach we call Outlier Label Exposure (OLE). The key intuition is that ID images are expected to have lower similarity to these outlier class prompts than OOD images. One issue is that raw class labels often include noise labels, e.g., synonyms of ID labels, rendering raw OLE-based detection ineffective. To address this issue, we introduce an outlier prototype learning module that utilizes the prompt embeddings of the outlier labels to learn a small set of pivotal outlier prototypes for an embedding similarity-based OOD scoring. Additionally, the outlier classes and their prototypes can be loosely coupled with the ID classes, leading to an inseparable decision region between them. Thus, we also introduce an outlier label generation module that synthesizes our outlier prototypes and ID class embeddings to generate in-between outlier prototypes to further calibrate the detection in OLE. Despite its simplicity, extensive experiments show that OLE substantially improves detection performance and achieves new state-of-the-art performance in large-scale OOD and hard OOD detection benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
460,200
2412.17279
Learning from Mistakes: Self-correct Adversarial Training for Chinese Unnatural Text Correction
Unnatural text correction aims to automatically detect and correct spelling errors or adversarial perturbation errors in sentences. Existing methods typically rely on fine-tuning or adversarial training to correct errors, which have achieved significant success. However, these methods exhibit poor generalization performance due to the difference in data distribution between training data and real-world scenarios, known as the exposure bias problem. In this paper, we propose a self-correct adversarial training framework for \textbf{L}earn\textbf{I}ng from \textbf{MI}s\textbf{T}akes (\textbf{LIMIT}), which is a task- and model-independent framework to correct unnatural errors or mistakes. Specifically, we fully utilize errors generated by the model that are actively exposed during the inference phase, i.e., predictions that are inconsistent with the target. This training method not only simulates potential errors in real application scenarios, but also mitigates the exposure bias of the traditional training process. Meanwhile, we design a novel decoding intervention strategy to maintain semantic consistency. Extensive experimental results on Chinese unnatural text error correction datasets show that our proposed method can correct multiple forms of errors and outperforms the state-of-the-art text correction methods. In addition, extensive results on Chinese and English datasets validate that LIMIT can serve as a plug-and-play defense module and can extend to new models and datasets without further training.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
519,901
2304.14590
A logical word embedding for learning grammar
We introduce the logical grammar embedding (LGE), a model inspired by pregroup grammars and categorial grammars to enable unsupervised inference of lexical categories and syntactic rules from a corpus of text. LGE produces comprehensible output summarizing its inferences, has a completely transparent process for producing novel sentences, and can learn from as few as a hundred sentences.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
361,027
1811.10114
Cooperation in the spatial prisoner's dilemma game with probabilistic abstention
Research has shown that the addition of abstention as an option transforms social dilemmas to rock-paper-scissor type games, where defectors dominate cooperators, cooperators dominate abstainers (loners), and abstainers (loners), in turn, dominate defectors. In this way, abstention can sustain cooperation even under adverse conditions, although defection also persists due to cyclic dominance. However, to abstain or to act as a loner has, to date, always been considered as an independent, third strategy to complement traditional cooperation and defection. Here we consider probabilistic abstention, where each player is assigned a probability to abstain in a particular instance of the game. In the two limiting cases, the studied game reverts to the prisoner's dilemma game without loners or to the optional prisoner's dilemma game. For intermediate probabilities, we have a new hybrid game, which turns out to be most favorable for the successful evolution of cooperation. We hope this novel hybrid game provides a more realistic view of the dilemma of optional/voluntary participation.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
false
true
114,413
1809.03589
Unconstraining graph-constrained group testing
In network tomography, one goal is to identify a small set of failed links in a network, by sending a few packets through the network and seeing which reach their destination. This problem can be seen as a variant of combinatorial group testing, which has been studied before under the moniker "graph-constrained group testing." The main contribution of this work is to show that for most graphs, the "constraints" imposed by the underlying network topology are no constraint at all. That is, the number of tests required to identify the failed links in "graph-constrained" group testing is near-optimal even for the corresponding group testing problem with no graph constraints. Our approach is based on a simple randomized construction of tests; to analyze our construction, we prove new results about the size of giant components in randomly sparsified graphs. Finally, we provide empirical results which suggest that our connected-subgraph tests perform better not just in theory but also in practice, and in particular perform better on a real-world network topology.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
107,358
2003.12772
Towards an immersive user interface for waypoint navigation of a mobile robot
In this paper, we investigate the utility of head-mounted display (HMD) interfaces for navigation of mobile robots. We focus on the selection of waypoint positions for the robot, whilst maintaining an egocentric view of the robot's environment. Inspired by virtual reality (VR) gaming, we propose a target selection method that uses the 6 degrees-of-freedom tracked controllers of a commercial VR headset. This allows an operator to point to the desired target position, in the vicinity of the robot, which the robot then autonomously navigates towards. A user study (37 participants) was conducted to examine the efficacy of this control strategy when compared to direct control, both with and without a communication delay. The results of the experiment showed that participants were able to learn how to use the novel system quickly, and the majority of participants reported a preference for waypoint control. Across all recorded metrics (task performance, operator workload and usability) the proposed waypoint control interface was not significantly affected by the communication delay, in contrast to direct control. The simulated experiment indicated that a real-world implementation of the proposed interface could be effective, but also highlighted the need to manage the negative effects of HMDs - particularly VR sickness.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
170,006
1712.04473
Enhancing approximation abilities of neural networks by training derivatives
A method to increase the precision of feedforward networks is proposed. It requires prior knowledge of a target function's derivatives of several orders and uses this information in gradient-based training. The forward pass calculates not only the values of the output layer of a network but also their derivatives. The deviations of those derivatives from the target ones are used in an extended cost function, and then the backward pass calculates the gradient of the extended cost with respect to weights, which can then be used by any weights update algorithm. Despite a substantial increase in arithmetic operations per pattern (if compared to the conventional training), the extended cost allows one to obtain 140--1000 times more accurate approximation for simple cases if the total number of operations is equal. This precision also happens to be out of reach for the regular cost function. The method fits well into the procedure of solving differential equations with neural networks. Unlike training a network to match some target mapping, which requires an explicit use of the target derivatives in the extended cost function, the cost function for solving a differential equation is based on the deviation of the equation's residual from zero and thus can be extended by differentiating the equation itself, which does not require any prior knowledge. Solving an equation with such a cost resulted in a 13 times more accurate result and could be done with a 3 times larger grid step. A GPU-efficient algorithm for calculating the gradient of the extended cost function is proposed.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
86,610
2410.11769
Can Search-Based Testing with Pareto Optimization Effectively Cover Failure-Revealing Test Inputs?
Search-based software testing (SBST) is a widely adopted technique for testing complex systems with large input spaces, such as Deep Learning-enabled (DL-enabled) systems. Many SBST techniques focus on Pareto-based optimization, where multiple objectives are optimized in parallel to reveal failures. However, it is important to ensure that identified failures are spread throughout the entire failure-inducing area of a search domain and not clustered in a sub-region. This ensures that identified failures are semantically diverse and reveal a wide range of underlying causes. In this paper, we present a theoretical argument explaining why testing based on Pareto optimization is inadequate for covering failure-inducing areas within a search domain. We support our argument with empirical results obtained by applying two widely used types of Pareto-based optimization techniques, namely NSGA-II (an evolutionary algorithm) and OMOPSO (a swarm-based Pareto-optimization algorithm), to two DL-enabled systems: an industrial Automated Valet Parking (AVP) system and a system for classifying handwritten digits. We measure the coverage of failure-revealing test inputs in the input space using a metric that we refer to as the Coverage Inverted Distance quality indicator. Our results show that NSGA-II-based search and OMOPSO are not more effective than a na\"ive random search baseline in covering test inputs that reveal failures. The replication package for this study is available in a GitHub repository.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
498,708
1901.08898
Bayesian surrogate learning in dynamic simulator-based regression problems
The estimation of unknown values of parameters (or hidden variables, control variables) that characterise a physical system often relies on the comparison of measured data with synthetic data produced by some numerical simulator of the system as the parameter values are varied. This process often encounters two major difficulties: the generation of synthetic data for each considered set of parameter values can be computationally expensive if the system model is complicated; and the exploration of the parameter space can be inefficient and/or incomplete, a typical example being when the exploration becomes trapped in a local optimum of the objective function that characterises the mismatch between the measured and synthetic data. A method to address both these issues is presented, whereby: a surrogate model (or proxy), which emulates the computationally expensive system simulator, is constructed using deep recurrent networks (DRN); and a nested sampling (NS) algorithm is employed to perform efficient and robust exploration of the parameter space. The analysis is performed in a Bayesian context, in which the samples characterise the full joint posterior distribution of the parameters, from which parameter estimates and uncertainties are easily derived. The proposed approach is compared with conventional methods in some numerical examples, for which the results demonstrate that one can accelerate the parameter estimation process by at least an order of magnitude.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
119,595
2311.00187
Decodable and Sample Invariant Continuous Object Encoder
We propose Hyper-Dimensional Function Encoding (HDFE). Given samples of a continuous object (e.g. a function), HDFE produces an explicit vector representation of the given object, invariant to the sample distribution and density. Sample distribution and density invariance enables HDFE to consistently encode continuous objects regardless of their sampling, and therefore allows neural networks to receive continuous objects as inputs for machine learning tasks, such as classification and regression. Besides, HDFE does not require any training and is proved to map the object into an organized embedding space, which facilitates the training of the downstream tasks. In addition, the encoding is decodable, which enables neural networks to regress continuous objects by regressing their encodings. Therefore, HDFE serves as an interface for processing continuous objects. We apply HDFE to function-to-function mapping, where vanilla HDFE achieves competitive performance as the state-of-the-art algorithm. We apply HDFE to point cloud surface normal estimation, where a simple replacement from PointNet to HDFE leads to immediate 12% and 15% error reductions in two benchmarks. In addition, by integrating HDFE into the PointNet-based SOTA network, we improve the SOTA baseline by 2.5% and 1.7% in the same benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
404,537
1905.01585
Accurate Face Detection for High Performance
Face detection has witnessed significant progress due to the advances of deep convolutional neural networks (CNNs). Its central issue in recent years is how to improve the detection performance of tiny faces. To this end, many recent works propose some specific strategies, redesign the architecture and introduce new loss functions for tiny object detection. In this report, we start from the popular one-stage RetinaNet approach and apply some recent tricks to obtain a high performance face detector. Specifically, we apply the Intersection over Union (IoU) loss function for regression, employ the two-step classification and regression for detection, revisit the data augmentation based on data-anchor-sampling for training, utilize the max-out operation for classification and use the multi-scale testing strategy for inference. As a consequence, the proposed face detection method achieves state-of-the-art performance on the most popular and challenging face detection benchmark WIDER FACE dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
129,754
1809.08909
Language Identification with Deep Bottleneck Features
In this paper we propose an end-to-end short-utterance speech language identification (SLD) approach based on a Long Short Term Memory (LSTM) neural network which is especially suitable for SLD applications in intelligent vehicles. Features used for LSTM learning are generated by a transfer learning method. Bottleneck features of a deep neural network (DNN), which is trained for Mandarin acoustic-phonetic classification, are used for LSTM training. In order to improve the SLD accuracy on short utterances, a phase vocoder based time-scale modification (TSM) method is used to reduce and increase the speech rate of the test utterance. By splicing the normal, speech-rate-reduced and speech-rate-increased utterances, we can extend the length of test utterances and thereby improve the performance of the SLD system. The experimental results on the AP17-OLR database show that the proposed methods can improve the performance of SLD, especially on short utterances with 1s and 3s durations.
false
false
true
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
108,617
2310.00301
Enhancing the Hierarchical Environment Design via Generative Trajectory Modeling
Unsupervised Environment Design (UED) is a paradigm for automatically generating a curriculum of training environments, enabling agents trained in these environments to develop general capabilities, i.e., achieving good zero-shot transfer performance. However, existing UED approaches focus primarily on the random generation of environments for open-ended agent training. This is impractical in scenarios with limited resources, such as the constraints on the number of generated environments. In this paper, we introduce a hierarchical MDP framework for environment design under resource constraints. It consists of an upper-level RL teacher agent that generates suitable training environments for a lower-level student agent. The RL teacher can leverage previously discovered environment structures and generate environments at the frontier of the student's capabilities by observing the student policy's representation. Moreover, to reduce the time-consuming collection of experiences for the upper-level teacher, we utilize recent advances in generative modeling to synthesize a trajectory dataset to train the teacher agent. Our proposed method significantly reduces the resource-intensive interactions between agents and environments and empirical experiments across various domains demonstrate the effectiveness of our approach.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
395,920
2309.13165
Large Language Models Are Also Good Prototypical Commonsense Reasoners
Commonsense reasoning is a pivotal skill for large language models, yet it presents persistent challenges in specific tasks requiring this competence. Traditional fine-tuning approaches can be resource-intensive and potentially compromise a model's generalization capacity. Furthermore, state-of-the-art language models like GPT-3.5 and Claude are primarily accessible through API calls, which makes fine-tuning models challenging. To address these challenges, we draw inspiration from the outputs of large models for tailored tasks and semi-automatically developed a set of novel prompts from several perspectives, including task-relevance, supportive evidence generation (e.g. chain-of-thought and knowledge), and diverse path decoding to aid the model. Experimental results on the ProtoQA dataset demonstrate that with better designed prompts we can achieve a new state-of-the-art (SOTA) on the ProtoQA leaderboard, improving the Max Answer@1 score by 8% and the Max Incorrect@1 score by 4% (surpassing 50% for the first time) compared to the previous SOTA model, and achieved an improvement on StrategyQA and CommonsenseQA2.0 (3% and 1%, respectively). Furthermore, with the generated Chain-of-Thought and knowledge, we can improve the interpretability of the model while also surpassing the previous SOTA models. We hope that our work can provide insight for the NLP community to develop better prompts and explore the potential of large language models for more complex reasoning tasks.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
394,078
math/0308153
Mathematics and Logic as Information Compression by Multiple Alignment, Unification and Search
This article introduces the conjecture that "mathematics, logic and related disciplines may usefully be understood as information compression (IC) by 'multiple alignment', 'unification' and 'search' (ICMAUS)". As a preparation for the two main sections of the article, concepts of information and information compression are reviewed. Related areas of research are also described including IC in brains and nervous systems, and IC in relation to inductive inference, Minimum Length Encoding and probabilistic reasoning. The ICMAUS concepts and a computer model in which they are embodied are briefly described. The first of the two main sections describes how many of the commonly-used forms and structures in mathematics, logic and related disciplines (such as theoretical linguistics and computer programming) may be seen as devices for IC. In some cases, these forms and structures may be interpreted in terms of the ICMAUS framework. The second main section describes a selection of examples where processes of calculation and inference in mathematics, logic and related disciplines may be understood as IC. In many cases, these examples may be understood more specifically in terms of the ICMAUS concepts.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,665
1607.08077
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. After this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If yes, how often do these bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory a notion of complexity of a finite object (= amount of information in this object) is introduced; it assigns to every object some number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object some curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information (Kolmogorov complexity) theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
59,108
2211.11812
RIC-CNN: Rotation-Invariant Coordinate Convolutional Neural Network
In recent years, convolutional neural networks have shown good performance in many image processing and computer vision tasks. However, a standard CNN model is not invariant to image rotations. In fact, even slight rotation of an input image will seriously degrade its performance. This shortcoming precludes the use of CNN in some practical scenarios. Thus, in this paper, we focus on designing convolutional layers with good rotation invariance. Specifically, based on a simple rotation-invariant coordinate system, we propose a new convolutional operation, called Rotation-Invariant Coordinate Convolution (RIC-C). Without additional trainable parameters and data augmentation, RIC-C is naturally invariant to arbitrary rotations around the input center. Furthermore, we find the connection between RIC-C and deformable convolution, and propose a simple but efficient approach to implement RIC-C using PyTorch. By replacing all standard convolutional layers in a CNN with the corresponding RIC-C, a RIC-CNN can be derived. Using the MNIST dataset, we first evaluate the rotation invariance of RIC-CNN and compare its performance with most existing rotation-invariant CNN models. It can be observed that RIC-CNN achieves state-of-the-art classification on the rotated test dataset of MNIST. Then, we deploy RIC-C to VGG, ResNet and DenseNet, and conduct classification experiments on two real image datasets. Also, a shallow CNN and the corresponding RIC-CNN are trained to extract image patch descriptors, and we compare their performance in patch verification. These experimental results again show that RIC-C can be easily used as a drop-in replacement for standard convolutions, and greatly enhances the rotation invariance of CNN models designed for different applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,882
2305.11534
Developing Multi-Agent Systems with Degrees of Neuro-Symbolic Integration [A Position Paper]
In this short position paper we highlight our ongoing work on verifiable heterogeneous multi-agent systems and, in particular, the complex (and often non-functional) issues that impact the choice of structure within each agent.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
365,585
2410.13952
Satellite Streaming Video QoE Prediction: A Real-World Subjective Database and Network-Level Prediction Models
Demand for streaming services, including satellite, continues to exhibit unprecedented growth. Internet Service Providers find themselves at the crossroads of technological advancements and rising customer expectations. To stay relevant and competitive, these ISPs must ensure their networks deliver optimal video streaming quality, a key determinant of user satisfaction. Towards this end, it is important to have accurate Quality of Experience prediction models in place. However, achieving robust performance by these models requires extensive data sets labeled by subjective opinion scores on videos impaired by diverse playback disruptions. To bridge this data gap, we introduce the LIVE-Viasat Real-World Satellite QoE Database. This database consists of 179 videos recorded from real-world streaming services affected by various authentic distortion patterns. We also conducted a comprehensive subjective study involving 54 participants, who contributed both continuous-time opinion scores and endpoint (retrospective) QoE scores. Our analysis sheds light on various determinants influencing subjective QoE, such as stall events, spatial resolutions, bitrate, and certain network parameters. We demonstrate the usefulness of this unique new resource by evaluating the efficacy of prevalent QoE-prediction models on it. We also created a new model that maps the network parameters to predicted human perception scores, which can be used by ISPs to optimize the video streaming quality of their networks. Our proposed model, which we call SatQA, is able to accurately predict QoE using only network parameters, without any access to pixel data or video-specific metadata, estimated by Spearman's Rank Order Correlation Coefficient (SROCC), Pearson Linear Correlation Coefficient (PLCC), and Root Mean Squared Error (RMSE), indicating high accuracy and reliability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
499,787
2410.19239
Prompting Continual Person Search
The development of person search techniques has been greatly promoted in recent years for its superior practicality and challenging goals. Despite their significant progress, existing person search models still lack the ability to continually learn from increasing real-world data and adaptively process input from different domains. To this end, this work introduces the continual person search task that sequentially learns on multiple domains and then performs person search on all seen domains. This requires balancing the stability and plasticity of the model to continually learn new knowledge without catastrophic forgetting. For this, we propose a Prompt-based Continual Person Search (PoPS) model in this paper. First, we design a compositional person search transformer to construct an effective pre-trained transformer without exhaustive pre-training from scratch on large-scale person search data. This serves as the foundation for prompt-based continual learning. On top of that, we design a domain incremental prompt pool with a diverse attribute matching module. For each domain, we independently learn a set of prompts to encode the domain-oriented knowledge. Meanwhile, we jointly learn a group of diverse attribute projections and prototype embeddings to capture discriminative domain attributes. By matching an input image with the learned attributes across domains, the learned prompts can be properly selected for model inference. Extensive experiments are conducted to validate the proposed method for continual person search. The source code is available at https://github.com/PatrickZad/PoPS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
502,228
2411.18531
Statistic Maximal Leakage
We introduce a privacy measure called statistic maximal leakage that quantifies how much a privacy mechanism leaks about a specific secret, relative to the adversary's prior information about that secret. Statistic maximal leakage is an extension of the well-known maximal leakage. Unlike maximal leakage, which protects an arbitrary, unknown secret, statistic maximal leakage protects a single, known secret. We show that statistic maximal leakage satisfies composition and post-processing properties. Additionally, we show how to efficiently compute it in the special case of deterministic data release mechanisms. We analyze two important mechanisms under statistic maximal leakage: the quantization mechanism and randomized response. We show theoretically and empirically that the quantization mechanism achieves better privacy-utility tradeoffs in the settings we study.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
511,905
2009.11979
An Environmentally Sustainable Closed-Loop Supply Chain Network Design under Uncertainty: Application of Optimization
Recently, the rates of energy and material consumption to augment industrial production have become substantially high, thus environmentally sustainable industrial development has emerged as a main issue of both developed and developing countries. A novel approach to supply chain management is proposed to maintain economic growth along with environmentally friendly concerns for the design of the supply chain network. In this paper, a new green supply chain design approach has been suggested to maintain financial virtue alongside the environmental factors required to mitigate the negative effect of rapid industrial development on the environment. This approach suggests a multi-objective mathematical model minimizing the total costs and CO2 emissions for establishing an environmentally sustainable closed-loop supply chain. Two optimization methods are used, namely the Epsilon Constraint Method and the Genetic Algorithm Optimization Method. The results of the two mentioned methods have been compared and their effectiveness illustrated. The outcome of the analysis is approved to verify the accuracy of the proposed model to deal with financial and environmental issues concurrently.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
197,299
2410.04519
RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference
Large language models (LLMs) have brought a great breakthrough to the natural language processing (NLP) community, while leading the challenge of handling concurrent customer queries due to their high throughput demands. Data multiplexing addresses this by merging multiple inputs into a single composite input, allowing more efficient inference through a shared forward pass. However, as distinguishing individuals from a composite input is challenging, conventional methods typically require training the entire backbone, yet still suffer from performance degradation. In this paper, we introduce RevMUX, a parameter-efficient data multiplexing framework that incorporates a reversible design in the multiplexer, which can be reused by the demultiplexer to perform reverse operations and restore individual samples for classification. Extensive experiments on four datasets and three types of LLM backbones demonstrate the effectiveness of RevMUX for enhancing LLM inference efficiency while retaining a satisfactory classification performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
495,321
2101.11566
An Integrated Localisation, Motion Planning and Obstacle Avoidance Algorithm in Belief Space
As robots are being increasingly used in close proximity to humans and objects, it is imperative that robots operate safely and efficiently under real-world conditions. Yet, the environment is seldom known perfectly. Noisy sensors and actuation errors compound to the errors introduced while estimating features of the environment. We present a novel approach (1) to incorporate these uncertainties for robot state estimation and (2) to compute the probability of collision pertaining to the estimated robot configurations. The expression for collision probability is obtained as an infinite series and we prove its convergence. An upper bound for the truncation error is also derived and the number of terms required is demonstrated by analyzing the convergence for different robot and obstacle configurations. We evaluate our approach using two simulation domains which use a roadmap-based strategy to synthesize trajectories that satisfy collision probability bounds.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
217,325
1502.07379
On the Griesmer bound for nonlinear codes
Most bounds on the size of codes hold for any code, whether linear or nonlinear. A notable exception is the Griesmer bound, which holds only in the linear case. In this paper we characterize a family of systematic nonlinear codes for which the Griesmer bound holds. Moreover, we show that the Griesmer bound does not necessarily hold for a systematic code by showing explicit counterexamples. On the other hand, we are also able to provide (weaker) versions of the Griesmer bound holding for all systematic codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,569
2106.02529
Blockchain for Transactive Energy Management of Distributed Energy Resources in Smart Grid
This work presents the design and implementation of a blockchain system that enables the trustable transactive energy management for distributed energy resources (DERs). We model the interactions among DERs, including energy trading and flexible appliance scheduling, as a cost minimization problem. Considering the dispersed nature and diverse ownership of DERs, we develop a distributed algorithm to solve the optimization problem using the alternating direction method of multipliers (ADMM) method. Furthermore, we develop a blockchain system, on which we implement the proposed algorithm with the smart contract, to guarantee the transparency and correctness of the energy management. We prototype the blockchain in a small-scale test network and evaluate it through experiments using real-world data. The experimental results validate the feasibility and effectiveness of our design.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
238,907
2501.13247
Multimodal AI on Wound Images and Clinical Notes for Home Patient Referral
Chronic wounds affect 8.5 million Americans, particularly the elderly and patients with diabetes. These wounds can take up to nine months to heal, making regular care essential to ensure healing and prevent severe outcomes like limb amputations. Many patients receive care at home from visiting nurses with varying levels of wound expertise, leading to inconsistent care. Problematic, non-healing wounds should be referred to wound specialists, but referral decisions in non-clinical settings are often erroneous, delayed, or unnecessary. This paper introduces the Deep Multimodal Wound Assessment Tool (DM-WAT), a machine learning framework designed to assist visiting nurses in deciding whether to refer chronic wound patients. DM-WAT analyzes smartphone-captured wound images and clinical notes from Electronic Health Records (EHRs). It uses DeiT-Base-Distilled, a Vision Transformer (ViT), to extract visual features from images and DeBERTa-base to extract text features from clinical notes. DM-WAT combines visual and text features using an intermediate fusion approach. To address challenges posed by a small and imbalanced dataset, it integrates image and text augmentation with transfer learning to achieve high performance. In evaluations, DM-WAT achieved 77% (std 3%) accuracy and a 70% (std 2%) F1 score, outperforming prior approaches. Score-CAM and Captum interpretation algorithms provide insights into the specific parts of image and text inputs that influence recommendations, enhancing interpretability and trust.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
526,603
2408.17112
Wireless Integrated Authenticated Communication System (WIA-Comm)
The exponential increase in the number of devices connected to the internet globally has led to the requirement for the introduction of better and improved security measures for maintaining data integrity. The development of a wireless and authenticated communication system is required to overcome the safety threats and illegal access to the application system/data. The WIA-Comm System is the one that provides a bridge to control the devices at the application side. It has been designed to provide security by giving control rights only to the device whose MAC (physical) address has already been registered, so only authorized users can control the system. LoRa WAN technology has been used for wireless communication and Arduino IDE to develop the code for the required functionality.
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
484,581
2003.11904
Matrix Smoothing: A Regularization for DNN with Transition Matrix under Noisy Labels
Training deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Probabilistic modeling, which consists of a classifier and a transition matrix, depicts the transformation from true labels to noisy labels and is a promising approach. However, recent probabilistic methods directly apply the transition matrix to the DNN, neglect the DNN's susceptibility to overfitting, and achieve unsatisfactory performance, especially under uniform noise. In this paper, inspired by label smoothing, we propose a novel method, in which a smoothed transition matrix is used for updating the DNN, to restrict the overfitting of the DNN in probabilistic modeling. Our method is termed Matrix Smoothing. We also empirically demonstrate that our method not only improves the robustness of probabilistic modeling significantly, but also obtains a better estimation of the transition matrix.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
169,747
1212.4751
Opinion formation model for markets with a social temperature and fear
In the spirit of behavioral finance, we study the process of opinion formation among investors using a variant of the 2D Voter Model with a tunable social temperature. Further, a feedback acting on the temperature is introduced, such that social temperature reacts to market imbalances and thus becomes time dependent. In this toy market model, social temperature represents nervousness of agents towards market imbalances representing speculative risk. We use the knowledge about the discontinuous Generalized Voter Model phase transition to determine critical fixed points. The system exhibits metastable phases around these fixed points characterized by structured lattice states, with intermittent excursions away from the fixed points. The statistical mechanics of the model is characterized and its relation to dynamics of opinion formation among investors in real markets is discussed.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
20,485
2307.04754
Action-State Dependent Dynamic Model Selection
A model among many may only be best under certain states of the world. Switching from one model to another can also be costly. Finding a procedure to dynamically choose a model in these circumstances requires solving a complex estimation procedure and a dynamic programming problem. A reinforcement learning algorithm is used to approximate and estimate from the data the optimal solution to this dynamic programming problem. The algorithm is shown to consistently estimate the optimal policy that may choose different models based on a set of covariates. A typical example is that of switching between different portfolio models under rebalancing costs, using macroeconomic information. Using a set of macroeconomic variables and price data, an empirical application to the aforementioned portfolio problem shows superior performance to choosing the best portfolio model with hindsight.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
378,505
2305.11048
Robust Single-Point Pushing with Force Feedback
We present the first controller for quasistatic robotic planar pushing with single-point contact using only force feedback. We consider a mobile robot equipped with a force-torque sensor to measure the force at the contact point with the pushed object (the "slider"). The parameters of the slider are not known to the controller, nor is feedback on the slider's pose. We assume that the global position of the contact point is always known and that the approximate initial position of the slider is provided. We focus specifically on the case when it is desired to push the slider along a straight line. Simulations and real-world experiments show that our controller yields stable pushes that are robust to a wide range of slider parameters and state perturbations.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
365,357
2001.05036
Single Image Depth Estimation Trained via Depth from Defocus Cues
Estimating depth from a single RGB image is a fundamental task in computer vision, which is most directly solved using supervised deep learning. In the field of unsupervised learning of depth from a single RGB image, depth is not given explicitly. Existing work in the field receives either a stereo pair, a monocular video, or multiple views, and, using losses that are based on structure-from-motion, trains a depth estimation network. In this work, we rely on depth-from-focus cues instead of different views. Learning is based on a novel Point Spread Function convolutional layer, which applies location-specific kernels that arise from the Circle-Of-Confusion in each image location. We evaluate our method on data derived from five common datasets for depth estimation and lightfield images, and present results that are on par with supervised methods on the KITTI and Make3D datasets and outperform unsupervised learning approaches. Since the phenomenon of depth from defocus is not dataset specific, we hypothesize that learning based on it would overfit less to the specific content in each dataset. Our experiments show that this is indeed the case, and an estimator learned on one dataset using our method provides better results on other datasets than the directly supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
160,424
2004.02229
FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning
We propose Falcon, an end-to-end 3-party protocol for efficient private training and inference of large machine learning models. Falcon presents four main advantages - (i) It is highly expressive with support for high capacity networks such as VGG16 (ii) it supports batch normalization which is important for training complex networks such as AlexNet (iii) Falcon guarantees security with abort against malicious adversaries, assuming an honest majority (iv) Lastly, Falcon presents new theoretical insights for protocol design that make it highly efficient and allow it to outperform existing secure deep learning solutions. Compared to prior art for private inference, we are about 8x faster than SecureNN (PETS'19) on average and comparable to ABY3 (CCS'18). We are about 16-200x more communication efficient than either of these. For private training, we are about 6x faster than SecureNN, 4.4x faster than ABY3 and about 2-60x more communication efficient. Our experiments in the WAN setting show that over large networks and datasets, compute operations dominate the overall latency of MPC, as opposed to the communication.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
171,168
2311.10835
Accelerating L-shaped Two-stage Stochastic SCUC with Learning Integrated Benders Decomposition
Benders decomposition is widely used to solve large mixed-integer problems. This paper takes advantage of machine learning and proposes enhanced variants of Benders decomposition for solving two-stage stochastic security-constrained unit commitment (SCUC). The problem is decomposed into a master problem and subproblems corresponding to a load scenario. The goal is to reduce the computational costs and memory usage of Benders decomposition by creating tighter cuts and reducing the size of the master problem. Three approaches are proposed, namely regression Benders, classification Benders, and regression-classification Benders. A regressor reads load profile scenarios and predicts subproblem objective function proxy variables to form tighter cuts for the master problem. A criterion is defined to measure the level of usefulness of cuts with respect to their contribution to lower bound improvement. Useful cuts that contain the necessary information to form the feasible region are identified with and without a classification learner. Useful cuts are iteratively added to the master problem, and non-useful cuts are discarded to reduce the computational burden of each Benders iteration. Simulation studies on multiple test systems show the effectiveness of the proposed learning-aided Benders decomposition for solving two-stage SCUC as compared to conventional multi-cut Benders decomposition.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
408,686
2406.11023
Physics-Informed Deep Learning and Partial Transfer Learning for Bearing Fault Diagnosis in the Presence of Highly Missing Data
One of the most significant obstacles in bearing fault diagnosis is a lack of labeled data for various fault types. Also, sensor-acquired data frequently lack labels and have a large amount of missing data. This paper tackles these issues by presenting the PTPAI method, which uses a physics-informed deep learning-based technique to generate synthetic labeled data. Labeled synthetic data makes up the source domain, whereas unlabeled data with missing data is present in the target domain. Consequently, imbalanced class problems and partial-set fault diagnosis hurdles emerge. To address these challenges, the RF-Mixup approach is used to handle imbalanced classes. As domain adaptation strategies, the MK-MMSD and CDAN are employed to mitigate the disparity in distribution between synthetic and actual data. Furthermore, the partial-set challenge is tackled by applying weighting methods at the class and instance levels. Experimental outcomes on the CWRU and JNU datasets indicate that the proposed approach effectively addresses these problems.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
464,675
2201.12204
From data to functa: Your data point is a function and you can treat it like one
It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification. Code: https://github.com/deepmind/functa
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
277,561
2009.10713
Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't
The purpose of this article is to review the achievements made in the last few years towards the understanding of the reasons behind the success and subtleties of neural network-based machine learning. In the tradition of good old applied mathematics, we will not only give attention to rigorous mathematical results, but also the insight we have gained from careful numerical experiments as well as the analysis of simplified models. Along the way, we also list the open problems which we believe to be the most important topics for further study. This is not a complete overview over this quickly moving field, but we hope to provide a perspective which may be helpful especially to new researchers in the area.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
196,967
cs/0412109
Global minimization of a quadratic functional: neural network approach
The problem of finding the global minimum of a multiextremal functional is discussed. One frequently encounters such a functional in various applications. We propose a procedure which depends polynomially on the dimensionality of the problem. In our approach we use the eigenvalues and eigenvectors of the connection matrix.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
538,470
2311.12418
Visual Analytics for Generative Transformer Models
While transformer-based models have achieved state-of-the-art results in a variety of classification and generation tasks, their black-box nature makes them difficult to interpret. In this work, we present a novel visual analytical framework to support the analysis of transformer-based generative networks. In contrast to previous work, which has mainly focused on encoder-based models, our framework is one of the first dedicated to supporting the analysis of transformer-based encoder-decoder models and decoder-only models for generative and classification tasks. Hence, we offer an intuitive overview that allows the user to explore different facets of the model through interactive visualization. To demonstrate the feasibility and usefulness of our framework, we present three detailed case studies based on real-world NLP research problems.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
409,326
2205.08478
An Evaluation Framework for Legal Document Summarization
A law practitioner has to go through numerous lengthy legal case proceedings for their practices of various categories, such as land disputes, corruption, etc. Hence, it is important to summarize these documents, and ensure that summaries contain phrases with intent matching the category of the case. To the best of our knowledge, there is no evaluation metric that evaluates a summary based on its intent. We propose an automated intent-based summarization metric, which shows better agreement with human evaluation as compared to other automated metrics like BLEU, ROUGE-L, etc. in terms of human satisfaction. We also curate a dataset by annotating intent phrases in legal documents, and show a proof of concept as to how this system can be automated. Additionally, all the code and data to generate reproducible results are available on GitHub.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
296,955
2404.07838
The Role of Confidence for Trust-based Resilient Consensus (Extended Version)
We consider a multi-agent system where agents aim to achieve a consensus despite interactions with malicious agents that communicate misleading information. Physical channels supporting communication in cyberphysical systems offer attractive opportunities to detect malicious agents, nevertheless, trustworthiness indications coming from the channel are subject to uncertainty and need to be treated with this in mind. We propose a resilient consensus protocol that incorporates trust observations from the channel and weighs them with a parameter that accounts for how confident an agent is regarding its understanding of the legitimacy of other agents in the network, with no need for the initial observation window $T_0$ that has been utilized in previous works. Analytical and numerical results show that (i) our protocol achieves a resilient consensus in the presence of malicious agents and (ii) the steady-state deviation from nominal consensus can be minimized by a suitable choice of the confidence parameter that depends on the statistics of trust observations.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
445,994
2411.12008
Compression of Higher Order Ambisonics with Multichannel RVQGAN
A multichannel extension to the RVQGAN neural coding method is proposed, and realized for data-driven compression of third-order Ambisonics audio. The input- and output layers of the generator and discriminator models are modified to accept multiple (16) channels without increasing the model bitrate. We also propose a loss function for accounting for spatial perception in immersive reproduction, and transfer learning from single-channel models. Listening test results with 7.1.4 immersive playback show that the proposed extension is suitable for coding scene-based, 16-channel Ambisonics content with good quality at 16 kbps when trained and tested on the EigenScape database. The model has potential applications for learning other types of content and multichannel formats.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
509,259
2411.13615
A Deep Learning Approach to Predict the Fall [of Price] of Cryptocurrency Long Before its Actual Fall
In modern times, the cryptocurrency market is one of the world's most rapidly rising financial markets. The cryptocurrency market is regarded to be more volatile and illiquid than traditional markets such as equities, foreign exchange, and commodities. The risk of this market creates an uncertain condition among the investors. The purpose of this research is to predict the magnitude of the risk factor of the cryptocurrency market. Risk factor is also called volatility. Our approach will assist people who invest in the cryptocurrency market by overcoming the problems and difficulties they experience. Our approach starts with calculating the risk factor of the cryptocurrency market from the existing parameters. In twenty elements of the cryptocurrency market, the risk factor has been predicted using different machine learning algorithms such as CNN, LSTM, BiLSTM, and GRU. All of the models have been applied to the calculated risk factor parameter. A new model has been developed to predict better than the existing models. Our proposed model gives the highest RMSE value of 1.3229 and the lowest RMSE value of 0.0089. For the other existing models, the highest RMSE was 14.5092 and the lowest was 0.02769. So the proposed model performs much better than the existing models, with proper generalization. Using our approach, it will be easier for investors to trade in complicated and challenging financial assets like Bitcoin, Ethereum, and Dogecoin.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
509,864
1801.06313
BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights
We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights. The set constraint that characterizes the quantization of weights is not imposed until the late stage of training, and a sequence of \emph{pseudo} quantized weights is maintained. Specifically, we relax the hard constraint into a continuous regularizer via Moreau envelope, which turns out to be the squared Euclidean distance to the set of quantized weights. The pseudo quantized weights are obtained by linearly interpolating between the float weights and their quantizations. A continuation strategy is adopted to push the weights towards the quantized state by gradually increasing the regularization parameter. In the second phase, exact quantization scheme with a small learning rate is invoked to guarantee fully quantized weights. We test BinaryRelax on the benchmark CIFAR and ImageNet color image datasets to demonstrate the superiority of the relaxed quantization approach and the improved accuracy over the state-of-the-art training methods. Finally, we prove the convergence of BinaryRelax under an approximate orthogonality condition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
88,583
2311.14770
Learning to Cooperate and Communicate Over Imperfect Channels
Information exchange in multi-agent systems improves the cooperation among agents, especially in partially observable settings. In the real world, communication is often carried out over imperfect channels. This requires agents to handle uncertainty due to potential information loss. In this paper, we consider a cooperative multi-agent system where the agents act and exchange information in a decentralized manner using a limited and unreliable channel. To cope with such channel constraints, we propose a novel communication approach based on independent Q-learning. Our method allows agents to dynamically adapt how much information to share by sending messages of different sizes, depending on their local observations and the channel's properties. In addition to this message size selection, agents learn to encode and decode messages to improve their jointly trained policies. We show that our approach outperforms approaches without adaptive capabilities in a novel cooperative digit-prediction environment and discuss its limitations in the traffic junction environment.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
410,255
2406.02426
Contextual Optimization under Covariate Shift: A Robust Approach by Intersecting Wasserstein Balls
In contextual optimization, a decision-maker observes historical samples of uncertain variables and associated concurrent covariates, without knowing their joint distribution. Given an additional covariate observation, the goal is to choose a decision that minimizes some operational costs. A prevalent issue here is covariate shift, where the marginal distribution of the new covariate differs from historical samples, leading to decision performance variations with nonparametric or parametric estimators. To address this, we propose a distributionally robust approach that uses an ambiguity set by the intersection of two Wasserstein balls, each centered on typical nonparametric or parametric distribution estimators. Computationally, we establish the tractable reformulation of this distributionally robust optimization problem. Statistically, we provide guarantees for our Wasserstein ball intersection approach under covariate shift by analyzing the measure concentration of the estimators. Furthermore, to reduce computational complexity, we employ a surrogate objective that maintains similar generalization guarantees. Through synthetic and empirical case studies on income prediction and portfolio optimization, we demonstrate the strong empirical performance of our proposed models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
460,763
1404.7298
Code Minimization for Fringe Projection Based 3D Stereo Sensors by Calibration Improvement
Code minimization provides a speed-up of the processing time of fringe projection based stereo sensors and possibly makes them real-time applicable. This paper reports a methodology which enables such sensors to completely omit Gray code or other additional code. Only a sequence of sinusoidal images is necessary. The code reduction is achieved by involvement of the projection unit into the measurement, double triangulation, and a precise projector calibration or significant projector calibration improvement, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
32,682
2410.20812
Fidelity-Imposed Displacement Editing for the Learn2Reg 2024 SHG-BF Challenge
Co-examination of second-harmonic generation (SHG) and bright-field (BF) microscopy enables the differentiation of tissue components and collagen fibers, aiding the analysis of human breast and pancreatic cancer tissues. However, large discrepancies between SHG and BF images pose challenges for current learning-based registration models in aligning SHG to BF. In this paper, we propose a novel multi-modal registration framework that employs fidelity-imposed displacement editing to address these challenges. The framework integrates batch-wise contrastive learning, feature-based pre-alignment, and instance-level optimization. Experimental results from the Learn2Reg COMULISglobe SHG-BF Challenge validate the effectiveness of our method, securing the 1st place on the online leaderboard.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
502,976
2210.17235
50 Ways to Bake a Cookie: Mapping the Landscape of Procedural Texts
The web is full of guidance on a wide variety of tasks, from changing the oil in your car to baking an apple pie. However, as content is created independently, a single task could have thousands of corresponding procedural texts. This makes it difficult for users to view the bigger picture and understand the multiple ways the task could be accomplished. In this work we propose an unsupervised learning approach for summarizing multiple procedural texts into an intuitive graph representation, allowing users to easily explore commonalities and differences. We demonstrate our approach on recipes, a prominent example of procedural texts. User studies show that our representation is intuitive and coherent and that it has the potential to help users with several sensemaking tasks, including adapting recipes for a novice cook and finding creative ways to spice up a dish.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
327,614
2009.04629
Adjusting Bias in Long Range Stereo Matching: A semantics guided approach
Stereo vision generally involves the computation of pixel correspondences and estimation of disparities between rectified image pairs. In many applications, including simultaneous localization and mapping (SLAM) and 3D object detection, the disparities are primarily needed to calculate depth values and the accuracy of depth estimation is often more compelling than disparity estimation. The accuracy of disparity estimation, however, does not directly translate to the accuracy of depth estimation, especially for faraway objects. In the context of learning-based stereo systems, this is largely due to biases imposed by the choices of the disparity-based loss function and the training data. Consequently, the learning algorithms often produce unreliable depth estimates of foreground objects, particularly at large distances~($>50$m). To resolve this issue, we first analyze the effect of those biases and then propose a pair of novel depth-based loss functions for foreground and background, separately. These loss functions are tunable and can balance the inherent bias of the stereo learning algorithms. The efficacy of our solution is demonstrated by an extensive set of experiments, which are benchmarked against state of the art. We show on KITTI~2015 benchmark that our proposed solution yields substantial improvements in disparity and depth estimation, particularly for objects located at distances beyond 50 meters, outperforming the previous state of the art by $10\%$.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
195,100
2309.14003
Hierarchical Imitation Learning for Stochastic Environments
Many applications of imitation learning require the agent to generate the full distribution of behaviour observed in the training data. For example, to evaluate the safety of autonomous vehicles in simulation, accurate and diverse behaviour models of other road users are paramount. Existing methods that improve this distributional realism typically rely on hierarchical policies. These condition the policy on types such as goals or personas that give rise to multi-modal behaviour. However, such methods are often inappropriate for stochastic environments where the agent must also react to external factors: because agent types are inferred from the observed future trajectory during training, these environments require that the contributions of internal and external factors to the agent behaviour are disentangled and only internal factors, i.e., those under the agent's control, are encoded in the type. Encoding future information about external factors leads to inappropriate agent reactions during testing, when the future is unknown and types must be drawn independently from the actual future. We formalize this challenge as distribution shift in the conditional distribution of agent types under environmental stochasticity. We propose Robust Type Conditioning (RTC), which eliminates this shift with adversarial training under randomly sampled types. Experiments on two domains, including the large-scale Waymo Open Motion Dataset, show improved distributional realism while maintaining or improving task performance compared to state-of-the-art baselines.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
394,434
0710.0169
Evaluation experiments on related terms search in Wikipedia: Information Content and Adapted HITS (In Russian)
The classification of metrics and algorithms search for related terms via WordNet, Roget's Thesaurus, and Wikipedia was extended to include adapted HITS algorithm. Evaluation experiments on Information Content and adapted HITS algorithm are described. The test collection of Russian word pairs with human-assigned similarity judgments is proposed. ----- The classification of metrics and algorithms for finding semantically related words in the WordNet and Roget thesauri and the Wikipedia encyclopedia is extended with an adapted HITS algorithm. The Information Content metric and the adapted HITS algorithm are evaluated through experiments on Wikipedia. A resource for evaluating the semantic similarity of Russian words is proposed.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
711
2410.19789
Xeno-learning: knowledge transfer across species in deep learning-based spectral image analysis
Novel optical imaging techniques, such as hyperspectral imaging (HSI) combined with machine learning-based (ML) analysis, have the potential to revolutionize clinical surgical imaging. However, these novel modalities face a shortage of large-scale, representative clinical data for training ML algorithms, while preclinical animal data is abundantly available through standardized experiments and allows for controlled induction of pathological tissue states, which is not ethically possible in patients. To leverage this situation, we propose a novel concept called "xeno-learning", a cross-species knowledge transfer paradigm inspired by xeno-transplantation, where organs from a donor species are transplanted into a recipient species. Using a total of 11,268 HSI images from humans as well as porcine and rat models, we show that although spectral signatures of organs differ across species, shared pathophysiological mechanisms manifest as comparable relative spectral changes across species. Such changes learnt in one species can thus be transferred to a new species via a novel "physiology-based data augmentation" method, enabling the large-scale secondary use of preclinical animal data for humans. The resulting ethical, monetary, and performance benefits of the proposed knowledge transfer paradigm promise a high impact of the methodology on future developments in the field.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
502,483
2111.08857
SEIHAI: A Sample-efficient Hierarchical AI for the MineRL Competition
The MineRL competition is designed for the development of reinforcement learning and imitation learning algorithms that can efficiently leverage human demonstrations to drastically reduce the number of environment interactions needed to solve the complex \emph{ObtainDiamond} task with sparse rewards. To address the challenge, in this paper, we present \textbf{SEIHAI}, a \textbf{S}ample-\textbf{e}ff\textbf{i}cient \textbf{H}ierarchical \textbf{AI}, that fully takes advantage of the human demonstrations and the task structure. Specifically, we split the task into several sequentially dependent subtasks, and train a suitable agent for each subtask using reinforcement learning and imitation learning. We further design a scheduler to select different agents for different subtasks automatically. SEIHAI takes the first place in the preliminary and final of the NeurIPS-2020 MineRL competition.
false
false
false
false
true
false
true
true
false
false
true
false
false
false
true
false
false
false
266,829
2401.11654
ActionHub: A Large-scale Action Video Description Dataset for Zero-shot Action Recognition
Zero-shot action recognition (ZSAR) aims to learn an alignment model between videos and class descriptions of seen actions that is transferable to unseen actions. The text queries (class descriptions) used in existing ZSAR works, however, are often short action names that fail to capture the rich semantics in the videos, leading to misalignment. With the intuition that video content descriptions (e.g., video captions) can provide rich contextual information of visual concepts in videos, we propose to utilize human annotated video descriptions to enrich the semantics of the class descriptions of each action. However, all existing action video description datasets are limited in terms of the number of actions, the semantics of video descriptions, etc. To this end, we collect a large-scale action video descriptions dataset named ActionHub, which covers a total of 1,211 common actions and provides 3.6 million action video descriptions. With the proposed ActionHub dataset, we further propose a novel Cross-modality and Cross-action Modeling (CoCo) framework for ZSAR, which consists of a Dual Cross-modality Alignment module and a Cross-action Invariance Mining module. Specifically, the Dual Cross-modality Alignment module utilizes both action labels and video descriptions from ActionHub to obtain rich class semantic features for feature alignment. The Cross-action Invariance Mining module exploits a cycle-reconstruction process between the class semantic feature spaces of seen actions and unseen actions, aiming to guide the model to learn cross-action invariant representations. Extensive experimental results demonstrate that our CoCo framework significantly outperforms the state-of-the-art on three popular ZSAR benchmarks (i.e., Kinetics-ZSAR, UCF101 and HMDB51) under two different learning protocols in ZSAR. We will release our code, models, and the proposed ActionHub dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
423,092
2501.10693
Distributionally Robust Policy Evaluation and Learning for Continuous Treatment with Observational Data
Using offline observational data for policy evaluation and learning allows decision-makers to evaluate and learn a policy that connects characteristics and interventions. Most existing literature has focused on either discrete treatment spaces or assumed no difference in the distributions between the policy-learning and policy-deployed environments. These restrict applications in many real-world scenarios where distribution shifts are present with continuous treatment. To overcome these challenges, this paper focuses on developing a distributionally robust policy under a continuous treatment setting. The proposed distributionally robust estimators are established using the Inverse Probability Weighting (IPW) method extended from the discrete one for policy evaluation and learning under continuous treatments. Specifically, we introduce a kernel function into the proposed IPW estimator to mitigate the exclusion of observations that can occur in the standard IPW method to continuous treatments. We then provide finite-sample analysis that guarantees the convergence of the proposed distributionally robust policy evaluation and learning estimators. The comprehensive experiments further verify the effectiveness of our approach when distribution shifts are present.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
525,622
2305.04887
Hardware Acceleration of Explainable Artificial Intelligence
Machine learning (ML) is successful in achieving human-level artificial intelligence in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While recent efforts on explainable AI (XAI) have received significant attention, most of the existing solutions are not applicable in real-time systems since they map interpretability as an optimization problem, which leads to numerous iterations of time-consuming complex computations. Although there are existing hardware-based acceleration frameworks for XAI, they are implemented on FPGAs and designed for specific tasks, leading to high cost and a lack of flexibility. In this paper, we propose a simple yet efficient framework to accelerate various XAI algorithms with existing hardware accelerators. Specifically, this paper makes three important contributions. (1) The proposed method is the first attempt in exploring the effectiveness of Tensor Processing Unit (TPU) to accelerate XAI. (2) Our proposed solution explores the close relationship between several existing XAI algorithms with matrix computations, and exploits the synergy between convolution and Fourier transform, which takes full advantage of TPU's inherent ability in accelerating matrix computations. (3) Our proposed approach can lead to real-time outcome interpretation. Extensive experimental evaluation demonstrates that the proposed approach deployed on a TPU can provide drastic improvement in interpretation time (39x on average) as well as energy efficiency (69x on average) compared to existing acceleration techniques.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
362,941
2406.00364
Cognitive Manipulation: Semi-supervised Visual Representation and Classroom-to-real Reinforcement Learning for Assembly in Semi-structured Environments
Assembling a slave object into a fixture-free master object represents a critical challenge in flexible manufacturing. Existing deep reinforcement learning-based methods, while benefiting from visual or operational priors, often struggle with small-batch precise assembly tasks due to their reliance on insufficient priors and high-cost model development. To address these limitations, this paper introduces a cognitive manipulation and learning approach that utilizes skill graphs to integrate learning-based object detection with fine manipulation models into a cohesive modular policy. This approach enables the detection of the master object from both global and local perspectives to accommodate positional uncertainties and variable backgrounds, and parametric residual policy to handle pose error and intricate contact dynamics effectively. Leveraging the skill graph, our method supports knowledge-informed learning of semi-supervised learning for object detection and classroom-to-real reinforcement learning for fine manipulation. Simulation experiments on a gear-assembly task have demonstrated that the skill-graph-enabled coarse-operation planning and visual attention are essential for efficient learning and robust manipulation, showing substantial improvements of 13$\%$ in success rate and 15.4$\%$ in number of completion steps over competing methods. Real-world experiments further validate that our system is highly effective for robotic assembly in semi-structured environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
459,816
2103.06909
Toward the Next Generation of News Recommender Systems
This paper proposes a vision and research agenda for the next generation of news recommender systems (RS), called the table d'hote approach. A table d'hote (translates as host's table) meal is a sequence of courses that create a balanced and enjoyable dining experience for a guest. Likewise, we believe news RS should strive to create a similar experience for the users by satisfying the news-diet needs of a user. While extant news RS considers criteria such as diversity and serendipity, and RS bundles have been studied for other contexts such as tourism, table d'hote goes further by ensuring the recommended articles satisfy a diverse set of user needs in the right proportions and in a specific order. In table d'hote, available articles need to be stratified based on the different ways that news can create value for the reader, building from theories and empirical research in journalism and user engagement. Using theories and empirical research from communication on the uses and gratifications (U&G) consumers derive from media, we define two main strata in a table d'hote news RS, each with its own substrata: 1) surveillance, which consists of information the user needs to know, and 2) serendipity, which are the articles offering unexpected surprises. The diversity of the articles according to the defined strata and the order of the articles within the list of recommendations are also two important aspects of the table d'hote in order to give the users the most effective reading experience. We propose our vision, link it to the existing concepts in the RS literature, and identify challenges for future research.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
224,436
1801.09197
Algorithmic Linearly Constrained Gaussian Processes
We algorithmically construct multi-output Gaussian process priors which satisfy linear differential equations. Our approach attempts to parametrize all solutions of the equations using Gr\"obner bases. If successful, a push forward Gaussian process along the parametrization is the desired prior. We consider several examples from physics, geomathematics and control, among them the full inhomogeneous system of Maxwell's equations. By bringing together stochastic learning and computer algebra in a novel way, we combine noisy observations with precise algebraic computations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
89,063
2502.14214
Asymmetric Co-Training for Source-Free Few-Shot Domain Adaptation
Source-free unsupervised domain adaptation (SFUDA) has gained significant attention as an alternative to traditional unsupervised domain adaptation (UDA), which relies on the constant availability of labeled source data. However, SFUDA approaches come with inherent limitations that are frequently overlooked. These challenges include performance degradation when the unlabeled target data fails to meet critical assumptions, such as having a closed-set label distribution identical to that of the source domain, or when sufficient unlabeled target data is unavailable-a common situation in real-world applications. To address these issues, we propose an asymmetric co-training (ACT) method specifically designed for the SFFSDA scenario. SFFSDA presents a more practical alternative to SFUDA, as gathering a few labeled target instances is more feasible than acquiring large volumes of unlabeled target data in many real-world contexts. Our ACT method begins by employing a weak-strong augmentation to enhance data diversity. Then we use a two-step optimization process to train the target model. In the first step, we optimize the label smoothing cross-entropy loss, the entropy of the class-conditional distribution, and the reverse-entropy loss to bolster the model's discriminative ability while mitigating overfitting. The second step focuses on reducing redundancy in the output space by minimizing classifier determinacy disparity. Extensive experiments across four benchmarks demonstrate the superiority of our ACT approach, which outperforms state-of-the-art SFUDA methods and transfer learning techniques. Our findings suggest that adapting a source pre-trained model using only a small amount of labeled target data offers a practical and dependable solution. The code is available at https://github.com/gengxuli/ACT.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
535,723
2410.16519
Cancer Cell Classification using Deep Learning
In the current technological era, the medical profession has emerged as one of the researchers' favorite subject areas, and cancer is one of them. Because there is now no effective treatment for this illness, it is a matter of concern. Only if this disease is discovered early may patients be rescued (stage I and stage II). The likelihood of survival is quite low if it is discovered in later stages (stages III and IV). The application of machine learning, deep learning, and data mining techniques in the medical industry has the potential to address current issues and bring benefits. Numerous symptoms of cancer exist, including tumors, unusual bleeding, increased weight loss, etc. It is not necessary for all tumor types to be cancerous. There are two sorts of tumors: benign and malignant. To give patients the right care, symptoms must be carefully examined, and an automated system is needed to distinguish between benign and malignant tumors. Most data produced in today's online environment comes from websites related to healthcare or social media. Using data mining techniques, it is possible to extract symptoms from this vast amount of data, which will be helpful for identifying or classifying cancer. This research classifies cells as benign or cancerous using various deep-learning algorithms. To get the best and most reliable results for the classification, a variety of methodologies and models are trained and improved.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
501,064
1912.07010
A Shape Transformation-based Dataset Augmentation Framework for Pedestrian Detection
Deep learning-based computer vision is usually data-hungry. Many researchers attempt to augment datasets with synthesized data to improve model robustness. However, the augmentation of popular pedestrian datasets, such as Caltech and Citypersons, can be extremely challenging because real pedestrians are commonly in low quality. Due to the factors like occlusions, blurs, and low-resolution, it is significantly difficult for existing augmentation approaches, which generally synthesize data using 3D engines or generative adversarial networks (GANs), to generate realistic-looking pedestrians. Alternatively, to access much more natural-looking pedestrians, we propose to augment pedestrian detection datasets by transforming real pedestrians from the same dataset into different shapes. Accordingly, we propose the Shape Transformation-based Dataset Augmentation (STDA) framework. The proposed framework is composed of two subsequent modules, i.e. the shape-guided deformation and the environment adaptation. In the first module, we introduce a shape-guided warping field to help deform the shape of a real pedestrian into a different shape. Then, in the second stage, we propose an environment-aware blending map to better adapt the deformed pedestrians into surrounding environments, obtaining more realistic-looking pedestrians and more beneficial augmentation results for pedestrian detection. Extensive empirical studies on different pedestrian detection benchmarks show that the proposed STDA framework consistently produces much better augmentation results than other pedestrian synthesis approaches using low-quality pedestrians. By augmenting the original datasets, our proposed framework also improves the baseline pedestrian detector by up to 38% on the evaluated benchmarks, achieving state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,492
2308.09381
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box
Attribution methods shed light on the explainability of data-driven approaches such as deep learning models by uncovering the most influential features in a to-be-explained decision. While determining feature attributions via gradients delivers promising results, the internal access required for acquiring gradients can be impractical under safety concerns, thus limiting the applicability of gradient-based approaches. In response to such limited flexibility, this paper presents \methodAbr~(gradient-estimation-based explanation), an approach that produces gradient-like explanations through only query-level access. The proposed approach holds a set of fundamental properties for attribution methods, which are mathematically rigorously proved, ensuring the quality of its explanations. In addition to the theoretical analysis, with a focus on image data, the experimental results empirically demonstrate the superiority of the proposed method over state-of-the-art black-box methods and its competitive performance compared to methods with full access.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
386,273
cs/0608043
Using Users' Expectations to Adapt Business Intelligence Systems
This paper takes a look at the general characteristics of business or economic intelligence systems. The role of the user within this type of system is emphasized. We propose two models which we consider important in order to adapt this system to the user. The first model is based on the definition of the decisional problem and the second on the four cognitive phases of human learning. We also describe the application domain we are using to test these models in this type of system.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
539,638
2401.07710
Go-Explore for Residential Energy Management
Reinforcement learning is commonly applied in residential energy management, particularly for optimizing energy costs. However, RL agents often face challenges when dealing with deceptive and sparse rewards in the energy control domain, especially with stochastic rewards. In such situations, thorough exploration becomes crucial for learning an optimal policy. Unfortunately, the exploration mechanism can be misled by deceptive reward signals, making thorough exploration difficult. Go-Explore is a family of algorithms that combine planning methods and reinforcement learning methods to achieve efficient exploration. We use the Go-Explore algorithm to solve the cost-saving task in residential energy management problems and achieve an improvement of up to 19.84\% compared to well-known reinforcement learning algorithms.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
421,631