Schema (column: dtype, value range):
- id: string, length 9–16
- title: string, length 4–278
- abstract: string, length 3–4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0–541k
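The schema above describes a multi-label layout: each record carries one boolean flag per arXiv category plus an integer row index. A minimal sketch (the helper name and example dict are hypothetical, not part of any dataset tooling) of how a decoded row maps to its active labels:

```python
# Column order of the boolean label flags, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row):
    """Return the category names whose boolean flag is set (rows are multi-label)."""
    return [col for col in LABEL_COLUMNS if row.get(col, False)]

# Hypothetical row mirroring the first record below (2401.11118).
row = {"id": "2401.11118", "cs.LG": True, "cs.RO": True, "__index_level_0__": 422879}
print(active_labels(row))  # → ['cs.LG', 'cs.RO']
```

Because the flags are independent booleans rather than a single class column, a record can carry zero, one, or several labels; `Other` is itself just one more flag, not a fallback.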
2401.11118
Meta Reinforcement Learning for Strategic IoT Deployments Coverage in Disaster-Response UAV Swarms
In the past decade, Unmanned Aerial Vehicles (UAVs) have attracted the attention of researchers in academia and industry for their potential use in critical emergency applications, such as providing wireless services to ground users and collecting data from areas affected by disasters, due to their advantages in terms of maneuverability and movement flexibility. The UAVs' limited resources, energy budget, and strict mission completion time have posed challenges in adopting UAVs for these applications. Our system model considers a UAV swarm that navigates an area collecting data from ground IoT devices, focusing on providing better service for strategic locations and allowing UAVs to join and leave the swarm dynamically (e.g., for recharging). In this work, we introduce an optimization model that aims to minimize the total energy consumption and provides the optimal path planning of UAVs under constraints on minimum completion time and transmit power. The formulated optimization problem is NP-hard, making it unsuitable for real-time decision making. Therefore, we introduce a lightweight meta-reinforcement learning solution that can also cope with sudden changes in the environment through fast convergence. We conduct extensive simulations and compare our approach to three state-of-the-art learning models. Our simulation results show that our approach outperforms the three state-of-the-art algorithms in providing coverage to strategic locations with fast convergence.
labels: cs.LG, cs.RO
__index_level_0__: 422,879
2311.03340
Multitask Kernel-based Learning with First-Order Logic Constraints
In this paper we propose a general framework to integrate supervised and unsupervised examples with background knowledge expressed by a collection of first-order logic clauses into kernel machines. In particular, we consider a multi-task learning scheme where multiple predicates defined on a set of objects are to be jointly learned from examples, enforcing a set of FOL constraints on the admissible configurations of their values. The predicates are defined on the feature spaces, in which the input objects are represented, and can be either known a priori or approximated by an appropriate kernel-based learner. A general approach is presented to convert the FOL clauses into a continuous implementation that can deal with the outputs computed by the kernel-based predicates. The learning problem is formulated as a semi-supervised task that requires the optimization in the primal of a loss function that combines a fitting loss measure on the supervised examples, a regularization term, and a penalty term that enforces the constraints on both the supervised and unsupervised examples. Unfortunately, the penalty term is not convex and can hinder the optimization process. However, it is possible to avoid poor solutions by using a two-stage learning scheme, in which the supervised examples are learned first and the constraints are enforced afterwards.
labels: cs.AI, cs.LG, Other
__index_level_0__: 405,808
2111.11415
Implicit Quantile Neural Networks for Jet Simulation and Correction
Reliable modeling of conditional densities is important for quantitative scientific fields such as particle physics. In domains outside physics, implicit quantile neural networks (IQN) have been shown to provide accurate models of conditional densities. We present a successful application of IQNs to jet simulation and correction using the tools and simulated data from the Compact Muon Solenoid (CMS) Open Data portal.
labels: cs.AI
__index_level_0__: 267,658
2307.08182
Zero-Shot Image Harmonization with Generative Model Prior
We propose a zero-shot approach to image harmonization, aiming to overcome the reliance on large amounts of synthetic composite images in existing methods. These methods, while showing promising results, involve significant training expenses and often struggle with generalization to unseen images. To this end, we introduce a fully modularized framework inspired by human behavior. Leveraging the reasoning capabilities of recent foundation models in language and vision, our approach comprises three main stages. Initially, we employ a pretrained vision-language model (VLM) to generate descriptions for the composite image. Subsequently, these descriptions guide the foreground harmonization direction of a text-to-image generative model (T2I). We refine text embeddings for enhanced representation of imaging conditions and employ self-attention and edge maps for structure preservation. Following each harmonization iteration, an evaluator determines whether to conclude or modify the harmonization direction. The resulting framework, mirroring human behavior, achieves harmonious results without the need for extensive training. We present compelling visual results across diverse scenes and objects, along with a user study validating the effectiveness of our approach.
labels: cs.CV
__index_level_0__: 379,701
2006.13416
On a Security vs Privacy Trade-off in Interconnected Dynamical Systems
We study a security problem for interconnected systems, where each subsystem aims to detect local attacks using local measurements and information exchanged with neighboring subsystems. The subsystems also wish to maintain the privacy of their states and, therefore, use privacy mechanisms that share limited or noisy information with other subsystems. We quantify the privacy level based on the estimation error of a subsystem's state and propose a novel framework to compare different mechanisms based on their privacy guarantees. We develop a local attack detection scheme without assuming the knowledge of the global dynamics, which uses local and shared information to detect attacks with provable guarantees. Additionally, we quantify a trade-off between security and privacy of the local subsystems. Interestingly, we show that, for some instances of the attack, the subsystems can achieve a better detection performance by being more private. We provide an explanation for this counter-intuitive behavior and illustrate our results through numerical examples.
labels: cs.SY
__index_level_0__: 183,904
1906.08720
Boosting for Control of Dynamical Systems
We study the question of how to aggregate controllers for dynamical systems in order to improve their performance. To this end, we propose a framework of boosting for online control. Our main result is an efficient boosting algorithm that combines weak controllers into a provably more accurate one. Empirical evaluation on a host of control settings supports our theoretical findings.
labels: cs.LG
__index_level_0__: 135,951
1802.05811
Distributed Stochastic Optimization via Adaptive SGD
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial method that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic optimization method by combining adaptivity with variance reduction techniques. Our analysis yields a linear speedup in the number of machines, constant memory footprint, and only a logarithmic number of communication rounds. Critically, our approach is a black-box reduction that parallelizes any serial online learning algorithm, streamlining prior analysis and allowing us to leverage the significant progress that has been made in designing adaptive algorithms. In particular, we achieve optimal convergence rates without any prior knowledge of smoothness parameters, yielding a more robust algorithm that reduces the need for hyperparameter tuning. We implement our algorithm in the Spark distributed framework and exhibit dramatic performance gains on large-scale logistic regression problems.
labels: cs.LG
__index_level_0__: 90,509
2106.00958
A Generalizable Approach to Learning Optimizers
A core issue with learning to optimize neural networks has been the lack of generalization to real world problems. To address this, we describe a system designed from a generalization-first perspective, learning to update optimizer hyperparameters instead of model parameters directly using novel features, actions, and a reward function. This system outperforms Adam at all neural network tasks including on modalities not seen during training. We achieve 2x speedups on ImageNet, and a 2.5x speedup on a language modeling task using over 5 orders of magnitude more compute than the training tasks.
labels: cs.AI, cs.LG, cs.NE
__index_level_0__: 238,327
1204.3491
Rationale awareness for quality assurance in iterative human computation processes
Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.
labels: cs.HC, cs.SI
__index_level_0__: 15,501
2111.06849
Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting, the underlying cause that impedes the generator's convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. As an alternative method to existing approaches that rely on standard data augmentations or model regularization, APA alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, which deceives the discriminator adaptively. Extensive experiments demonstrate the effectiveness of APA in improving synthesis quality in the low-data regime. We provide a theoretical analysis to examine the convergence and rationality of our new training strategy. APA is simple and effective. It can be added seamlessly to powerful contemporary GANs, such as StyleGAN2, with negligible computational cost.
labels: cs.LG, cs.CV
__index_level_0__: 266,194
1702.06054
Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning
Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in the MuJoCo domain, and Deep Deterministic Policy Gradients in the TORCS car racing domain.
labels: cs.AI, cs.LG, cs.NE
__index_level_0__: 68,520
1805.09091
Neural networks for post-processing ensemble weather forecasts
Ensemble weather predictions require statistical post-processing of systematic errors to obtain reliable and accurate probabilistic forecasts. Traditionally, this is accomplished with distributional regression models in which the parameters of a predictive distribution are estimated from a training period. We propose a flexible alternative based on neural networks that can incorporate nonlinear relationships between arbitrary predictor variables and forecast distribution parameters that are automatically learned in a data-driven way rather than requiring pre-specified link functions. In a case study of 2-meter temperature forecasts at surface stations in Germany, the neural network approach significantly outperforms benchmark post-processing methods while being computationally more affordable. Key components to this improvement are the use of auxiliary predictor variables and station-specific information with the help of embeddings. Furthermore, the trained neural network can be used to gain insight into the importance of meteorological variables thereby challenging the notion of neural networks as uninterpretable black boxes. Our approach can easily be extended to other statistical post-processing and forecasting problems. We anticipate that recent advances in deep learning combined with the ever-increasing amounts of model and observation data will transform the post-processing of numerical weather forecasts in the coming decade.
labels: cs.LG
__index_level_0__: 98,338
2207.04196
Robotic Depowdering for Additive Manufacturing Via Pose Tracking
With the rapid development of powder-based additive manufacturing, depowdering, a process of removing unfused powder that covers 3D-printed parts, has become a major bottleneck to further improving its productivity. Traditional manual depowdering is extremely time-consuming and costly, and some prior automated systems either require pre-depowdering or lack adaptability to different 3D-printed parts. To solve these problems, we introduce a robotic system that automatically removes unfused powder from the surface of 3D-printed parts. The key component is a visual perception system, which consists of a pose-tracking module that tracks the 6D pose of powder-occluded parts in real-time, and a progress estimation module that estimates the depowdering completion percentage. The tracking module can be run efficiently on a laptop CPU at up to 60 FPS. Experiments show that our depowdering system can remove unfused powder from the surface of various 3D-printed parts without causing any damage. To the best of our knowledge, this is one of the first vision-based robotic depowdering systems that adapt to parts with various shapes without the need for pre-depowdering.
labels: cs.RO
__index_level_0__: 307,110
2103.10670
Improving Image co-segmentation via Deep Metric Learning
Deep Metric Learning (DML) is helpful in computer vision tasks. In this paper, we first introduce DML into image co-segmentation. We propose a novel Triplet loss for Image Segmentation, called IS-Triplet loss for short, and combine it with traditional image segmentation loss. Different from the general DML task, which learns the metric between pictures, we treat each pixel as a sample and use their embedded features in high-dimensional space to form triplets; we then force the distance between pixels of different categories to be greater than that between pixels of the same category by optimizing the IS-Triplet loss, so that pixels from different categories are easier to distinguish in the high-dimensional feature space. We further present an efficient triplet sampling strategy to make computation of the IS-Triplet loss feasible. Finally, the IS-Triplet loss is combined with 3 traditional image segmentation losses to perform image segmentation. We apply the proposed approach to image co-segmentation and test it on the SBCoseg dataset and the Internet dataset. The experimental results show that our approach can effectively improve the discrimination of pixels' categories in high-dimensional space and thus help the traditional loss achieve better image segmentation performance with fewer training epochs.
labels: cs.AI, cs.CV
__index_level_0__: 225,529
2101.10643
Causal inference for observational longitudinal studies using deep survival models
Causal inference for observational longitudinal studies often requires the accurate estimation of treatment effects on time-to-event outcomes in the presence of time-dependent patient history and time-dependent covariates. To tackle this longitudinal treatment effect estimation problem, we have developed a time-variant causal survival (TCS) model that uses the potential outcomes framework with an ensemble of recurrent subnetworks to estimate the difference in survival probabilities and its confidence interval over time as a function of time-dependent covariates and treatments. Using simulated survival datasets, the TCS model showed good causal effect estimation performance across scenarios of varying sample dimensions, event rates, confounding, and overlap. However, increasing the sample size was not effective in alleviating the adverse impact of a high level of confounding. In a large clinical cohort study, TCS identified the expected conditional average treatment effect and detected individual treatment effect heterogeneity over time. TCS provides an efficient way to estimate and update individualized treatment effects over time, in order to improve clinical decisions. The use of a propensity score layer and potential outcome subnetworks helps correct for selection bias. However, the proposed model is limited in its ability to correct the bias from unmeasured confounding, and more extensive testing of TCS under extreme scenarios such as low overlap and the presence of unmeasured confounders is desired and left for future work.
labels: cs.AI, cs.LG
__index_level_0__: 217,005
2307.03608
The impact of body and head dynamics on motion comfort assessment
Head motion is a key determinant of motion comfort and differs substantially from seat motion due to seat and body compliance and dynamic postural stabilization. This paper compares different human body model fidelities to transmit seat accelerations to the head for the assessment of motion comfort through simulations. Six-degree-of-freedom dynamics were analyzed using frequency response functions derived from an advanced human model (AHM), a computationally efficient human model (EHM) and experimental studies. Simulations of dynamic driving show that human models strongly affected the predicted ride comfort (increased by up to a factor of 3). Furthermore, they modestly affected sickness using the available filters from the literature and ISO-2631 (increased by up to 30%), but more strongly affected sickness predicted by the subjective vertical conflict (SVC) model (increased by up to 70%).
labels: cs.RO
__index_level_0__: 378,099
2210.00973
NCVX: A General-Purpose Optimization Solver for Constrained Machine and Deep Learning
Imposing explicit constraints is relatively new but increasingly pressing in deep learning, stimulated by, e.g., trustworthy AI that performs robust optimization over complicated perturbation sets and scientific applications that need to respect physical laws and constraints. However, it can be hard to reliably solve constrained deep learning problems without optimization expertise. The existing deep learning frameworks do not admit constraints. General-purpose optimization packages can handle constraints but do not perform auto-differentiation and have trouble dealing with nonsmoothness. In this paper, we introduce a new software package called NCVX, whose initial release contains the solver PyGRANSO, a PyTorch-enabled general-purpose optimization package for constrained machine/deep learning problems, the first of its kind. NCVX inherits auto-differentiation, GPU acceleration, and tensor variables from PyTorch, and is built on freely available and widely used open-source frameworks. NCVX is available at https://ncvx.org, with detailed documentation and numerous examples from machine/deep learning and other fields.
labels: cs.LG, cs.CV, Other
__index_level_0__: 321,082
2312.00803
InceptionCaps: A Performant Glaucoma Classification Model for Data-scarce Environment
Glaucoma is an irreversible ocular disease and is the second leading cause of visual disability worldwide. Slow vision loss and the asymptomatic nature of the disease make its diagnosis challenging. Early detection is crucial for preventing irreversible blindness. Ophthalmologists primarily use retinal fundus images as a non-invasive screening method. Convolutional neural networks (CNN) have demonstrated high accuracy in the classification of medical images. Nevertheless, the CNN's translation-invariant nature and inability to handle the part-whole relationship between objects make its direct application unsuitable for glaucomatous fundus image classification, as it requires a large number of labelled images for training. This work reviews existing state-of-the-art models and proposes InceptionCaps, a novel capsule network (CapsNet) based deep learning model having pre-trained InceptionV3 as its convolution base, for automatic glaucoma classification. InceptionCaps achieved an accuracy of 0.956, specificity of 0.96, and AUC of 0.9556, which surpasses several state-of-the-art deep learning model performances on the RIM-ONE v2 dataset. The obtained result demonstrates the robustness of the proposed deep learning model.
labels: cs.LG, cs.CV
__index_level_0__: 412,182
2112.07327
Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models
As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, investigating better ways to reuse these models is vital, as it can greatly reduce the retraining computational cost and the potential environmental side-effects. In this paper, we explore a novel model reuse paradigm, Knowledge Amalgamation (KA), for PLMs. Without human annotations available, KA aims to merge the knowledge from different teacher-PLMs, each of which specializes in a different classification problem, into a versatile student model. To achieve this, we design a Model Uncertainty-aware Knowledge Amalgamation (MUKA) framework, which identifies the potentially adequate teacher using Monte-Carlo Dropout to approximate the golden supervision that guides the student. Experimental results demonstrate that MUKA achieves substantial improvements over baselines on benchmark datasets. Further analysis shows that MUKA can generalize well under several complicated settings with multiple teacher models, heterogeneous teachers, and even cross-dataset teachers.
labels: cs.AI, cs.CL
__index_level_0__: 271,446
2410.08860
Audio Description Generation in the Era of LLMs and VLMs: A Review of Transferable Generative AI Technologies
Audio descriptions (ADs) function as acoustic commentaries designed to assist blind persons and persons with visual impairments in accessing digital media content on television and in movies, among other settings. As an accessibility service typically provided by trained AD professionals, the generation of ADs demands significant human effort, making the process both time-consuming and costly. Recent advancements in natural language processing (NLP) and computer vision (CV), particularly in large language models (LLMs) and vision-language models (VLMs), have brought automatic AD generation a step closer. This paper reviews the technologies pertinent to AD generation in the era of LLMs and VLMs: we discuss how state-of-the-art NLP and CV technologies can be applied to generate ADs and identify essential research directions for the future.
labels: cs.CL, cs.CV
__index_level_0__: 497,310
2407.05924
Graph-Boosted Attentive Network for Semantic Body Parsing
Human body parsing remains a challenging problem in natural scenes due to multi-instance and inter-part semantic confusions as well as occlusions. This paper proposes a novel approach to decomposing multiple human bodies into semantic part regions in unconstrained environments. Specifically, we propose a convolutional neural network (CNN) architecture which comprises novel semantic and contour attention mechanisms across the feature hierarchy to resolve the semantic ambiguities and boundary localization issues related to semantic body parsing. We further propose to encode estimated pose as higher-level contextual information which is combined with local semantic cues in a novel graphical model in a principled manner. In this proposed model, the lower-level semantic cues can be recursively updated by propagating higher-level contextual information from estimated pose and vice versa across the graph, so as to alleviate erroneous pose information and pixel level predictions. We further propose an optimization technique to efficiently derive the solutions. Our proposed method achieves state-of-the-art results on the challenging Pascal Person-Part dataset.
labels: cs.CV
__index_level_0__: 471,182
2403.03999
Fair Artificial Currency Incentives in Repeated Weighted Congestion Games: Equity vs. Equality
When users access shared resources in a selfish manner, the resulting societal cost and perceived users' cost are often higher than what would result from a centrally coordinated optimal allocation. While several contributions in mechanism design manage to steer the aggregate user choices to the desired optimum by using monetary tolls, such approaches bear the inherent drawback of discriminating against users with a lower income. More recently, incentive schemes based on artificial currencies have been studied with the goal of achieving a system-optimal resource allocation that is also fair. In this resource-sharing context, this paper focuses on a repeated weighted congestion game with two resources, where users contribute to the congestion to different extents that are captured by individual weights. First, we address the broad concept of fairness by providing a rigorous mathematical characterization of the distinct societal metrics of equity and equality, i.e., the concepts of providing equal outcomes and equal opportunities, respectively. Second, we devise weight-dependent and time-invariant optimal pricing policies to maximize equity and equality, and prove convergence of the aggregate user choices to the system-optimum. In our framework it is always possible to achieve system-optimal allocations with perfect equity, while the maximum equality that can be reached may not be perfect, which is also shown via numerical simulations.
labels: cs.SY, Other
__index_level_0__: 435,412
2208.09671
Safe Subjoins in Acyclic Joins
It is expensive to compute joins, often due to large intermediate relations. For acyclic joins, monotone join expressions are guaranteed to produce intermediate relations no larger than the output of the join when it is computed on a fully reduced database. As is easy to prove, an arbitrary subexpression of an acyclic join does not offer this guarantee. In this paper, we also consider joins with projections, and we ask whether we can characterize join subexpressions that produce, on every fully reduced database, an output without dangling tuples (which translates, in the case of joins without projections, to an output no larger than the output of the join). We call such a subexpression a safe subjoin. Surprisingly, we prove that there is a simple characterization: a subjoin is safe if and only if there is a parse tree of the join (a.k.a. join tree) such that the relations in the subjoin form a subtree of it. We provide an algorithm that finds such a parse tree, if one exists.
labels: cs.DB
__index_level_0__: 313,789
2408.10088
Recent Surge in Public Interest in Transportation: Sentiment Analysis of Baidu Apollo Go Using Weibo Data
Urban mobility and transportation systems have been profoundly transformed by the advancement of autonomous vehicle technologies. Baidu Apollo Go, a pioneer robotaxi service from the Chinese tech giant Baidu, has recently been widely deployed in major cities like Beijing and Wuhan, sparking increased conversation and offering a glimpse into the future of urban mobility. This study investigates public attitudes towards Apollo Go across China using Sentiment Analysis with a hybrid BERT model on 36,096 Weibo posts from January to July 2024. The analysis shows that 89.56% of posts related to Apollo Go are clustered in July. From January to July, public sentiment was mostly positive, but negative comments began to rise after it became a hot topic on July 21. Spatial analysis indicates a strong correlation between provinces with high discussion intensity and those where Apollo Go operates. Initially, Hubei and Guangdong dominated online posting volume, but by July, Guangdong, Beijing, and international regions had overtaken Hubei. Attitudes varied significantly among provinces, with Xinjiang and Qinghai showing optimism and Tibet and Gansu expressing concerns about the impact on traditional taxi services. Sentiment analysis revealed that positive comments focused on technology applications and personal experiences, while negative comments centered on job displacement and safety concerns. In summary, this study highlights the divergence in public perceptions of autonomous ride-hailing services, providing valuable insights for planners, policymakers, and service providers. The model is published on Hugging Face at https://huggingface.co/wsqstar/bert-finetuned-weibo-luobokuaipao and the repository on GitHub at https://github.com/GIStudio/trb2024.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
481,713
2402.01070
FedShift: Robust Federated Learning Aggregation Scheme in Resource Constrained Environment via Weight Shifting
Federated Learning (FL) commonly relies on a central server to coordinate training across distributed clients. While effective, this paradigm suffers from significant communication overhead, impacting overall training efficiency. To mitigate this, prior work has explored compression techniques such as quantization. However, in heterogeneous FL settings, clients may employ different quantization levels based on their hardware or network constraints, necessitating a mixed-precision aggregation process at the server. This introduces additional challenges, exacerbating client drift and leading to performance degradation. In this work, we propose FedShift, a novel aggregation methodology designed to mitigate performance degradation in FL scenarios with mixed quantization levels. FedShift employs a statistical matching mechanism based on weight shifting to align mixed-precision models, thereby reducing model divergence and addressing quantization-induced bias. Our approach functions as an add-on to existing FL optimization algorithms, enhancing their robustness and improving convergence. Empirical results demonstrate that FedShift effectively mitigates the negative impact of mixed-precision aggregation, yielding superior performance across various FL benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
425,841
2209.04185
Simple and Powerful Architecture for Inductive Recommendation Using Knowledge Graph Convolutions
Using graph models with relational information in recommender systems has shown promising results. Yet, most methods are transductive, i.e., they are based on dimensionality reduction architectures. Hence, they require heavy retraining every time new items or users are added. Conversely, inductive methods promise to solve these issues. Nonetheless, all inductive methods rely only on interactions, making recommendations for users with few interactions sub-optimal and even impossible for new items. Therefore, we focus on inductive methods able to also exploit knowledge graphs (KGs). In this work, we propose SimpleRec, a strong baseline that uses a graph neural network and a KG to provide better recommendations than related inductive methods for new users and items. We show that it is unnecessary to create complex model architectures for user representations, but it is enough to allow users to be represented by the few ratings they provide and the indirect connections among them without any user metadata. As a result, we re-evaluate state-of-the-art methods, identify better evaluation protocols, highlight unwarranted conclusions from previous proposals, and showcase a novel, stronger baseline for this task.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
316,716
1307.0643
Discovering the Markov network structure
In this paper, a new proof is given for the supermodularity of information content. Using the decomposability of the information content, an algorithm is given for discovering the Markov network graph structure endowed by the pairwise Markov property of a given probability distribution. A discrete probability distribution is given for which the equivalence of the Hammersley-Clifford theorem is fulfilled, although some of the possible vector realizations are taken on with zero probability. Our algorithm for discovering the pairwise Markov network is illustrated on this example, too.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
25,569
2011.01535
3D-LaneNet+: Anchor Free Lane Detection using a Semi-Local Representation
3D-LaneNet+ is a camera-based DNN method for anchor free 3D lane detection which is able to detect 3D lanes of any arbitrary topology such as splits, merges, as well as short and perpendicular lanes. We follow the recently proposed 3D-LaneNet, and extend it to enable the detection of these previously unsupported lane topologies. Our output representation is an anchor free, semi-local tile representation that breaks down lanes into simple lane segments whose parameters can be learnt. In addition, we learn, per lane instance, a feature embedding that reasons about the global connectivity of locally detected segments to form full 3D lanes. This combination allows 3D-LaneNet+ to avoid using lane anchors, non-maximum suppression, and lane model fitting as in the original 3D-LaneNet. We demonstrate the efficacy of 3D-LaneNet+ using both synthetic and real world data. Results show significant improvement relative to the original 3D-LaneNet that can be attributed to better generalization to complex lane topologies, curvatures and surface geometries.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
204,623
2210.05657
The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes
Convolutional neural networks were the standard for solving many computer vision tasks until recently, when Transformer- and MLP-based architectures started to show competitive performance. These architectures typically have a vast number of weights and need to be trained on massive datasets; hence, they are not suitable for use in low-data regimes. In this work, we propose a simple yet effective framework to improve generalization from small amounts of data. We augment modern CNNs with fully-connected (FC) layers and show the massive impact this architectural change has in low-data regimes. We further present an online joint knowledge-distillation method to utilize the extra FC layers at train time but avoid them during test time. This allows us to improve the generalization of a CNN-based model without any increase in the number of weights at test time. We perform classification experiments for a large range of network backbones and several standard datasets on supervised learning and active learning. Our experiments significantly outperform the networks without fully-connected layers, reaching a relative improvement of up to 16% validation accuracy in the supervised setting without adding any extra parameters during inference.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
322,947
2006.01025
Coded Caching for Heterogeneous Wireless Networks
This chapter provides an overview of coded caching in the context of heterogeneous wireless networks. We begin by briefly describing the key idea behind coded caching and then discuss in detail the impact of various aspects such as non-uniform content popularity, multiple cache access, and interference.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
179,652
2002.12647
REGNet: REgion-based Grasp Network for End-to-end Grasp Detection in Point Clouds
Reliable robotic grasping in unstructured environments is a crucial but challenging task. The main problem is to generate the optimal grasp of novel objects from partial noisy observations. This paper presents an end-to-end grasp detection network taking one single-view point cloud as input to tackle the problem. Our network includes three stages: Score Network (SN), Grasp Region Network (GRN), and Refine Network (RN). Specifically, SN regresses point grasp confidence and selects positive points with high confidence. Then GRN conducts grasp proposal prediction on the selected positive points. RN generates more accurate grasps by refining proposals predicted by GRN. To further improve the performance, we propose a grasp anchor mechanism, in which grasp anchors with assigned gripper orientations are introduced to generate grasp proposals. Experiments demonstrate that REGNet achieves a success rate of 79.34% and a completion rate of 96% in real-world clutter, which significantly outperforms several state-of-the-art point-cloud based methods, including GPD, PointNetGPD, and S4G. The code is available at https://github.com/zhaobinglei/REGNet_for_3D_Grasping.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
166,100
1311.6421
Synchronous Context-Free Grammars and Optimal Linear Parsing Strategies
Synchronous Context-Free Grammars (SCFGs), also known as syntax-directed translation schemata, are unlike context-free grammars in that they do not have a binary normal form. In general, parsing with SCFGs takes space and time polynomial in the length of the input strings, but with the degree of the polynomial depending on the permutations of the SCFG rules. We consider linear parsing strategies, which add one nonterminal at a time. We show that for a given input permutation, the problems of finding the linear parsing strategy with the minimum space and time complexity are both NP-hard.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
28,654
2102.04781
Fast discovery of multidimensional subsequences for robust trajectory classification
Trajectory classification tasks have become more complex as large volumes of mobility data are being generated every day and enriched with new sources of information, such as social networks and IoT sensors. Fast classification algorithms are essential for discovering knowledge in trajectory data for real applications. In this work we propose a method for fast discovery of subtrajectories through the reduction of the search space and the optimization of the MASTERMovelets method, which has proven to be effective for discovering interpretable patterns in classification problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,227
2104.03304
Hand-Object Contact Consistency Reasoning for Human Grasps Generation
While predicting robot grasps with parallel jaw grippers has been well studied and widely applied in robot manipulation tasks, the study of natural human grasp generation with a multi-finger hand remains a very challenging problem. In this paper, we propose to generate human grasps given a 3D object in the world. Our key observation is that it is crucial to model the consistency between the hand contact points and object contact regions. That is, we encourage the prior hand contact points to be close to the object surface and the object common contact regions to be touched by the hand at the same time. Based on the hand-object contact consistency, we design novel objectives in training the human grasp generation model and also a new self-supervised task which allows the grasp generation network to be adjusted even during test time. Our experiments show significant improvement in human grasp generation over state-of-the-art approaches by a large margin. More interestingly, by optimizing the model during test time with the self-supervised task, it helps achieve a larger gain on unseen and out-of-domain objects. Project page: https://hwjiang1510.github.io/GraspTTA/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
229,024
2501.17581
CSEval: Towards Automated, Multi-Dimensional, and Reference-Free Counterspeech Evaluation using Auto-Calibrated LLMs
Counterspeech has emerged as a popular and effective strategy for combating online hate speech, sparking growing research interest in automating its generation using language models. However, the field still lacks standardised evaluation protocols and reliable automated evaluation metrics that align with human judgement. Current automatic evaluation methods, primarily based on similarity metrics, do not effectively capture the complex and independent attributes of counterspeech quality, such as contextual relevance, aggressiveness, or argumentative coherence. This has led to an increased dependency on labor-intensive human evaluations to assess automated counter-speech generation methods. To address these challenges, we introduce CSEval, a novel dataset and framework for evaluating counterspeech quality across four dimensions: contextual-relevance, aggressiveness, argument-coherence, and suitableness. Furthermore, we propose Auto-Calibrated COT for Counterspeech Evaluation (Auto-CSEval), a prompt-based method with auto-calibrated chain-of-thoughts (CoT) for scoring counterspeech using large language models. Our experiments show that Auto-CSEval outperforms traditional metrics like ROUGE, METEOR, and BertScore in correlating with human judgement, indicating a significant improvement in automated counterspeech evaluation.
false
false
false
true
true
false
false
false
true
false
false
false
false
true
false
false
false
false
528,380
2107.06501
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning
High-level representation-guided pixel denoising and adversarial training are independent solutions to enhance the robustness of CNNs against adversarial attacks by pre-processing input data and re-training models, respectively. Most recently, adversarial training techniques have been widely studied and improved while the pixel denoising-based method is getting less attractive. However, it is still questionable whether there exists a more advanced pixel denoising-based method and whether the combination of the two solutions benefits each other. To this end, we first comprehensively investigate two kinds of pixel denoising methods for adversarial robustness enhancement (i.e., existing additive-based and unexplored filtering-based methods) under the loss functions of image-level and semantic-level, respectively, showing that pixel-wise filtering can obtain much higher image quality (e.g., higher PSNR) as well as higher robustness (e.g., higher accuracy on adversarial examples) than the existing pixel-wise additive-based method. However, we also observe that the robustness results of the filtering-based method rely on the perturbation amplitude of adversarial examples used for training. To address this problem, we propose predictive perturbation-aware pixel-wise filtering, where dual-perturbation filtering and an uncertainty-aware fusion module are designed and employed to automatically perceive the perturbation amplitude during the training and testing process. The method is termed AdvFilter. Moreover, we combine adversarial pixel denoising methods with three adversarial training-based methods, hinting that considering data and models jointly is able to achieve more robust CNNs. The experiments are conducted on the NeurIPS-2017DEV, SVHN and CIFAR10 datasets and show advantages in enhancing CNNs' robustness, with high generalization to different models and noise levels.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
246,112
2306.03906
Biological Organisms as End Effectors
In robotics, an end effector is a device at the end of a robotic arm that is designed to physically interact with objects in the environment or with the environment itself. Effectively, it serves as the hand of the robot, carrying out tasks on behalf of humans. But could we turn this concept on its head and consider using living organisms themselves as end effectors? This paper introduces a novel idea of using whole living organisms as end effectors for robotics. We showcase this by demonstrating that pill bugs and chitons -- types of small, harmless creatures -- can be utilized as functional grippers. Crucially, this method does not harm these creatures, enabling their release back into nature after use. How this concept may be expanded to other organisms and applications is also discussed.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
371,524
2502.01310
A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Neural network based Optimal Transport (OT) is a recent and fruitful direction in the generative modeling community. It finds its applications in various fields such as domain translation, image super-resolution, computational biology and others. Among the existing approaches to OT, of considerable interest are adversarial minimax solvers based on semi-dual formulations of OT problems. While promising, these methods lack theoretical investigation from a statistical learning perspective. Our work fills this gap by establishing upper bounds on the generalization error of an approximate OT map recovered by the minimax quadratic OT solver. Importantly, the bounds we derive depend solely on some standard statistical and mathematical properties of the considered functional classes (neural networks). While our analysis focuses on the quadratic OT, we believe that similar bounds could be derived for more general OT formulations, paving the promising direction for future research.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
529,799
2111.00556
Revealing and Protecting Labels in Distributed Training
Distributed learning paradigms such as federated learning often involve transmission of model updates, or gradients, over a network, thereby avoiding transmission of private data. However, it is possible for sensitive information about the training data to be revealed from such gradients. Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al'19] with additional knowledge about the current state of the model. In this work, we propose a method to discover the set of labels of training samples from only the gradient of the last layer and the id to label mapping. Our method is applicable to a wide variety of model architectures across multiple domains. We demonstrate the effectiveness of our method for model training in two domains - image classification, and automatic speech recognition. Furthermore, we show that existing reconstruction techniques improve their efficacy when used in conjunction with our method. Conversely, we demonstrate that gradient quantization and sparsification can significantly reduce the success of the attack.
false
false
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
264,257
1903.11385
Signal Demodulation with Machine Learning Methods for Physical Layer Visible Light Communications: Prototype Platform, Open Dataset and Algorithms
In this paper, we investigate the design and implementation of machine learning (ML) based demodulation methods in the physical layer of visible light communication (VLC) systems. We build a flexible hardware prototype of an end-to-end VLC system, from which the received signals are collected as real data. The dataset is available online, which contains eight types of modulated signals. Then, we propose three ML demodulators based on convolutional neural network (CNN), deep belief network (DBN), and adaptive boosting (AdaBoost), respectively. Specifically, the CNN based demodulator converts the modulated signals to images and recognizes the signals by image classification. The proposed DBN based demodulator contains three restricted Boltzmann machines (RBMs) to extract the modulation features. The AdaBoost method includes a strong classifier that is constructed by the weak classifiers with the k-nearest neighbor (KNN) algorithm. These three demodulators are trained and tested by our online open dataset. Experimental results show that the demodulation accuracy of the three data-driven demodulators drops as the transmission distance increases. A higher modulation order negatively influences the accuracy for a given transmission distance. Among the three ML methods, the AdaBoost demodulator achieves the best performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
125,502
2204.04859
A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges
Legal judgment prediction (LJP) applies Natural Language Processing (NLP) techniques to predict judgment results based on fact descriptions automatically. Recently, large-scale public datasets and advances in NLP research have led to increasing interest in LJP. Despite a clear gap between machine and human performance, impressive results have been achieved in various benchmark datasets. In this paper, to address the current lack of a comprehensive survey of existing LJP tasks, datasets, models and evaluations, (1) we analyze 31 LJP datasets in 6 languages, present their construction process and define a classification method of LJP with 3 different attributes; (2) we summarize 14 evaluation metrics under four categories for different outputs of LJP tasks; (3) we review 12 legal-domain pretrained models in 3 languages and highlight 3 major research directions for LJP; (4) we show the state-of-the-art results for 8 representative datasets from different court cases and discuss the open challenges. This paper can provide up-to-date and comprehensive reviews to help readers understand the status of LJP. We hope to facilitate both NLP researchers and legal professionals for further joint efforts in this problem.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
290,815
1609.06649
Minimally Supervised Written-to-Spoken Text Normalization
In speech applications such as text-to-speech (TTS) or automatic speech recognition (ASR), text normalization refers to the task of converting from a written representation into a representation of how the text is to be spoken. In all real-world speech applications, the text normalization engine is developed, in large part, by hand. For example, a hand-built grammar may be used to enumerate the possible ways of saying a given token in a given language, and a statistical model used to select the most appropriate pronunciation in context. In this study we examine the tradeoffs associated with using more or less language-specific domain knowledge in a text normalization engine. In the most data-rich scenario, we have access to a carefully constructed hand-built normalization grammar that for any given token will produce a set of all possible verbalizations for that token. We also assume a corpus of aligned written-spoken utterances, from which we can train a ranking model that selects the appropriate verbalization for the given context. As a substitute for the carefully constructed grammar, we also consider a scenario with a language-universal normalization covering grammar, where the developer merely needs to provide a set of lexical items particular to the language. As a substitute for the aligned corpus, we also consider a scenario where one only has the spoken side, and the corresponding written side is "hallucinated" by composing the spoken side with the inverted normalization grammar. We investigate the accuracy of a text normalization engine under each of these scenarios. We report the results of experiments on English and Russian.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
61,325
2502.07828
Some things to know about achieving artificial general intelligence
Current and foreseeable GenAI models are not capable of achieving artificial general intelligence because they are burdened with anthropogenic debt. They depend heavily on human input to provide well-structured problems, architecture, and training data. They cast every problem as a language pattern learning problem and are thus not capable of the kind of autonomy needed to achieve artificial general intelligence. Current models succeed at their tasks because people solve most of the problems to which these models are directed, leaving only simple computations for the model to perform, such as gradient descent. Another barrier is the need to recognize that there are multiple kinds of problems, some of which cannot be solved by available computational methods (for example, "insight problems"). Current methods for evaluating models (benchmarks and tests) are not adequate to identify the generality of the solutions, because it is impossible to infer the means by which a problem was solved from the fact of its solution. A test could be passed, for example, by a test-specific or a test-general method. It is a logical fallacy (affirming the consequent) to infer a method of solution from the observation of success.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
532,781
2104.13216
Handling Long-Tail Queries with Slice-Aware Conversational Systems
We have been witnessing the usefulness of conversational AI systems such as Siri and Alexa, directly impacting our daily lives. These systems normally rely on machine learning models evolving over time to provide quality user experience. However, the development and improvement of the models are challenging because they need to support both high (head) and low (tail) usage scenarios, requiring fine-grained modeling strategies for specific data subsets or slices. In this paper, we explore the recent concept of slice-based learning (SBL) (Chen et al., 2019) to improve our baseline conversational skill routing system on the tail yet critical query traffic. We first define a set of labeling functions to generate weak supervision data for the tail intents. We then extend the baseline model towards a slice-aware architecture, which monitors and improves the model performance on the selected tail intents. Applied to de-identified live traffic from a commercial conversational AI system, our experiments show that the slice-aware model is beneficial in improving model performance for the tail intents while maintaining the overall performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
232,441
2103.01463
Audio-Visual Speech Separation Using Cross-Modal Correspondence Loss
We present an audio-visual speech separation learning method that considers the correspondence between the separated signals and the visual signals to reflect the speech characteristics during training. Audio-visual speech separation is a technique to estimate the individual speech signals from a mixture using the visual signals of the speakers. Conventional studies on audio-visual speech separation mainly train the separation model on the audio-only loss, which reflects the distance between the source signals and the separated signals. However, conventional losses do not reflect the characteristics of the speech signals, including the speaker's characteristics and phonetic information, which leads to distortion or remaining noise. To address this problem, we propose the cross-modal correspondence (CMC) loss, which is based on the cooccurrence of the speech signal and the visual signal. Since the visual signal is not affected by background noise and contains speaker and phonetic information, using the CMC loss enables the audio-visual speech separation model to remove noise while preserving the speech characteristics. Experimental results demonstrate that the proposed method learns the cooccurrence on the basis of CMC loss, which improves separation performance.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
222,627
2412.02287
Viewpoint Consistency in 3D Generation via Attention and CLIP Guidance
Despite recent advances in text-to-3D generation techniques, current methods often suffer from geometric inconsistencies, commonly referred to as the Janus Problem. This paper identifies the root cause of the Janus Problem: viewpoint generation bias in diffusion models, which creates a significant gap between the actual generated viewpoint and the expected one required for optimizing the 3D model. To address this issue, we propose a tuning-free approach called the Attention and CLIP Guidance (ACG) mechanism. ACG enhances desired viewpoints by adaptively controlling cross-attention maps, employs CLIP-based view-text similarities to filter out erroneous viewpoints, and uses a coarse-to-fine optimization strategy with staged prompts to progressively refine 3D generation. Extensive experiments demonstrate that our method significantly reduces the Janus Problem without compromising generation speed, establishing ACG as an efficient, plug-and-play component for existing text-to-3D frameworks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,477
2211.10039
Why the pseudo label based semi-supervised learning algorithm is effective?
Recently, pseudo label based semi-supervised learning has achieved great success in many fields. The core idea of the pseudo label based semi-supervised learning algorithm is to use the model trained on the labeled data to generate pseudo labels on the unlabeled data, and then train a model to fit the previously generated pseudo labels. In this paper, we give a theoretical analysis of why pseudo label based semi-supervised learning is effective. We mainly compare the generalization error of the model trained under two settings: (1) there are N labeled data; (2) there are N unlabeled data and a suitable initial model. Our analysis shows that, firstly, as the amount of unlabeled data tends to infinity, the pseudo label based semi-supervised learning algorithm can obtain a model with the same generalization error upper bound as a model obtained by normal training as the amount of labeled data tends to infinity. More importantly, we prove that when the amount of unlabeled data is large enough, the generalization error upper bound of the model obtained by the pseudo label based semi-supervised learning algorithm can converge to the optimal upper bound with a linear convergence rate. We also give the lower bound on sampling complexity to achieve the linear convergence rate. Our analysis contributes to understanding the empirical successes of pseudo label-based semi-supervised learning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
331,192
2211.07849
Linear Convergent Distributed Nash Equilibrium Seeking with Compression
Information compression techniques are widely employed to reduce communication cost over peer-to-peer links. In this paper, we investigate distributed Nash equilibrium (NE) seeking problems in a class of non-cooperative games over directed graphs with information compression. To improve communication efficiency, a compressed distributed NE seeking (C-DNES) algorithm is proposed to obtain a NE for games, where the differences between decision vectors and their estimates are compressed. The proposed algorithm is compatible with a general class of compression operators, including both unbiased and biased compressors. Moreover, our approach only requires the adjacency matrix of the directed graph to be row-stochastic, in contrast to past works that relied on balancedness or specific global network parameters. It is shown that C-DNES not only inherits the advantages of conventional distributed NE algorithms, achieving a linear convergence rate for games with restricted strongly monotone mappings, but also saves communication costs in terms of transmitted bits. Finally, numerical simulations illustrate the advantages of C-DNES in saving communication cost by an order of magnitude under different compressors.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
330,390
2407.10239
What is Reproducibility in Artificial Intelligence and Machine Learning Research?
In the rapidly evolving fields of Artificial Intelligence (AI) and Machine Learning (ML), the reproducibility crisis underscores the urgent need for clear validation methodologies to maintain scientific integrity and encourage advancement. The crisis is compounded by the prevalent confusion over validation terminology. Responding to this challenge, we introduce a validation framework that clarifies the roles and definitions of key validation efforts: repeatability, dependent and independent reproducibility, and direct and conceptual replicability. This structured framework aims to provide AI/ML researchers with the necessary clarity on these essential concepts, facilitating the appropriate design, conduct, and interpretation of validation studies. By articulating the nuances and specific roles of each type of validation study, we hope to contribute to a more informed and methodical approach to addressing the challenges of reproducibility, thereby supporting the community's efforts to enhance the reliability and trustworthiness of its research findings.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
472,896
2205.12446
FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech
We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a variety of speech tasks, including Automatic Speech Recognition (ASR), Speech Language Identification (Speech LangID), Translation and Retrieval. In this paper, we provide baselines for the tasks based on multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable speech technology in more languages and catalyze research in low-resource speech understanding.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
298,538
2403.09254
Gun Culture in Fringe Social Media
The increasing frequency of mass shootings in the United States has, unfortunately, become a norm. While the issue of gun control in the US involves complex legal concerns, there are also societal issues at play. One such social issue is so-called "gun culture," i.e., a general set of beliefs and actions related to gun ownership. However, relatively little is known about gun culture, and even less is known when it comes to fringe online communities. This is especially worrying considering the aforementioned rise in mass shootings and numerous instances of shooters being radicalized online. To address this gap, we explore gun culture on /k/, 4chan's weapons board. More specifically, using a variety of quantitative techniques, we examine over 4M posts on /k/ and position their discussion within the larger body of theoretical understanding of gun culture. Among other things, our findings suggest that gun culture on /k/ covers a relatively diverse set of topics (with a particular focus on legal discussion), some of which are signals of fetishism.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
437,692
2410.17422
Multimodal LLM Guided Exploration and Active Mapping using Fisher Information
We present an active mapping system that can plan for long-horizon exploration goals and short-term actions with a 3D Gaussian Splatting (3DGS) representation. Existing methods either do not take advantage of recent developments in multimodal Large Language Models (LLM) or do not consider challenges in localization uncertainty, which is critical in embodied agents. We propose employing multimodal LLMs for long-horizon planning in conjunction with detailed motion planning using our information-based algorithm. By leveraging high-quality view synthesis from our 3DGS representation, our method employs a multimodal LLM as a zero-shot planner for long-horizon exploration goals from the semantic perspective. We also introduce an uncertainty-aware path proposal and selection algorithm that balances the dual objectives of maximizing the information gain for the environment while minimizing the cost of localization errors. Experiments conducted on the Gibson and Habitat-Matterport 3D datasets demonstrate state-of-the-art results of the proposed method.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
501,449
2207.02052
A Two-Timescale Approach to Mobility Management for Multi-Cell Mobile Edge Computing
Mobile edge computing (MEC) is a promising technology for enhancing the computation capacities and features of mobile users by offloading complex computation tasks to the edge servers. However, mobility poses great challenges on delivering reliable MEC service required for latency-critical applications. First, mobility management has to tackle the dynamics of both user's location changes and task arrivals that vary in different timescales. Second, user mobility could induce service migration, leading to reliability loss due to the migration delay. In this paper, we propose a two-timescale mobility management framework by joint control of service migration and transmission power to address the above challenges. Specifically, the service migration operates at a large timescale to support user mobility in the multi-cell network, while the power control is performed at a small timescale for real-time task offloading. Their joint control is formulated as an optimization problem aiming at the long-term mobile energy minimization subject to the reliability requirement of computation offloading. To solve the problem, we propose a Lyapunov-based framework to decompose the problem into different timescales, based on which a low-complexity two-timescale online algorithm is developed by exploiting the problem structure. The proposed online algorithm is shown to be asymptotically optimal via theoretical analysis, and is further developed to accommodate the multiuser management. The simulation results demonstrate that our proposed algorithm can significantly improve the energy and reliability performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
306,382
1511.09413
Molecular Communication with a Reversible Adsorption Receiver
In this paper, we present an analytical model for a diffusive molecular communication (MC) system with a reversible adsorption receiver in a fluid environment. The time-varying spatial distribution of the information molecules under the reversible adsorption and desorption reaction at the surface of a bio-receiver is analytically characterized. Based on the spatial distribution, we derive the number of newly-adsorbed information molecules expected in any time duration. Importantly, we present a simulation framework for the proposed model that accounts for the diffusion and reversible reaction. Simulation results show the accuracy of our derived expressions, and demonstrate the positive effect of the adsorption rate and the negative effect of the desorption rate on the net number of newly-adsorbed information molecules expected. Moreover, our analytical results simplify to the special case of an absorbing receiver.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
49,662
1907.05664
Saliency Maps Generation for Automatic Text Summarization
Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these "explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanation.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
138,425
1905.10996
Graph Filtration Learning
We propose an approach to learning with graph-structured data in the problem domain of graph classification. In particular, we present a novel type of readout operation to aggregate node features into a graph-level representation. To this end, we leverage persistent homology computed via a real-valued, learnable, filter function. We establish the theoretical foundation for differentiating through the persistent homology computation. Empirically, we show that this type of readout operation compares favorably to previous techniques, especially when the graph connectivity structure is informative for the learning problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,296
2205.05940
SimCPSR: Simple Contrastive Learning for Paper Submission Recommendation System
The recommendation system plays a vital role in many areas, especially academic fields, to support researchers in submitting and increasing the acceptance of their work through the conference or journal selection process. This study proposes a transformer-based model using transfer learning as an efficient approach for the paper submission recommendation system. By combining essential information (such as the title, the abstract, and the list of keywords) with the aims and scopes of journals, the model can recommend the Top K journals that maximize the acceptance of the paper. Our model was developed in two stages: (i) Fine-tuning the pre-trained language model (LM) with a simple contrastive learning framework. We utilized a simple supervised contrastive objective to fine-tune all parameters, encouraging the LM to learn the document representation effectively. (ii) The fine-tuned LM was then trained on different combinations of the features for the downstream task. This study suggests a more advanced method for enhancing the efficiency of the paper submission recommendation system compared to previous approaches, achieving 0.5173, 0.8097, 0.8862, and 0.9496 for Top 1, 3, 5, and 10 accuracies, respectively, on the test set when combining the title, abstract, and keywords as input features. Incorporating the journals' aims and scopes, our model shows an exciting result by achieving 0.5194, 0.8112, 0.8866, and 0.9496 for Top 1, 3, 5, and 10, respectively.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
296,095
2405.02219
A Normative Framework for Benchmarking Consumer Fairness in Large Language Model Recommender System
The rapid adoption of large language models (LLMs) in recommender systems (RS) presents new challenges in understanding and evaluating their biases, which can result in unfairness or the amplification of stereotypes. Traditional fairness evaluations in RS primarily focus on collaborative filtering (CF) settings, which may not fully capture the complexities of LLMs, as these models often inherit biases from large, unregulated data. This paper proposes a normative framework to benchmark consumer fairness in LLM-powered recommender systems (RecLLMs). We critically examine how fairness norms in classical RS fall short in addressing the challenges posed by LLMs. We argue that this gap can lead to arbitrary conclusions about fairness, and we propose a more structured, formal approach to evaluate fairness in such systems. Our experiments on the MovieLens dataset on consumer fairness, using in-context learning (zero-shot vs. few-shot) reveal fairness deviations in age-based recommendations, particularly when additional contextual examples are introduced (ICL-2). Statistical significance tests confirm that these deviations are not random, highlighting the need for robust evaluation methods. While this work offers a preliminary discussion on a proposed normative framework, our hope is that it could provide a formal, principled approach for auditing and mitigating bias in RecLLMs. The code and dataset used for this work will be shared at "gihub-anonymized".
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
451,670
2302.14630
Experience in Engineering Complex Systems: Active Preference Learning with Multiple Outcomes and Certainty Levels
Black-box optimization refers to the optimization problem whose objective function and/or constraint sets are either unknown, inaccessible, or non-existent. In many applications, especially with the involvement of humans, the only way to access the optimization problem is through performing physical experiments with the available outcomes being the preference of one candidate with respect to one or many others. Accordingly, the so-called Active Preference Learning algorithm has been developed to exploit this specific information in constructing a surrogate function based on standard radial basis functions, and then forming an easy-to-solve acquisition function which repetitively suggests new decision vectors to search for the optimal solution. Based on this idea, our approach aims to extend the algorithm in such a way that it can effectively exploit further information which can be obtained in reality, such as: a 5-point Likert type scale for the outcomes of the preference query (i.e., the preference can be described not only at the "this is better than that" level but also at the "this is much better than that" level), or multiple outcomes for a single preference query with possible additive information on how certain the outcomes are. The validation of the proposed algorithm is done through some standard benchmark functions, showing a promising improvement with respect to the state-of-the-art algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
348,375
2403.05495
Rediscovering the Mullins Effect With Deep Symbolic Regression
The Mullins effect represents a softening phenomenon observed in rubber-like materials and soft biological tissues. It is usually accompanied by many other inelastic effects, for example residual strain and induced anisotropy. In spite of the long term research and many material models proposed in literature, accurate modeling and prediction of this complex phenomenon still remain a challenging task. In this work, we present a novel approach using deep symbolic regression (DSR) to generate material models describing the Mullins effect in the context of nearly incompressible hyperelastic materials. The two step framework first identifies a strain energy function describing the primary loading. Subsequently, a damage function characterizing the softening behavior under cyclic loading is identified. The efficiency of the proposed approach is demonstrated through benchmark tests using the generalized Mooney-Rivlin and the Ogden-Roxburgh model. The generalizability and robustness of the presented framework are thoroughly studied. In addition, the proposed methodology is extensively validated on a temperature-dependent data set, which demonstrates its versatile and reliable performance.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
436,029
2209.03254
3D Textured Shape Recovery with Learned Geometric Priors
3D textured shape recovery from partial scans is crucial for many real-world applications. Existing approaches have demonstrated the efficacy of implicit function representation, but they suffer from partial inputs with severe occlusions and varying object types, which greatly hinders their application value in the real world. This technical report presents our approach to address these limitations by incorporating learned geometric priors. To this end, we generate a SMPL model from learned pose prediction and fuse it into the partial input to add prior knowledge of human bodies. We also propose a novel completeness-aware bounding box adaptation for handling different levels of scales and partialness of partial scans.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
316,454
2412.12150
Rethinking Comprehensive Benchmark for Chart Understanding: A Perspective from Scientific Literature
Scientific literature charts often contain complex visual elements, including multi-plot figures, flowcharts, structural diagrams, etc. Evaluating multimodal models using these authentic and intricate charts provides a more accurate assessment of their understanding abilities. However, existing benchmarks face limitations: a narrow range of chart types, overly simplistic template-based questions and visual elements, and inadequate evaluation methods. These shortcomings lead to inflated performance scores that fail to hold up when models encounter real-world scientific charts. To address these challenges, we introduce a new benchmark, Scientific Chart QA (SCI-CQA), which emphasizes flowcharts as a critical yet often overlooked category. To overcome the limitations of chart variety and simplistic visual elements, we curated a dataset of 202,760 image-text pairs from papers at 15 top-tier computer science conferences over the past decade. After rigorous filtering, we refined this to 37,607 high-quality charts with contextual information. SCI-CQA also introduces a novel evaluation framework inspired by human exams, encompassing 5,629 carefully curated questions, both objective and open-ended. Additionally, we propose an efficient annotation pipeline that significantly reduces data annotation costs. Finally, we explore context-based chart understanding, highlighting the crucial role of contextual information in solving previously unanswerable questions.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
517,749
2110.07660
A Semi-Supervised Approach for Abnormal Event Prediction on Large Operational Network Time-Series Data
Large network logs, recording multivariate time series generated from heterogeneous devices and sensors in a network, can often reveal important information about abnormal activities, such as network intrusions and device malfunctions. Existing machine learning methods for anomaly detection on multivariate time series typically assume that 1) normal sequences would have consistent behavior for training unsupervised models, or 2) require a large set of labeled normal and abnormal sequences for supervised models. However, in practice, normal network activities can demonstrate significantly varying sequence patterns (e.g., before and after rerouting partial network traffic). Also, the recorded abnormal events can be sparse. This paper presents a novel semi-supervised method that efficiently captures dependencies between network time series and across time points to generate meaningful representations of network activities for predicting abnormal events. The method can use the limited labeled data to explicitly learn separable embedding space for normal and abnormal samples and effectively leverage unlabeled data to handle training data scarcity. The experiments demonstrate that our approach significantly outperformed state-of-the-art approaches for event detection on a large real-world network log.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
261,072
2303.08601
GCRE-GPT: A Generative Model for Comparative Relation Extraction
Given comparative text, comparative relation extraction aims to extract two targets (e.g., two cameras) in comparison and the aspect they are compared for (e.g., image quality). The extracted comparative relations form the basis of further opinion analysis. Existing solutions formulate this task as a sequence labeling task, to extract targets and aspects. However, they cannot directly extract comparative relation(s) from text. In this paper, we show that comparative relations can be directly extracted with high accuracy by a generative model. Based on GPT-2, we propose a Generation-based Comparative Relation Extractor (GCRE-GPT). Experiment results show that GCRE-GPT achieves state-of-the-art accuracy on two datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
351,708
2402.11942
The effect of Leaky ReLUs on the training and generalization of overparameterized networks
We investigate the training and generalization errors of overparameterized neural networks (NNs) with a wide class of leaky rectified linear unit (ReLU) functions. More specifically, we carefully upper bound both the convergence rate of the training error and the generalization error of such NNs and investigate the dependence of these bounds on the Leaky ReLU parameter, $\alpha$. We show that $\alpha =-1$, which corresponds to the absolute value activation function, is optimal for the training error bound. Furthermore, in special settings, it is also optimal for the generalization error bound. Numerical experiments empirically support the practical choices guided by the theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
430,652
2011.04308
Character-level Representations Improve DRS-based Semantic Parsing Even in the Age of BERT
We combine character-level and contextual language model representations to improve performance on Discourse Representation Structure parsing. Character representations can easily be added in a sequence-to-sequence model in either one encoder or as a fully separate encoder, with improvements that are robust to different language models, languages and data sets. For English, these improvements are larger than adding individual sources of linguistic information or adding non-contextual embeddings. A new method of analysis based on semantic tags demonstrates that the character-level representations improve performance across a subset of selected semantic phenomena.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
205,546
2008.11655
How to tune the RBF SVM hyperparameters?: An empirical evaluation of 18 search algorithms
SVM with an RBF kernel is usually one of the best classification algorithms for most data sets, but it is important to tune the two hyperparameters $C$ and $\gamma$ to the data itself. In general, the selection of the hyperparameters is a non-convex optimization problem and thus many algorithms have been proposed to solve it, among them: grid search, random search, Bayesian optimization, simulated annealing, particle swarm optimization, Nelder Mead, and others. There have also been proposals to decouple the selection of $\gamma$ and $C$. We empirically compare 18 of these proposed search algorithms (with different parameterizations for a total of 47 combinations) on 115 real-life binary data sets. We find (among other things) that trees of Parzen estimators and particle swarm optimization select better hyperparameters with only a slight increase in computation time with respect to a grid search with the same number of evaluations. We also find that spending too much computational effort searching the hyperparameters will not likely result in better performance for future data and that there are no significant differences among the different procedures to select the best set of hyperparameters when more than one is found by the search algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
193,342
2205.01513
Threshold Rates of Codes Ensembles: Linear is Best
In this work, we prove new results concerning the combinatorial properties of random linear codes. Firstly, we prove a lower bound on the list-size required for random linear codes over $\mathbb F_q$ $\varepsilon$-close to capacity to list-recover with error radius $\rho$ and input lists of size $\ell$. We show that the list-size $L$ must be at least $\frac{\log_q\binom{q}{\ell}-R}{\varepsilon}$, where $R$ is the rate of the random linear code. As a comparison, we also pin down the list size of random codes which is $\frac{\log_q\binom{q}{\ell}}{\varepsilon}$. This leaves open the possibility (that we consider likely) that random linear codes perform better than random codes for list-recoverability, which is in contrast to a recent gap shown for the case of list-recovery from erasures (Guruswami et al., IEEE TIT 2021B). Next, we consider list-decoding with constant list-sizes. Specifically, we obtain new lower bounds on the rate required for list-of-$3$ decodability of random linear codes over $\mathbb F_2$; and list-of-$2$ decodability of random linear codes over $\mathbb F_q$ (for any $q$). This expands upon Guruswami et al. (IEEE TIT 2021A) which only studied list-of-$2$ decodability of random linear codes over $\mathbb F_2$. Further, in both cases we are able to show that the rate is larger than that which is possible for uniformly random codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
294,615
2202.09129
Efficient computation of the volume of a polytope in high-dimensions using Piecewise Deterministic Markov Processes
Computing the volume of a polytope in high dimensions is computationally challenging but has wide applications. Current state-of-the-art algorithms to compute such volumes rely on efficient sampling of a Gaussian distribution restricted to the polytope, using e.g. Hamiltonian Monte Carlo. We present a new sampling strategy that uses a Piecewise Deterministic Markov Process. Like Hamiltonian Monte Carlo, this new method involves simulating trajectories of a non-reversible process and inherits similar good mixing properties. However, importantly, the process can be simulated more easily due to its piecewise linear trajectories - and this leads to a reduction of the computational cost by a factor of the dimension of the space. Our experiments indicate that our method is numerically robust and is one order of magnitude faster (or better) than existing methods using Hamiltonian Monte Carlo. On a single core processor, we report computational time of a few minutes up to dimension 500.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,101
2411.17525
Pushing the Limits of Large Language Model Quantization via the Linearity Theorem
Quantizing large language models has become a standard way to reduce their memory and computational costs. Typically, existing methods focus on breaking down the problem into individual layer-wise sub-problems, and minimizing per-layer error, measured via various metrics. Yet, this approach currently lacks theoretical justification and the metrics employed may be sub-optimal. In this paper, we present a "linearity theorem" establishing a direct relationship between the layer-wise $\ell_2$ reconstruction error and the model perplexity increase due to quantization. This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, which outperforms all prior data-free approaches such as the extremely popular NF4 quantized format, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels which match a given compression constraint in the medium-bitwidth regime, obtained by reduction to dynamic programming. On the practical side, we demonstrate improved accuracy-compression trade-offs on Llama-3.1 and 3.2-family models, as well as on Qwen-family models. Further, we show that our method can be efficiently supported in terms of GPU kernels at various batch sizes, advancing both data-free and non-uniform quantization for LLMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
511,483
2105.06292
A one-armed CNN for exoplanet detection from light curves
We propose Genesis, a one-armed simplified Convolutional Neural Network (CNN) for exoplanet detection, and compare it to the more complex, two-armed CNN called Astronet. Furthermore, we examine how Monte Carlo cross-validation affects the estimation of the exoplanet detection performance. Finally, we increase the input resolution twofold to assess its effect on performance. The experiments reveal that (i) the reduced complexity of Genesis, i.e., a more than 95% reduction in the number of free parameters, incurs a small performance cost of about 0.5% compared to Astronet, (ii) Monte Carlo cross-validation provides a more realistic performance estimate that is almost 0.7% below the original estimate, and (iii) the twofold increase in input resolution decreases the average performance by about 0.5%. We conclude by arguing that further exploration of shallower CNN architectures may be beneficial in order to improve the generalizability of CNN-based exoplanet detection across surveys.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
235,081
2308.02950
A criterion for Artificial General Intelligence: hypothetic-deductive reasoning, tested on ChatGPT
We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as 'thinking machine', or AGI, is hypothetic-deductive reasoning. Problem-solving or question-answering can quite generally be construed as involving two steps: hypothesizing that a certain set of hypotheses T applies to the problem or question at hand, and deducing the solution or answer from T - hence the term hypothetic-deductive reasoning. An elementary proxy of hypothetic-deductive reasoning is causal reasoning. We propose simple tests for both types of reasoning, and apply them to ChatGPT. Our study shows that, at present, the chatbot has a limited capacity for either type of reasoning, as soon as the problems considered are somewhat complex. However, we submit that if an AI would be capable of this type of reasoning in a sufficiently wide range of contexts, it would be an AGI.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
383,834
2207.09225
Similarity of Pre-trained and Fine-tuned Representations
In transfer learning, often only the last part of the network - the so-called head - is fine-tuned. Representation similarity analysis shows that the most significant change still occurs in the head even if all weights are updatable. However, recent results from few-shot learning have shown that representation change in the early layers, which are mostly convolutional, is beneficial, especially in the case of cross-domain adaption. In our paper, we find out whether that also holds true for transfer learning. In addition, we analyze the change of representation in transfer learning, both during pre-training and fine-tuning, and find that pre-trained structure is unlearned if not usable.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
308,839
0806.2513
The Perfect Binary One-Error-Correcting Codes of Length 15: Part I--Classification
A complete classification of the perfect binary one-error-correcting codes of length 15 as well as their extensions of length 16 is presented. There are 5983 such inequivalent perfect codes and 2165 extended perfect codes. Efficient generation of these codes relies on the recent classification of Steiner quadruple systems of order 16. Utilizing a result of Blackmore, the optimal binary one-error-correcting codes of length 14 and the (15, 1024, 4) codes are also classified; there are 38408 and 5983 such codes, respectively.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,922
2403.20328
Learning Visual Quadrupedal Loco-Manipulation from Demonstrations
Quadruped robots are progressively being integrated into human environments. Despite the growing locomotion capabilities of quadrupedal robots, their interaction with objects in realistic scenes is still limited. While additional robotic arms on quadrupedal robots enable manipulating objects, they are sometimes redundant given that a quadruped robot is essentially a mobile unit equipped with four limbs, each possessing 3 degrees of freedom (DoFs). Hence, we aim to empower a quadruped robot to execute real-world manipulation tasks using only its legs. We decompose the loco-manipulation process into a low-level reinforcement learning (RL)-based controller and a high-level Behavior Cloning (BC)-based planner. By parameterizing the manipulation trajectory, we synchronize the efforts of the upper and lower layers, thereby leveraging the advantages of both RL and BC. Our approach is validated through simulations and real-world experiments, demonstrating the robot's ability to perform tasks that demand mobility and high precision, such as lifting a basket from the ground while moving, closing a dishwasher, pressing a button, and pushing a door. Project website: https://zhengmaohe.github.io/leg-manip
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
442,711
2305.05204
Popularity Debiasing from Exposure to Interaction in Collaborative Filtering
Recommender systems often suffer from popularity bias, where popular items are overly recommended while unpopular items are sacrificed. Existing research generally focuses on ensuring that the recommendation exposure of each item is equal or proportional, using inverse propensity weighting, causal intervention, or adversarial training. However, increasing the exposure of unpopular items may not bring more clicks or interactions, resulting in skewed benefits and failing to achieve truly reasonable popularity debiasing. In this paper, we propose a new criterion for popularity debiasing, i.e., in an unbiased recommender system, both popular and unpopular items should receive Interactions Proportional to the number of users who Like them, namely the IPL criterion. Under the guidance of this criterion, we then propose a debiasing framework with an IPL regularization term which is theoretically shown to achieve a win-win situation of both popularity debiasing and recommendation performance. Experiments conducted on four public datasets demonstrate that when equipping two representative collaborative filtering models with our framework, the popularity bias is effectively alleviated while maintaining the recommendation performance.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
363,051
2409.19617
LiRA: Light-Robust Adversary for Model-based Reinforcement Learning in Real World
Model-based reinforcement learning has attracted much attention due to its high sample efficiency and is expected to be applied to real-world robotic applications. In the real world, as unobservable disturbances can lead to unexpected situations, robot policies should be designed to improve not only control performance but also robustness. Adversarial learning is an effective way to improve robustness, but an excessive adversary increases the risk of malfunction and makes the control performance too conservative. Therefore, this study proposes a new adversarial learning framework that makes reinforcement learning moderately robust without being overly conservative. To this end, adversarial learning is first rederived with variational inference. In addition, light robustness, which allows for maximizing robustness within an acceptable performance degradation, is utilized as a constraint. As a result, the proposed framework, called LiRA, can automatically adjust the adversary level, balancing robustness and conservativeness. The expected behaviors of LiRA are confirmed in numerical simulations. In addition, LiRA succeeds in learning a force-reactive gait control of a quadrupedal robot with less than two hours of real-world data.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
492,772
1811.00416
Technical Note on Transcription Factor Motif Discovery from Importance Scores (TF-MoDISco) version 0.5.6.5
TF-MoDISco (Transcription Factor Motif Discovery from Importance Scores) is an algorithm for identifying motifs from basepair-level importance scores computed on genomic sequence data. This technical note focuses on version v0.5.6.5. The implementation is available at https://github.com/kundajelab/tfmodisco/tree/v0.5.6.5
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
112,093
1911.12263
Low density majority codes and the problem of graceful degradation
We study a problem of constructing codes that transform a channel with high bit error rate (BER) into one with low BER (at the expense of rate). Our focus is on obtaining codes with smooth ("graceful'') input-output BER curves (as opposed to threshold-like curves typical for long error-correcting codes). This paper restricts attention to binary erasure channels (BEC) and contains three contributions. First, we introduce the notion of Low Density Majority Codes (LDMCs). These codes are non-linear sparse-graph codes, which output the majority function evaluated on randomly chosen small subsets of the data bits. This is similar to Low Density Generator Matrix codes (LDGMs), except that the XOR function is replaced with the majority. We show that even with a few iterations of belief propagation (BP) the attained input-output curves provably improve upon the performance of any linear systematic code. The effect of non-linearity bootstrapping the initial iterations of BP suggests that LDMCs should improve performance in various applications where LDGMs have been used traditionally. Second, we establish several \textit{two-point converse bounds} that lower bound the BER achievable at one erasure probability as a function of the BER achieved at another one. The novel nature of our bounds is that they are specific to subclasses of codes (linear systematic and non-linear systematic) and outperform similar bounds implied by the area theorem for the EXIT function.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
155,352
2210.11053
The Network Structure of Unequal Diffusion
Social networks affect the diffusion of information, and thus have the potential to reduce or amplify inequality in access to opportunity. We show empirically that social networks often exhibit a much larger potential for unequal diffusion across groups along paths of length 2 and 3 than expected by our random graph models. We argue that homophily alone cannot fully explain the extent of unequal diffusion and attribute this mismatch to the unequal distribution of cross-group links among the nodes. Based on this insight, we develop a variant of the stochastic block model that incorporates the heterogeneity in cross-group linking. The model provides an unbiased and consistent estimate of assortativity or homophily on paths of length 2 and provides a more accurate estimate along paths of length 3 than existing models. We characterize the null distribution of its log-likelihood ratio test and argue that the goodness of fit test is valid only when the network is dense. Based on our empirical observations and modeling results, we conclude that the impact of any departure from equal distribution of links to source nodes in the diffusion process is not limited to its first order effects as some nodes will have fewer direct links to the sources. More importantly, this unequal distribution will also lead to second order effects as the whole group will have fewer diffusion paths to the sources.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
325,164
2011.06021
A WLAV-based Robust Hybrid State Estimation using Circuit-theoretic Approach
For reliable and secure power grid operation, AC state-estimation (ACSE) must provide certain guarantees of convergence while being resilient against bad data. This paper develops a circuit-theoretic weighted least absolute value (WLAV) based hybrid ACSE that satisfies these needs to overcome some of the limitations of existing ACSE methods. Hybrid refers to the inclusion of RTU and PMU measurement data, and the use of the LAV objective function enables automatic rejection of bad data while providing clear identification of suspicious measurements from the sparse residual vector. Taking advantage of the linear construction of the measurement models in the circuit-theoretic approach, the proposed hybrid SE is formulated as an LP problem with guaranteed convergence. To address efficiency, we further develop problem-specific heuristics for fast convergence. To validate the efficacy of the proposed approach, we run ACSE on large cases and compare the results against WLS-based algorithms. We further demonstrate the advantages of our solution methodology over standard commercial LP solvers through comparison of runtime and convergence performance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
206,110
2305.07567
The Critical Theorem for q-Polymatroids
The Critical Theorem, due to Henry Crapo and Gian-Carlo Rota, has been extended and generalised in many ways. In this paper, we describe properties of the characteristic polynomial of a weighted lattice and show that it has a recursive description, which we use to obtain results on the critical exponent of $q$-polymatroids. We prove a Critical Theorem for representable $q$-polymatroids and we provide a lower bound on the critical exponent. We show that $q$-polymatroids arising from certain families of rank-metric codes attain this lower bound.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
363,948
2412.10435
COEF-VQ: Cost-Efficient Video Quality Understanding through a Cascaded Multimodal LLM Framework
Recently, with the emergence of Multimodal Large Language Model (MLLM) technology, it has become possible to exploit its video understanding capability for different classification tasks. In practice, we face the difficulty of huge GPU resource requirements if we need to deploy MLLMs online. In this paper, we propose COEF-VQ, a novel cascaded MLLM framework for better video quality understanding on TikTok. To this end, we first propose an MLLM fusing all visual, textual and audio signals, and then develop a cascade framework with a lightweight model as a pre-filtering stage and the MLLM as a fine-consideration stage, significantly reducing the need for GPU resources while retaining the performance demonstrated by the MLLM alone. To demonstrate the effectiveness of COEF-VQ, we deployed this new framework onto the video management platform (VMP) at TikTok, and performed a series of detailed experiments on two in-house tasks related to video quality understanding. We show that COEF-VQ leads to substantial performance gains with limited resource consumption in these two tasks.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
516,926
2107.10068
From Single to Multiple: Leveraging Multi-level Prediction Spaces for Video Forecasting
Although video forecasting has been a widely explored topic in recent years, the mainstream of existing work still limits models to a single prediction space and completely neglects ways to leverage multiple prediction spaces. This work fills this gap. For the first time, we deeply study numerous strategies to perform video forecasting in multiple prediction spaces and fuse their results together to boost performance. Prediction in the pixel space usually lacks the ability to preserve the semantic and structural content of the video, whereas prediction in a high-level feature space is prone to generating errors in the reduction and recovery process. Therefore, we build a recurrent connection between different feature spaces and incorporate their generations in the upsampling process. Rather surprisingly, this simple idea yields a much more significant performance boost than PhyDNet (performance improved by 32.1% MAE on the MNIST-2 dataset, and 21.4% MAE on the KTH dataset). Both qualitative and quantitative evaluations on four datasets demonstrate the generalization ability and effectiveness of our approach. We show that our model significantly reduces the troublesome distortions and blurry artifacts and brings remarkable improvements in accuracy for long-term video prediction. The code will be released soon.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
247,209
1808.10858
Automatic Lung Cancer Prediction from Chest X-ray Images Using Deep Learning Approach
Since cancer is curable when diagnosed at an early stage, lung cancer screening plays an important role in preventive care. Although both low-dose computed tomography (LDCT) and computed tomography (CT) scans provide more medical information than normal chest X-rays, there is very limited access to these technologies in rural areas. Recently, there has been a trend of using computer-aided diagnosis (CADx) to assist in the screening and diagnosis of cancer from biomedical images. In this study, the 121-layer convolutional neural network known as DenseNet-121 by G. Huang et al., along with a transfer learning scheme, was explored as a means to classify lung cancer using chest X-ray images. The model was trained on a lung nodule dataset before training on the lung cancer dataset to alleviate the problem of a small dataset. The proposed model yields 74.43$\pm$6.01\% mean accuracy, 74.96$\pm$9.85\% mean specificity, and 74.68$\pm$15.33\% mean sensitivity. The proposed model also provides a heatmap for identifying the location of the lung nodule. These findings are promising for further development of chest X-ray-based lung cancer diagnosis using the deep learning approach. Moreover, these findings address the problem of a small dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
106,457
2405.06454
E2TP: Element to Tuple Prompting Improves Aspect Sentiment Tuple Prediction
Generative approaches have significantly influenced Aspect-Based Sentiment Analysis (ABSA), garnering considerable attention. However, existing studies often predict target text components monolithically, neglecting the benefits of utilizing single elements for tuple prediction. In this paper, we introduce Element to Tuple Prompting (E2TP), employing a two-step architecture. The former step focuses on predicting single elements, while the latter step completes the process by mapping these predicted elements to their corresponding tuples. E2TP is inspired by human problem-solving, breaking down tasks into manageable parts, using the first step's output as a guide in the second step. Within this strategy, three types of paradigms, namely E2TP($diet$), E2TP($f_1$), and E2TP($f_2$), are designed to facilitate the training process. Beyond dataset-specific experiments, our paper addresses cross-domain scenarios, demonstrating the effectiveness and generalizability of the approach. By conducting a comprehensive analysis on various benchmarks, we show that E2TP achieves new state-of-the-art results in nearly all cases.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
453,297
1705.00794
Offline Handwritten Recognition of Malayalam District Name - A Holistic Approach
Various machine learning methods for writer-independent recognition of handwritten Malayalam district names are discussed in this paper. Data collected from 56 different writers are used for the experiments. The proposed work can be used for recognizing the district in an address written in Malayalam. Different methods for dimensionality reduction are discussed. Features considered for recognition are the Histogram of Oriented Gradients descriptor, the number of black pixels in the upper half and lower half, and the length of the image. Classifiers used in this work are Neural Network, SVM and Random Forest.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
72,750
2008.13671
Adversarial Patch Camouflage against Aerial Detection
Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal by means of traditional camouflage nets. An alternative type of camouflage is the direct misleading of automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of the object can produce erroneous output by deep learning-based detectors. In particular, adversarial attacks have been successfully demonstrated to prohibit person detections in images, requiring a patch with a specific pattern held up in front of the person, thereby essentially camouflaging the person for the detector. Research into this type of patch attacks is still limited and several questions related to the optimal patch configuration remain open. This work makes two contributions. First, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while only covering a small part of it. Second, we perform several experiments with different patch configurations, varying their size, position, number and saliency. Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
193,908
2110.00825
Recurrent networks improve neural response prediction and provide insights into underlying cortical circuits
Feedforward CNN models have proven themselves in recent years as state-of-the-art models for predicting single-neuron responses to natural images in early visual cortical neurons. In this paper, we extend these models with recurrent convolutional layers, reflecting the well-known massive recurrence in the cortex, and show robust increases in predictive performance over feedforward models across thousands of hyperparameter combinations in three datasets of macaque V1 and V2 single-neuron responses. We propose the recurrent circuit can be conceptualized as a form of ensemble computing, with each iteration generating more effective feedforward paths of various path lengths to allow a combination of solutions in the final approximation. The statistics of the paths in the ensemble provide insights to the differential performance increases among our recurrent models. We also assess whether the recurrent circuits learned for neural response prediction can be related to cortical circuits. We find that the hidden units in the recurrent circuits of the appropriate models, when trained on long-duration wide-field image presentations, exhibit similar temporal response dynamics and classical contextual modulations as observed in V1 neurons. This work provides insights to the computational rationale of recurrent circuits and suggests that neural response prediction could be useful for characterizing the recurrent neural circuits in the visual cortex.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
258,547
1506.00898
Extreme Compressive Sampling for Covariance Estimation
This paper studies the problem of estimating the covariance of a collection of vectors using only highly compressed measurements of each vector. An estimator based on back-projections of these compressive samples is proposed and analyzed. A distribution-free analysis shows that by observing just a single linear measurement of each vector, one can consistently estimate the covariance matrix, in both infinity and spectral norm, and this same analysis leads to precise rates of convergence in both norms. Via information-theoretic techniques, lower bounds showing that this estimator is minimax-optimal for both infinity and spectral norm estimation problems are established. These results are also specialized to give matching upper and lower bounds for estimating the population covariance of a collection of Gaussian vectors, again in the compressive measurement model. The analysis conducted in this paper shows that the effective sample complexity for this problem is scaled by a factor of $m^2/d^2$ where $m$ is the compression dimension and $d$ is the ambient dimension. Applications to subspace learning (Principal Components Analysis) and learning over distributed sensor networks are also discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
43,728
2211.01833
Basis Function feedforward for Position-Dependent Systems
Feedforward for motion systems is getting increasingly more important to achieve performance requirements. This leads to a situation where position-dependent effects cannot be neglected anymore.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
328,377
2412.15310
MRWeb: An Exploration of Generating Multi-Page Resource-Aware Web Code from UI Designs
Multi-page websites dominate modern web development. However, existing design-to-code methods rely on simplified assumptions, limiting them to single-page, self-contained webpages without external resource connections. To address this gap, we introduce the Multi-Page Resource-Aware Webpage (MRWeb) generation task, which transforms UI designs into multi-page, functional web UIs with internal/external navigation, image loading, and backend routing. We propose a novel resource list data structure to track resources, links, and design components. Our study applies existing methods to the MRWeb problem using a newly curated dataset of 500 websites (300 synthetic, 200 real-world). Specifically, we identify the best metric to evaluate the similarity of the web UI, assess the impact of the resource list on MRWeb generation, analyze MLLM limitations, and evaluate the effectiveness of the MRWeb tool in real-world workflows. The results show that resource lists boost navigation functionality from 0% to 66%-80% while facilitating visual similarity. Our proposed metrics and evaluation framework provide new insights into MLLM performance on MRWeb tasks. We release the MRWeb tool, dataset, and evaluation framework to promote further research.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
true
519,061
1709.06173
Robustness of Neural Networks against Storage Media Errors
We study the trade-offs between storage/bandwidth and prediction accuracy of neural networks that are stored in noisy media. Conventionally, it is assumed that all parameters (e.g., weights and biases) of a trained neural network are stored as binary arrays and are error-free. This assumption is based upon the implementation of error correction codes (ECCs) that correct potential bit flips in storage media. However, ECCs add storage overhead and cause bandwidth reduction when loading the trained parameters during inference. We study the robustness of deep neural networks when bit errors exist but ECCs are turned off, for different neural network models and datasets. It is observed that more sophisticated models and datasets are more vulnerable to errors in their trained parameters. We propose a simple detection approach that can universally improve the robustness, which in some cases can be improved by orders of magnitude. We also propose an alternative binary representation of the parameters such that the distortion brought by bit flips is reduced and even theoretically vanishes as the number of bits used to represent a parameter increases.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
81,045
2410.01801
FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images
We introduce FabricDiffusion, a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes. Existing approaches typically synthesize textures on the garment surface through 2D-to-3D texture mapping or depth-aware inpainting via generative models. Unfortunately, these methods often struggle to capture and preserve texture details, particularly due to challenging occlusions, distortions, or poses in the input image. Inspired by the observation that in the fashion industry, most garments are constructed by stitching sewing patterns with flat, repeatable textures, we cast the task of clothing texture transfer as extracting distortion-free, tileable texture materials that are subsequently mapped onto the UV space of the garment. Building upon this insight, we train a denoising diffusion model with a large-scale synthetic dataset to rectify distortions in the input texture image. This process yields a flat texture map that enables a tight coupling with existing Physically-Based Rendering (PBR) material generation pipelines, allowing for realistic relighting of the garment under various lighting conditions. We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art methods on both synthetic data and real-world, in-the-wild clothing images while generalizing to unseen textures and garment shapes.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
493,955
1709.00322
Disintegration and Bayesian Inversion via String Diagrams
The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability --- via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
79,883
2408.11969
DrivAerML: High-Fidelity Computational Fluid Dynamics Dataset for Road-Car External Aerodynamics
Machine Learning (ML) has the potential to revolutionise the field of automotive aerodynamics, enabling split-second flow predictions early in the design process. However, the lack of open-source training data for realistic road cars, using high-fidelity CFD methods, represents a barrier to their development. To address this, a high-fidelity open-source (CC-BY-SA) public dataset for automotive aerodynamics has been generated, based on 500 parametrically morphed variants of the widely-used DrivAer notchback generic vehicle. Mesh generation and scale-resolving CFD was executed using consistent and validated automatic workflows representative of the industrial state-of-the-art. Geometries and rich aerodynamic data are published in open-source formats. To our knowledge, this is the first large, public-domain dataset for complex automotive configurations generated using high-fidelity CFD.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
482,527
1701.07521
Floor Scale Modulo Lifting for QC-LDPC codes
In this paper we present a novel approach for constructing a QC-LDPC code of smaller length by lifting a given QC-LDPC code. The proposed method can be considered a generalization of floor lifting. We also prove several probabilistic statements concerning a theoretical improvement of the method with respect to the number of small cycles. By performing some offline calculation of the scale parameter, it is possible to construct a sequence of QC-LDPC codes with different circulant sizes generated from a single exponent matrix using only floor and scale operations. The only parameter we store in memory is the constant needed for scaling.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
67,306
1905.08868
Look Again at the Syntax: Relational Graph Convolutional Network for Gendered Ambiguous Pronoun Resolution
Gender bias has been found in existing coreference resolvers. In order to eliminate gender bias, a gender-balanced dataset, Gendered Ambiguous Pronouns (GAP), has been released, on which the best baseline model achieves only 66.9% F1. Bidirectional Encoder Representations from Transformers (BERT) has broken several NLP task records and can be used on the GAP dataset. However, fine-tuning BERT on a specific task is computationally expensive. In this paper, we propose an end-to-end resolver by combining pre-trained BERT with a Relational Graph Convolutional Network (R-GCN). The R-GCN is used for digesting structural syntactic information and learning better task-specific embeddings. Empirical results demonstrate that, under explicit syntactic supervision and without the need to fine-tune BERT, R-GCN's embeddings outperform the original BERT embeddings on the coreference task. Our work significantly improves the snippet-context baseline F1 score on the GAP dataset from 66.9% to 80.3%. We participated in the 2019 GAP Coreference Shared Task, and our code is available online.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
131,589
2005.03315
Encoding in the Dark Grand Challenge: An Overview
A big part of the video content we consume from video providers consists of genres featuring low-light aesthetics. Low light sequences have special characteristics, such as spatio-temporal varying acquisition noise and light flickering, that make the encoding process challenging. To deal with the spatio-temporal incoherent noise, higher bitrates are used to achieve high objective quality. Additionally, the quality assessment metrics and methods have not been designed, trained or tested for this type of content. This has inspired us to trigger research in that area and propose a Grand Challenge on encoding low-light video sequences. In this paper, we present an overview of the proposed challenge, and test state-of-the-art methods that will be part of the benchmark methods at the stage of the participants' deliverable assessment. From this exploration, our results show that VVC already achieves a high performance compared to simply denoising the video source prior to encoding. Moreover, the quality of the video streams can be further improved by employing a post-processing image enhancement method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
176,124
1806.02538
Segment-Based Credit Scoring Using Latent Clusters in the Variational Autoencoder
Identifying customer segments in retail banking portfolios with different risk profiles can improve the accuracy of credit scoring. The Variational Autoencoder (VAE) has shown promising results in different research domains, and the powerful information embedded in its latent space has been documented. We use the VAE and show that by transforming the input data into a meaningful representation, it is possible to steer configurations in the latent space of the VAE. Specifically, the Weight of Evidence (WoE) transformation encapsulates the propensity to fall into financial distress, and the latent space of the VAE preserves this characteristic in a well-defined clustering structure. These clusters have considerably different risk profiles and are therefore suitable not only for credit scoring but also for marketing and customer purposes. This new clustering methodology offers solutions to some of the challenges in existing clustering algorithms: e.g., it suggests the number of clusters, assigns cluster labels to new customers, enables cluster visualization, scales to large datasets, and captures non-linear relationships, among others. Finally, for portfolios with a large number of customers in each cluster, developing one classifier model per cluster can improve the credit scoring assessment.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
99,807