| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2210.07370 | M2D2: A Massively Multi-domain Language Modeling Dataset | We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation in language models (LMs). M2D2 consists of 8.5B tokens and spans 145 domains extracted from Wikipedia and Semantic Scholar. Using ontologies derived from Wikipedia and ArXiv categories, we organize the domains in each data source into 22 groups. This two-level hierarchy enables the study of relationships between domains and their effects on in- and out-of-domain performance after adaptation. We also present a number of insights into the nature of effective domain adaptation in LMs, as examples of the new types of studies M2D2 enables. To improve in-domain performance, we show the benefits of adapting the LM along a domain hierarchy; adapting to smaller amounts of fine-grained domain-specific data can lead to larger in-domain performance gains than larger amounts of weakly relevant data. We further demonstrate a trade-off between in-domain specialization and out-of-domain generalization within and across ontologies, as well as a strong correlation between out-of-domain performance and lexical overlap between domains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 323,673 |
| 2412.05179 | Spatially-Adaptive Hash Encodings For Neural Surface Reconstruction | Positional encodings are a common component of neural scene reconstruction methods, and provide a way to bias the learning of neural fields towards coarser or finer representations. Current neural surface reconstruction methods use a "one-size-fits-all" approach to encoding, choosing a fixed set of encoding functions, and therefore bias, across all scenes. Current state-of-the-art surface reconstruction approaches leverage grid-based multi-resolution hash encoding in order to recover high-detail geometry. We propose a learned approach which allows the network to choose its encoding basis as a function of space, by masking the contribution of features stored at separate grid resolutions. The resulting spatially adaptive approach allows the network to fit a wider range of frequencies without introducing noise. We test our approach on standard benchmark surface reconstruction datasets and achieve state-of-the-art performance on two of them. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 514,723 |
| 1102.4868 | Verifiable and computable performance analysis of sparsity recovery | In this paper, we develop verifiable and computable performance analysis of sparsity recovery. We define a family of goodness measures for arbitrary sensing matrices as a set of optimization problems, and design algorithms with a theoretical global convergence guarantee to compute these goodness measures. The proposed algorithms solve a series of second-order cone programs, or linear programs. As a by-product, we implement an efficient algorithm to verify a sufficient condition for exact sparsity recovery in the noise-free case. We derive performance bounds on the recovery errors in terms of these goodness measures. We also analytically demonstrate that the developed goodness measures are non-degenerate for a large class of random sensing matrices, as long as the number of measurements is relatively large. Numerical experiments show that, compared with the restricted isometry based performance bounds, our error bounds apply to a wider range of problems and are tighter, when the sparsity levels of the signals are relatively low. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,337 |
| 2001.06323 | Wine quality rapid detection using a compact electronic nose system: application focused on spoilage thresholds by acetic acid | It is crucial for the wine industry to have methods like electronic nose systems (E-Noses) for real-time monitoring thresholds of acetic acid in wines, preventing its spoilage or determining its quality. In this paper, we prove that the portable and compact self-developed E-Nose, based on thin film semiconductor (SnO2) sensors and trained with an approach that uses a deep Multilayer Perceptron (MLP) neural network, can perform early detection of wine spoilage thresholds in routine tasks of wine quality control. To obtain rapid and online detection, we propose a rising-window method focused on raw data processing to find an early portion of the sensor signals with the best recognition performance. Our approach was compared with the conventional approach employed in E-Noses for gas recognition, which involves feature extraction and selection techniques for preprocessing data, followed by a Support Vector Machine (SVM) classifier. The results show that it is possible to classify three wine spoilage levels in 2.7 seconds after the gas injection point, yielding a methodology 63 times faster than the conventional approach in our experimental setup. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 160,775 |
| 1612.09535 | PAMPO: using pattern matching and pos-tagging for effective Named Entities recognition in Portuguese | This paper deals with the entity extraction task (named entity recognition) of a text mining process that aims at unveiling non-trivial semantic structures, such as relationships and interaction between entities or communities. In this paper we present a simple and efficient named entity extraction algorithm. The method, named PAMPO (PAttern Matching and POs tagging based algorithm for NER), relies on flexible pattern matching, part-of-speech tagging and lexical-based rules. It was developed to process texts written in Portuguese; however, it is potentially applicable to other languages as well. We compare our approach with current alternatives that support Named Entity Recognition (NER) for content written in Portuguese. These are Alchemy, Zemanta and Rembrandt. Evaluation of the efficacy of the entity extraction method on several texts written in Portuguese indicates a considerable improvement on $recall$ and $F_1$ measures. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 66,203 |
| 2203.09283 | PanoFormer: Panorama Transformer for Indoor 360 Depth Estimation | Existing panoramic depth estimation methods based on convolutional neural networks (CNNs) focus on removing panoramic distortions, failing to perceive panoramic structures efficiently due to the fixed receptive field in CNNs. This paper proposes the panorama transformer (named PanoFormer) to estimate the depth in panorama images, with tangent patches from the spherical domain, learnable token flows, and panorama-specific metrics. In particular, we divide patches on the spherical tangent domain into tokens to reduce the negative effect of panoramic distortions. Since geometric structures are essential for depth estimation, a self-attention module is redesigned with an additional learnable token flow. In addition, considering the characteristics of the spherical domain, we present two panorama-specific metrics to comprehensively evaluate the performance of panoramic depth estimation models. Extensive experiments demonstrate that our approach significantly outperforms the state-of-the-art (SOTA) methods. Furthermore, the proposed method can be effectively extended to solve semantic panorama segmentation, a similar pixel2pixel task. Code will be available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,103 |
| 2308.13764 | Unified Single-Stage Transformer Network for Efficient RGB-T Tracking | Most existing RGB-T tracking networks extract modality features in a separate manner, which lacks interaction and mutual guidance between modalities. This limits the network's ability to adapt to the diverse dual-modality appearances of targets and the dynamic relationships between the modalities. Additionally, the three-stage fusion tracking paradigm followed by these networks significantly restricts the tracking speed. To overcome these problems, we propose a unified single-stage Transformer RGB-T tracking network, namely USTrack, which unifies the above three stages into a single ViT (Vision Transformer) backbone with a dual embedding layer through a self-attention mechanism. With this structure, the network can extract fusion features of the template and search region under the mutual interaction of modalities. Simultaneously, relation modeling is performed between these features, efficiently obtaining the search region fusion features with better target-background discriminability for prediction. Furthermore, we introduce a novel feature selection mechanism based on modality reliability to mitigate the influence of invalid modalities on prediction, further improving the tracking performance. Extensive experiments on three popular RGB-T tracking benchmarks demonstrate that our method achieves new state-of-the-art performance while maintaining the fastest inference speed of 84.2 FPS. In particular, MPR/MSR on the short-term and long-term subsets of the VTUAV dataset increased by 11.1$\%$/11.7$\%$ and 11.3$\%$/9.7$\%$. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 388,043 |
| 1106.5341 | Pose Estimation from a Single Depth Image for Arbitrary Kinematic Skeletons | We present a method for estimating pose information from a single depth image given an arbitrary kinematic structure without prior training. For an arbitrary skeleton and depth image, an evolutionary algorithm is used to find the optimal kinematic configuration to explain the observed image. Results show that our approach can correctly estimate poses of 39 and 78 degree-of-freedom models from a single depth image, even in cases of significant self-occlusion. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 11,023 |
| 1911.13062 | Sentiment Analysis of German Twitter | This thesis explores the ways in which people express their opinions on German Twitter, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as polar terms and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding-based ones, finding that dictionary-based systems generally yield better lexicons than the last two groups. Apart from this, I propose a linear projection algorithm, whose results surpass many existing automatic lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiments, sources, and targets: conditional random fields and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning-, and deep-learning-based systems, and try to unite the first and last of these groups by introducing a bidirectional neural network with lexicon-based attention. Finally, in order to make the new classifier aware of discourse structure, I let it separately analyze the elementary discourse units of each microblog and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 155,576 |
| 2004.06407 | Non-convex Feedback Optimization with Input and Output Constraints | In this paper, we present a novel control scheme for feedback optimization. That is, we propose a discrete-time controller that can steer the steady state of a physical plant to the solution of a constrained optimization problem without numerically solving the problem. Our controller can be interpreted as a discretization of a continuous-time projected gradient flow. Compared to other schemes used for feedback optimization, such as saddle-point flows or inexact penalty methods, our algorithm combines several desirable properties: It asymptotically enforces constraints on the plant steady-state outputs, and temporary constraint violations can be easily quantified. Our algorithm requires only reduced model information in the form of steady-state input-output sensitivities of the plant. Further, as we prove in this paper, global convergence is guaranteed even for non-convex problems. Finally, our algorithm is straightforward to tune, since the step-size is the only tuning parameter. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 172,502 |
| 1910.09337 | Large-scale Causal Approaches to Debiasing Post-click Conversion Rate Estimation with Multi-task Learning | Post-click conversion rate (CVR) estimation is a critical task in e-commerce recommender systems. This task is deemed quite challenging under the industrial setting with two major issues: 1) selection bias caused by user self-selection, and 2) data sparsity due to the rare click events. A successful conversion typically has the following sequential events: "exposure -> click -> conversion". Conventional CVR estimators are trained in the click space, but the inference is done in the entire exposure space. They fail to account for the causes of the missing data and treat them as missing at random. Hence, their estimations are highly likely to deviate from the real values by a large margin. In addition, the data sparsity issue can also handicap many industrial CVR estimators, which usually have large parameter spaces. In this paper, we propose two principled, efficient and highly effective CVR estimators for industrial CVR estimation, namely, Multi-IPW and Multi-DR. The proposed models approach the CVR estimation from a causal perspective and account for the causes of missing not at random. In addition, our methods are based on the multi-task learning framework and mitigate the data sparsity issue. Extensive experiments on industrial-level datasets show that our methods outperform the state-of-the-art CVR models. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 150,159 |
| 2305.17283 | Sharpened Lazy Incremental Quasi-Newton Method | The problem of minimizing the sum of $n$ functions in $d$ dimensions is ubiquitous in machine learning and statistics. In many applications where the number of observations $n$ is large, it is necessary to use incremental or stochastic methods, as their per-iteration cost is independent of $n$. Of these, Quasi-Newton (QN) methods strike a balance between the per-iteration cost and the convergence rate. Specifically, they exhibit a superlinear rate with $O(d^2)$ cost in contrast to the linear rate of first-order methods with $O(d)$ cost and the quadratic rate of second-order methods with $O(d^3)$ cost. However, existing incremental methods have notable shortcomings: Incremental Quasi-Newton (IQN) only exhibits asymptotic superlinear convergence. In contrast, Incremental Greedy BFGS (IGS) offers explicit superlinear convergence but suffers from poor empirical performance and has a per-iteration cost of $O(d^3)$. To address these issues, we introduce the Sharpened Lazy Incremental Quasi-Newton Method (SLIQN) that achieves the best of both worlds: an explicit superlinear convergence rate, and superior empirical performance at a per-iteration $O(d^2)$ cost. SLIQN features two key changes: first, it incorporates a hybrid strategy of using both classic and greedy BFGS updates, allowing it to empirically outperform both IQN and IGS. Second, it employs a clever constant multiplicative factor along with a lazy propagation strategy, which enables it to have a cost of $O(d^2)$. Additionally, our experiments demonstrate the superiority of SLIQN over other incremental and stochastic Quasi-Newton variants and establish its competitiveness with second-order incremental methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 368,501 |
| 2410.16631 | Benchmarking Multi-Scene Fire and Smoke Detection | The current irregularities in existing public Fire and Smoke Detection (FSD) datasets have become a bottleneck in the advancement of FSD technology. Upon in-depth analysis, we identify the core issue as the lack of standardized dataset construction, uniform evaluation systems, and clear performance benchmarks. To address this issue and drive innovation in FSD technology, we systematically gather diverse resources from public sources to create a more comprehensive and refined FSD benchmark. Additionally, recognizing the inadequate coverage of existing dataset scenes, we strategically expand scenes, relabel, and standardize existing public FSD datasets to ensure accuracy and consistency. We aim to establish a standardized, realistic, unified, and efficient FSD research platform that mirrors real-life scenes closely. Through our efforts, we aim to provide robust support for the breakthrough and development of FSD technology. The project is available at \href{https://xiaoyihan6.github.io/FSD/}{https://xiaoyihan6.github.io/FSD/}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 501,108 |
| 2110.02700 | Reversible Attack based on Local Visual Adversarial Perturbation | Adding perturbations to images can mislead classification models into producing incorrect results. Recently, researchers exploited adversarial perturbations to protect image privacy from retrieval by intelligent models. However, adding adversarial perturbations to images destroys the original data, making images useless in digital forensics and other fields. To prevent illegal or unauthorized access to sensitive image data such as human faces without impeding legitimate users, the use of reversible adversarial attack techniques is increasing. The original image can be recovered from its reversible adversarial examples. However, existing reversible adversarial attack methods are designed for traditional imperceptible adversarial perturbations and ignore the local visible adversarial perturbation. In this paper, we propose a new method for generating reversible adversarial examples based on local visible adversarial perturbation. The information needed for image recovery is embedded into the area beyond the adversarial patch by the reversible data hiding technique. To reduce image distortion, lossless compression and the B-R-G (blue-red-green) embedding principle are adopted. Experiments on CIFAR-10 and ImageNet datasets show that the proposed method can restore the original images error-free while ensuring good attack performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 259,214 |
| 1602.07565 | Stochastic Shortest Path with Energy Constraints in POMDPs | We consider partially observable Markov decision processes (POMDPs) with a set of target states and positive integer costs associated with every transition. The traditional optimization objective (stochastic shortest path) asks to minimize the expected total cost until the target set is reached. We extend the traditional framework of POMDPs to model energy consumption, which represents a hard constraint. The energy levels may increase and decrease with transitions, and the hard constraint requires that the energy level must remain positive in all steps till the target is reached. First, we present a novel algorithm for solving POMDPs with energy levels, developing on existing POMDP solvers and using RTDP as its main method. Our second contribution is related to policy representation. For larger POMDP instances the policies computed by existing solvers are too large to be understandable. We present an automated procedure based on machine learning techniques that automatically extracts important decisions of the policy allowing us to compute succinct human readable policies. Finally, we show experimentally that our algorithm performs well and computes succinct policies on a number of POMDP instances from the literature that were naturally enhanced with energy levels. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 52,528 |
| 1207.4552 | Delay-Robustness of Linear Predictor Feedback Without Restriction on Delay Rate | Robustness is established for the predictor feedback for linear time-invariant systems with respect to possibly time-varying perturbations of the input delay, with a constant nominal delay. Prior results have addressed qualitatively constant delay perturbations (robustness of stability in L2 norm of actuator state) and delay perturbations with restricted rate of change (robustness of stability in H1 norm of actuator state). The present work provides simple formulae that allow direct and accurate computation of the least upper bound of the magnitude of the delay perturbation for which exponential stability in supremum norm on the actuator state is preserved. While prior work has employed Lyapunov-Krasovskii functionals constructed via backstepping, the present work employs a particular form of small-gain analysis. Two cases are considered: the case of measurable (possibly discontinuous) perturbations and the case of constant perturbations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 17,640 |
| 2201.07309 | OSSID: Online Self-Supervised Instance Detection by (and for) Pose Estimation | Real-time object pose estimation is necessary for many robot manipulation algorithms. However, state-of-the-art methods for object pose estimation are trained for a specific set of objects; these methods thus need to be retrained to estimate the pose of each new object, often requiring tens of GPU-days of training for optimal performance. In this paper, we propose the OSSID framework, leveraging a slow zero-shot pose estimator to self-supervise the training of a fast detection algorithm. This fast detector can then be used to filter the input to the pose estimator, drastically improving its inference speed. We show that this self-supervised training exceeds the performance of existing zero-shot detection methods on two widely used object pose estimation and detection datasets, without requiring any human annotations. Further, we show that the resulting method for pose estimation has a significantly faster inference speed, due to the ability to filter out large parts of the image. Thus, our method for self-supervised online learning of a detector (trained using pseudo-labels from a slow pose estimator) leads to accurate pose estimation at real-time speeds, without requiring human annotations. Supplementary materials and code can be found at https://georgegu1997.github.io/OSSID/ | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 275,987 |
| 2401.12987 | TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation | Emotion Recognition in Conversation (ERC) plays a crucial role in enabling dialogue systems to effectively respond to user requests. The emotions in a conversation can be identified by the representations from various modalities, such as audio, visual, and text. However, due to the weak contribution of non-verbal modalities to recognize emotions, multimodal ERC has always been considered a challenging task. In this paper, we propose Teacher-leading Multimodal fusion network for ERC (TelME). TelME incorporates cross-modal knowledge distillation to transfer information from a language model acting as the teacher to the non-verbal students, thereby optimizing the efficacy of the weak modalities. We then combine multimodal features using a shifting fusion approach in which student networks support the teacher. TelME achieves state-of-the-art performance in MELD, a multi-speaker conversation dataset for ERC. Finally, we demonstrate the effectiveness of our components through additional experiments. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 423,570 |
| 2110.04074 | Active inference, Bayesian optimal design, and expected utility | Active inference, a corollary of the free energy principle, is a formal way of describing the behavior of certain kinds of random dynamical systems that have the appearance of sentience. In this chapter, we describe how active inference combines Bayesian decision theory and optimal Bayesian design principles under a single imperative to minimize expected free energy. It is this aspect of active inference that allows for the natural emergence of information-seeking behavior. When removing prior outcome preferences from expected free energy, active inference reduces to optimal Bayesian design, i.e., information gain maximization. Conversely, active inference reduces to Bayesian decision theory in the absence of ambiguity and relative risk, i.e., expected utility maximization. Using these limiting cases, we illustrate how behaviors differ when agents select actions that optimize expected utility, expected information gain, and expected free energy. Our T-maze simulations show that optimizing expected free energy produces goal-directed information-seeking behavior, while optimizing expected utility induces purely exploitive behavior and maximizing information gain engenders intrinsically motivated behavior. | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | false | 259,747 |
| 2203.02824 | Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs | Sparse linear regression with ill-conditioned Gaussian random designs is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief, even in the form of examples that are hard for restricted classes of algorithms. Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples. However, this lower bound only shows that for every preconditioner, there exists at least one signal that it fails to recover successfully. This leaves open the possibility that, for example, trying multiple different preconditioners solves every sparse linear regression problem. In this work, we prove a stronger lower bound that overcomes this issue. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability, unless it receives a linear number of samples. Surprisingly, at the heart of our lower bound is a new positive result in compressed sensing. We show that standard sparse random designs are with high probability robust to adversarial measurement erasures, in the sense that if $b$ measurements are erased, then all but $O(b)$ of the coordinates of the signal are still information-theoretically identifiable. To our knowledge, this is the first time that partial recoverability of arbitrary sparse signals under erasures has been studied in compressed sensing. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 283,876 |
| 2008.09435 | Self-Supervised Gait Encoding with Locality-Aware Attention for Person Re-Identification | Gait-based person re-identification (Re-ID) is valuable for safety-critical applications, and using only 3D skeleton data to extract discriminative gait features for person Re-ID is an emerging open topic. Existing methods either adopt hand-crafted features or learn gait features by traditional supervised learning paradigms. Unlike previous methods, we for the first time propose a generic gait encoding approach that can utilize unlabeled skeleton data to learn gait representations in a self-supervised manner. Specifically, we first propose to introduce self-supervision by learning to reconstruct input skeleton sequences in reverse order, which facilitates learning richer high-level semantics and better gait representations. Second, inspired by the fact that motion's continuity endows temporally adjacent skeletons with higher correlations ("locality"), we propose a locality-aware attention mechanism that encourages learning larger attention weights for temporally adjacent skeletons when reconstructing current skeleton, so as to learn locality when encoding gait. Finally, we propose Attention-based Gait Encodings (AGEs), which are built using context vectors learned by locality-aware attention, as final gait representations. AGEs are directly utilized to realize effective person Re-ID. Our approach typically improves existing skeleton-based methods by 10-20% Rank-1 accuracy, and it achieves comparable or even superior performance to multi-modal methods with extra RGB or depth information. Our codes are available at https://github.com/Kali-Hac/SGE-LA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 192,714 |
2304.00047
|
PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels
|
Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice. A promising technique for this still-open problem is to train models on the encoded data. Our approach, called Privately Encoded Open Datasets with Public Labels (PEOPL), uses a certain class of randomly constructed transforms to encode sensitive data. Organizations publish their randomly encoded data and associated raw labels for ML training, where training is done without knowledge of the encoding realization. We investigate several important aspects of this problem: We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user (e.g., adversary) and a faithful user (e.g., model developer) that have access to the published encoded data. We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks. Empirically, we compare the performance of our randomized encoding scheme and a linear scheme to a suite of computational attacks, and we also show that our scheme achieves competitive prediction accuracy to raw-sample baselines. Moreover, we demonstrate that multiple institutions, using independent random encoders, can collaborate to train improved ML models.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| 355,540
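The core PEOPL idea, a randomly constructed network whose weight realization stays secret, can be sketched like this. The depth, widths, and ReLU layers are illustrative assumptions, not the paper's exact encoder family.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_random_encoder(d_in, d_hidden, depth=2):
    """Randomly constructed (untrained) network used as a private
    encoder; only the encoded outputs are ever published."""
    weights = [rng.standard_normal((d_in if i == 0 else d_hidden, d_hidden))
               / np.sqrt(d_in if i == 0 else d_hidden)
               for i in range(depth)]
    def encode(x):
        h = x
        for w in weights:
            h = np.maximum(h @ w, 0.0)   # ReLU layers
        return h
    return encode

# An institution publishes encoded features with raw labels;
# the encoder realization itself is never released.
encoder = make_random_encoder(d_in=8, d_hidden=16)
x = rng.standard_normal((5, 8))          # sensitive samples
published = encoder(x)                   # encoded data, shape (5, 16)
```

A model developer trains on `published` plus the raw labels without ever seeing `x` or the weights.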
|
2409.04009
|
Large Margin Prototypical Network for Few-shot Relation Classification
with Fine-grained Features
|
Relation classification (RC) plays a pivotal role in both natural language understanding and knowledge graph completion. It is generally formulated as a task to recognize the relationship between two entities of interest appearing in a free-text sentence. Conventional approaches to RC, whether based on feature engineering or deep learning, can obtain promising performance on common types of relation while leaving a large proportion of long-tail relations unrecognizable due to insufficient labeled instances for training. In this paper, we consider few-shot learning to be of great practical significance to RC and thus improve a modern metric-learning framework for few-shot RC. Specifically, we adopt the large-margin ProtoNet with fine-grained features, expecting it to generalize well on long-tail relations. Extensive experiments were conducted on FewRel, a large-scale supervised few-shot RC dataset, to evaluate our framework: LM-ProtoNet (FGF). The results demonstrate that it can achieve substantial improvements over many baseline approaches.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 486,261
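A prototypical network with a large-margin objective, as named in the abstract above, can be sketched as follows. Applying the margin as an additive penalty on the target logit is one common large-margin variant and is an assumption here, not necessarily the paper's exact loss.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Class prototypes = mean embedding of each class's support set."""
    return np.stack([support[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def large_margin_loss(query, q_label, protos, margin=1.0):
    """Distance-based softmax over prototypes with an additive margin
    that makes the target class harder to satisfy (illustrative)."""
    d = np.linalg.norm(protos - query, axis=1) ** 2
    logits = -d
    logits[q_label] -= margin
    logits = logits - logits.max()          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[q_label])
```

Queries near their own class prototype incur a much smaller loss than queries near a wrong prototype.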
|
1904.08682
|
On the Polarizing Behavior and Scaling Exponent of Polar Codes with
Product Kernels
|
Polar codes, introduced by Arikan, achieve the capacity of an arbitrary binary-input discrete memoryless channel $W$ under successive cancellation decoding. For any such channel with capacity $I(W)$ and any coding scheme allowing transmission at rate $R$, the scaling exponent is a parameter which characterizes how fast the gap to capacity decreases as a function of code length $N$ for a fixed probability of error. The relation between them is given by $N\geqslant \alpha/(I(W)-R)^\mu$. Scaling exponents for kernels of small size, up to $L=8$, have been exhaustively found. In this paper, we consider product kernels $T_{L}$ obtained by taking the Kronecker product of component kernels. We derive the properties of polarizing product kernels relating to the number of product kernels, self-duality and partial distances, in terms of the respective properties of the smaller component kernels. Subsequently, the polarization behavior of a component kernel $T_{l}$ is used to calculate the scaling exponent of $T_{L}=T_{2}\otimes T_{l}$. Using this method, we show that $\mu(T_{2}\otimes T_{5})=3.942.$ Further, we employ a heuristic approach to construct a good kernel of size $L=14$ from a kernel of size $l=8$ with the best $\mu$, and find $\mu(T_{2}\otimes T_{7})=3.485.$
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 128,148
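The scaling-law bound $N\geqslant \alpha/(I(W)-R)^\mu$ and the Kronecker construction of product kernels can be illustrated directly; the value of $\alpha$ below is an arbitrary placeholder.

```python
import numpy as np

def min_blocklength(capacity, rate, mu, alpha=1.0):
    """Block length needed to operate at rate R on a channel of
    capacity I(W), per N >= alpha / (I(W) - R)^mu."""
    return alpha / (capacity - rate) ** mu

T2 = np.array([[1, 0], [1, 1]])   # Arikan's 2x2 polarizing kernel
T4 = np.kron(T2, T2)              # product kernel T2 (x) T2, size L = 4
```

A smaller scaling exponent means a shorter block length suffices at the same gap to capacity, which is why reducing $\mu$ from 3.942 to 3.485 matters.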
|
2312.09880
|
Information Extraction from Unstructured data using Augmented-AI and
Computer Vision
|
The process of information extraction (IE) is often used to extract meaningful information from unstructured and unlabeled data. Conventional methods of data extraction, including the application of OCR and passing the output to an extraction engine, are inefficient on large data and have their limitations. In this paper, a particular technique of information extraction is proposed using A2I and computer vision technologies, which also includes NLP.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 415,916
|
2411.00223
|
Learning Optimal Interaction Weights in Multi-Agents Systems
|
This paper presents a spatio-temporal inverse optimal control framework for understanding interactions in multi-agent systems (MAS). We employ a graph representation approach and model the dynamics of interactions between agents as state-dependent edge weights in a consensus algorithm, incorporating both spatial and temporal dynamics. Our method learns these edge weights from trajectory observations, such as provided by expert demonstrations, which allows us to capture the complexity of nonlinear, distributed interaction behaviors. We derive necessary and sufficient conditions for the optimality of these interaction weights, explaining how the network topology affects MAS coordination. The proposed method is demonstrated on a multi-agent formation control problem, where we show its effectiveness in recovering the interaction weights and coordination patterns from sample trajectory data.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 504,492
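The forward model behind the inverse problem above, consensus dynamics with state-dependent edge weights, can be sketched in a few lines. The Euler discretization and the weight function are illustrative assumptions; the paper's task is the inverse one of recovering such weights from observed trajectories.

```python
import numpy as np

def consensus_step(x, weight_fn, edges, dt=0.1):
    """One Euler step of consensus dynamics where each edge (i, j)
    carries a state-dependent weight w(x_i, x_j)."""
    dx = np.zeros_like(x)
    for i, j in edges:
        w = weight_fn(x[i], x[j])
        dx[i] += w * (x[j] - x[i])
        dx[j] += w * (x[i] - x[j])
    return x + dt * dx
```

With symmetric weights the average state is conserved while the spread of the agents' states contracts, the signature behaviour of consensus.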
|
2406.02604
|
Gated recurrent neural network with TPE Bayesian optimization for
enhancing stock index prediction accuracy
|
The recent advancement of deep learning architectures, neural networks, and the combination of abundant financial data and powerful computers are transforming finance, leading us to develop an advanced method for predicting future stock prices. However, the accessibility of investment and trading at everyone's fingertips has made the stock markets increasingly intricate and prone to volatility. The increased complexity and volatility of the stock market have driven demand for models that can effectively capture the high volatility and non-linear behavior of different stock prices. This study explored gated recurrent neural network (GRNN) algorithms such as LSTM (long short-term memory), GRU (gated recurrent unit), and hybrid models like GRU-LSTM and LSTM-GRU, with Tree-structured Parzen Estimator (TPE) Bayesian optimization for hyperparameter optimization (TPE-GRNN). The aim is to improve the prediction accuracy of the next day's closing price of the NIFTY 50 index, a prominent Indian stock market index, using TPE-GRNN. A combination of eight influential factors is carefully chosen from fundamental stock data, technical indicators, crude oil price, and macroeconomic data to train the models to capture the changes in the price of the index together with the factors of the broader economy. Single-layer and multi-layer TPE-GRNN models have been developed. The models' performance is evaluated using standard metrics such as R2, MAPE, and RMSE. The analysis of the models' performance reveals the impact of feature selection and hyperparameter optimization (HPO) in enhancing stock index price prediction accuracy. The results show that the MAPE of our proposed TPE-LSTM method is the lowest (best) with respect to all the previous models for stock index price prediction.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 460,857
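The GRU recurrence at the heart of the GRNN forecasters above can be written out explicitly. The sketch below shows only one cell step with illustrative weight matrices; the TPE hyperparameter search itself (layer sizes, learning rates, etc.) is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, params):
    """One step of a gated recurrent unit: update gate z, reset gate r,
    candidate state h_tilde, and a gated blend of old and new state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)            # update gate
    r = sigmoid(x @ Wr + h @ Ur)            # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde
```

Stacking such cells over a window of past prices, and feeding the final state into a linear head, gives the single-layer forecaster; hybrids chain GRU and LSTM layers.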
|
2307.00327
|
SDRCNN: A single-scale dense residual connected convolutional neural
network for pansharpening
|
Pansharpening is a process of fusing a high spatial resolution panchromatic image and a low spatial resolution multispectral image to create a high-resolution multispectral image. A novel single-branch, single-scale lightweight convolutional neural network, named SDRCNN, is developed in this study. By using a novel dense residual connected structure and convolution block, SDRCNN achieved a better trade-off between accuracy and efficiency. The performance of SDRCNN was tested using four datasets from the WorldView-3, WorldView-2 and QuickBird satellites. The compared methods include eight traditional methods (i.e., GS, GSA, PRACS, BDSD, SFIM, GLP-CBD, CDIF and LRTCFPan) and five lightweight deep learning methods (i.e., PNN, PanNet, BayesianNet, DMDNet and FusionNet). Based on a visual inspection of the pansharpened images created and the associated absolute residual maps, SDRCNN exhibited the least spatial detail blurring and spectral distortion amongst all the methods considered. The values of the quantitative evaluation metrics were closest to their ideal values when SDRCNN was used. The processing time of SDRCNN was also the shortest among all methods tested. Finally, the effectiveness of each component in the SDRCNN was demonstrated in ablation experiments. All of these confirmed the superiority of SDRCNN.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 376,965
|
2106.12698
|
Comparative Error Analysis in Neural and Finite-state Models for
Unsupervised Character-level Transduction
|
Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention. Focusing on the less explored unsupervised learning scenario, we compare the two model classes side by side and find that they tend to make different types of errors even when achieving comparable performance. We analyze the distributions of different error classes using two unsupervised tasks as testbeds: converting informally romanized text into the native script of its language (for Russian, Arabic, and Kannada) and translating between a pair of closely related languages (Serbian and Bosnian). Finally, we investigate how combining finite-state and sequence-to-sequence models at decoding time affects the output quantitatively and qualitatively.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 242,804
|
2012.15832
|
Shortformer: Better Language Modeling using Shorter Inputs
|
Increasing the input length has been a driver of progress in language modeling with transformers. We identify conditions where shorter inputs are not harmful, and achieve perplexity and efficiency improvements through two new methods that decrease input length. First, we show that initially training a model on short subsequences before moving on to longer ones both reduces overall training time and, surprisingly, substantially improves perplexity. Second, we show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens when generating sequences that exceed the maximal length the transformer can handle at once. Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. We show that these recurrent models also benefit from short input lengths. Combining these techniques speeds up training by a factor of 1.65, reduces memory usage, and substantially improves perplexity on WikiText-103, without adding any parameters.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 213,920
|
2204.03032
|
Benchmarking Apache Arrow Flight -- A wire-speed protocol for data
transfer, querying and microservices
|
Moving structured data between different big data frameworks and/or data warehouses/storage systems often causes significant overhead. Most of the time, more than 80\% of the total time spent in accessing data elapses in the serialization/de-serialization step. Columnar data formats are gaining popularity in both analytics and transactional databases. Apache Arrow, a unified columnar in-memory data format, promises to provide efficient data storage, access, manipulation and transport. In addition, with the introduction of the Arrow Flight communication capabilities, which are built on top of gRPC, Arrow enables high-performance data transfer over TCP networks. Arrow Flight allows parallel Arrow RecordBatch transfer over networks in a platform- and language-independent way, and offers high performance, parallelism and security based on open-source standards. In this paper, we bring together some recently implemented use cases of Arrow Flight with their benchmarking results. These use cases include bulk Arrow data transfer, querying subsystems and Flight as a microservice integration into different frameworks to show the throughput and scalability results of this protocol. We show that Flight is able to achieve up to 6000 MB/s and 4800 MB/s throughput for DoGet() and DoPut() operations respectively. On Mellanox ConnectX-3 or Connect-IB interconnect nodes Flight can utilize up to 95\% of the total available bandwidth. Flight is scalable and can use up to half of the available system cores efficiently for bidirectional communication. For query systems like Dremio, Flight is an order of magnitude faster than ODBC and turbodbc protocols. Arrow Flight based implementation on Dremio performs 20x and 30x better as compared to turbodbc and ODBC connections respectively.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| true
| 290,160
|
2409.19550
|
Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion
|
Similarity matrix serves as a fundamental tool at the core of numerous downstream machine-learning tasks. However, missing data is inevitable and often results in an inaccurate similarity matrix. To address this issue, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computation complexity due to the Singular Value Decomposition (SVD) operation. To reduce the computation complexity, Matrix Factorization (MF) techniques are more explicit and frequently applied to provide a low-rank solution, but the exact low-rank optimal solution can not be guaranteed since it suffers from a non-convex structure. In this paper, we introduce a novel SMC framework that offers a more reliable and efficient solution. Specifically, beyond simply utilizing the unique Positive Semi-definiteness (PSD) property to guide the completion process, our approach further complements a carefully designed rank-minimization regularizer, aiming to achieve an optimal and low-rank solution. Based on the key insights that the underlying PSD property and Low-Rank property improve the SMC performance, we present two novel, scalable, and effective algorithms, SMCNN and SMCNmF, which investigate the PSD property to guide the estimation process and incorporate nonconvex low-rank regularizer to ensure the low-rank solution. Theoretical analysis ensures better estimation performance and convergence speed. Empirical results on real-world datasets demonstrate the superiority and efficiency of our proposed methods compared to various baseline methods.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 492,734
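A simplified stand-in for the similarity-matrix completion idea above, alternating between filling missing entries and projecting onto low-rank PSD matrices via a truncated eigendecomposition, is sketched below. This is not the paper's SMCNN/SMCNmF algorithm (which avoids full SVD/eigendecompositions); it only illustrates the two constraints, PSD-ness and low rank, that those methods exploit.

```python
import numpy as np

def psd_low_rank_completion(S_obs, mask, rank, n_iter=100):
    """Complete a similarity matrix: keep observed entries, and fill
    the rest from a rank-truncated PSD projection of the estimate."""
    S = np.where(mask, S_obs, 0.0)
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh((S + S.T) / 2)   # symmetrise first
        vals = np.clip(vals, 0.0, None)              # PSD constraint
        idx = np.argsort(vals)[::-1][:rank]          # low-rank constraint
        S_hat = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
        S = np.where(mask, S_obs, S_hat)             # restore observations
    return S_hat
```

When the observed matrix is already low-rank PSD, the fixed point recovers it exactly, which makes the sketch easy to sanity-check.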
|
2211.01324
|
eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert
Denoisers
|
Large-scale diffusion-based generative models have led to breakthroughs in text-conditioned high-resolution image synthesis. Starting from random noise, such text-to-image diffusion models gradually synthesize images in an iterative fashion while conditioning on text prompts. We find that their synthesis behavior qualitatively changes throughout this process: Early in sampling, generation strongly relies on the text prompt to generate text-aligned content, while later, the text conditioning is almost entirely ignored. This suggests that sharing model parameters throughout the entire generation process may not be ideal. Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages. To maintain training efficiency, we initially train a single model, which is then split into specialized models that are trained for the specific stages of the iterative generation process. Our ensemble of diffusion models, called eDiff-I, results in improved text alignment while maintaining the same inference computation cost and preserving high visual quality, outperforming previous large-scale text-to-image diffusion models on the standard benchmark. In addition, we train our model to exploit a variety of embeddings for conditioning, including the T5 text, CLIP text, and CLIP image embeddings. We show that these different embeddings lead to different behaviors. Notably, the CLIP image embedding allows an intuitive way of transferring the style of a reference image to the target text-to-image output. Lastly, we show a technique that enables eDiff-I's "paint-with-words" capability. A user can select the word in the input text and paint it in a canvas to control the output, which is very handy for crafting the desired image in mind. The project page is available at https://deepimagination.cc/eDiff-I/
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 328,184
|
1609.08746
|
When Big Data Fails! Relative success of adaptive agents using
coarse-grained information to compete for limited resources
|
The recent trend for acquiring big data assumes that possessing quantitatively more and qualitatively finer data necessarily provides an advantage that may be critical in competitive situations. Using a model complex adaptive system where agents compete for a limited resource using information coarse-grained to different levels, we show that agents having access to more and better data can perform worse than others in certain situations. The relation between information asymmetry and individual payoffs is seen to be complex, depending on the composition of the population of competing agents.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| 61,629
|
2410.18013
|
Scalable Ranked Preference Optimization for Text-to-Image Generation
|
Direct Preference Optimization (DPO) has emerged as a powerful approach to align text-to-image (T2I) models with human feedback. Unfortunately, successful application of DPO to T2I models requires a huge amount of resources to collect and label large-scale datasets, e.g., millions of generated paired images annotated with human preferences. In addition, these human preference datasets can get outdated quickly as the rapid improvements of T2I models lead to higher quality images. In this work, we investigate a scalable approach for collecting large-scale and fully synthetic datasets for DPO training. Specifically, the preferences for paired images are generated using a pre-trained reward function, eliminating the need for involving humans in the annotation process, greatly improving the dataset collection efficiency. Moreover, we demonstrate that such datasets allow averaging predictions across multiple models and collecting ranked preferences as opposed to pairwise preferences. Furthermore, we introduce RankDPO to enhance DPO-based methods using the ranking feedback. Applying RankDPO on SDXL and SD3-Medium models with our synthetically generated preference dataset "Syn-Pic" improves both prompt-following (on benchmarks like T2I-Compbench, GenEval, and DPG-Bench) and visual quality (through user studies). This pipeline presents a practical and scalable solution to develop better preference datasets to enhance the performance of text-to-image models.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 501,708
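The synthetic-labelling step described above, replacing human annotators with a pre-trained reward function to produce ranked rather than pairwise preferences, is easy to sketch. The scalar "images" and identity reward below are placeholders for real generations and a real reward model.

```python
import numpy as np

def ranked_preferences(images, reward_fn):
    """Score candidate images for one prompt with a reward function
    and return a best-to-worst ranking plus the sorted scores."""
    scores = np.array([reward_fn(im) for im in images])
    order = np.argsort(scores)[::-1]      # descending reward
    return order, scores[order]
```

A RankDPO-style objective would then be trained on these rankings instead of binary win/lose pairs.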
|
2206.13727
|
Persistent homology-based descriptor for machine-learning potential of
amorphous structures
|
High-accuracy prediction of the physical properties of amorphous materials is challenging in condensed-matter physics. A promising method to achieve this is machine-learning potentials, which are an alternative to computationally demanding ab initio calculations. When applying machine-learning potentials, the construction of descriptors to represent atomic configurations is crucial. These descriptors should be invariant to symmetry operations. Handcrafted representations using a smooth overlap of atomic positions and graph neural networks (GNN) are examples of methods used for constructing symmetry-invariant descriptors. In this study, we propose a novel descriptor based on a persistence diagram (PD), a two-dimensional representation of persistent homology (PH). First, we demonstrated that the normalized two-dimensional histogram obtained from PD could predict the average energy per atom of amorphous carbon (aC) at various densities, even when using a simple model. Second, an analysis of the dimensional reduction results of the descriptor spaces revealed that PH can be used to construct descriptors with characteristics similar to those of a latent space in a GNN. These results indicate that PH is a promising method for constructing descriptors suitable for machine-learning potentials without hyperparameter tuning and deep-learning techniques.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 305,062
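The descriptor itself, a normalized 2D histogram over a persistence diagram's (birth, death) pairs, is straightforward to sketch. The bin count and value range are assumptions here; computing the diagram from atomic positions would require a persistent-homology library and is out of scope.

```python
import numpy as np

def pd_histogram(diagram, bins=8, vmax=1.0):
    """Fixed-length descriptor from a persistence diagram: a normalised
    2D histogram of (birth, death) points, flattened to a vector."""
    births, deaths = diagram[:, 0], diagram[:, 1]
    H, _, _ = np.histogram2d(births, deaths, bins=bins,
                             range=[[0, vmax], [0, vmax]])
    H = H / H.sum()                 # normalise to a distribution
    return H.ravel()                # feature vector of length bins**2
```

Such vectors can feed a simple regression model for per-atom energies, which is the "simple model" setting the abstract refers to.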
|
2004.09685
|
Mirror Ritual: An Affective Interface for Emotional Self-Reflection
|
This paper introduces a new form of real-time affective interface that engages the user in a process of conceptualisation of their emotional state. Inspired by Barrett's Theory of Constructed Emotion, `Mirror Ritual' aims to expand upon the user's accessible emotion concepts, and to ultimately provoke emotional reflection and regulation. The interface uses classified emotions -- obtained through facial expression recognition -- as a basis for dynamically generating poetry. The perceived emotion is used to seed a poetry generation system based on OpenAI's GPT-2 model, fine-tuned on a specially curated corpus. We evaluate the device's ability to foster a personalised, meaningful experience for individual users over a sustained period. A qualitative analysis revealed that participants were able to affectively engage with the mirror, with each participant developing a unique interpretation of its poetry in the context of their own emotional landscape.
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 173,416
|
1912.03234
|
What Do You Mean I'm Funny? Personalizing the Joke Skill of a
Voice-Controlled Virtual Assistant
|
A considerable part of the success experienced by Voice-controlled virtual assistants (VVA) is due to the emotional and personalized experience they deliver, with humor being a key component in providing an engaging interaction. In this paper we describe methods used to improve the joke skill of a VVA through personalization. The first method, based on traditional NLP techniques, is robust and scalable. The others combine self-attentional network and multi-task learning to obtain better results, at the cost of added complexity. A significant challenge facing these systems is the lack of explicit user feedback needed to provide labels for the models. Instead, we explore the use of two implicit feedback-based labelling strategies. All models were evaluated on real production data. Online results show that models trained on any of the considered labels outperform a heuristic method, presenting a positive real-world impact on user satisfaction. Offline results suggest that the deep-learning approaches can improve the joke experience with respect to the other considered methods.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 156,542
|
2311.04761
|
FAIR Knowledge Graphs with Semantic Units: a Prototype
|
Knowledge graphs and ontologies are becoming increasingly important in the context of making data and metadata findable, accessible, interoperable, and reusable (FAIR). We introduce the concept of Semantic Units for organizing Knowledge Graphs into identifiable and semantically meaningful subgraphs. Each Semantic Unit is represented in the graph by its own resource that instantiates a Semantic Unit class. Different types of Semantic Units are distinguished, and together they can organize a Knowledge Graph into different levels of representational granularity with partially overlapping, partially enclosed subgraphs that users of Knowledge Graphs can refer to for making statements about statements. The use of Semantic Units supports making Knowledge Graphs FAIR and increases the actionability of their data and metadata for human readers by improving the graph's cognitive interoperability and explorability. We introduce a minimal prototype web application for a user-driven FAIR Knowledge Graph that is based on Semantic Units.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 406,332
|
2405.00678
|
Low-cost modular devices for on-road vehicle detection and
characterisation
|
Detecting and characterising vehicles is one of the purposes of embedded systems used in intelligent environments. An analysis of a vehicle's characteristics can reveal inappropriate or dangerous behaviour. This detection makes it possible to sanction or notify emergency services to take early and practical actions. Vehicle detection and characterisation systems employ complex sensors such as video cameras, especially in urban environments. These sensors provide high precision and performance, although their price and computational requirements are proportional to their accuracy. This article introduces a system based on modular devices that is economical and has a low computational cost. These devices use ultrasonic sensors to detect the speed and length of vehicles. The measurement accuracy is improved through the collaboration of the device modules. The experiments were performed using multiple modules oriented to different angles. This module is coupled with another specifically designed to detect distance using the previous modules' speed and length data. The collaboration between different modules reduces the relative speed error to a range of 1 to 5, depending on the angle configuration used in the modules.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 451,015
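The basic measurement model behind such a module pair, speed from the time offset between two sensors a known distance apart, and length from the beam-occupancy time, can be sketched as follows. The function names and the two-sensor geometry are illustrative assumptions, not the paper's exact design.

```python
def estimate_speed(t_enter_a, t_enter_b, sensor_gap_m):
    """Vehicle speed (m/s) from the times the vehicle enters the field
    of view of two ultrasonic sensors mounted sensor_gap_m apart."""
    return sensor_gap_m / (t_enter_b - t_enter_a)

def estimate_length(speed_mps, t_enter, t_exit):
    """Vehicle length (m) from how long it occupies one sensor's beam."""
    return speed_mps * (t_exit - t_enter)
```

Collaborating modules would average such estimates over several sensor pairs at different angles to reduce the relative error.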
|
1401.6437
|
On Phase Noise Suppression in Full-Duplex Systems
|
Oscillator phase noise has been shown to be one of the main performance limiting factors in full-duplex systems. In this paper, we consider the problem of self-interference cancellation with phase noise suppression in full-duplex systems. The feasibility of performing phase noise suppression in full-duplex systems in terms of both complexity and achieved gain is analytically and experimentally investigated. First, the effect of phase noise on full-duplex systems and the possibility of performing phase noise suppression are studied. Two different phase noise suppression techniques with a detailed complexity analysis are then proposed. For each suppression technique, both free-running and phase locked loop based oscillators are considered. Due to the fact that full-duplex system performance highly depends on hardware impairments, experimental analysis is essential for reliable results. In this paper, the performance of the proposed techniques is experimentally investigated in a typical indoor environment. The experimental results are shown to confirm the results obtained from numerical simulations on two different experimental research platforms. At the end, the tradeoff between the required complexity and the gain achieved using phase noise suppression is discussed.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 30,355
|
2501.19158
|
A theoretical framework for overfitting in energy-based modeling
|
We investigate the impact of limited data on training pairwise energy-based models for inverse problems aimed at identifying interaction networks. Utilizing the Gaussian model as testbed, we dissect training trajectories across the eigenbasis of the coupling matrix, exploiting the independent evolution of eigenmodes and revealing that the learning timescales are tied to the spectral decomposition of the empirical covariance matrix. We see that optimal points for early stopping arise from the interplay between these timescales and the initial conditions of training. Moreover, we show that finite data corrections can be accurately modeled through asymptotic random matrix theory calculations and provide the counterpart of generalized cross-validation in the energy based model context. Our analytical framework extends to binary-variable maximum-entropy pairwise models with minimal variations. These findings offer strategies to control overfitting in discrete-variable models through empirical shrinkage corrections, improving the management of overfitting in energy-based generative models.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 529,036
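The abstract ties learning timescales and overfitting to the spectrum of the empirical covariance. A minimal sketch of the spectral view, with a simple linear shrinkage standing in for the random-matrix finite-data corrections it describes, is:

```python
import numpy as np

def shrunk_spectrum(X, alpha=0.1):
    """Eigenvalues of the empirical covariance with linear shrinkage
    toward their mean. Small, noise-dominated eigenvalues correspond
    to the slowly learned modes that drive overfitting, motivating
    early stopping or spectral shrinkage (illustrative stand-in only)."""
    vals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return (1 - alpha) * vals + alpha * vals.mean()
```

Shrinkage pulls the extreme eigenvalues toward the bulk, which is the qualitative effect of the asymptotic corrections discussed in the paper.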
|
1712.01834
|
Optimal Quasi-Gray Codes: The Alphabet Matters
|
A quasi-Gray code of dimension $n$ and length $\ell$ over an alphabet $\Sigma$ is a sequence of distinct words $w_1,w_2,\dots,w_\ell$ from $\Sigma^n$ such that any two consecutive words differ in at most $c$ coordinates, for some fixed constant $c>0$. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word $w_i$ into its successor $w_{i+1}$. We present construction of quasi-Gray codes of dimension $n$ and length $3^n$ over the ternary alphabet $\{0,1,2\}$ with worst-case read complexity $O(\log n)$ and write complexity $2$. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension $n$ and length at least $2^n - 20n$ with worst-case read complexity $6+\log n$ and write complexity $2$. This complements a recent result by Raskin [Raskin '17] who shows that any quasi-Gray code over binary alphabet of length $2^n$ has read complexity $\Omega(n)$. Our results significantly improve on previously known constructions and for the odd-size alphabets we break the $\Omega(n)$ worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. '14, Ben-Or and Cleve '92, Barrington '89, Coppersmith and Grossman '75].
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| true
| 86,188
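For context on the objects studied above: the standard binary reflected Gray code is the $c=1$ special case that quasi-Gray codes relax. It is one line to generate, though unlike the paper's constructions it says nothing about read/write complexity in the bit-probe model.

```python
def gray_code(n):
    """Binary reflected Gray code of dimension n: a sequence of all
    2**n words in which consecutive words differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]
```

Quasi-Gray codes allow up to $c$ coordinates to change per step, trading this single-bit property for far fewer symbols read per transition.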
|
cs/0610050
|
The Mathematical Parallels Between Packet Switching and Information
Transmission
|
All communication networks comprise transmission systems and switching systems, even though they are usually treated as two separate issues. Communication channels are generally disturbed by noise from various sources. In circuit switched networks, reliable communication requires the error-tolerant transmission of bits over noisy channels. In packet switched networks, however, not only can bits be corrupted with noise, but resources along connection paths are also subject to contention. Thus, quality of service (QoS) is determined by buffer delays and packet losses. The theme of this paper is to show that transmission noise and packet contention actually have similar characteristics and can be tamed by comparable means to achieve reliable communication, and a number of analogies between switching and transmission are identified. The sampling theorem of bandlimited signals provides the cornerstone of digital communication and signal processing. Recently, the Birkhoff-von Neumann decomposition of traffic matrices has been widely applied to packet switches. With respect to the complexity reduction of packet switching, we show that the decomposition of a doubly stochastic traffic matrix plays a similar role to that of the sampling theorem in digital transmission. We conclude that packet switching systems are governed by mathematical laws that are similar to those of digital transmission systems as envisioned by Shannon in his seminal 1948 paper, A Mathematical Theory of Communication.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| true
| 539,773
|
2409.07292
|
A Unified Contrastive Loss for Self-Training
|
Self-training methods have proven to be effective in exploiting abundant unlabeled data in semi-supervised learning, particularly when labeled data is scarce. While many of these approaches rely on a cross-entropy loss function (CE), recent advances have shown that the supervised contrastive loss function (SupCon) can be more effective. Additionally, unsupervised contrastive learning approaches have also been shown to capture high quality data representations in the unsupervised setting. To benefit from these advantages in a semi-supervised setting, we propose a general framework to enhance self-training methods, which replaces all instances of CE losses with a unique contrastive loss. By using class prototypes, which are a set of class-wise trainable parameters, we recover the probability distributions of the CE setting and show a theoretical equivalence with it. Our framework, when applied to popular self-training methods, results in significant performance improvements across three different datasets with a limited number of labeled data. Additionally, we demonstrate further improvements in convergence speed, transfer ability, and hyperparameter stability. The code is available at \url{https://github.com/AurelienGauffre/semisupcon/}.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 487,462
|
2404.13558
|
LASER: Tuning-Free LLM-Driven Attention Control for Efficient
Text-conditioned Image-to-Animation
|
Revolutionary advancements in text-to-image models have unlocked new dimensions for sophisticated content creation, e.g., text-conditioned image editing, allowing us to edit the diverse images that convey highly complex visual concepts according to the textual guidance. Despite being promising, existing methods focus on texture- or non-rigid-based visual manipulation, which struggles to produce the fine-grained animation of smooth text-conditioned image morphing without fine-tuning, i.e., due to their highly unstructured latent space. In this paper, we introduce a tuning-free LLM-driven attention control framework, encapsulated by the progressive process of LLM planning, prompt-Aware editing, StablE animation geneRation, abbreviated as LASER. LASER employs a large language model (LLM) to refine coarse descriptions into detailed prompts, guiding pre-trained text-to-image models for subsequent image generation. We manipulate the model's spatial features and self-attention mechanisms to maintain animation integrity and enable seamless morphing directly from text prompts, eliminating the need for additional fine-tuning or annotations. Our meticulous control over spatial features and self-attention ensures structural consistency in the images. This paper presents a novel framework integrating LLMs with text-to-image models to create high-quality animations from a single text input. We also propose a Text-conditioned Image-to-Animation Benchmark to validate the effectiveness and efficacy of LASER. Extensive experiments demonstrate that LASER produces impressive, consistent, and efficient results in animation generation, positioning it as a powerful tool for advanced digital content creation.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 448,345
|
1309.0535
|
Decentralized Rigidity Maintenance Control with Range Measurements for
Multi-Robot Systems
|
This work proposes a fully decentralized strategy for maintaining the formation rigidity of a multi-robot system using only range measurements, while still allowing the graph topology to change freely over time. In this direction, a first contribution of this work is an extension of rigidity theory to weighted frameworks and the rigidity eigenvalue, which when positive ensures the infinitesimal rigidity of the framework. We then propose a distributed algorithm for estimating a common relative position reference frame amongst a team of robots with only range measurements in addition to one agent endowed with the capability of measuring the bearing to two other agents. This first estimation step is embedded into a subsequent distributed algorithm for estimating the rigidity eigenvalue associated with the weighted framework. The estimate of the rigidity eigenvalue is finally used to generate a local control action for each agent that both maintains the rigidity property and enforces additional constraints such as collision avoidance and sensing/communication range limits and occlusions. As an additional feature of our approach, the communication and sensing links among the robots are also left free to change over time while preserving rigidity of the whole framework. The proposed scheme is then experimentally validated with a robotic testbed consisting of 6 quadrotor UAVs operating in a cluttered environment.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| 26,790
|
2105.04979
|
Surrogate assisted active subspace and active subspace assisted
surrogate -- A new paradigm for high dimensional structural reliability
analysis
|
Performing reliability analysis on complex systems is often computationally expensive. In particular, when dealing with systems having high input dimensionality, reliability estimation becomes a daunting task. A popular approach to overcome the problem associated with time-consuming and expensive evaluations is building a surrogate model. However, these computationally efficient models often suffer from the curse of dimensionality. Hence, training a surrogate model for high-dimensional problems is not straightforward. Accordingly, this paper presents a framework for solving high-dimensional reliability analysis problems. The basic premise is to train the surrogate model on a low-dimensional manifold, discovered using the active subspace algorithm. However, learning the low-dimensional manifold using active subspace is non-trivial as it requires information on the gradient of the response variable. To address this issue, we propose using sparse learning algorithms in conjunction with the active subspace algorithm; the resulting algorithm is referred to as the sparse active subspace (SAS) algorithm. We project the high-dimensional inputs onto the low-dimensional manifold identified using SAS. A high-fidelity surrogate model is used to map the inputs on the low-dimensional manifolds to the output response. We illustrate the efficacy of the proposed framework by using three benchmark reliability analysis problems from the literature. The results obtained indicate the accuracy and efficiency of the proposed approach compared to already established reliability analysis methods in the literature.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 234,677
|
2108.00355
|
ELLIPSDF: Joint Object Pose and Shape Optimization with a Bi-level
Ellipsoid and Signed Distance Function Description
|
Autonomous systems need to understand the semantics and geometry of their surroundings in order to comprehend and safely execute object-level task specifications. This paper proposes an expressive yet compact model for joint object pose and shape optimization, and an associated optimization algorithm to infer an object-level map from multi-view RGB-D camera observations. The model is expressive because it captures the identities, positions, orientations, and shapes of objects in the environment. It is compact because it relies on a low-dimensional latent representation of implicit object shape, allowing onboard storage of large multi-category object maps. Different from other works that rely on a single object representation format, our approach has a bi-level object model that captures both the coarse level scale as well as the fine level shape details. Our approach is evaluated on the large-scale real-world ScanNet dataset and compared against state-of-the-art methods.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 248,683
|
2010.13626
|
Classification of Important Segments in Educational Videos using
Multimodal Features
|
Videos are a commonly-used type of content in learning during Web search. Many e-learning platforms provide quality content, but sometimes educational videos are long and cover many topics. Humans are good at extracting important sections from videos, but it remains a significant challenge for computers. In this paper, we address the problem of assigning importance scores to video segments, that is, how much information they contain with respect to the overall topic of an educational video. We present an annotation tool and a new dataset of annotated educational videos collected from popular online learning platforms. Moreover, we propose a multimodal neural architecture that utilizes state-of-the-art audio, visual and textual features. Our experiments investigate the impact of visual and temporal information, as well as the combination of multimodal features on importance prediction.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 203,196
|
1211.5189
|
Optimally fuzzy temporal memory
|
Any learner with the ability to predict the future of a structured time-varying signal must maintain a memory of the recent past. If the signal has a characteristic timescale relevant to future prediction, the memory can be a simple shift register---a moving window extending into the past, requiring storage resources that grow linearly with the timescale to be represented. However, an independent general purpose learner cannot a priori know the characteristic prediction-relevant timescale of the signal. Moreover, many naturally occurring signals show scale-free long range correlations implying that the natural prediction-relevant timescale is essentially unbounded. Hence the learner should maintain information from the longest possible timescale allowed by resource availability. Here we construct a fuzzy memory system that optimally sacrifices the temporal accuracy of information in a scale-free fashion in order to represent prediction-relevant information from exponentially long timescales. Using several illustrative examples, we demonstrate the advantage of the fuzzy memory system over a shift register in time series forecasting of natural signals. When the available storage resources are limited, we suggest that a general purpose learner would be better off committing to such a fuzzy memory system.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 19,870
|
2003.02615
|
Hadath: From Social Media Mapping to Multi-Resolution Event-Enriched
Maps
|
Publicly available data is increasing rapidly, and will continue to grow with the advancement of technologies in sensors, smartphones and the Internet of Things. Data from multiple sources can improve coverage and provide more relevant knowledge about surrounding events and points of interest. The strength of one source of data can compensate for the shortcomings of another source by providing supplementary information. Maps are also getting popular day by day, and people are using them to achieve their daily tasks smoothly and efficiently. Starting from paper maps a hundred years ago, multiple types of maps are now available with points of interest, real-time traffic updates, or micro-blogs from social media. In this paper, we introduce Hadath, a system that displays multi-resolution live events of interest from a variety of available data sources. The system has been designed to handle multiple types of inputs by encapsulating incoming unstructured data into generic data packets. The system extracts local events of interest from generic data packets and identifies their spatio-temporal scope to display such events on a map, so that as a user changes the zoom level, only events of appropriate scope are displayed. This allows us to show live events in correspondence to the scale of view - when viewing at a city scale, we see events of higher significance, while zooming in to a neighbourhood, events of a more local interest are highlighted. The final output creates a unique and dynamic map browsing experience. Finally, to validate our proposed system, we conducted experiments on social media data.
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 166,981
|
2412.12562
|
Efficient Oriented Object Detection with Enhanced Small Object
Recognition in Aerial Images
|
Achieving a balance between computational efficiency and detection accuracy in the realm of rotated bounding box object detection within aerial imagery is a significant challenge. While prior research has aimed at creating lightweight models that enhance computational performance and feature extraction, there remains a gap in the performance of these networks when it comes to the detection of small and multi-scale objects in remote sensing (RS) imagery. To address these challenges, we present a novel enhancement to the YOLOv8 model, tailored for oriented object detection tasks and optimized for environments with limited computational resources. Our model features a wavelet transform-based C2f module for capturing associative features and an Adaptive Scale Feature Pyramid (ASFP) module that leverages P2 layer details. Additionally, the incorporation of GhostDynamicConv significantly contributes to the model's lightweight nature, ensuring high efficiency in aerial imagery analysis. Featuring a parameter count of 21.6M, our approach provides a more efficient architectural design than DecoupleNet, which has 23.3M parameters, all while maintaining detection accuracy. On the DOTAv1.0 dataset, our model demonstrates a mean Average Precision (mAP) that is competitive with leading methods such as DecoupleNet. The model's efficiency, combined with its reduced parameter count, makes it a strong candidate for aerial object detection, particularly in resource-constrained environments.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 517,923
|
2309.14292
|
On the Non-Associativity of Analog Computations
|
The energy efficiency of analog forms of computing makes it one of the most promising candidates to deploy resource-hungry machine learning tasks on resource-constrained systems such as mobile or embedded devices. However, it is well known that for analog computations the safety net of discretization is missing, thus all analog computations are exposed to a variety of imperfections of corresponding implementations. Examples include non-linearities, saturation effects and various forms of noise. In this work, we observe that the ordering of input operands of an analog operation also has an impact on the output result, which essentially makes analog computations non-associative, even though the underlying operation might be mathematically associative. We conduct a simple test by creating a model of a real analog processor which captures such ordering effects. With this model we assess the importance of ordering by comparing the test accuracy of a neural network for keyword spotting, which is trained based either on an ordered model, on a non-ordered variant, and on real hardware. The results prove the existence of ordering effects as well as their high impact, as neglecting ordering results in substantial accuracy drops.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 394,540
|
2409.00634
|
Indoor Sensing with Measurements
|
Cellular wireless networks are evolving towards acquiring newer capabilities, such as sensing, which will support novel use cases and applications. Many of these require indoor sensing capabilities, which can be realized by exploiting the perturbation in the indoor channel. In this work, we conduct an indoor channel measurement campaign to study these perturbations and develop AI-based algorithms for estimating sensing parameters. We develop several AI methods based on CNN and tree-based ensemble architectures for sensing. We show that the presence of a passive target like a person can be detected from the channel perturbation of a single link with more than 90% accuracy with a simple CNN-based AI algorithm. However, sensing the position of a passive target is far more challenging, requiring more complex AI algorithms and deployments. We show that the position of a human in an indoor room can be estimated within an average position error of 0.7 m with a deployment having three links and employing a complex AI architecture for position estimation. We also compare the results with a baseline algorithm to demonstrate the utility of the proposed method.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 485,008
|
2106.00289
|
Resource-aware Online Parameter Adaptation for
Computationally-constrained Visual-Inertial Navigation Systems
|
In this paper, a computational resources-aware parameter adaptation method for visual-inertial navigation systems is proposed with the goal of enabling the improved deployment of such algorithms on computationally constrained systems. Such a capacity can prove critical when employed on ultra-lightweight systems or alongside mission critical computationally expensive processes. To achieve this objective, the algorithm proposes selected changes in the vision front-end and optimization back-end of visual-inertial odometry algorithms, both prior to execution and in real-time based on an online profiling of available resources. The method also utilizes information from the motion dynamics experienced by the system to manipulate parameters online. The general policy is demonstrated on three established algorithms, namely S-MSCKF, VINS-Mono and OKVIS and has been verified experimentally on the EuRoC dataset. The proposed approach achieved comparable performance at a fraction of the original computational cost.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 238,067
|
1912.07063
|
Multi-User Opportunistic Beamforming using Reconfigurable Surfaces
|
Multi-user (MU) diversity yields sum-rate gains by scheduling a user for transmission at times when its channel is near its peak. The only information required at the base station (BS) for scheduling is the users' signal-to-noise ratios (SNR)s. MU diversity gains are limited in environments with line-of-sight (LoS) channel components and/or spatial correlation. To remedy this, previous works have proposed opportunistic beamforming (OBF) using multiple antennas at the BS to transmit the same signal, modulated by time-varying gains, to the best user at each time slot. In this paper, we propose reconfigurable surface (RS)-assisted OBF to increase the range of channel fluctuations in a single-antenna broadcast channel (BC), where opportunistic scheduling (OS) strategy achieves the sum-rate capacity. The RS is abstracted as an array of passive reflecting elements, and is dumb in the sense that it only induces random phase shifts onto the impinging electromagnetic waves, without requiring any channel state information. We develop the sum-rate scaling laws under Rayleigh, Rician and correlated Rayleigh fading and show that RS-assisted OBF with only a single-antenna BS outperforms multi-antenna BS-assisted OBF. We also extend our results to OFDMA systems and the multi-antenna BC.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 157,504
|
2403.04343
|
CoTBal: Comprehensive Task Balancing for Multi-Task Visual Instruction
Tuning
|
Visual instruction tuning is a key training stage of large multimodal models (LMMs). Nevertheless, the common practice of indiscriminately mixing instruction-following data from various tasks may result in suboptimal overall performance due to different instruction formats and knowledge domains across tasks. To mitigate this issue, we propose a novel Comprehensive Task Balancing (CoTBal) algorithm for multi-task visual instruction tuning of LMMs. To our knowledge, this is the first work that explores multi-task optimization in visual instruction tuning. Specifically, we consider two key dimensions for task balancing: (1) Inter-Task Contribution, the phenomenon where learning one task potentially enhances the performance in other tasks, attributable to the overlapping knowledge domains, and (2) Intra-Task Difficulty, which refers to the learning difficulty within a single task. By quantifying these two dimensions with performance-based metrics, task balancing is thus enabled by assigning more weights to tasks that offer substantial contributions to others, receive minimal contributions from others, and also have great intra-task difficulties. Experiments show that our CoTBal leads to superior overall performance in multi-task visual instruction tuning.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 435,564
|
1201.2925
|
Combining Heterogeneous Classifiers for Relational Databases
|
Most enterprise data is distributed in multiple relational databases with expert-designed schema. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting to a 'flat' form (mega-join), but also loses the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide and conquer approach. We propose a recursive, prediction aggregation technique over heterogeneous classifiers applied on individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely TPCH, PKDD and UCI benchmarks, and showed considerable reduction in classification time without any loss of prediction accuracy.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 13,807
|
1311.2180
|
Adaptive Epidemic Dynamics in Networks: Thresholds and Control
|
Theoretical modeling of computer virus/worm epidemic dynamics is an important problem that has attracted many studies. However, most existing models are adapted from biological epidemic ones. Although biological epidemic models can certainly be adapted to capture some computer virus spreading scenarios (especially when the so-called homogeneity assumption holds), the problem of computer virus spreading is not well understood because it has many important perspectives that are not necessarily accommodated in the biological epidemic models. In this paper we initiate the study of such a perspective, namely that of adaptive defense against epidemic spreading in arbitrary networks. More specifically, we investigate a non-homogeneous Susceptible-Infectious-Susceptible (SIS) model where the model parameters may vary with respect to time. In particular, we focus on two scenarios we call semi-adaptive defense and fully-adaptive defense, which accommodate implicit and explicit dependency relationships between the model parameters, respectively. In the semi-adaptive defense scenario, the model's input parameters are given; the defense is semi-adaptive because the adjustment is implicitly dependent upon the outcome of virus spreading. For this scenario, we present a set of sufficient conditions (some are more general or succinct than others) under which the virus spreading will die out; such sufficient conditions are also known as epidemic thresholds in the literature. In the fully-adaptive defense scenario, some input parameters are not known (i.e., the aforementioned sufficient conditions are not applicable) but the defender can observe the outcome of virus spreading. For this scenario, we present adaptive control strategies under which the virus spreading will die out or will be contained to a desired level.
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 28,295
|
2204.00032
|
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
|
We introduce a new class of attacks on machine learning models. We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties. Our active inference attacks connect two independent lines of work targeting the integrity and privacy of machine learning training data. Our attacks are effective across membership inference, attribute inference, and data extraction. For example, our targeted attacks can poison <0.1% of the training dataset to boost the performance of inference attacks by 1 to 2 orders of magnitude. Further, an adversary who controls a significant fraction of the training data (e.g., 50%) can launch untargeted attacks that enable 8x more precise inference on all other users' otherwise-private data points. Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty computation protocols for machine learning, if parties can arbitrarily select their share of training data.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 289,106
|
2209.04627
|
A Power Efficiency Metric for Comparing Energy Consumption in Future
Wireless Networks in the Millimeter Wave and Terahertz bands
|
Future wireless cellular networks will utilize millimeter-wave and sub-THz frequencies and deploy small-cell base stations to achieve data rates on the order of hundreds of Gigabits per second per user. The move to sub-THz frequencies will require attention to sustainability and reduction of power whenever possible to reduce the carbon footprint while maintaining adequate battery life for the massive number of resource-constrained devices to be deployed. This article analyzes power consumption of future wireless networks using a new metric, the power waste factor ($ W $), which shows promise for the study and development of "green G" - green technology for future wireless networks. Using $ W $, power efficiency can be considered by quantifying the power wasted by all devices on a signal path in a cascade. We then show that the consumption efficiency factor ($CEF$), defined as the ratio of the maximum data rate achieved to the total power consumed, is a novel and powerful measure of power efficiency that shows less energy per bit is expended as the cell size shrinks and carrier frequency and channel bandwidth increase. Our findings offer a standard approach to calculating and comparing power consumption and energy efficiency.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 316,837
|
2412.15995
|
Data-Centric Improvements for Enhancing Multi-Modal Understanding in
Spoken Conversation Modeling
|
Conversational assistants are increasingly popular across diverse real-world applications, highlighting the need for advanced multimodal speech modeling. Speech, as a natural mode of communication, encodes rich user-specific characteristics such as speaking rate and pitch, making it critical for effective interaction. Our work introduces a data-centric customization approach for efficiently enhancing multimodal understanding in conversational speech modeling. Central to our contributions is a novel multi-task learning paradigm that involves designing auxiliary tasks to utilize a small amount of speech data. Our approach achieves state-of-the-art performance on the Spoken-SQuAD benchmark, using only 10% of the training data with open-weight models, establishing a robust and efficient framework for audio-centric conversational modeling. We also introduce ASK-QA, the first dataset for multi-turn spoken dialogue with ambiguous user requests and dynamic evaluation inputs. Code and data forthcoming.
| false
| false
| true
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 519,332
|
2101.02121
|
Attention-based Convolutional Autoencoders for 3D-Variational Data
Assimilation
|
We propose a new 'Bi-Reduced Space' approach to solving 3D Variational Data Assimilation using Convolutional Autoencoders. We prove that our approach has the same solution as previous methods but has significantly lower computational complexity; in other words, we reduce the computational cost without affecting the data assimilation accuracy. We tested the new method with data from a real-world application: a pollution model of a site in Elephant and Castle, London, and found that we could reduce the size of the background covariance matrix representation by O(10^3) and, at the same time, increase our data assimilation accuracy with respect to existing reduced space methods.
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 214,532
|
2412.11211
|
Deep Learning-based Approaches for State Space Models: A Selective
Review
|
State-space models (SSMs) offer a powerful framework for dynamical system analysis, wherein the temporal dynamics of the system are assumed to be captured through the evolution of the latent states, which govern the values of the observations. This paper provides a selective review of recent advancements in deep neural network-based approaches for SSMs, and presents a unified perspective for discrete time deep state space models and continuous time ones such as latent neural Ordinary Differential and Stochastic Differential Equations. It starts with an overview of the classical maximum likelihood based approach for learning SSMs, reviews variational autoencoder as a general learning pipeline for neural network-based approaches in the presence of latent variables, and discusses in detail representative deep learning models that fall under the SSM framework. Very recent developments, where SSMs are used as standalone architectural modules for improving efficiency in sequence modeling, are also examined. Finally, examples involving mixed frequency and irregularly-spaced time series data are presented to demonstrate the advantage of SSMs in these settings.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 517,313
|
2102.06943
|
Goods Transportation Problem Solving via Routing Algorithm
|
This paper outlines the ideas behind developing a graph-based heuristic-driven routing algorithm designed for a particular instance of a goods transportation problem with a single good type. The proposed algorithm solves the optimization problem of satisfying the demand of goods on a given undirected transportation graph while minimizing the estimated cost for each traversed segment of the delivery path. The operation of the routing algorithm is discussed and an overall evaluation of the proposed problem-solving technique is given.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 219,930
|
2303.17561
|
SoftCLIP: Softer Cross-modal Alignment Makes CLIP Stronger
|
During the preceding biennium, vision-language pre-training has achieved noteworthy success on several downstream tasks. Nevertheless, acquiring high-quality image-text pairs, where the pairs are entirely exclusive of each other, remains a challenging task, and noise exists in the commonly used datasets. To address this issue, we propose SoftCLIP, a novel approach that relaxes the strict one-to-one constraint and achieves a soft cross-modal alignment by introducing a softened target, which is generated from the fine-grained intra-modal self-similarity. The intra-modal guidance enables two pairs to have some local similarities and models many-to-many relationships between the two modalities. Besides, since the positive still dominates in the softened target distribution, we disentangle the negatives in the distribution to further boost the relation alignment with the negatives in the cross-modal learning. Extensive experiments demonstrate the effectiveness of SoftCLIP. In particular, on the ImageNet zero-shot classification task, using CC3M/CC12M as the pre-training dataset, SoftCLIP brings a top-1 accuracy improvement of 6.8%/7.2% over the CLIP baseline.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 355,249
|
1501.01386
|
Roman Urdu Opinion Mining System (RUOMiS)
|
Convincing a customer is always considered a challenging task in every business. But when it comes to online business, this task becomes even more difficult. Online retailers try everything possible to gain the trust of the customer. One of the solutions is to provide an area for existing users to leave their comments. This service can effectively develop the trust of the customer; however, customers normally comment about the product in their native language using Roman script. If there are hundreds of comments, this makes it difficult even for native customers to make a buying decision. This research proposes a system which extracts the comments posted in Roman Urdu, translates them, finds their polarity, and then gives a rating of the product. This rating will help native and non-native customers to make buying decisions efficiently from the comments posted in Roman Urdu.
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 39,081
|
2005.09069
|
P-SIF: Document Embeddings Using Partition Averaging
|
Simple weighted averaging of word vectors often yields effective representations for sentences which outperform sophisticated seq2seq neural models in many tasks. While it is desirable to use the same method to represent documents as well, unfortunately, the effectiveness is lost when representing long documents involving multiple sentences. One of the key reasons is that a longer document is likely to contain words from many different topics; hence, creating a single vector while ignoring all the topical structure is unlikely to yield an effective document representation. This problem is less acute in single sentences and other short text fragments where the presence of a single topic is most likely. To alleviate this problem, we present P-SIF, a partitioned word averaging model to represent long documents. P-SIF retains the simplicity of simple weighted word averaging while taking a document's topical structure into account. In particular, P-SIF learns topic-specific vectors from a document and finally concatenates them all to represent the overall document. We provide theoretical justifications on the correctness of P-SIF. Through a comprehensive set of experiments, we demonstrate P-SIF's effectiveness compared to simple weighted averaging and many other baselines.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 177,804
|
2203.02668
|
Cross Language Image Matching for Weakly Supervised Semantic
Segmentation
|
It has been widely known that CAM (Class Activation Map) usually only activates discriminative object regions and falsely includes lots of object-related backgrounds. As only a fixed set of image-level object labels are available to the WSSS (weakly supervised semantic segmentation) model, it could be very difficult to suppress those diverse background regions consisting of open set objects. In this paper, we propose a novel Cross Language Image Matching (CLIMS) framework, based on the recently introduced Contrastive Language-Image Pre-training (CLIP) model, for WSSS. The core idea of our framework is to introduce natural language supervision to activate more complete object regions and suppress closely-related open background regions. In particular, we design object, background region and text label matching losses to guide the model to excite more reasonable object regions for CAM of each category. In addition, we design a co-occurring background suppression loss to prevent the model from activating closely-related background regions, with a predefined set of class-related background text descriptions. These designs enable the proposed CLIMS to generate a more complete and compact activation map for the target objects. Extensive experiments on PASCAL VOC2012 dataset show that our CLIMS significantly outperforms the previous state-of-the-art methods.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 283,821
|
2501.11211
|
Ditto: Accelerating Diffusion Model via Temporal Value Similarity
|
Diffusion models achieve superior performance in image generation tasks. However, they incur significant computation overheads due to their iterative structure. To address these overheads, we analyze this iterative structure and observe that adjacent time steps in diffusion models exhibit high value similarity, leading to narrower differences between consecutive time steps. We adapt these characteristics to a quantized diffusion model and reveal that the majority of these differences can be represented with reduced bit-width, and even zero. Based on our observations, we propose the Ditto algorithm, a difference processing algorithm that leverages temporal similarity with quantization to enhance the efficiency of diffusion models. By exploiting the narrower differences and the distributive property of layer operations, it performs full bit-width operations for the initial time step and processes subsequent steps with temporal differences. In addition, Ditto execution flow optimization is designed to mitigate the memory overhead of temporal difference processing, further boosting the efficiency of the Ditto algorithm. We also design the Ditto hardware, a specialized hardware accelerator, fully exploiting the dynamic characteristics of the proposed algorithm. As a result, the Ditto hardware achieves up to 1.5x speedup and 17.74% energy saving compared to other accelerators.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 525,830
|
2403.08791
|
Liquid Resistance Liquid Capacitance Networks
|
We introduce liquid-resistance liquid-capacitance neural networks (LRCs), a neural-ODE model which considerably improve the generalization, accuracy, and biological plausibility of electrical equivalent circuits (EECs), liquid time-constant networks (LTCs), and saturated liquid time-constant networks (STCs), respectively. We also introduce LRC units (LRCUs), as a very efficient and accurate gated RNN-model, which results from solving LRCs with an explicit Euler scheme using just one unfolding. We empirically show and formally prove that the liquid capacitance of LRCs considerably dampens the oscillations of LTCs and STCs, while at the same time dramatically increasing accuracy even for cheap solvers. We experimentally demonstrate that LRCs are a highly competitive alternative to popular neural ODEs and gated RNNs in terms of accuracy, efficiency, and interpretability, on classic time-series benchmarks and a complex autonomous-driving lane-keeping task.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 437,477
|
2011.12352
|
Generation of In-group Asset Condition Data for Power System Reliability
Assessment
|
In a power system, unlike some critical and standalone assets that are equipped with condition monitoring devices, the conditions of most regular in-group assets are acquired through periodic inspection work. Due to their large quantities, the significant amount of manual inspection effort required, and sometimes data management issues, it is not uncommon to find that the asset condition data in a target study area are unavailable or incomplete. Lack of asset condition data undermines the reliability assessment work. To solve this data problem and enhance data availability, this paper explores an unconventional method: generating numerical and non-numerical asset condition data based on condition degradation, condition correlation and categorical distribution models. Empirical knowledge from human experts can also be incorporated in the modeling process. Also, a probabilistic diversification step can be taken to make the generated numerical condition data probabilistic. This method can generate close-to-real asset condition data and has been validated systematically based on two public datasets. An area reliability assessment example based on cables is given to demonstrate the usefulness of this method and its generated data. This method can also be used to conveniently generate hypothetical asset condition data for research purposes.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 208,124
|
2104.11462
|
LeBenchmark: A Reproducible Framework for Assessing Self-Supervised
Representation Learning from Speech
|
Self-Supervised Learning (SSL) using huge unlabeled data has been successfully explored for image and natural language processing. Recent works also investigated SSL from speech. They were notably successful in improving performance on downstream tasks such as automatic speech recognition (ASR). While these works suggest it is possible to reduce dependence on labeled data for building efficient speech systems, their evaluation was mostly made on ASR and using multiple and heterogeneous experimental settings (most of them for English). This calls into question the objective comparison of SSL approaches and the evaluation of their impact on building speech systems. In this paper, we propose LeBenchmark: a reproducible framework for assessing SSL from speech. It not only includes ASR (high and low resource) tasks but also spoken language understanding, speech translation and emotion recognition. We also focus on speech technologies in a language other than English: French. SSL models of different sizes are trained from carefully sourced and documented datasets. Experiments show that SSL is beneficial for most but not all tasks, which confirms the need for exhaustive and reliable benchmarks to evaluate its real impact. LeBenchmark is shared with the scientific community for reproducible research in SSL from speech.
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 231,915
|
2101.00049
|
Particle Swarm Based Hyper-Parameter Optimization for Machine Learned
Interatomic Potentials
|
Modeling non-empirical and highly flexible interatomic potential energy surfaces (PES) using machine learning (ML) approaches is becoming popular in molecular and materials research. Training an ML-PES is typically performed in two stages: feature extraction and structure-property relationship modeling. The feature extraction stage transforms atomic positions into a symmetry-invariant mathematical representation. This representation can be fine-tuned by adjusting a set of so-called "hyper-parameters" (HPs). Subsequently, an ML algorithm such as neural networks or Gaussian process regression (GPR) is used to model the structure-PES relationship based on another set of HPs. Choosing optimal values for the two sets of HPs is critical to ensure the high quality of the resulting ML-PES model. In this paper, we explore HP optimization strategies tailored for ML-PES generation using a custom-coded parallel particle swarm optimizer (available freely at https://github.com/suresh0807/PPSO.git). We employ the smooth overlap of atomic positions (SOAP) descriptor in combination with GPR-based Gaussian approximation potentials (GAP) and optimize HPs for four distinct systems: a toy C dimer, amorphous carbon, $\alpha$-Fe, and small organic molecules (QM9 dataset). We propose a two-step optimization strategy in which the HPs related to the feature extraction stage are optimized first, followed by the optimization of the HPs in the training stage. This strategy is computationally more efficient than optimizing all HPs at the same time by significantly reducing the number of ML models that need to be trained to obtain the optimal HPs. This approach can be trivially extended to other combinations of descriptor and ML algorithm and brings us another step closer to fully automated ML-PES generation.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 213,945
|
1911.09996
|
Orderless Recurrent Models for Multi-label Classification
|
Recurrent neural networks (RNN) are popular for many computer vision tasks, including multi-label classification. Since RNNs produce sequential outputs, labels need to be ordered for the multi-label classification task. Current approaches sort labels according to their frequency, typically ordering them in either rare-first or frequent-first. These imposed orderings do not take into account that the natural order to generate the labels can change for each image, e.g., first the dominant object before summing up the smaller objects in the image. Therefore, in this paper, we propose ways to dynamically order the ground truth labels with the predicted label sequence. This allows for the faster training of more optimal LSTM models for multi-label classification. Analysis evidences that our method does not suffer from duplicate generation, something which is common for other models. Furthermore, it outperforms other CNN-RNN models, and we show that a standard architecture of an image encoder and language decoder trained with our proposed loss obtains the state-of-the-art results on the challenging MS-COCO, WIDER Attribute and PA-100K and competitive results on NUS-WIDE.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 154,697
|
2005.14229
|
FCN+RL: A Fully Convolutional Network followed by Refinement Layers to
Offline Handwritten Signature Segmentation
|
Although centuries-old, the handwritten signature is one of the most reliable biometric methods used by most countries. In the last ten years, the application of technology for verification of handwritten signatures has evolved strongly, including forensic aspects. Some factors, such as the complexity of the background and the small size of the region of interest - signature pixels - increase the difficulty of the segmentation task. Other factors that make it challenging are the various variations present in handwritten signatures such as location, type of ink, color and type of pen, and the type of stroke. In this work, we propose an approach to locate and extract the pixels of handwritten signatures on identification documents, without any prior information on the location of the signatures. The technique used is based on a fully convolutional encoder-decoder network combined with a block of refinement layers for the alpha channel of the predicted image. The experimental results demonstrate that the technique outputs a clean signature with higher fidelity in the lines than the traditional approaches and preservation of the pertinent characteristics to the signer's spelling. To evaluate the quality of our proposal, we use the following image similarity metrics: SSIM, SIFT, and Dice Coefficient. The qualitative and quantitative results show a significant improvement in comparison with the baseline system.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 179,208
|
2502.04345
|
JingFang: A Traditional Chinese Medicine Large Language Model of
Expert-Level Medical Diagnosis and Syndrome Differentiation-Based Treatment
|
Traditional Chinese medicine (TCM) plays a vital role in health protection and disease treatment, but its practical application requires extensive medical knowledge and clinical experience. Existing TCM Large Language Models (LLMs) exhibit critical limitations: incomplete medical consultation and diagnosis, and inaccurate syndrome differentiation-based treatment. To address these issues, this study establishes JingFang (JF): a novel TCM Large Language Model that demonstrates the expert-level capability of medical diagnosis and syndrome differentiation-based treatment. We innovate a Multi-agent Dynamic Collaborative Chain-of-Thought Mechanism (MDCCTM) for medical consultation, enabling JF with effective and accurate diagnostic ability. In addition, a Syndrome Agent and a Dual-Stage Retrieval Scheme (DSRS) are developed to significantly enhance the capacity of JF for disease treatment based on syndrome differentiation. JingFang not only facilitates the application of LLMs but also promotes the effective practice of TCM in human health protection and disease treatment.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 531,089
|
1802.03243
|
RSDNet: Learning to Predict Remaining Surgery Duration from Laparoscopic
Videos Without Manual Annotations
|
Accurate surgery duration estimation is necessary for optimal OR planning, which plays an important role in patient comfort and safety as well as resource optimization. It is, however, challenging to preoperatively predict surgery duration since it varies significantly depending on the patient condition, surgeon skills, and intraoperative situation. In this paper, we propose a deep learning pipeline, referred to as RSDNet, which automatically estimates the remaining surgery duration (RSD) intraoperatively by using only visual information from laparoscopic videos. Previous state-of-the-art approaches for RSD prediction are dependent on manual annotation, whose generation requires expensive expert knowledge and is time-consuming, especially considering the numerous types of surgeries performed in a hospital and the large number of laparoscopic videos available. A crucial feature of RSDNet is that it does not depend on any manual annotation during training, making it easily scalable to many kinds of surgeries. The generalizability of our approach is demonstrated by testing the pipeline on two large datasets containing different types of surgeries: 120 cholecystectomy and 170 gastric bypass videos. The experimental results also show that the proposed network significantly outperforms a traditional method of estimating RSD without utilizing manual annotation. Further, this work provides a deeper insight into the deep learning network through visualization and interpretation of the features that are automatically learned.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 89,930
|
2305.09798
|
The Ways of Words: The Impact of Word Choice on Information Engagement
and Decision Making
|
Little research has explored information engagement (IE), the degree to which individuals interact with and use information in a manner that manifests cognitively, behaviorally, and affectively. This study explored the impact of phrasing, specifically word choice, on IE and decision making. Synthesizing two theoretical models, User Engagement Theory (UET) and Information Behavior Theory (IBT), a theoretical framework illustrating the impact of and relationships among the three IE dimensions of perception, participation, and perseverance was developed and hypotheses generated. The framework was empirically validated in a large-scale user study measuring how word choice impacts the dimensions of IE. The findings provide evidence that IE differs from other forms of engagement in that it is driven and fostered by the expression of the information itself, regardless of the information system used to view, interact with, and use the information. The findings suggest that phrasing can have a significant effect on the interpretation of and interaction with digital information, indicating the importance of expression of information, in particular word choice, on decision making and IE. The research contributes to the literature by identifying methods for assessment and improvement of IE and decision making with digital text.
| true
| false
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 364,778
|
1705.10739
|
Efficient Decentralized Visual Place Recognition From Full-Image
Descriptors
|
In this paper, we discuss the adaptation of our decentralized place recognition method described in [1] to full image descriptors. As we had shown, the key to making a scalable decentralized visual place recognition lies in exploiting deterministic key assignment in a distributed key-value map. Through this, it is possible to reduce bandwidth by up to a factor of n, the robot count, by casting visual place recognition to a key-value lookup problem. In [1], we exploited this for the bag-of-words method [3], [4]. Our method of casting bag-of-words, however, results in a complex decentralized system, which has inherently worse recall than its centralized counterpart. In this paper, we instead start from the recent full-image description method NetVLAD [5]. As we show, casting this to a key-value lookup problem can be achieved with k-means clustering, and results in a much simpler system than [1]. The resulting system still has some flaws, albeit of a completely different nature: it suffers when the environment seen during deployment lies in a different distribution in feature space than the environment seen during training.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 74,460
|
1412.1443
|
Structure learning of antiferromagnetic Ising models
|
In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples. We first observe that the notoriously difficult problem of learning parities with noise can be captured as a special case of learning graphical models. This leads to an unconditional computational lower bound of $\Omega (p^{d/2})$ for learning general graphical models on $p$ nodes of maximum degree $d$, for the class of so-called statistical algorithms recently introduced by Feldman et al (2013). The lower bound suggests that the $O(p^d)$ runtime required to exhaustively search over neighborhoods cannot be significantly improved without restricting the class of models. Aside from structural assumptions on the graph such as it being a tree, hypertree, tree-like, etc., many recent papers on structure learning assume that the model has the correlation decay property. Indeed, focusing on ferromagnetic Ising models, Bento and Montanari (2009) showed that all known low-complexity algorithms fail to learn simple graphs when the interaction strength exceeds a number related to the correlation decay threshold. Our second set of results gives a class of repelling (antiferromagnetic) models that have the opposite behavior: very strong interaction allows efficient learning in time $O(p^2)$. We provide an algorithm whose performance interpolates between $O(p^2)$ and $O(p^{d+2})$ depending on the strength of the repulsion.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 38,102
|
2502.09870
|
A Taxonomy of Linguistic Expressions That Contribute To Anthropomorphism
of Language Technologies
|
Recent attention to anthropomorphism -- the attribution of human-like qualities to non-human objects or entities -- of language technologies like LLMs has sparked renewed discussions about potential negative impacts of anthropomorphism. To productively discuss the impacts of this anthropomorphism and in what contexts it is appropriate, we need a shared vocabulary for the vast variety of ways that language can be anthropomorphic. In this work, we draw on existing literature and analyze empirical cases of user interactions with language technologies to develop a taxonomy of textual expressions that can contribute to anthropomorphism. We highlight challenges and tensions involved in understanding linguistic anthropomorphism, such as how all language is fundamentally human and how efforts to characterize and shift perceptions of humanness in machines can also dehumanize certain humans. We discuss ways that our taxonomy supports more precise and effective discussions of and decisions about anthropomorphism of language technologies.
| true
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 533,636
|
2303.05798
|
Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG
Signals
|
When dealing with electro or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires using Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this distance to brain-age prediction from MEG data and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 350,605
|
1911.03324
|
Transforming Wikipedia into Augmented Data for Query-Focused
Summarization
|
The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging. Meanwhile, the manual construction of a query-focused summarization corpus is costly and time-consuming. In this paper, we use Wikipedia to automatically collect a large query-focused summarization dataset (named WIKIREF) of more than 280,000 examples, which can serve as a means of data augmentation. We also develop a BERT-based query-focused summarization model (Q-BERT) to extract sentences from the documents as summaries. To better adapt a huge model containing millions of parameters to tiny benchmarks, we identify and fine-tune only a sparse subnetwork, which corresponds to a small fraction of the whole model parameters. Experimental results on three DUC benchmarks show that the model pre-trained on WIKIREF has already achieved reasonable performance. After fine-tuning on the specific benchmark datasets, the model with data augmentation outperforms strong comparison systems. Moreover, both our proposed Q-BERT model and subnetwork fine-tuning further improve the model performance. The dataset is publicly available at https://aka.ms/wikiref.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 152,617
|
1802.02500
|
Cadre Modeling: Simultaneously Discovering Subpopulations and Predictive
Models
|
We consider the problem in regression analysis of identifying subpopulations that exhibit different patterns of response, where each subpopulation requires a different underlying model. Unlike statistical cohorts, these subpopulations are not known a priori; thus, we refer to them as cadres. When the cadres and their associated models are interpretable, modeling leads to insights about the subpopulations and their associations with the regression target. We introduce a discriminative model that simultaneously learns cadre assignment and target-prediction rules. Sparsity-inducing priors are placed on the model parameters, under which independent feature selection is performed for both the cadre assignment and target-prediction processes. We learn models using adaptive step size stochastic gradient descent, and we assess cadre quality with bootstrapped sample analysis. We present simulated results showing that, when the true clustering rule does not depend on the entire set of features, our method significantly outperforms methods that learn subpopulation-discovery and target-prediction rules separately. In a materials-by-design case study, our model provides state-of-the-art prediction of polymer glass transition temperature. Importantly, the method identifies cadres of polymers that respond differently to structural perturbations, thus providing design insight for targeting or avoiding specific transition temperature ranges. It identifies chemically meaningful cadres, each with interpretable models. Further experimental results show that cadre methods have generalization that is competitive with linear and nonlinear regression models and can identify robust subpopulations.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 89,780
|
2204.00649
|
Knowledge distillation with error-correcting transfer learning for wind
power prediction
|
Wind power prediction, especially for turbines, is vital for the operation, controllability, and economy of electricity companies. Hybrid methodologies combining advanced data science with weather forecasting have been incrementally applied to the predictions. Nevertheless, individually modeling massive turbines from scratch and downscaling weather forecasts to turbine size are neither easy nor economical. To this end, this paper proposes a novel framework with mathematical underpinnings for turbine power prediction. This framework is the first to incorporate knowledge distillation into energy forecasting, enabling accurate and economical construction of turbine models by learning knowledge from the well-established park model. Besides, park-scale weather forecasts are non-explicitly mapped to turbines by transfer learning of predicted power errors, achieving model correction for better performance. The proposed framework is deployed on five turbines featuring various terrains in an Arctic wind park, and the results are evaluated against competitors in an ablation investigation. The major findings reveal that the proposed framework, developed with favorable knowledge distillation and transfer learning parameter tuning, yields performance boosts from 3.3% to 23.9% over its competitors. This advantage also holds in terms of wind energy physics and computing efficiency, as verified by the prediction quality rate and calculation time.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 289,336
2009.13580 | Deep Learning-Based Automatic Detection of Poorly Positioned Mammograms to Minimize Patient Return Visits for Repeat Imaging: A Real-World Application | Screening mammograms are a routine imaging exam performed to detect breast cancer in its early stages to reduce morbidity and mortality attributed to this disease. In order to maximize the efficacy of breast cancer screening programs, proper mammographic positioning is paramount. Proper positioning ensures adequate visualization of breast tissue and is necessary for effective breast cancer detection. Therefore, breast-imaging radiologists must assess each mammogram for the adequacy of positioning before providing a final interpretation of the examination; this often necessitates return patient visits for additional imaging. In this paper, we propose a deep learning-algorithm method that mimics and automates this decision-making process to identify poorly positioned mammograms. Our objective for this algorithm is to assist mammography technologists in recognizing inadequately positioned mammograms real-time, improve the quality of mammographic positioning and performance, and ultimately reducing repeat visits for patients with initially inadequate imaging. The proposed model showed a true positive rate for detecting correct positioning of 91.35% in the mediolateral oblique view and 95.11% in the craniocaudal view. In addition to these results, we also present an automatically generated report which can aid the mammography technologist in taking corrective measures during the patient visit. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 197,775
2210.05126 | Tackling Instance-Dependent Label Noise with Dynamic Distribution Calibration | Instance-dependent label noise is realistic but rather challenging, where the label-corruption process depends on instances directly. It causes a severe distribution shift between the distributions of training and test data, which impairs the generalization of trained models. Prior works put great effort into tackling the issue. Unfortunately, these works always highly rely on strong assumptions or remain heuristic without theoretical guarantees. In this paper, to address the distribution shift in learning with instance-dependent label noise, a dynamic distribution-calibration strategy is adopted. Specifically, we hypothesize that, before training data are corrupted by label noise, each class conforms to a multivariate Gaussian distribution at the feature level. Label noise produces outliers to shift the Gaussian distribution. During training, to calibrate the shifted distribution, we propose two methods based on the mean and covariance of multivariate Gaussian distribution respectively. The mean-based method works in a recursive dimension-reduction manner for robust mean estimation, which is theoretically guaranteed to train a high-quality model against label noise. The covariance-based method works in a distribution disturbance manner, which is experimentally verified to improve the model robustness. We demonstrate the utility and effectiveness of our methods on datasets with synthetic label noise and real-world unknown noise. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 322,710
2303.03227 | Parallel Hybrid Networks: an interplay between quantum and classical neural networks | Quantum neural networks represent a new machine learning paradigm that has recently attracted much attention due to its potential promise. Under certain conditions, these models approximate the distribution of their dataset with a truncated Fourier series. The trigonometric nature of this fit could result in angle-embedded quantum neural networks struggling to fit the non-harmonic features in a given dataset. Moreover, the interpretability of neural networks remains a challenge. In this work, we introduce a new, interpretable class of hybrid quantum neural networks that pass the inputs of the dataset in parallel to 1) a classical multi-layered perceptron and 2) a variational quantum circuit, and then the outputs of the two are linearly combined. We observe that the quantum neural network creates a smooth sinusoidal foundation base on the training set, and then the classical perceptrons fill the non-harmonic gaps in the landscape. We demonstrate this claim on two synthetic datasets sampled from periodic distributions with added protrusions as noise. The training results indicate that the parallel hybrid network architecture could improve the solution optimality on periodic datasets with additional noise. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 349,652
2102.07668 | Seven Defining Features of Terahertz (THz) Wireless Systems: A Fellowship of Communication and Sensing | Wireless communication at the terahertz (THz) frequency bands (0.1-10THz) is viewed as one of the cornerstones of tomorrow's 6G wireless systems. Owing to the large amount of available bandwidth, THz frequencies can potentially provide wireless capacity performance gains and enable high-resolution sensing. However, operating a wireless system at the THz-band is limited by a highly uncertain channel. Effectively, these channel limitations lead to unreliable intermittent links as a result of a short communication range, and a high susceptibility to blockage and molecular absorption. Consequently, such impediments could disrupt the THz band's promise of high-rate communications and high-resolution sensing capabilities. In this context, this paper panoramically examines the steps needed to efficiently deploy and operate next-generation THz wireless systems that will synergistically support a fellowship of communication and sensing services. For this purpose, we first set the stage by describing the fundamentals of the THz frequency band. Based on these fundamentals, we characterize seven unique defining features of THz wireless systems: 1) Quasi-opticality of the band, 2) THz-tailored wireless architectures, 3) Synergy with lower frequency bands, 4) Joint sensing and communication systems, 5) PHY-layer procedures, 6) Spectrum access techniques, and 7) Real-time network optimization. These seven defining features allow us to shed light on how to re-engineer wireless systems as we know them today so as to make them ready to support THz bands. Furthermore, these features highlight how THz systems turn every communication challenge into a sensing opportunity. Ultimately, the goal of this article is to chart a forward-looking roadmap that exposes the necessary solutions and milestones for enabling THz frequencies to realize their potential as a game changer for next-generation wireless systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 220,184
1907.09594 | Understanding the Political Ideology of Legislators from Social Media Images | In this paper, we seek to understand how politicians use images to express ideological rhetoric through Facebook images posted by members of the U.S. House and Senate. In the era of social media, politics has become saturated with imagery, a potent and emotionally salient form of political rhetoric which has been used by politicians and political organizations to influence public sentiment and voting behavior for well over a century. To date, however, little is known about how images are used as political rhetoric. Using deep learning techniques to automatically predict Republican or Democratic party affiliation solely from the Facebook photographs of the members of the 114th U.S. Congress, we demonstrate that predicted class probabilities from our model function as an accurate proxy of the political ideology of images along a left-right (liberal-conservative) dimension. After controlling for the gender and race of politicians, our method achieves an accuracy of 59.28% from single photographs and 82.35% when aggregating scores from multiple photographs (up to 150) of the same person. To better understand image content distinguishing liberal from conservative images, we also perform in-depth content analyses of the photographs. Our findings suggest that conservatives tend to use more images supporting status quo political institutions and hierarchy maintenance, featuring individuals from dominant social groups, and displaying greater happiness than liberals. | true | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 139,407
1001.3911 | Computing Lower Bounds on the Information Rate of Intersymbol Interference Channels | Provable lower bounds are presented for the information rate I(X; X+S+N) where X is the symbol drawn from a fixed, finite-size alphabet, S a discrete-valued random variable (RV) and N a Gaussian RV. The information rate I(X; X+S+N) serves as a tight lower bound for capacity of intersymbol interference (ISI) channels corrupted by Gaussian noise. The new bounds can be calculated with a reasonable computational load and provide a similar level of tightness as the well-known conjectured lower bound by Shamai and Laroia for a good range of finite-ISI channels of practical interest. The computation of the presented bounds requires the evaluation of the magnitude sum of the precursor ISI terms as well as the identification of dominant terms among them seen at the output of the minimum mean-squared error (MMSE) decision feedback equalizer (DFE). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,486
2405.08566 | Space-time boundary elements for frictional contact in elastodynamics | This article studies a boundary element method for dynamic frictional contact between linearly elastic bodies. We formulate these problems as a variational inequality on the boundary, involving the elastodynamic Poincar\'{e}-Steklov operator. The variational inequality is solved in a mixed formulation using boundary elements in space and time. In the model problem of unilateral Tresca friction contact with a rigid obstacle we obtain an a priori estimate for the resulting Galerkin approximations. Numerical experiments in two space dimensions demonstrate the stability, energy conservation and convergence of the proposed method for contact problems involving concrete and steel in the linearly elastic regime. They address both unilateral and two-sided dynamic contact with Tresca or Coulomb friction. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 454,137
2402.00232 | Learning Label Hierarchy with Supervised Contrastive Learning | Supervised contrastive learning (SCL) frameworks treat each class as independent and thus consider all classes to be equally important. This neglects the common scenario in which label hierarchy exists, where fine-grained classes under the same category show more similarity than very different ones. This paper introduces a family of Label-Aware SCL methods (LASCL) that incorporates hierarchical information to SCL by leveraging similarities between classes, resulting in creating a more well-structured and discriminative feature space. This is achieved by first adjusting the distance between instances based on measures of the proximity of their classes with the scaled instance-instance-wise contrastive. An additional instance-center-wise contrastive is introduced to move within-class examples closer to their centers, which are represented by a set of learnable label parameters. The learned label parameters can be directly used as a nearest neighbor classifier without further finetuning. In this way, a better feature representation is generated with improvements of intra-cluster compactness and inter-cluster separation. Experiments on three datasets show that the proposed LASCL works well on text classification of distinguishing a single label among multi-labels, outperforming the baseline supervised approaches. Our code is publicly available. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,523
1602.02934 | Nested Mini-Batch K-Means | A new algorithm is proposed which accelerates the mini-batch k-means algorithm of Sculley (2010) by using the distance bounding approach of Elkan (2003). We argue that, when incorporating distance bounds into a mini-batch algorithm, already used data should preferentially be reused. To this end we propose using nested mini-batches, whereby data in a mini-batch at iteration t is automatically reused at iteration t+1. Using nested mini-batches presents two difficulties. The first is that unbalanced use of data can bias estimates, which we resolve by ensuring that each data sample contributes exactly once to centroids. The second is in choosing mini-batch sizes, which we address by balancing premature fine-tuning of centroids with redundancy induced slow-down. Experiments show that the resulting nmbatch algorithm is very effective, often arriving within 1% of the empirical minimum 100 times earlier than the standard mini-batch algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 51,934
2407.17115 | Reinforced Prompt Personalization for Recommendation with Large Language Models | Designing effective prompts can empower LLMs to understand user preferences and provide recommendations with intent comprehension and knowledge utilization capabilities. Nevertheless, recent studies predominantly concentrate on task-wise prompting, developing fixed prompt templates shared across all users in a given recommendation task (e.g., rating or ranking). Although convenient, task-wise prompting overlooks individual user differences, leading to inaccurate analysis of user interests. In this work, we introduce the concept of instance-wise prompting, aiming at personalizing discrete prompts for individual users. Toward this end, we propose Reinforced Prompt Personalization (RPP) to realize it automatically. To improve efficiency and quality, RPP personalizes prompts at the sentence level rather than searching in the vast vocabulary word-by-word. Specifically, RPP breaks down the prompt into four patterns, tailoring patterns based on multi-agent and combining them. Then the personalized prompts interact with LLMs (environment) iteratively, to boost LLMs' recommending performance (reward). In addition to RPP, to improve the scalability of action space, our proposal of RPP+ dynamically refines the selected actions with LLMs throughout the iterative process. Extensive experiments on various datasets demonstrate the superiority of RPP/RPP+ over traditional recommender models, few-shot methods, and other prompt-based methods, underscoring the significance of instance-wise prompting in LLMs for recommendation. Our code is available at https://github.com/maowenyu-11/RPP. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 475,853
1607.08456 | Kernel functions based on triplet comparisons | Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set. While previous approaches construct a low-dimensional Euclidean embedding of the data set that reflects the given similarity triplets, we aim at defining kernel functions that correspond to high-dimensional embeddings. These kernel functions can subsequently be used to apply any kernel method to the data set. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 59,163
2406.14094 | Logical reduction of relations: from relational databases to Peirce's reduction thesis | We study logical reduction (factorization) of relations into relations of lower arity by Boolean or relative products that come from applying conjunctions and existential quantifiers to predicates, i.e. by primitive positive formulas of predicate calculus. Our algebraic framework unifies natural joins and data dependencies of database theory and relational algebra of clone theory with the bond algebra of C.S. Peirce. We also offer new constructions of reductions, systematically study irreducible relations and reductions to them, and introduce a new characteristic of relations, ternarity, that measures their `complexity of relating' and allows to refine reduction results. In particular, we refine Peirce's controversial reduction thesis, and show that reducibility behavior is dramatically different on finite and infinite domains. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 466,152
2009.04286 | Enhancing and Learning Denoiser without Clean Reference | Recent studies on learning-based image denoising have achieved promising performance on various noise reduction tasks. Most of these deep denoisers are trained either under the supervision of clean references, or unsupervised on synthetic noise. The assumption with the synthetic noise leads to poor generalization when facing real photographs. To address this issue, we propose a novel deep image-denoising method by regarding the noise reduction task as a special case of the noise transference task. Learning noise transference enables the network to acquire the denoising ability by observing the corrupted samples. The results on real-world denoising benchmarks demonstrate that our proposed method achieves promising performance on removing realistic noises, making it a potential solution to practical noise reduction problems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 195,004
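Each record in this table is one row of a multi-label dataset: an arXiv id, a title, an abstract, eighteen boolean category flags (cs.HC through Other, in the column order given at the top of the file), and an integer index. A minimal sketch of how such a pipe-delimited row might be parsed — the `parse_row` helper is an illustrative assumption, and the example row is transcribed from the 1907.09594 record above with its abstract elided:

```python
# Column order for the eighteen boolean label flags, as listed in the
# table header of this dataset page.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def parse_row(line: str) -> dict:
    """Parse one record: id | title | abstract | 18 flags | index."""
    fields = [f.strip() for f in line.split("|")]
    flags = [f == "true" for f in fields[3:3 + len(LABEL_COLUMNS)]]
    return {
        "id": fields[0],
        "title": fields[1],
        "abstract": fields[2],
        # Keep only the names of the categories whose flag is true.
        "labels": [c for c, on in zip(LABEL_COLUMNS, flags) if on],
        # The index column is serialized with thousands separators.
        "index": int(fields[3 + len(LABEL_COLUMNS)].replace(",", "")),
    }

# One record transcribed from this page (abstract elided for brevity).
row = ("1907.09594 | Understanding the Political Ideology of Legislators from "
       "Social Media Images | ... | true | false | false | true | false | false | "
       "false | false | false | false | false | true | false | false | false | "
       "false | false | true | 139,407")
record = parse_row(row)
print(record["labels"])   # ['cs.HC', 'cs.SI', 'cs.CV', 'Other']
```

Note that a real parser would need to handle pipes occurring inside abstracts (e.g. by reading the source CSV/Parquet directly rather than splitting rendered rows); the split above is only safe for the elided example.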