Dataset schema:
  id                  string, 9 to 16 chars
  title               string, 4 to 278 chars
  abstract            string, 3 to 4.08k chars
  cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR  cs.LG  cs.RO  cs.CL
  cs.IT  cs.SY  cs.CV  cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                      18 bool columns (2 classes each): multi-hot arXiv
                      category flags; each record stores its flags in this
                      column order, followed by the __index_level_0__ value
  __index_level_0__   int64, 0 to 541k
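The 18 boolean columns form a multi-hot arXiv-category label per record. A minimal plain-Python sketch of decoding those flags into category names, assuming only the column order given in the schema (the record literal is copied from one record in this listing; no particular dataset loader is assumed):

```python
# Column order of the 18 boolean category flags, matching the schema above.
CATEGORY_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(record: dict) -> list[str]:
    """Return the category names whose boolean flag is set for one record."""
    return [c for c in CATEGORY_COLS if record.get(c, False)]

# One record from this listing, flags stored in schema column order.
record = {
    "id": "2310.07419",
    "title": "Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else",
    **{c: False for c in CATEGORY_COLS},  # all flags off by default
    "cs.AI": True,
    "cs.CV": True,
    "__index_level_0__": 398968,
}

print(decode_labels(record))  # ['cs.AI', 'cs.CV']
```

The same loop works for filtering, e.g. `[r for r in records if "cs.CV" in decode_labels(r)]` selects all computer-vision records.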
1106.1820
Inferring Strategies for Sentence Ordering in Multidocument News Summarization
The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies.
Categories: cs.AI | __index_level_0__: 10,794
2010.07722
Improving Neural Network Verification through Spurious Region Guided Refinement
We propose a spurious region guided refinement approach for robustness verification of deep neural networks. Our method starts with applying the DeepPoly abstract domain to analyze the network. If the robustness property cannot be verified, the result is inconclusive. Due to the over-approximation, the computed region in the abstraction may be spurious in the sense that it does not contain any true counterexample. Our goal is to identify such spurious regions and use them to guide the abstraction refinement. The core idea is to make use of the obtained constraints of the abstraction to infer new bounds for the neurons. This is achieved by linear programming techniques. With the new bounds, we iteratively apply DeepPoly, aiming to eliminate spurious regions. We have implemented our approach in a prototypical tool DeepSRGR. Experimental results show that a large amount of regions can be identified as spurious, and as a result, the precision of DeepPoly can be significantly improved. As a side contribution, we show that our approach can be applied to verify quantitative robustness properties.
Categories: cs.AI | __index_level_0__: 200,919
2309.14347
Continuous-time control synthesis under nested signal temporal logic specifications
In this work, we propose a novel approach for the continuous-time control synthesis of nonlinear systems under nested signal temporal logic (STL) specifications. While the majority of existing literature focuses on control synthesis for STL specifications without nested temporal operators, addressing nested temporal operators poses a notably more challenging scenario and requires new theoretical advancements. Our approach hinges on the concepts of signal temporal logic tree (sTLT) and control barrier function (CBF). Specifically, we detail the construction of an sTLT from a given STL formula and a continuous-time dynamical system, the sTLT semantics (i.e., satisfaction condition), and the equivalence or under-approximation relation between sTLT and STL. Leveraging the fact that the satisfaction condition of an sTLT is essentially keeping the state within certain sets during certain time intervals, it provides explicit guidelines for the CBF design. The resulting controller is obtained through the utilization of an online CBF-based program coupled with an event-triggered scheme for online updating the activation time interval of each CBF, with which the correctness of the system behavior can be established by construction. We demonstrate the efficacy of the proposed method for single-integrator and unicycle models under nested STL formulas.
Categories: cs.RO, cs.SY | __index_level_0__: 394,569
2012.00319
Constrained Optimization for Hybrid System Falsification and Application to Conjunctive Synthesis
The synthesis problem of a cyber-physical system (CPS) is to find an input signal under which the system's behavior satisfies a given specification. Our setting is that the specification is a formula of signal temporal logic, and furthermore, that the specification is a conjunction of different and often conflicting requirements. Conjunctive specifications are often challenging for optimization-based falsification -- an established method for CPS analysis that can also be used for synthesis -- since the usual framework (especially how its robust semantics handles Boolean connectives) is not suited for finding delicate trade-offs between different requirements. Our proposed method consists of the combination of optimization-based falsification and constrained optimization. Specifically, we show that the state-of-the-art multiple constraint ranking method can be combined with falsification powered by CMA-ES optimization; its performance advantage is demonstrated in experiments.
Categories: cs.SY | __index_level_0__: 209,097
2408.05854
On the Robustness of Kernel Goodness-of-Fit Tests
Goodness-of-fit testing is often criticized for its lack of practical relevance; since ``all models are wrong'', the null hypothesis that the data conform to our model is ultimately always rejected when the sample size is large enough. Despite this, probabilistic models are still used extensively, raising the more pertinent question of whether the model is good enough for a specific task. This question can be formalized as a robust goodness-of-fit testing problem by asking whether the data were generated by a distribution corresponding to our model up to some mild perturbation. In this paper, we show that existing kernel goodness-of-fit tests are not robust according to common notions of robustness including qualitative and quantitative robustness. We also show that robust techniques based on tilted kernels from the parameter estimation literature are not sufficient for ensuring both types of robustness in the context of goodness-of-fit testing. We therefore propose the first robust kernel goodness-of-fit test which resolves this open problem using kernel Stein discrepancy balls, which encompass perturbation models such as Huber contamination models and density uncertainty bands.
Categories: cs.LG | __index_level_0__: 479,969
2310.07419
Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else
Recent advances in text-to-image diffusion models have enabled the photorealistic generation of images from text prompts. Despite the great progress, existing models still struggle to generate compositional multi-concept images naturally, limiting their ability to visualize human imagination. While several recent works have attempted to address this issue, they either introduce additional training or adopt guidance at inference time. In this work, we consider a more ambitious goal: natural multi-concept generation using a pre-trained diffusion model, and with almost no extra cost. To achieve this goal, we identify the limitations in the text embeddings used for the pre-trained text-to-image diffusion models. Specifically, we observe concept dominance and non-localized contribution that severely degrade multi-concept generation performance. We further design a minimal low-cost solution that overcomes the above issues by tweaking (not re-training) the text embeddings for more realistic multi-concept text-to-image generation. Our Correction by Similarities method tweaks the embedding of concepts by collecting semantic features from most similar tokens to localize the contribution. To avoid mixing features of concepts, we also apply Cross-Token Non-Maximum Suppression, which excludes the overlap of contributions from different concepts. Experiments show that our approach outperforms previous methods in text-to-image, image manipulation, and personalization tasks, despite not introducing additional training or inference costs to the diffusion steps.
Categories: cs.AI, cs.CV | __index_level_0__: 398,968
2410.04454
Inner-Probe: Discovering Copyright-related Data Generation in LLM Architecture
Large Language Models (LLMs) utilize extensive knowledge databases and show powerful text generation ability. However, their reliance on high-quality copyrighted datasets raises concerns about copyright infringements in generated texts. Current research often employs prompt engineering or semantic classifiers to identify copyrighted content, but these approaches have two significant limitations: (1) it is challenging to identify which specific sub-dataset (e.g., works from particular authors) influences an LLM's output; (2) they treat the entire training database as copyrighted, overlooking the inclusion of non-copyrighted training data. We propose InnerProbe, a lightweight framework designed to evaluate the influence of copyrighted sub-datasets on LLM-generated texts. Unlike traditional methods relying solely on text, we discover that the results of multi-head attention (MHA) during LLM output generation provide more effective information. Thus, InnerProbe performs sub-dataset contribution analysis using a lightweight LSTM-based network trained on MHA results in a supervised manner. Harnessing such a prior, InnerProbe enables non-copyrighted text detection through a concatenated global projector trained with unsupervised contrastive learning. InnerProbe demonstrates 3x improved efficiency compared to semantic model training in sub-dataset contribution analysis on Books3, achieves 15.04%-58.7% higher accuracy over baselines on the Pile, and delivers a 0.104 increase in AUC for non-copyrighted data filtering.
Categories: cs.CL | __index_level_0__: 495,288
2412.00026
Spatial-variant causal Bayesian inference for rapid seismic ground failures and impacts estimation
Rapid and accurate estimation of post-earthquake ground failures and building damage is critical for effective post-disaster responses. Progression in remote sensing technologies has paved the way for rapid acquisition of detailed, localized data, enabling swift hazard estimation through analysis of correlation deviations between pre- and post-quake satellite imagery. However, discerning seismic hazards and their impacts is challenged by overlapping satellite signals from ground failures, building damage, and environmental noise. Previous advancements introduced a novel causal graph-based Bayesian network that continually refines seismic ground failure and building damage estimates derived from satellite imagery, accounting for the intricate interplay among geospatial elements, seismic activity, ground failures, building structures, damages, and satellite data. However, this model's neglect of spatial heterogeneity across different locations in a seismic region limits its precision in capturing the spatial diversity of seismic effects. In this study, we pioneer an approach that accounts for spatial intricacies by introducing a spatial variable influenced by the bilateral filter to capture relationships from surrounding hazards. The bilateral filter considers both spatial proximity of neighboring hazards and their ground shaking intensity values, ensuring refined modeling of spatial relationships. This integration achieves a balance between site-specific characteristics and spatial tendencies, offering a comprehensive representation of the post-disaster landscape. Our model, tested across multiple earthquake events, demonstrates significant improvements in capturing spatial heterogeneity in seismic hazard estimation. The results highlight enhanced accuracy and efficiency in post-earthquake large-scale multi-impact estimation, effectively informing rapid disaster responses.
Categories: cs.AI | __index_level_0__: 512,447
2110.15823
C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation framework for medical Image Segmentation
Deep learning models have obtained state-of-the-art results for medical image analysis. However, when these models are tested on an unseen domain there is a significant performance degradation. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA implements an image- and feature-level adaptation method in a sequential manner. First, images from the source domain are translated to the target domain through an unpaired image-to-image adversarial translation with cycle-consistency loss. Then, a U-Net network is trained with the mapped source domain images and target domain images in an adversarial manner to learn domain-invariant feature representations. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is included during the adversarial training. C-MADA is tested on the task of brain MRI segmentation, obtaining competitive results.
Categories: cs.CV | __index_level_0__: 264,019
1908.04466
Few Labeled Atlases are Necessary for Deep-Learning-Based Segmentation
We tackle biomedical image segmentation in the scenario of only a few labeled brain MR images. This is an important and challenging task in medical applications, where manual annotations are time-consuming. Current multi-atlas based segmentation methods use image registration to warp segments from labeled images onto a new scan. In a different paradigm, supervised learning-based segmentation strategies have gained popularity. These methods typically use relatively large sets of labeled training data, and their behavior in the regime of a few labeled biomedical images has not been thoroughly evaluated. In this work, we provide two important results for segmentation in the scenario where few labeled images are available. First, we propose a straightforward implementation of an efficient semi-supervised learning-based registration method, which we showcase in a multi-atlas segmentation framework. Second, through an extensive empirical study, we evaluate the performance of a supervised segmentation approach, where the training images are augmented via random deformations. Surprisingly, we find that in both paradigms, accurate segmentation is generally possible even in the context of few labeled images.
Categories: cs.CV | __index_level_0__: 141,488
2202.09981
Berman Codes: A Generalization of Reed-Muller Codes that Achieve BEC Capacity
We identify a family of binary codes whose structure is similar to Reed-Muller (RM) codes and which include RM codes as a strict subclass. The codes in this family are denoted as $C_n(r,m)$, and their duals are denoted as $B_n(r,m)$. The length of these codes is $n^m$, where $n \geq 2$, and $r$ is their `order'. When $n=2$, $C_n(r,m)$ is the RM code of order $r$ and length $2^m$. The special case of these codes corresponding to $n$ being an odd prime was studied by Berman (1967) and Blackmore and Norton (2001). Following the terminology introduced by Blackmore and Norton, we refer to $B_n(r,m)$ as the Berman code and $C_n(r,m)$ as the dual Berman code. We identify these codes using a recursive Plotkin-like construction, and we show that these codes have a rich automorphism group, they are generated by the minimum weight codewords, and that they can be decoded up to half the minimum distance efficiently. Using a result of Kumar et al. (2016), we show that these codes achieve the capacity of the binary erasure channel (BEC) under bit-MAP decoding. Furthermore, except double transitivity, they satisfy all the code properties used by Reeves and Pfister to show that RM codes achieve the capacity of binary-input memoryless symmetric channels. Finally, when $n$ is odd, we identify a large class of abelian codes that includes $B_n(r,m)$ and $C_n(r,m)$ and which achieves BEC capacity.
Categories: cs.IT | __index_level_0__: 281,383
2009.10877
Symbolic Execution + Model Counting + Entropy Maximization = Automatic Search Synthesis
We present a method of automatically synthesizing steps to solve search problems. Given a specification of a search problem, our approach uses symbolic execution to analyze the specification in order to extract a set of constraints which model the problem. These constraints are used in a process called model counting, which is leveraged to compute probability distributions relating search steps to predicates about an unknown target. The probability distribution functions determine an information gain objective function based on Shannon entropy, which, when maximized, yields the next optimal step of the search. We prove that our algorithm converges to a correct solution, and discuss computational complexity issues. We implemented a domain specific language in which to write search problem specifications, enabling our static analysis phase. Our experiments demonstrate the effectiveness of our approach on a set of search problem case studies inspired by the domains of software security, computational geometry, AI for games, and user preference ranking.
Categories: cs.IT, Other | __index_level_0__: 197,008
2104.12945
Quantitative Risk Indices for Autonomous Vehicle Training Systems
The development of Autonomous Vehicles (AV) presents an opportunity to save and improve lives. However, achieving SAE Level 5 (full) autonomy will require overcoming many technical challenges. There is a gap in the literature regarding the measurement of safety for self-driving systems. Measuring safety and risk is paramount for the generation of useful simulation scenarios for training and validation of autonomous systems. The limitation of current approaches is the dependence on near-crash data. Although near-miss data can substantially increase scarce available accident data, the definition of a near-miss or near-crash is arbitrary. A promising alternative is the introduction of the Responsibility-Sensitive Safety (RSS) model by Shalev-Shwartz et al., which defines safe lateral and longitudinal distances that can guarantee impossibility of collision under reasonable assumptions for vehicle dynamics. We present a framework that extends the RSS model for cases when reasonable assumptions or safe distances are violated. The proposed framework introduces risk indices that quantify the likelihood of a collision by using vehicle dynamics and driver's risk aversion. The present study concludes with proposed experiments for tuning the parameters of the formulated risk indices.
Categories: cs.RO, cs.SY | __index_level_0__: 232,362
2403.11536
OCR is All you need: Importing Multi-Modality into Image-based Defect Detection System
Automatic optical inspection (AOI) plays a pivotal role in the manufacturing process, predominantly leveraging high-resolution imaging instruments for scanning purposes. It detects anomalies by analyzing image textures or patterns, making it an essential tool in industrial manufacturing and quality control. Despite its importance, the deployment of models for AOI often faces challenges. These include limited sample sizes, which hinder effective feature learning, variations among source domains, and sensitivities to changes in lighting and camera positions during imaging. These factors collectively compromise the accuracy of model predictions. Traditional AOI often fails to capitalize on the rich mechanism-parameter information from machines or inside images, including statistical parameters, which typically benefit AOI classification. To address this, we introduce an external modality-guided data mining framework, primarily rooted in optical character recognition (OCR), to extract statistical features from images as a second modality to enhance performance, termed OANet (Ocr-Aoi-Net). A key aspect of our approach is the alignment of external modality features, extracted using a single modality-aware model, with image features encoded by a convolutional neural network. This synergy enables a more refined fusion of semantic representations from different modalities. We further introduce feature refinement and a gating function in our OANet to optimize the combination of these features, enhancing inference and decision-making capabilities. Experimental outcomes show that our methodology considerably boosts the recall rate of the defect detection model and maintains high robustness even in challenging scenarios.
Categories: cs.AI, cs.LG, cs.CV | __index_level_0__: 438,748
2202.00563
On the Limitations of General Purpose Domain Generalisation Methods
We investigate the fundamental performance limitations of learning algorithms in several Domain Generalisation (DG) settings. Motivated by the difficulty with which previously proposed methods have in reliably outperforming Empirical Risk Minimisation (ERM), we derive upper bounds on the excess risk of ERM, and lower bounds on the minimax excess risk. Our findings show that in all the DG settings we consider, it is not possible to significantly outperform ERM. Our conclusions hold not only in the standard covariate shift setting, but also in two other settings with additional restrictions on how domains can differ. The first constrains all domains to have a non-trivial bound on pairwise distances, as measured by a broad class of integral probability metrics. The second alternate setting considers a restricted class of DG problems where all domains have the same underlying support. Our analysis also suggests how different strategies can be used to optimise the performance of ERM in each of these DG settings. We also experimentally explore hypotheses suggested by our theoretical analysis.
Categories: cs.LG | __index_level_0__: 278,187
2008.12858
Real-world Video Adaptation with Reinforcement Learning
Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). We evaluate recently proposed RL-based ABR methods in Facebook's web-based video streaming platform. Real-world ABR contains several challenges that require customized designs beyond off-the-shelf RL algorithms -- we implement a scalable neural network architecture that supports videos with arbitrary bitrate encodings; we design a training method to cope with the variance resulting from the stochasticity in network conditions; and we leverage constrained Bayesian optimization for reward shaping in order to optimize the conflicting QoE objectives. In a week-long worldwide deployment with more than 30 million video streaming sessions, our RL approach outperforms the existing human-engineered ABR algorithms.
Categories: cs.AI, Other | __index_level_0__: 193,691
2206.14053
Bengali Common Voice Speech Dataset for Automatic Speech Recognition
Bengali is one of the most spoken languages in the world with over 300 million speakers globally. Despite its popularity, research into the development of Bengali speech recognition systems is hindered due to the lack of diverse open-source datasets. As a way forward, we have crowdsourced the Bengali Common Voice Speech Dataset, which is a sentence-level automatic speech recognition corpus. Collected on the Mozilla Common Voice platform, the dataset is part of an ongoing campaign that has led to the collection of over 400 hours of data in 2 months and is growing rapidly. Our analysis shows that this dataset has more speaker, phoneme, and environmental diversity compared to the OpenSLR Bengali ASR dataset, the largest existing open-source Bengali speech dataset. We present insights obtained from the dataset and discuss key linguistic challenges that need to be addressed in future versions. Additionally, we report the current performance of a few Automatic Speech Recognition (ASR) algorithms and set a benchmark for future research.
Categories: cs.SD, cs.CL | __index_level_0__: 305,163
2206.04140
TreeFlow: Going beyond Tree-based Gaussian Probabilistic Regression
Tree-based ensembles are known for their outstanding performance in classification and regression problems characterized by feature vectors represented by mixed-type variables from various ranges and domains. However, considering regression problems, they are primarily designed to provide deterministic responses or model the uncertainty of the output with a Gaussian or parametric distribution. In this work, we introduce TreeFlow, a tree-based approach that combines the benefits of using tree ensembles with the capabilities of modeling flexible probability distributions using normalizing flows. The main idea of the solution is to use a tree-based model as a feature extractor and combine it with a conditional variant of normalizing flow. Consequently, our approach is capable of modeling complex distributions for the regression outputs. We evaluate the proposed method on challenging regression benchmarks with varying volume, feature characteristics, and target dimensionality. We obtain the SOTA results for both probabilistic and deterministic metrics on datasets with multi-modal target distributions and competitive results on unimodal ones compared to tree-based regression baselines.
Categories: cs.LG | __index_level_0__: 301,517
2401.12707
Localized Data-driven Consensus Control
This paper considers a localized data-driven consensus problem for leader-follower multi-agent systems with unknown discrete-time agent dynamics, where each follower computes its local control gain using only their locally collected state and input data. Both noiseless and noisy data-driven consensus protocols are presented, which can handle the challenge of the heterogeneity in control gains caused by the localized data sampling and achieve leader-follower consensus. The design of these data-driven consensus protocols involves low-dimensional linear matrix inequalities. In addition, the results are extended to the case where only the leader's data are collected and exploited. The effectiveness of the proposed methods is illustrated via simulation examples.
Categories: cs.SY | __index_level_0__: 423,471
1904.06654
The dynamic importance of nodes is poorly predicted by static network features
One of the most central questions in network science is: which nodes are most important? Often this question is answered using structural properties such as high connectedness or centrality in the network. However, static structural connectedness does not necessarily translate to dynamical importance. To demonstrate this, we simulate the kinetic Ising spin model on generated networks and one real-world weighted network. The dynamic impact of nodes is assessed by causally intervening on node state probabilities and measuring the effect on the systemic dynamics. The results show that structural features such as network centrality or connectedness are actually poor predictors of the dynamical impact of a node on the rest of the network. A solution is offered in the form of an information theoretical measure named integrated mutual information. The metric is able to accurately predict the dynamically most important node ('driver' node) in networks based on observational data of non-intervened dynamics. We conclude that the driver node(s) in networks are not necessarily the most well-connected or central nodes. Indeed, the common assumption of network structural features being proportional to dynamical importance is false. Consequently, great care should be taken when deriving dynamical importance from network data alone. These results highlight the need for novel inference methods that take both structure and dynamics into account.
Categories: cs.SI | __index_level_0__: 127,602
2102.10172
Channel Estimation and Data Detection Analysis of Massive MIMO with 1-Bit ADCs
We present an analytical framework for the channel estimation and the data detection in massive multiple-input multiple-output uplink systems with 1-bit analog-to-digital converters (ADCs) and i.i.d. Rayleigh fading. First, we provide closed-form expressions of the mean squared error (MSE) of the channel estimation considering the state-of-the-art linear minimum MSE estimator and the class of scaled least-squares estimators. For the data detection, we provide closed-form expressions of the expected value and the variance of the estimated symbols when maximum ratio combining is adopted, which can be exploited to efficiently implement minimum distance detection and, potentially, to design the set of transmit symbols. Our analytical findings explicitly depend on key system parameters such as the signal-to-noise ratio (SNR), the number of user equipments, and the pilot length, thus enabling a precise characterization of the performance of the channel estimation and the data detection with 1-bit ADCs. The proposed analysis highlights a fundamental SNR trade-off, according to which operating at the right noise level significantly enhances the system performance.
Categories: cs.IT | __index_level_0__: 220,994
2403.04398
MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment
Action Quality Assessment (AQA) evaluates diverse skills but models struggle with non-stationary data. We propose Continual AQA (CAQA) to refine models using sparse new data. Feature replay preserves memory without storing raw inputs. However, the misalignment between static old features and the dynamically changing feature manifold causes severe catastrophic forgetting. To address this novel problem, we propose Manifold-Aligned Graph Regularization (MAGR), which first aligns deviated old features to the current feature manifold, ensuring representation consistency. It then constructs a graph jointly arranging old and new features aligned with quality scores. Experiments show MAGR outperforms recent strong baselines with up to 6.56%, 5.66%, 15.64%, and 9.05% correlation gains on the MTL-AQA, FineDiving, UNLV-Dive, and JDM-MSA split datasets, respectively. This validates MAGR for continual assessment challenges arising from non-stationary skill variations. Code is available at https://github.com/ZhouKanglei/MAGR_CAQA.
Categories: cs.CV | __index_level_0__: 435,583
2012.00924
CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Modeling the hand-object (HO) interaction not only requires estimation of the HO pose, but also pays attention to the contact due to their interaction. While significant progress has been made in estimating hand and object poses separately with deep learning methods, simultaneous HO pose estimation and contact modeling has not yet been fully explored. In this paper, we present an explicit contact representation, namely Contact Potential Field (CPF), and a learning-fitting hybrid framework, namely MIHO, to Model the Interaction of Hand and Object. In CPF, we treat each contacting HO vertex pair as a spring-mass system. Hence the whole system forms a potential field with minimal elastic energy at the grasp position. Extensive experiments on the two commonly used benchmarks have demonstrated that our method can achieve state-of-the-art in several reconstruction metrics, and allow us to produce more physically plausible HO pose even when the ground-truth exhibits severe interpenetration or disjointedness. Our code is available at https://github.com/lixiny/CPF.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
209,272
2501.01556
Extended Information Geometry: Large Deviation Theory, Statistical Thermodynamics, and Empirical Counting Frequencies
Combinatorics, probabilities, and measurements are fundamental to understanding information. This work explores how the application of large deviation theory (LDT) in counting phenomena leads to the emergence of various entropy functions, including Shannon's entropy, mutual information, and relative and conditional entropies. In terms of these functions, we reveal an inherent geometrical structure through operations, including contractions, lift, change of basis, and projections. Legendre-Fenchel (LF) transform, which is central to both LDT and Gibbs' method of thermodynamics, offers a novel energetic description of data. The manifold of empirical mean values of statistical data ad infinitum has a parametrization using LF conjugates w.r.t. an entropy function; this gives rise to the additivity known in statistical thermodynamic energetics. This work extends current information geometry to information projection as defined through conditional expectations in Kolmogorov's probability theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
522,116
2210.15427
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
An off-the-shelf model as a commercial service could be stolen by model stealing attacks, posing great threats to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model is stolen from the victim model, which gains more and more attention nowadays. Previous methods always leverage the transferable adversarial examples as the model fingerprint, which is sensitive to adversarial defense or transfer learning scenarios. To address this issue, we consider the pairwise relationship between samples instead and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w that selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m that selects CutMix Augmented samples as model inputs, without the need for training the surrogate models or generating adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning, and detects the stolen models with the best performance in terms of AUC across different datasets and model architectures. The codes are available at https://github.com/guanjiyang/SAC.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
326,944
1809.09329
Collaborative Learning for Extremely Low Bit Asymmetric Hashing
Hashing techniques are in great demand for a wide range of real-world applications such as image retrieval and network compression. Nevertheless, existing approaches can hardly guarantee satisfactory performance with extremely low-bit (e.g., 4-bit) hash codes due to the severe information loss and the shrinking of the discrete solution space. In this paper, we propose a novel \textit{Collaborative Learning} strategy that is tailored for generating high-quality low-bit hash codes. The core idea is to jointly distill bit-specific and informative representations for a group of pre-defined code lengths. The learning of short hash codes among the group can benefit from the manifold shared with other long codes, where multiple views from different hash codes provide supplementary guidance and regularization, making the convergence faster and more stable. To achieve that, an asymmetric hashing framework with two variants of multi-head embedding structures is derived, termed Multi-head Asymmetric Hashing (MAH), leading to great efficiency of training and querying. Extensive experiments on three benchmark datasets have been conducted to verify the superiority of the proposed MAH, and have shown that the 8-bit hash codes generated by MAH achieve a Mean Average Precision (MAP) score of $94.3\%$ on the CIFAR-10 dataset, which significantly surpasses the performance of the 48-bit codes of state-of-the-art methods in image retrieval tasks.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
108,684
2310.08660
Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach
In this work, we consider the problem of network parameter optimization for rate maximization. We frame this as a joint optimization problem of power control, beamforming, and interference cancellation. We consider the setting where multiple Base Stations (BSs) communicate with multiple user equipment (UEs). Because of the exponential computational complexity of brute-force search, we instead solve this nonconvex optimization problem using deep reinforcement learning (RL) techniques. Modern communication systems are notorious for the difficulty of modeling their behavior exactly. This limits us in using RL-based algorithms, as interaction with the environment is needed for the agent to explore and learn efficiently. Further, it is ill-advised to deploy the algorithm in the real world for exploration and learning because of the high cost of failure. In contrast to previously proposed RL-based solutions, such as deep Q-network (DQN) based control, we suggest an offline model-based approach. We specifically consider discrete batch-constrained deep Q-learning (BCQ) and show that performance similar to DQN can be achieved with only a fraction of the data and without exploring. This maximizes sample efficiency and minimizes risk in deploying a new algorithm to commercial networks. We provide the entire project resource, including code and data, at the following link: https://github.com/Heasung-Kim/safe-rl-deployment-for-5g.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
399,472
2408.01892
Re-ENACT: Reinforcement Learning for Emotional Speech Generation using Actor-Critic Strategy
In this paper, we propose the first method to modify the prosodic features of a given speech signal using an actor-critic reinforcement learning strategy. Our approach uses a Bayesian framework to identify contiguous segments of importance that link segments of the given utterances to the perception of emotions in humans. We train a neural network to produce the variational posterior of a collection of Bernoulli random variables; our model applies a Markov prior on it to ensure continuity. A sample from this distribution is used for downstream emotion prediction. Further, we train the neural network to predict a soft assignment over emotion categories as the target variable. In the next step, we modify the prosodic features (pitch, intensity, and rhythm) of the masked segment to increase the score of the target emotion. We employ actor-critic reinforcement learning to train the prosody modifier by discretizing the space of modifications. Further, it provides a simple solution to the problem of gradient computation through the WSOLA operation for rhythm manipulation. Our experiments demonstrate that this framework changes the perceived emotion of a given speech utterance to the target. Further, we show that our unified technique is on par with state-of-the-art emotion conversion models from supervised and unsupervised domains that require pairwise training.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
478,409
1612.08936
Partial Membership Latent Dirichlet Allocation
Topic models (e.g., pLSA, LDA, sLDA) have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership latent Dirichlet allocation (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
66,133
2110.13609
Resolving Anomalies in the Behaviour of a Modularity Inducing Problem Domain with Distributional Fitness Evaluation
Discrete gene regulatory networks (GRNs) play a vital role in the study of robustness and modularity. A common method of evaluating the robustness of GRNs is to measure their ability to regulate a set of perturbed gene activation patterns back to their unperturbed forms. Usually, perturbations are obtained by collecting random samples produced by a predefined distribution of gene activation patterns. This sampling method introduces stochasticity, in turn inducing dynamicity. This dynamicity is imposed on top of an already complex fitness landscape. So where sampling is used, it is important to understand which effects arise from the structure of the fitness landscape, and which arise from the dynamicity imposed on it. Stochasticity of the fitness function also causes difficulties in reproducibility and in post-experimental analyses. We develop a deterministic distributional fitness evaluation by considering the complete distribution of gene activity patterns, so as to avoid stochasticity in fitness assessment. This fitness evaluation facilitates repeatability. Its determinism permits us to ascertain theoretical bounds on the fitness, and thus to identify whether the algorithm has reached a global optimum. It enables us to differentiate the effects of the problem domain from those of the noisy fitness evaluation, and thus to resolve two remaining anomalies in the behaviour of the problem domain of~\citet{espinosa2010specialization}. We also reveal some properties of solution GRNs that lead them to be robust and modular, leading to a deeper understanding of the nature of the problem domain. We conclude by discussing potential directions toward simulating and understanding the emergence of modularity in larger, more complex domains, which is key both to generating more useful modular solutions, and to understanding the ubiquity of modularity in biological systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
263,246
1008.1610
New Constant-Weight Codes from Propagation Rules
This paper proposes some simple propagation rules which give rise to new binary constant-weight codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
7,233
1509.09152
Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)
The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform, dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. This approach is structured in three layers, each providing their own key innovative points: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalization), (ii) deployment of a mediation information system able to computerize the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services) (iii) the management of the agility of the obtained collaborative network of organizations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
47,456
1105.5427
Combining Lagrangian Decomposition and Excessive Gap Smoothing Technique for Solving Large-Scale Separable Convex Optimization Problems
A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap and smoothing. The main advantage of this algorithm is that it dynamically updates the smoothness parameters which leads to numerically robust performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is $O(\frac{1}{k})$, where $k$ is the iteration counter. In the second part of the paper, the algorithm is coupled with a dual scheme to construct a switching variant of the dual decomposition. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
10,514
2410.21473
Second-Order Analysis of CSMA Protocols for Age-of-Information Minimization
This paper introduces a general framework to analyze and optimize age-of-information (AoI) in CSMA protocols for distributed uplink transmissions. The proposed framework combines two theoretical approaches. First, it employs second-order analysis that characterizes all random processes by their respective means and temporal variances and approximates AoI as a function of the mean and temporal variance of the packet delivery process. Second, it employs mean-field approximation to derive the mean and temporal variance of the packet delivery process for one node in the presence of interference from others. To demonstrate the utility of this framework, this paper applies it to the age-threshold ALOHA policy and identifies parameter settings that outperform those previously suggested as optimal in the original work that introduced this policy. Simulation results demonstrate that our framework provides precise AoI approximations and achieves significantly better performance, even in networks with a small number of users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
503,256
2409.13136
Federated Learning with Label-Masking Distillation
Federated learning provides a privacy-preserving manner to collaboratively train models on data distributed over multiple local clients via the coordination of a global server. In this paper, we focus on label distribution skew in federated learning, where due to the different user behavior of the client, label distributions between different clients are significantly different. When faced with such cases, most existing methods will lead to a suboptimal optimization due to the inadequate utilization of label distribution information in clients. Inspired by this, we propose a label-masking distillation approach termed FedLMD to facilitate federated learning via perceiving the various label distributions of each client. We classify the labels into majority and minority labels based on the number of examples per class during training. The client model learns the knowledge of majority labels from local data. The process of distillation masks out the predictions of majority labels from the global model, so that it can focus more on preserving the minority label knowledge of the client. A series of experiments show that the proposed approach can achieve state-of-the-art performance in various cases. Moreover, considering the limited resources of the clients, we propose a variant FedLMD-Tf that does not require an additional teacher, which outperforms previous lightweight approaches without increasing computational costs. Our code is available at https://github.com/wnma3mz/FedLMD.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
489,858
2209.04356
Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health
In this paper, we consider a risk-averse multi-armed bandit (MAB) problem where the goal is to learn a policy that minimizes the risk of low expected return, as opposed to maximizing the expected return itself, which is the objective in the usual approach to risk-neutral MAB. Specifically, we formulate this problem as a transfer learning problem between an expert and a learner agent in the presence of contexts that are only observable by the expert but not by the learner. Thus, such contexts are unobserved confounders (UCs) from the learner's perspective. Given a dataset generated by the expert that excludes the UCs, the goal for the learner is to identify the true minimum-risk arm with fewer online learning steps, while avoiding possible biased decisions due to the presence of UCs in the expert's data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
316,759
2009.11264
On the Ability and Limitations of Transformers to Recognize Formal Languages
Transformers have supplanted recurrent models in a large number of NLP tasks. However, the differences in their abilities to model different syntactic properties remain largely unknown. Past works suggest that LSTMs generalize very well on regular languages and have close connections with counter languages. In this work, we systematically study the ability of Transformers to model such languages as well as the role of its individual components in doing so. We first provide a construction of Transformers for a subclass of counter languages, including well-studied languages such as n-ary Boolean Expressions, Dyck-1, and its generalizations. In experiments, we find that Transformers do well on this subclass, and their learned mechanism strongly correlates with our construction. Perhaps surprisingly, in contrast to LSTMs, Transformers do well only on a subset of regular languages with degrading performance as we make languages more complex according to a well-known measure of complexity. Our analysis also provides insights on the role of self-attention mechanism in modeling certain behaviors and the influence of positional encoding schemes on the learning and generalization abilities of the model.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
197,127
1411.0729
The Private and Public Correlation Cost of Three Random Variables with Collaboration
In this paper we consider the problem of generating arbitrary three-party correlations from a combination of public and secret correlations. Two parties -- called Alice and Bob -- share perfectly correlated bits that are secret from a collaborating third party, Charlie. At the same time, all three parties have access to a separate source of correlated bits, and their goal is to convert these two resources into multiple copies of some given tripartite distribution $P_{XYZ}$. We obtain a single-letter characterization of the trade-off between public and private bits that are needed to achieve this task. The rate of private bits is shown to generalize Wyner's classic notion of common information held between a pair of random variables. The problem we consider is also closely related to the task of secrecy formation in which $P_{XYZ}$ is generated using public communication and local randomness but with Charlie functioning as an adversary instead of a collaborator. We describe in detail the differences between the collaborative and adversarial scenarios.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
37,276
2410.13299
LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models
The evolving capabilities of large language models are accompanied by growing sizes and deployment costs, necessitating effective inference optimisation techniques. We propose a novel pruning method utilising centrality measures from graph theory, reducing both the computational requirements and the memory footprint of these models. Specifically, we devise a method for creating a weighted directed acyclic graph representation of multilayer perceptrons, to which we apply a modified version of the weighted PageRank centrality measure to compute node importance scores. In combination with uniform pruning, this leads to structured sparsity. We call this pruning method MLPRank. Furthermore, we introduce an extension to decoder-only transformer models and call it LLMRank. Both variants demonstrate strong performance, with MLPRank on average yielding 6.09% higher accuracy retention than three popular baselines, and LLMRank yielding 13.42% compared to two popular baselines. Code is available at https://github.com/amazon-science/llm-rank-pruning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
499,471
1703.09784
Perception Driven Texture Generation
This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation focused on either synthesis from examples or generation from procedural models. Generating textures from perceptual attributes has not been well studied yet. Meanwhile, perceptual attributes, such as directionality, regularity, and roughness, are important factors for human observers to describe a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, while only random noise and user-defined perceptual attributes are required as input. In this model, a pre-trained convolutional neural network is integrated into the adversarial framework, which can drive the generated textures to possess given perceptual attributes. An important aspect of the proposed model is that, if we change one of the input perceptual features, the corresponding appearance of the generated textures will also be changed. We design several experiments to validate the effectiveness of the proposed method. The results show that the proposed method can produce high-quality texture images with the desired perceptual properties.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
70,799
2403.16728
Improving Diffusion Models' Data-Corruption Resistance using Scheduled Pseudo-Huber Loss
Diffusion models are known to be vulnerable to outliers in training data. In this paper we study an alternative diffusion loss function, which can preserve the high quality of generated data like the original squared $L_{2}$ loss while at the same time being robust to outliers. We propose to use pseudo-Huber loss function with a time-dependent parameter to allow for the trade-off between robustness on the most vulnerable early reverse-diffusion steps and fine details restoration on the final steps. We show that pseudo-Huber loss with the time-dependent parameter exhibits better performance on corrupted datasets in both image and audio domains. In addition, the loss function we propose can potentially help diffusion models to resist dataset corruption while not requiring data filtering or purification compared to conventional training algorithms.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
441,168
1303.4247
On the efficiency of the new Italian Senate and the role of 5 Stars Movement: Comparison among different possible scenarios by means of a virtual Parliament model
The recent 2013 Italian elections are over, and the situation that President Napolitano will soon have to settle for the formation of the new government is not the simplest one. After twenty years of bipolarism (more or less effective), during which we were accustomed to a tight battle between two great political coalitions, the center-right and the center-left, the new Parliament now contains four political formations. But is this result really, as common sense would seem to suggest, the prelude to an inevitable phase of ungovernability? Can a Parliament with changing majorities in the Senate be as efficient as a Parliament with a large majority in both Houses? In this short note we try to answer these questions, going beyond common sense and analyzing the current political situation by means of a scientific, original, and innovative instrument, i.e. an "agent-based simulation". We show that the situation is not as dramatic as it sounds, but contains within itself potentially positive aspects, as long as one makes the most appropriate choices.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
22,995
2402.01203
Neural Language of Thought Models
The Language of Thought Hypothesis suggests that human cognition operates on a structured, language-like system of mental representations. While neural language models can naturally benefit from the compositional structure inherently and explicitly expressed in language data, learning such representations from non-linguistic general observations, like images, remains a challenge. In this work, we introduce the Neural Language of Thought Model (NLoTM), a novel approach for unsupervised learning of LoTH-inspired representation and generation. NLoTM comprises two key components: (1) the Semantic Vector-Quantized Variational Autoencoder, which learns hierarchical, composable discrete representations aligned with objects and their properties, and (2) the Autoregressive LoT Prior, an autoregressive transformer that learns to generate semantic concept tokens compositionally, capturing the underlying data distribution. We evaluate NLoTM on several 2D and 3D image datasets, demonstrating superior performance in downstream tasks, out-of-distribution generalization, and image generation quality compared to patch-based VQ-VAE and continuous object-centric representations. Our work presents a significant step towards creating neural networks exhibiting more human-like understanding by developing LoT-like representations and offers insights into the intersection of cognitive science and machine learning.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
425,910
2304.09285
Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation
Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity -- corridor, activity, view, and frame value -- simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 93.8% on simulated sequences and 67.57% in cadaver across all granularity levels, with up to 88% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
359,000
2305.09149
Constructing Feedback Linearizable Discretizations for Continuous-Time Systems using Retraction Maps
Control laws for continuous-time dynamical systems are most often implemented via digital controllers using a sample-and-hold technique. Numerical discretization of the continuous system is an integral part of subsequent analysis. Feedback linearizability of such sampled systems is dependent upon the choice of discretization map or technique. In this article, for feedback linearizable continuous-time systems, we utilize the idea of retraction maps to construct discretizations that are feedback linearizable as well. We also propose a method to functionally compose discretizations to obtain higher-order integrators that are feedback linearizable.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
364,534
2107.07630
Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi
Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference of AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
246,476
2106.09109
QuantumFed: A Federated Learning Framework for Collaborative Quantum Training
With the fast development of quantum computing and deep learning, quantum neural networks have attracted great attention recently. By leveraging the power of quantum computing, deep neural networks can potentially overcome the computational power limitations of classic machine learning. However, when multiple quantum machines wish to train a global model using the local data on each machine, it may be very difficult to copy the data into one machine and train the model there. Therefore, a collaborative quantum neural network framework is necessary. In this article, we borrow the core idea of federated learning to propose QuantumFed, a quantum federated learning framework in which multiple quantum nodes with local quantum data train a model together. Our experiments show the feasibility and robustness of our framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
241,540
1906.11559
Aerial Base Stations Deployment in 6G Cellular Networks using Tethered Drones: The Mobility and Endurance Trade-off
Airborne base stations (carried by drones) have a great potential to enhance coverage and capacity of cellular networks. Multiple scenarios and use cases will highly benefit from such technology such as (i) offloading terrestrial base stations (BSs) in dense and urban areas, and (ii) providing coverage for rural areas. However, one of the main challenges facing the deployment of airborne BSs is the limited available energy at the drone, which limits the flight time. In fact, most of the currently used unmanned aerial vehicles (UAVs) can only operate for one hour maximum. This limits the performance of the UAV-enabled cellular network due to the need to frequently visit the ground station to recharge, leaving the UAV's coverage area temporarily out of service. In this article, we propose a new UAV-enabled cellular network setup based on tethered UAVs (TUAVs). In the proposed setup, the TUAV is connected to a ground station (GS) through a tether, which provides the TUAV with both energy and data. This enables flights that can last for days. We describe in detail the components of the proposed system. Furthermore, we list the main advantages of a TUAV-enabled cellular network compared to typical untethered UAVs. Next, we discuss the potential applications and use cases for TUAVs. Finally, we discuss the challenges, design considerations, and future research directions to realize the proposed setup.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
136,694
2007.12813
All-Optical Information Processing Capacity of Diffractive Surfaces
Precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances around the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine learning tasks through light-matter interaction and diffraction. Here, we analyze the information processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view, and exhibit depth advantages in terms of their statistical inference, learning and generalization capabilities for different image classification tasks, when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including e.g., plasmonic and/or dielectric-based metasurfaces and flat optics that can be used to form all-optical processors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
188,929
2002.07341
Joint Frame Design and Resource Allocation for Ultra-Reliable and Low-Latency Vehicular Networks
The rapid development of the fifth generation mobile communication systems accelerates the implementation of vehicle-to-everything communications. Compared with the other types of vehicular communications, vehicle-to-vehicle (V2V) communications mainly focus on the exchange of driving safety information with neighboring vehicles, which requires ultra-reliable and low-latency communications (URLLCs). However, the frame size is significantly shortened in V2V URLLCs because of the rigorous latency requirements, and thus the overhead is no longer negligible compared with the payload information from the perspective of size. In this paper, we investigate the frame design and resource allocation for an urban V2V URLLC system in which the uplink cellular resources are reused at the underlay mode. Specifically, we first analyze the lower bounds of performance for V2V pairs and cellular users based on the regular pilot scheme and superimposed pilot scheme. Then, we propose a frame design algorithm and a semi-persistent scheduling algorithm to achieve the optimal frame design and resource allocation with the reasonable complexity. Finally, our simulation results show that the proposed frame design and resource allocation scheme can greatly satisfy the URLLC requirements of V2V pairs and guarantee the communication quality of cellular users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
164,437
2112.00665
Iterative Saliency Enhancement using Superpixel Similarity
Saliency Object Detection (SOD) has several applications in image analysis. The methods have evolved from image-intrinsic to object-inspired (deep-learning-based) models. When a model fails, however, there is no alternative to enhance its saliency map. We fill this gap by introducing a hybrid approach, named \textit{Iterative Saliency Enhancement over Superpixel Similarity} (ISESS), that iteratively generates enhanced saliency maps by executing two operations alternately: object-based superpixel segmentation and superpixel-based saliency estimation -- cycling operations never exploited. ISESS estimates seeds for superpixel delineation from a given saliency map and defines superpixel queries in the foreground and background. A new saliency map results from color similarities between queries and superpixels at each iteration. The process repeats and, after a given number of iterations, the generated saliency maps are combined into one by cellular automata. Finally, the resulting map is merged with the initial one by the maximum between their average values per superpixel. We demonstrate that our hybrid model can consistently outperform three state-of-the-art deep-learning-based methods on five image datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,209
2312.11843
Enhancing Social Decision-Making of Autonomous Vehicles: A Mixed-Strategy Game Approach With Interaction Orientation Identification
The integration of Autonomous Vehicles (AVs) into existing human-driven traffic systems poses considerable challenges, especially within environments where human and machine interactions are frequent and complex, such as at unsignalized intersections. To deal with these challenges, we introduce a novel framework predicated on dynamic and socially-aware decision-making game theory to augment the social decision-making prowess of AVs in mixed driving environments. This comprehensive framework is delineated into three primary modules: Interaction Orientation Identification, Mixed-Strategy Game Modeling, and Expert Mode Learning. We introduce 'Interaction Orientation' as a metric to evaluate the social decision-making tendencies of various agents, incorporating both environmental factors and trajectory characteristics. The mixed-strategy game model developed as part of this framework considers the evolution of future traffic scenarios and includes a utility function that balances safety, operational efficiency, and the unpredictability of environmental conditions. To adapt to real-world driving complexities, our framework utilizes a dynamic optimization framework for assimilating and learning from expert human driving strategies. These strategies are compiled into a comprehensive strategy library, serving as a reference for future decision-making processes. The proposed approach is validated through extensive driving datasets and human-in-loop driving experiments, and the results demonstrate marked enhancements in decision timing and precision.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
416,740
1702.06011
A Downstream Crosstalk Channel Estimation Method for Mix of Legacy and Vectoring-Enabled VDSL
With the latest technology of vectoring, DSL data rates in the order of 100Mbps have become a reality that is under field deployment. The key is to cancel crosstalk from other lines, which is also known as multiuser MIMO cancellation for wireless communications. During the DSL system upgrade phase of field deployment, mix of legacy and vectoring-enabled VDSL lines is inevitable and a channel estimation solution for the entire mix is needed before vectoring can be enforced. This paper describes a practical method for crosstalk channel estimation for downstream vectoring, assuming that a vectoring-enabled DSLAM forces DMT symbol-level timing to be aligned for all of the lines, but also assuming that the location of synch symbols are aligned only among vectoring-enabled lines. Each vectoring-enabled receiver is capable of reporting error samples to vectoring-DSLAM. The estimation method is not only practical, but also matches the performance of Maximum-Likelihood estimator for the selected training sequences.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,515
2012.14743
BayesCard: Revitilizing Bayesian Frameworks for Cardinality Estimation
Cardinality estimation (CardEst) is an essential component in query optimizers and a fundamental problem in DBMS. A desired CardEst method should attain good algorithm performance, be stable to varied data settings, and be friendly to system deployment. However, no existing CardEst method can fulfill the three criteria at the same time. Traditional methods often have significant algorithm drawbacks such as large estimation errors. Recently proposed deep learning based methods largely improve the estimation accuracy but their performance can be greatly affected by data and they are often difficult to deploy in systems. In this paper, we revitalize the Bayesian networks (BN) for CardEst by incorporating the techniques of probabilistic programming languages. We present BayesCard, the first framework that inherits the advantages of BNs, i.e., high estimation accuracy and interpretability, while overcoming their drawbacks, i.e. low structure learning and inference efficiency. This makes BayesCard a perfect candidate for commercial DBMS deployment. Our experimental results on several single-table and multi-table benchmarks indicate BayesCard's superiority over existing state-of-the-art CardEst methods: BayesCard achieves comparable or better accuracy, 1-2 orders of magnitude faster inference time, 1-3 orders faster training time, 1-3 orders smaller model size, and 1-2 orders faster updates. Meanwhile, BayesCard keeps stable performance when varying data with different settings. We also deploy BayesCard into PostgreSQL. On the IMDB benchmark workload, it improves the end-to-end query time by 13.3%, which is very close to the optimal result of 14.2% using an oracle of true cardinality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
213,596
1611.04704
SIR Asymptotics in General Network Models
In the performance analyses of wireless networks, asymptotic quantities and properties often provide useful results and insights. The asymptotic analyses become especially important when complete analytical expressions of the performance metrics of interest are not available, which is often the case if one departs from very specific modeling assumptions. In this paper, we consider the asymptotics of the SIR distribution in general wireless network models, including ad hoc and cellular networks, simple and non-simple point processes, and singular and bounded path loss models, for which, in most cases, finding analytical expressions of the complete SIR distribution seems hopeless. We show that the lower tails of the SIR distributions decay polynomially with the order solely determined by the path loss exponent or the fading parameter, while the upper tails decay exponentially, with the exception of cellular networks with singular path loss. In addition, we analyze the impact of the nearest interferer on the asymptotic properties of the SIR distributions, and we formulate three crisp conjectures that, if true, determine the asymptotic behavior in many cases based on the large-scale path loss properties of the desired signal and/or nearest interferer only.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
63,888
2210.08248
A Closer Look at the Calibration of Differentially Private Learners
We systematically study the calibration of classifiers trained with differentially private stochastic gradient descent (DP-SGD) and observe miscalibration across a wide range of vision and language tasks. Our analysis identifies per-example gradient clipping in DP-SGD as a major cause of miscalibration, and we show that existing approaches for improving calibration with differential privacy only provide marginal improvements in calibration error while occasionally causing large degradations in accuracy. As a solution, we show that differentially private variants of post-processing calibration methods such as temperature scaling and Platt scaling are surprisingly effective and have negligible utility cost to the overall model. Across 7 tasks, temperature scaling and Platt scaling with DP-SGD result in an average 3.1-fold reduction in the in-domain expected calibration error and only incur at most a minor percent drop in accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
324,058
2106.01061
Rethinking Cross-modal Interaction from a Top-down Perspective for Referring Video Object Segmentation
Referring video object segmentation (RVOS) aims to segment video objects with the guidance of natural language reference. Previous methods typically tackle RVOS through directly grounding linguistic reference over the image lattice. Such bottom-up strategy fails to explore object-level cues, easily leading to inferior results. In this work, we instead put forward a two-stage, top-down RVOS solution. First, an exhaustive set of object tracklets is constructed by propagating object masks detected from several sampled frames to the entire video. Second, a Transformer-based tracklet-language grounding module is proposed, which models instance-level visual relations and cross-modal interactions simultaneously and efficiently. Our model ranks first place on CVPR2021 Referring Youtube-VOS challenge.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
238,366
2502.04412
Decoder-Only LLMs are Better Controllers for Diffusion Models
Groundbreaking advancements in text-to-image generation have recently been achieved with the emergence of diffusion models. These models exhibit a remarkable ability to generate highly artistic and intricately detailed images based on textual prompts. However, obtaining desired generation outcomes often necessitates repetitive trials of manipulating text prompts just like casting spells on a magic mirror, and the reason behind that is the limited capability of semantic understanding inherent in current image generation models. Specifically, existing diffusion models encode the text prompt input with a pre-trained encoder structure, which is usually trained on a limited number of image-caption pairs. The state-of-the-art large language models (LLMs) based on the decoder-only structure have shown a powerful semantic understanding capability as their architectures are more suitable for training on very large-scale unlabeled data. In this work, we propose to enhance text-to-image diffusion models by borrowing the strength of semantic understanding from large language models, and devise a simple yet effective adapter to allow the diffusion models to be compatible with the decoder-only structure. Meanwhile, we also provide a supporting theoretical analysis with various architectures (e.g., encoder-only, encoder-decoder, and decoder-only), and conduct extensive empirical evaluations to verify its effectiveness. The experimental results show that the enhanced models with our adapter module are superior to the state-of-the-art models in terms of text-to-image generation quality and reliability.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
531,149
1705.06908
Unbiased estimates for linear regression via volume sampling
Given a full rank matrix $X$ with more columns than rows, consider the task of estimating the pseudo inverse $X^+$ based on the pseudo inverse of a sampled subset of columns (of size at least the number of rows). We show that this is possible if the subset of columns is chosen proportional to the squared volume spanned by the rows of the chosen submatrix (ie, volume sampling). The resulting estimator is unbiased and surprisingly the covariance of the estimator also has a closed form: It equals a specific factor times $X^{+\top}X^+$. Pseudo inverse plays an important part in solving the linear least squares problem, where we try to predict a label for each column of $X$. We assume labels are expensive and we are only given the labels for the small subset of columns we sample from $X$. Using our methods we show that the weight vector of the solution for the sub problem is an unbiased estimator of the optimal solution for the whole problem based on all column labels. We believe that these new formulas establish a fundamental connection between linear least squares and volume sampling. We use our methods to obtain an algorithm for volume sampling that is faster than state-of-the-art and for obtaining bounds for the total loss of the estimated least-squares solution on all labeled columns.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
73,704
1711.04731
Evaluating prose style transfer with the Bible
In the prose style transfer task a system, provided with text input and a target prose style, produces output which preserves the meaning of the input text but alters the style. These systems require parallel data for evaluation of results and usually make use of parallel data for training. Currently, there are few publicly available corpora for this task. In this work, we identify a high-quality source of aligned, stylistically distinct text in different versions of the Bible. We provide a standardized split, into training, development and testing data, of the public domain versions in our corpus. This corpus is highly parallel since many Bible versions are included. Sentences are aligned due to the presence of chapter and verse numbers within all versions of the text. In addition to the corpus, we present the results, as measured by the BLEU and PINC metrics, of several models trained on our data which can serve as baselines for future research. While we present these data as a style transfer corpus, we believe that it is of unmatched quality and may be useful for other natural language tasks as well.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
84,433
2207.01062
Distributed Online System Identification for LTI Systems Using Reverse Experience Replay
Identification of linear time-invariant (LTI) systems plays an important role in control and reinforcement learning. Both asymptotic and finite-time offline system identification are well-studied in the literature. For online system identification, the idea of stochastic-gradient descent with reverse experience replay (SGD-RER) was recently proposed, where the data sequence is stored in several buffers and the stochastic-gradient descent (SGD) update performs backward in each buffer to break the time dependency between data points. Inspired by this work, we study distributed online system identification of LTI systems over a multi-agent network. We consider agents as identical LTI systems, and the network goal is to jointly estimate the system parameters by leveraging the communication between agents. We propose DSGD-RER, a distributed variant of the SGD-RER algorithm, and theoretically characterize the improvement of the estimation error with respect to the network size. Our numerical experiments certify the reduction of estimation error as the network size grows.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
306,020
2403.13413
A Cox rate-and-state model for monitoring seismic hazard in the Groningen gas field
To monitor the seismic hazard in the Groningen gas field, we modify the rate-and-state model that relates changes in pore pressure to induced seismic hazard by allowing for noise in pore pressure measurements and by explicitly taking into account gas production volumes. We analyse the first and second-moment structure of the resulting Cox process, propose an unbiased estimating equation approach for the unknown model parameters and derive the posterior distribution of the driving random measure. We use a parallel Metropolis adjusted Langevin algorithm for sampling from the posterior and to monitor the hazard.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
439,628
2412.09633
A Novel Wavelet-base Algorithm for Reconstruction of the Time-Domain Impulse Response from Band-limited Scattering Parameters with Applications
In this paper, we introduce a novel wavelet-based algorithm for reconstructing time-domain impulse responses from band-limited scattering parameters (frequency-domain data) with a particular focus on ship hull applications. We establish the algorithm and demonstrate its convergence, as well as its efficiency for a class of functions that can be expanded as exponential functions. We provide simulation results to validate our numerical results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
516,576
1901.06086
WALL-E: An Efficient Reinforcement Learning Research Framework
There are two halves to RL systems: experience collection time and policy learning time. For a large number of samples in rollouts, experience collection time is the major bottleneck. Thus, it is necessary to speed up the rollout generation time with multi-process architecture support. Our work, dubbed WALL-E, utilizes multiple rollout samplers running in parallel to rapidly generate experience. Due to our parallel samplers, we experience not only faster convergence times, but also higher average reward thresholds. For example, on the MuJoCo HalfCheetah-v2 task, with $N = 10$ parallel sampler processes, we are able to achieve much higher average return than those from using only a single process architecture.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
118,922
2401.04550
WaveletFormerNet: A Transformer-based Wavelet Network for Real-world Non-homogeneous and Dense Fog Removal
Although deep convolutional neural networks have achieved remarkable success in removing synthetic fog, it is essential to be able to process images taken in complex foggy conditions, such as dense or non-homogeneous fog, in the real world. However, the haze distribution in the real world is complex, and downsampling can lead to color distortion or loss of detail in the output results as the resolution of a feature map or image resolution decreases. In addition to the challenges of obtaining sufficient training data, overfitting can also arise in deep learning techniques for foggy image processing, which can limit the generalization abilities of the model, posing challenges for its practical applications in real-world scenarios. Considering these issues, this paper proposes a Transformer-based wavelet network (WaveletFormerNet) for real-world foggy image recovery. We embed the discrete wavelet transform into the Vision Transformer by proposing the WaveletFormer and IWaveletFormer blocks, aiming to alleviate texture detail loss and color distortion in the image due to downsampling. We introduce parallel convolution in the Transformer block, which allows for the capture of multi-frequency information in a lightweight mechanism. Additionally, we have implemented a feature aggregation module (FAM) to maintain image resolution and enhance the feature extraction capacity of our model, further contributing to its impressive performance in real-world foggy image recovery tasks. Extensive experiments demonstrate that our WaveletFormerNet performs better than state-of-the-art methods, as shown through quantitative and qualitative evaluations of minor model complexity. Additionally, our satisfactory results on real-world dust removal and application tests showcase the superior generalization ability and improved performance of WaveletFormerNet in computer vision-related applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,466
2403.01342
LM4OPT: Unveiling the Potential of Large Language Models in Formulating Mathematical Optimization Problems
In the rapidly evolving field of natural language processing, the translation of linguistic descriptions into mathematical formulation of optimization problems presents a formidable challenge, demanding intricate understanding and processing capabilities from Large Language Models (LLMs). This study compares prominent LLMs, including GPT-3.5, GPT-4, and Llama-2-7b, in zero-shot and one-shot settings for this task. Our findings show GPT-4's superior performance, particularly in the one-shot scenario. A central part of this research is the introduction of `LM4OPT,' a progressive fine-tuning framework for Llama-2-7b that utilizes noisy embeddings and specialized datasets. However, this research highlights a notable gap in the contextual understanding capabilities of smaller models such as Llama-2-7b compared to larger counterparts, especially in processing lengthy and complex input contexts. Our empirical investigation, utilizing the NL4Opt dataset, unveils that GPT-4 surpasses the baseline performance established by previous research, achieving an F1-score of 0.63, solely based on the problem description in natural language, and without relying on any additional named entity information. GPT-3.5 follows closely, both outperforming the fine-tuned Llama-2-7b. These findings not only benchmark the current capabilities of LLMs in a novel application area but also lay the groundwork for future improvements in mathematical formulation of optimization problems from natural language input.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
434,368
2105.04799
A Feature Fusion-Net Using Deep Spatial Context Encoder and Nonstationary Joint Statistical Model for High Resolution SAR Image Classification
Convolutional neural networks (CNNs) have been applied to learn spatial features for high-resolution (HR) synthetic aperture radar (SAR) image classification. However, there has been little work on integrating the unique statistical distributions of SAR images, which can reveal physical properties of terrain objects, into CNNs in a supervised feature learning framework. To address this problem, a novel end-to-end supervised classification method is proposed for HR SAR images by considering both spatial context and statistical features. First, to extract more effective spatial features from SAR images, a new deep spatial context encoder network (DSCEN) is proposed, which is a lightweight structure and can be effectively trained with a small number of samples. Meanwhile, to enhance the diversity of statistics, the nonstationary joint statistical model (NS-JSM) is adopted to form the global statistical features. Specifically, SAR images are transformed into the Gabor wavelet domain and the produced multi-subband magnitudes and phases are modeled by the log-normal and uniform distribution. The covariance matrix is further utilized to capture the inter-scale and intra-scale nonstationary correlation between the statistical subbands and make the joint statistical features more compact and distinguishable. Considering complementary advantages, a feature fusion network (Fusion-Net) based on group compression and smooth normalization is constructed to embed the statistical features into the spatial features and optimize the fusion feature representation. As a result, our model can learn the discriminative features and improve the final classification performance. Experiments on four HR SAR images validate the superiority of the proposed method over other related algorithms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
234,622
2208.13653
Learning Binary and Sparse Permutation-Invariant Representations for Fast and Memory Efficient Whole Slide Image Search
Learning suitable whole slide image (WSI) representations for efficient retrieval systems is a non-trivial task. The WSI embeddings obtained from current methods are in Euclidean space, which is not ideal for efficient WSI retrieval. Furthermore, most of the current methods require high GPU memory due to the simultaneous processing of multiple sets of patches. To address these challenges, we propose a novel framework for learning binary and sparse WSI representations utilizing a deep generative modelling and the Fisher Vector. We introduce new loss functions for learning sparse and binary permutation-invariant WSI representations that employ instance-based training achieving better memory efficiency. The learned WSI representations are validated on The Cancer Genomic Atlas (TCGA) and Liver-Kidney-Stomach (LKS) datasets. The proposed method outperforms Yottixel (a recent search engine for histopathology images) both in terms of retrieval accuracy and speed. Further, we achieve competitive performance against SOTA on the public benchmark LKS dataset for WSI classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
315,104
1612.04110
Observation of dynamics inside an unlabeled live cell using bright-field photon microscopy: Evaluation of organelles' trajectories
This article presents an algorithm for the evaluation of organelles' movements inside of an unmodified live cell. We used a time-lapse image series obtained using wide-field bright-field photon transmission microscopy as an algorithm input. The benefit of the algorithm is the application of the R\'enyi information entropy, namely a variable called a point information gain, which enables to highlight the borders of the intracellular organelles and to localize the organelles' centers of mass with the precision of one pixel.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
65,480
2008.00546
A Foliated View of Transfer Learning
Transfer learning considers a learning process where a new task is solved by transferring relevant knowledge from known solutions to related tasks. While this has been studied experimentally, a foundational description of the transfer learning problem that exposes what related tasks are, and how they can be exploited, is still lacking. In this work, we present a definition for relatedness between tasks and identify foliations as a mathematical framework to represent such relationships.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
190,043
2006.15373
MTStereo 2.0: improved accuracy of stereo depth estimation with Max-trees
Efficient yet accurate extraction of depth from stereo image pairs is required by systems with low power resources, such as robotics and embedded systems. State-of-the-art stereo matching methods based on convolutional neural networks require intensive computations on GPUs and are difficult to deploy on embedded systems. In this paper, we propose a stereo matching method, called MTStereo 2.0, for limited-resource systems that require efficient and accurate depth estimation. It is based on a Max-tree hierarchical representation of image pairs, which we use to identify matching regions along image scan-lines. The method includes a cost function that considers similarity of region contextual information based on the Max-trees and a disparity border preserving cost aggregation approach. MTStereo 2.0 improves on its predecessor MTStereo 1.0 as it: a) deploys a more robust cost function, b) performs more thorough detection of incorrect matches, and c) computes disparity maps with pixel-level rather than node-level precision. MTStereo provides accurate sparse and semi-dense depth estimation and does not require intensive GPU computations like methods based on CNNs. Thus it can run on embedded and robotics devices with low-power requirements. We tested the proposed approach on several benchmark data sets, namely KITTI 2015, Driving, FlyingThings3D, Middlebury 2014, Monkaa and the TrimBot2020 garden data sets, and achieved competitive accuracy and efficiency. The code is available at https://github.com/rbrandt1/MaxTreeS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
184,484
2410.04731
Efficient transformer with reinforced position embedding for language models
In this paper, we propose an efficient transformer architecture that uses reinforced positional embedding to obtain superior performance with half the number of encoder-decoder layers. We demonstrate that concatenating positional encoding with trainable token embeddings, normalizing columns in the token embedding matrix, and using the normalized token embedding matrix as the value of the attention layer improve the training and validation loss and the training time in an encoder-decoder Transformer model for a Portuguese-English translation task with 10 epochs or 12 hours of training across 10 trials. Our method, with roughly a threefold parameter reduction compared to the baseline model, yields a mean training loss of 1.21, a mean validation loss of 1.51, and an average training time of 1352.27 seconds per epoch, surpassing the baseline model with the same embedding dimension that employs addition of positional encoding and token embeddings, which achieves a mean training loss of 1.96, a validation loss of 2.18, and an average training time of 4297.79 seconds per epoch. Additionally, we evaluated our proposed architecture and the baseline across 14 diverse translation datasets from TensorFlow. The results indicate that our method consistently achieves lower or comparable training and validation losses, suggesting enhanced learning efficiency.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
495,416
1910.12626
Model selection for deep audio source separation via clustering analysis
Audio source separation is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals). Deep learning models are the state-of-the-art in source separation, given that the mixture to be separated is similar to the mixtures the deep model was trained on. This requires the end user to know enough about each model's training to select the correct model for a given audio mixture. In this work, we automate selection of the appropriate model for an audio mixture. We present a confidence measure that does not require ground truth to estimate separation quality, given a deep model and audio mixture. We use this confidence measure to automatically select the model output with the best predicted separation quality. We compare our confidence-based ensemble approach to using individual models with no selection, to an oracle that always selects the best model and to a random model selector. Results show our confidence-based ensemble significantly outperforms the random ensemble over general mixtures and approaches oracle performance for music mixtures.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
151,152
2311.16531
Measurement and Modeling on Terahertz Channels in Rain
The Terahertz (THz) frequency band offers a wide range of bandwidths, from tens to hundreds of gigahertz (GHz), and supports data speeds of several terabits per second (Tbps). Because of this, maintaining THz channel reliability and efficiency in adverse weather conditions is crucial. Rain, in particular, disrupts THz channel propagation significantly, and there is still a lack of comprehensive investigations due to the involved experimental difficulties. This work explores how rain affects THz channel performance by conducting experiments in a rain emulation chamber and under actual rainy conditions outdoors. We focus on variables like rain intensity, raindrop size distribution (RDSD), and the channel's gradient height. We observe that the gradient height (for air-to-ground channels) can induce changes in the RDSD along the channel's path, impacting the precision of modeling efforts. To address this, we propose a theoretical model integrating Mie scattering theory with considerations of the channel's gradient height. Both our experimental and theoretical findings confirm this model's effectiveness in predicting THz channel behavior in rainy conditions. This work underscores the necessity of incorporating the variation of the RDSD when the THz channel travels in scenarios involving ground-to-air or air-to-ground communications.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
410,949
1806.03560
Semantic Correspondence: A Hierarchical Approach
Establishing semantic correspondence across images when the objects in the images have undergone complex deformations remains a challenging task in the field of computer vision. In this paper, we propose a hierarchical method to tackle this problem by first semantically targeting the foreground objects to localize the search space and then looking deeply into multiple levels of the feature representation to search for point-level correspondence. In contrast to existing approaches, which typically penalize large discrepancies, our approach allows for significant displacements, with the aim of accommodating large deformations of the objects in the scene. By localizing the search space through semantically matching object-level correspondence, our method robustly handles large deformations of objects. Representing the target region by concatenated hypercolumn features, which take into account the hierarchical levels of the surrounding context, helps resolve ambiguity and further improves accuracy. By conducting multiple experiments across scenes with non-rigid objects, we validate the proposed approach, and show that it outperforms the state-of-the-art methods for semantic correspondence establishment.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
100,025
2412.13835
RACQUET: Unveiling the Dangers of Overlooked Referential Ambiguity in Visual LLMs
Ambiguity resolution is key to effective communication. While humans effortlessly address ambiguity through conversational grounding strategies, the extent to which current language models can emulate these strategies remains unclear. In this work, we examine referential ambiguity in image-based question answering by introducing RACQUET, a carefully curated dataset targeting distinct aspects of ambiguity. Through a series of evaluations, we reveal significant limitations and problems of overconfidence of state-of-the-art large multimodal language models in addressing ambiguity in their responses. The overconfidence issue becomes particularly relevant for RACQUET-BIAS, a subset designed to analyze a critical yet underexplored problem: failing to address ambiguity leads to stereotypical, socially biased responses. Our results underscore the urgency of equipping models with robust strategies to deal with uncertainty without resorting to undesirable stereotypes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
518,467
2003.11902
Implementing a GPU-based parallel MAX-MIN Ant System
The MAX-MIN Ant System (MMAS) is one of the best-known Ant Colony Optimization (ACO) algorithms proven to be efficient at finding satisfactory solutions to many difficult combinatorial optimization problems. The slow-down in Moore's law, and the availability of graphics processing units (GPUs) capable of conducting general-purpose computations at high speed, has sparked considerable research efforts into the development of GPU-based ACO implementations. In this paper, we discuss a range of novel ideas for improving the GPU-based parallel MMAS implementation, allowing it to better utilize the computing power offered by two subsequent Nvidia GPU architectures. Specifically, based on the weighted reservoir sampling algorithm we propose a novel parallel implementation of the node selection procedure, which is at the heart of the MMAS and other ACO algorithms. We also present a memory-efficient implementation of another key component -- the tabu list structure -- which is used in the ACO's solution construction stage. The proposed implementations, combined with the existing approaches, lead to a total of six MMAS variants, which are evaluated on a set of Traveling Salesman Problem (TSP) instances ranging from 198 to 3,795 cities. The results show that our MMAS implementation is competitive with state-of-the-art GPU-based and multi-core CPU-based parallel ACO implementations: in fact, the times obtained for the Nvidia V100 Volta GPU were up to 7.18x and 21.79x smaller, respectively. The fastest of the proposed MMAS variants is able to generate over 1 million candidate solutions per second when solving a 1,002-city instance. Moreover, we show that, combined with the 2-opt local search heuristic, the proposed parallel MMAS finds high-quality solutions for the TSP instances with up to 18,512 nodes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
169,746
2201.11650
Incremental Mining of Frequent Serial Episodes Considering Multiple Occurrences
The need to analyze information from streams arises in a variety of applications. One of its fundamental research directions is to mine sequential patterns over data streams. Current studies mine series of items based on the presence of the pattern in transactions but pay no attention to series of itemsets and their multiple occurrences. Patterns over a window of an itemset stream, together with their multiple occurrences, however, provide additional capability to recognize the essential characteristics of the patterns and the inter-relationships among them that are unidentifiable by the existing presence-based studies. In this paper, we study such a new sequential pattern mining problem and propose a corresponding sequential miner with novel strategies to prune the search space efficiently. Experiments on both real and synthetic data show the utility of our approach.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
277,354
2307.10506
Is Grad-CAM Explainable in Medical Images?
Explainable Deep Learning has gained significant attention in the field of artificial intelligence (AI), particularly in domains such as medical imaging, where accurate and interpretable machine learning models are crucial for effective diagnosis and treatment planning. Grad-CAM is a baseline that highlights the most critical regions of an image used in a deep learning model's decision-making process, increasing interpretability and trust in the results. It is applied in many computer vision (CV) tasks such as classification and explanation. This study explores the principles of Explainable Deep Learning and its relevance to medical imaging, discusses various explainability techniques and their limitations, and examines medical imaging applications of Grad-CAM. The findings highlight the potential of Explainable Deep Learning and Grad-CAM in improving the accuracy and interpretability of deep learning models in medical imaging. The code is available in (will be available).
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
380,572
2010.06969
NwQM: A neural quality assessment framework for Wikipedia
Millions of people, irrespective of socioeconomic and demographic backgrounds, depend on Wikipedia articles every day for keeping themselves informed regarding popular as well as obscure topics. Articles have been categorized by editors into several quality classes, which indicate their reliability as encyclopedic content. This manual designation is an onerous task because it necessitates profound knowledge about encyclopedic language, as well as navigating a circuitous set of wiki guidelines. In this paper we propose Neural Wikipedia Quality Monitor (NwQM), a novel deep learning model which accumulates signals from several key information sources such as article text, metadata and images to obtain an improved Wikipedia article representation. We present a comparison of our approach against a plethora of available solutions and show an 8% improvement over state-of-the-art approaches with detailed ablation studies.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
200,658
2409.18170
Evaluation of Large Language Models for Summarization Tasks in the Medical Domain: A Narrative Review
Large Language Models have advanced clinical Natural Language Generation, creating opportunities to manage the volume of medical text. However, the high-stakes nature of medicine requires reliable evaluation, which remains a challenge. In this narrative review, we assess the current evaluation state for clinical summarization tasks and propose future directions to address the resource constraints of expert human evaluation.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
492,134
2202.03167
Bayesian Non-stationary Linear Bandits for Large-Scale Recommender Systems
Taking advantage of contextual information can potentially boost the performance of recommender systems. In the era of big data, such side information often has several dimensions. Thus, developing decision-making algorithms to cope with such a high-dimensional context in real time is essential. That is specifically challenging when the decision-maker has a variety of items to recommend. In addition, changes in items' popularity or users' preferences can hinder the performance of the deployed recommender system due to a lack of robustness to distribution shifts in the environment. In this paper, we build upon the linear contextual multi-armed bandit framework to address this problem. We develop a decision-making policy for a linear bandit problem with high-dimensional feature vectors, a large set of arms, and non-stationary reward-generating processes. Our Thompson sampling-based policy reduces the dimension of feature vectors using random projection and uses exponentially increasing weights to decrease the influence of past observations with time. Our proposed recommender system employs this policy to learn the users' item preferences online while minimizing runtime. We prove a regret bound that scales as a factor of the reduced dimension instead of the original one. To evaluate our proposed recommender system numerically, we apply it to three real-world datasets. The theoretical and numerical results demonstrate the effectiveness of our proposed algorithm in making a trade-off between computational complexity and regret performance compared to the state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
279,100
1812.10851
A Summary of Adaptation of Techniques from Search-based Optimal Multi-Agent Path Finding Solvers to Compilation-based Approach
In the multi-agent path finding problem (MAPF) we are given a set of agents, each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize an objective function. Two such common objective functions are the sum-of-costs and the makespan. Many optimal solvers were introduced in the past decade - two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective, while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little was known about the performance and relevance of the compilation-based approach on the sum-of-costs objective. In this paper we show how to close the gap between these cost functions in the compilation-based approach. Moreover, we study the applicability of various techniques developed for search-based solvers in the compilation-based approach. A part of this paper introduces a SAT solver that is directly aimed at solving the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, we are able to have a reasonable number of variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. Experimental evaluation on several domains shows that there are many scenarios where our new SAT-based methods outperform the best variants of previous sum-of-costs search solvers - the ICTS, CBS, and ICBS algorithms.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
117,450
1711.10050
Non-Orthogonal Multiple Access for mmWave Drones with Multi-Antenna Transmission
Unmanned aerial vehicles (UAVs) can be deployed as aerial base stations (BSs) for rapid establishment of communication networks during temporary events and after disasters. Since UAV-BSs are low power nodes, achieving high spectral and energy efficiency is of paramount importance. In this paper, we introduce non-orthogonal multiple access (NOMA) transmission for millimeter-wave (mmWave) drones serving as flying BSs at a large stadium potentially with several hundreds or thousands of mobile users. In particular, we make use of multi-antenna techniques, specifically taking into consideration the physical constraints of the antenna array, to generate directional beams. Multiple users are then served within the same beam employing NOMA transmission. If the UAV beam cannot cover the entire region where users are distributed, we introduce beam scanning to maximize outage sum rates. The simulation results reveal that, with NOMA transmission, the spectral efficiency of the UAV based communication can be greatly enhanced compared to orthogonal multiple access (OMA) transmission. Further, the analysis shows that there is an optimum transmit power value for NOMA beyond which outage sum rates do not improve further.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
85,501
2112.01156
A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
The generation of feasible adversarial examples is necessary for properly assessing models that work in constrained feature space. However, it remains a challenging task to enforce constraints into attacks that were designed for computer vision. We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints. Our framework can handle both linear and non-linear constraints. We instantiate our framework into two algorithms: a gradient-based attack that introduces constraints in the loss function to maximize, and a multi-objective search algorithm that aims for misclassification, perturbation minimization, and constraint satisfaction. We show that our approach is effective in four different domains, with a success rate of up to 100%, where state-of-the-art attacks fail to generate a single feasible example. In addition to adversarial retraining, we propose to introduce engineered non-convex constraints to improve model adversarial robustness. We demonstrate that this new defense is as effective as adversarial retraining. Our framework forms the starting point for research on constrained adversarial attacks and provides relevant baselines and datasets that future research can exploit.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
269,388
2308.05758
SNR-based beaconless multi-scan link acquisition model with vibration for LEO-to-ground laser communication
We propose a link acquisition time model deeply involving the process from the transmitted power to the received signal-to-noise ratio (SNR) for LEO-to-ground laser communication for the first time. Compared with the conventional acquisition models founded on geometry analysis with a divergence angle threshold, utilizing SNR as the decision criterion is more appropriate for practical engineering requirements. Specifically, under the combined effects of platform vibration and turbulence, we decouple the parameters of beam divergence angle, spiral pitch, and coverage factor at a fixed transmitted power for a given average received SNR threshold. Then the single-scan acquisition probability is obtained by integrating the field of uncertainty (FOU), probability distribution of the coverage factor, and receiver field angle. Consequently, the closed-form analytical expression of the acquisition time expectation adopting multi-scan, which ensures acquisition success, with essential reset time between single scans is derived. The optimizations concerning the beam divergence angle, spiral pitch, and FOU are presented. Moreover, the influence of platform vibration is investigated. All the analytical derivations are confirmed by Monte Carlo simulations. Notably, we provide a theoretical method for designing the minimum divergence angle modulated by the laser, which not only improves the acquisition performance within a certain vibration range, but also achieves a good trade-off with the system complexity.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
384,903
2003.04063
Supervised Domain Adaptation using Graph Embedding
Getting deep convolutional neural networks to perform well requires a large amount of training data. When the available labelled data is small, it is often beneficial to use transfer learning to leverage a related larger dataset (source) in order to improve the performance on the small dataset (target). Among the transfer learning approaches, domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them. In this paper, we consider the domain adaptation problem from the perspective of dimensionality reduction and propose a generic framework based on graph embedding. Instead of solving the generalised eigenvalue problem, we formulate the graph-preserving criterion as a loss in the neural network and learn a domain-invariant feature transformation in an end-to-end fashion. We show that the proposed approach leads to a powerful Domain Adaptation framework; a simple LDA-inspired instantiation of the framework leads to state-of-the-art performance on two of the most widely used Domain Adaptation benchmarks, Office31 and MNIST to USPS datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
167,442
2408.08189
FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance
Synthesizing motion-rich and temporally consistent videos remains a challenge in artificial intelligence, especially when dealing with extended durations. Existing text-to-video (T2V) models commonly employ spatial cross-attention for text control, equivalently guiding different frame generations without frame-specific textual guidance. Thus, the model's capacity to comprehend the temporal logic conveyed in prompts and generate videos with coherent motion is restricted. To tackle this limitation, we introduce FancyVideo, an innovative video generator that improves the existing text-control mechanism with the well-designed Cross-frame Textual Guidance Module (CTGM). Specifically, CTGM incorporates the Temporal Information Injector (TII), Temporal Affinity Refiner (TAR), and Temporal Feature Booster (TFB) at the beginning, middle, and end of cross-attention, respectively, to achieve frame-specific textual guidance. Firstly, TII injects frame-specific information from latent features into text conditions, thereby obtaining cross-frame textual conditions. Then, TAR refines the correlation matrix between cross-frame textual conditions and latent features along the time dimension. Lastly, TFB boosts the temporal consistency of latent features. Extensive experiments comprising both quantitative and qualitative evaluations demonstrate the effectiveness of FancyVideo. Our video demo, code and model are available at https://360cvgroup.github.io/FancyVideo/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
480,891
1909.11524
Dual Adaptive Pyramid Network for Cross-Stain Histopathology Image Segmentation
Supervised semantic segmentation normally assumes the test data being in a similar data domain as the training data. However, in practice, the domain mismatch between the training and unseen data could lead to a significant performance drop. Obtaining accurate pixel-wise label for images in different domains is tedious and labor intensive, especially for histopathology images. In this paper, we propose a dual adaptive pyramid network (DAPNet) for histopathological gland segmentation adapting from one stain domain to another. We tackle the domain adaptation problem on two levels: 1) the image-level considers the differences of image color and style; 2) the feature-level addresses the spatial inconsistency between two domains. The two components are implemented as domain classifiers with adversarial training. We evaluate our new approach using two gland segmentation datasets with H&E and DAB-H stains respectively. The extensive experiments and ablation study demonstrate the effectiveness of our approach on the domain adaptive segmentation task. We show that the proposed approach performs favorably against other state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
146,836
1810.05075
Taming the Cross Entropy Loss
We present the Tamed Cross Entropy (TCE) loss function, a robust derivative of the standard Cross Entropy (CE) loss used in deep learning for classification tasks. Unlike other robust losses, however, the TCE loss is designed to exhibit the same training properties as the CE loss in noiseless scenarios. Therefore, the TCE loss requires no modification to the training regime compared to the CE loss and, in consequence, can be applied in all applications where the CE loss is currently used. We evaluate the TCE loss using the ResNet architecture on four image datasets that we artificially contaminated with various levels of label noise. The TCE loss outperforms the CE loss in every tested scenario.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
110,160
1911.09100
Gradient Method for Continuous Influence Maximization with Budget-Saving Considerations
Continuous influence maximization (CIM) generalizes the original influence maximization by incorporating general marketing strategies: a marketing strategy mix is a vector $\boldsymbol x = (x_1,\dots,x_d)$ such that for each node $v$ in a social network, $v$ could be activated as a seed of diffusion with probability $h_v(\boldsymbol x)$, where $h_v$ is a strategy activation function satisfying DR-submodularity. CIM is the task of selecting a strategy mix $\boldsymbol x$ with constraint $\sum_i x_i \le k$ where $k$ is a budget constraint, such that the total number of activated nodes after the diffusion process, called influence spread and denoted as $g(\boldsymbol x)$, is maximized. In this paper, we extend CIM to consider budget saving, that is, each strategy mix $\boldsymbol x$ has a cost $c(\boldsymbol x)$ where $c$ is a convex cost function, we want to maximize the balanced sum $g(\boldsymbol x) + \lambda(k - c(\boldsymbol x))$ where $\lambda$ is a balance parameter, subject to the constraint of $c(\boldsymbol x) \le k$. We denote this problem as CIM-BS. The objective function of CIM-BS is neither monotone, nor DR-submodular or concave, and thus neither the greedy algorithm nor the standard result on gradient method could be directly applied. Our key innovation is the combination of the gradient method with reverse influence sampling to design algorithms that solve CIM-BS: For the general case, we give an algorithm that achieves $\left(\frac{1}{2}-\varepsilon\right)$-approximation, and for the case of independent strategy activations, we present an algorithm that achieves $\left(1-\frac{1}{e}-\varepsilon\right)$ approximation.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
154,404
1502.06809
Optimal Linear and Cyclic Locally Repairable Codes over Small Fields
We consider locally repairable codes over small fields and propose constructions of optimal cyclic and linear codes in terms of the dimension for a given distance and length. Four new constructions of optimal linear codes over small fields with locality properties are developed. The first two approaches give binary cyclic codes with locality two. While the first construction has availability one, the second binary code is characterized by multiple available repair sets based on a binary Simplex code. The third approach extends the first one to q-ary cyclic codes including (binary) extension fields, where the locality property is determined by the properties of a shortened first-order Reed-Muller code. Non-cyclic optimal binary linear codes with locality greater than two are obtained by the fourth construction.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
40,524
2203.13722
Probing Pre-Trained Language Models for Cross-Cultural Differences in Values
Language embeds information about social, cultural, and political values people hold. Prior work has explored social and potentially harmful biases encoded in Pre-Trained Language models (PTLMs). However, there has been no systematic study investigating how values embedded in these models vary across cultures. In this paper, we introduce probes to study which values across cultures are embedded in these models, and whether they align with existing theories and cross-cultural value surveys. We find that PTLMs capture differences in values across cultures, but those only weakly align with established value surveys. We discuss implications of using mis-aligned models in cross-cultural settings, as well as ways of aligning PTLMs with value surveys.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
287,740
2109.05105
Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models
Can we get existing language models and refine them for zero-shot commonsense reasoning? This paper presents an initial study exploring the feasibility of zero-shot commonsense reasoning for the Winograd Schema Challenge by formulating the task as self-supervised refinement of a pre-trained language model. In contrast to previous studies that rely on fine-tuning annotated datasets, we seek to boost conceptualization via loss landscape refinement. To this end, we propose a novel self-supervised learning approach that refines the language model utilizing a set of linguistic perturbations of similar concept relationships. Empirical analysis of our conceptually simple framework demonstrates the viability of zero-shot commonsense reasoning on multiple benchmarks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
254,657
1710.00974
A concatenating framework of shortcut convolutional neural networks
It is well accepted that convolutional neural networks play an important role in learning excellent features for image classification and recognition. Traditionally, however, they only allow connections between adjacent layers, limiting the integration of multi-scale information. To further improve their performance, we present a concatenating framework of shortcut convolutional neural networks. This framework can concatenate multi-scale features by shortcut connections to the fully-connected layer that is directly fed to the output layer. We conduct a large number of experiments to investigate the performance of the shortcut convolutional neural networks on many benchmark visual datasets for different tasks. The datasets include AR, FERET, FaceScrub, and CelebA for gender classification, CUReT for texture classification, MNIST for digit recognition, and CIFAR-10 for object recognition. Experimental results show that the shortcut convolutional neural networks achieve better results than traditional ones on these tasks, with more stability under different settings of pooling schemes, activation functions, optimizations, initializations, kernel numbers, and kernel sizes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
81,948
2410.18607
STTATTS: Unified Speech-To-Text And Text-To-Speech Model
Speech recognition and speech synthesis models are typically trained separately, each with its own set of learning objectives, training data, and model parameters, resulting in two distinct large networks. We propose a parameter-efficient approach to learning ASR and TTS jointly via a multi-task learning objective and shared parameters. Our evaluation demonstrates that the performance of our multi-task model is comparable to that of individually trained models while significantly saving computational and memory costs ($\sim$50\% reduction in the total number of parameters required for the two tasks combined). We experiment with English as a resource-rich language, and Arabic as a relatively low-resource language due to shortage of TTS data. Our models are trained with publicly available data, and both the training code and model checkpoints are openly available for further research.
false
false
true
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
501,955
2212.11363
Lightweight Monocular Depth Estimation
Monocular depth estimation can play an important role in addressing the issue of deriving scene geometry from 2D images. It has been used in a variety of domains, including robotics, self-driving cars, scene comprehension, 3D reconstruction, and others. The goal of our method is to create a lightweight machine-learning model that predicts the depth value of each pixel given only a single RGB image as input, using the Unet structure of the image segmentation network. We use the NYU Depth V2 dataset to test the structure and compare the result with other methods. The proposed method achieves relatively high accuracy and low root-mean-square error.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
337,766
2011.09960
Mathematical comparison of classical and quantum mechanisms in optimization under local differential privacy
Let $\varepsilon>0$. An $n$-tuple $(p_i)_{i=1}^n$ of probability vectors is called $\varepsilon$-differentially private ($\varepsilon$-DP) if $e^\varepsilon p_j-p_i$ has no negative entries for all $i,j=1,\ldots,n$. An $n$-tuple $(\rho_i)_{i=1}^n$ of density matrices is called classical-quantum $\varepsilon$-differentially private (CQ $\varepsilon$-DP) if $e^\varepsilon\rho_j-\rho_i$ is positive semi-definite for all $i,j=1,\ldots,n$. Denote by $\mathrm{C}_n(\varepsilon)$ the set of all $\varepsilon$-DP $n$-tuples, and by $\mathrm{CQ}_n(\varepsilon)$ the set of all CQ $\varepsilon$-DP $n$-tuples. By considering optimization problems under local differential privacy, we define the subset $\mathrm{EC}_n(\varepsilon)$ of $\mathrm{CQ}_n(\varepsilon)$ that is essentially classical. Roughly speaking, an element in $\mathrm{EC}_n(\varepsilon)$ is the image of $(p_i)_{i=1}^n\in\mathrm{C}_n(\varepsilon)$ by a completely positive and trace-preserving linear map (CPTP map). In a preceding study, it is known that $\mathrm{EC}_2(\varepsilon)=\mathrm{CQ}_2(\varepsilon)$. In this paper, we show that $\mathrm{EC}_n(\varepsilon)\not=\mathrm{CQ}_n(\varepsilon)$ for every $n\ge3$, and estimate the difference between $\mathrm{EC}_n(\varepsilon)$ and $\mathrm{CQ}_n(\varepsilon)$ in a certain manner.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
207,374
2408.08792
Assessing Generalization Capabilities of Malaria Diagnostic Models from Thin Blood Smears
Malaria remains a significant global health challenge, necessitating rapid and accurate diagnostic methods. While computer-aided diagnosis (CAD) tools utilizing deep learning have shown promise, their generalization to diverse clinical settings remains poorly assessed. This study evaluates the generalization capabilities of a CAD model for malaria diagnosis from thin blood smear images across four sites. We explore strategies to enhance generalization, including fine-tuning and incremental learning. Our results demonstrate that incorporating site-specific data significantly improves model performance, paving the way for broader clinical application.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
481,153
2410.02811
SAC-KG: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graphs
Knowledge graphs (KGs) play a pivotal role in knowledge-intensive tasks across specialized domains, where the acquisition of precise and dependable knowledge is crucial. However, existing KG construction methods heavily rely on human intervention to attain qualified KGs, which severely hinders their practical applicability in real-world scenarios. To address this challenge, we propose a general KG construction framework, named SAC-KG, to exploit large language models (LLMs) as Skilled Automatic Constructors for domain Knowledge Graphs. SAC-KG effectively involves LLMs as domain experts to generate specialized and precise multi-level KGs. Specifically, SAC-KG consists of three components: Generator, Verifier, and Pruner. For a given entity, Generator produces its relations and tails from raw domain corpora, to construct a specialized single-level KG. Verifier and Pruner then work together to ensure precision by correcting generation errors and determining whether newly produced tails require further iteration for the next-level KG. Experiments demonstrate that SAC-KG automatically constructs a domain KG at the scale of over one million nodes and achieves a precision of 89.32%, leading to superior performance with over 20% increase in precision rate compared to existing state-of-the-art methods for the KG construction task.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
494,476