Dataset schema (one record per row):
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0–541k)
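The label columns above form a multi-hot encoding: each record carries one boolean per category. A minimal sketch of decoding a record's true categories, in plain Python; the `record` dict and `true_categories` helper are illustrative names, not part of the dataset itself:

```python
# The 18 boolean label columns, in the order they appear in the schema.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def true_categories(record):
    """List the category columns flagged True for one record."""
    return [c for c in LABEL_COLS if record.get(c, False)]

# Illustrative record mirroring the first row of the dump.
record = {
    "id": "2202.11203",
    "cs.LG": True, "cs.CV": True, "cs.CR": True,
}
print(true_categories(record))  # ['cs.LG', 'cs.CV', 'cs.CR']
```

The helper returns categories in schema order regardless of the dict's insertion order, which keeps label lists comparable across records.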
id: 2202.11203
title: Under-confidence Backdoors Are Resilient and Stealthy Backdoors
abstract: By injecting a small number of poisoned samples into the training set, backdoor attacks aim to make the victim model produce designed outputs on any input injected with pre-designed backdoors. In order to achieve a high attack success rate using as few poisoned training samples as possible, most existing attack methods change the labels of the poisoned samples to the target class. This practice often results in severe over-fitting of the victim model over the backdoors, making the attack quite effective in output control but easier to identify by human inspection or automatic defense algorithms. In this work, we propose a label-smoothing strategy to overcome the over-fitting problem of these attack methods, obtaining a \textit{Label-Smoothed Backdoor Attack} (LSBA). In the LSBA, the label of a poisoned sample $\bm{x}$ is changed to the target class with a probability of $p_n(\bm{x})$ instead of 100\%, and the value of $p_n(\bm{x})$ is specifically designed to make the predicted probability of the target class only slightly greater than those of the other classes. Empirical studies on several existing backdoor attacks show that our strategy can considerably improve the stealthiness of these attacks and, at the same time, achieve a high attack success rate. In addition, our strategy makes it possible to manually control the predicted probability of the designed output by manipulating the number of applied and activated LSBAs\footnote{Source code will be published at \url{https://github.com/v-mipeng/LabelSmoothedAttack.git}}.
categories: cs.LG, cs.CV, cs.CR
__index_level_0__: 281,800
id: 2410.14075
title: FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning
abstract: Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy. However, existing FL paradigms face challenges due to heterogeneity in client data distributions and system capabilities. Personalized federated learning (pFL) has been proposed to mitigate these problems, but often requires a shared model architecture and a central entity for parameter aggregation, resulting in scalability and communication issues. More recently, model-heterogeneous FL has gained attention due to its ability to support diverse client models, but existing methods are limited by their dependence on a centralized framework, synchronized training, and publicly available datasets. To address these limitations, we introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized pFL algorithm that supports model heterogeneity and asynchronous learning. Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information. Experimental results show that FedPAE outperforms existing state-of-the-art pFL algorithms, effectively managing diverse client capabilities and demonstrating robustness against statistical heterogeneity.
categories: cs.LG
__index_level_0__: 499,852
id: 2203.01295
title: Dynamic Coupling Strategy for Interdependent Network Systems Against Cascading Failures
abstract: Cascading failures are a common phenomenon in complex networked systems where failures at only a few nodes may trigger a process of sequential failure. We applied a flow redistribution model to investigate the robustness against cascading failures in modern systems carrying flows/loads (e.g., power grids, transportation systems) that contain multiple interdependent networks. In such a system, the coupling coefficients between networks, which determine how much flow/load is redistributed between networks, are a key factor determining the robustness to cascading failures. We derive recursive expressions to characterize the evolution of such a system under dynamic network coupling. Using these expressions, we enhance the robustness of interdependent network systems by dynamically adjusting the coupling coefficients based on current system situations, minimizing the subsequent failures. The analytical and simulation results show a significant improvement in robustness compared to prior work, which considers only fixed coupling coefficients. Our proposed Step-wise Optimization (SWO) method not only shows good performance against cascading failures, but also offers better computational complexity, scalability to multiple networks, and flexibility to different attack types. We show in simulation that SWO provides robustness against cascading failures for multiple different network topologies.
categories: cs.SI, Other
__index_level_0__: 283,325
id: 1610.05824
title: Robot Vision Architecture for Autonomous Clothes Manipulation
abstract: This paper presents a novel robot vision architecture for perceiving generic 3D clothes configurations. Our architecture is hierarchically structured, starting from low-level curvatures, across mid-level geometric shapes \& topology descriptions, and finally approaching high-level semantic surface structure descriptions. We demonstrate our robot vision architecture on a customised dual-arm industrial robot with our self-designed, off-the-shelf stereo vision system, carrying out autonomous grasping and dual-arm flattening. It is worth noting that the proposed dual-arm flattening approach is unique among state-of-the-art autonomous robot systems, which is the major contribution of this paper. The experimental results show that the proposed dual-arm flattening using the stereo vision system remarkably outperforms single-arm flattening and the widely-cited Kinect-based sensing system for dexterous manipulation tasks. In addition, the proposed grasping approach achieves satisfactory performance in grasping various kinds of garments, verifying the capability of the proposed visual perception architecture to be adapted to more than one clothing manipulation task.
categories: cs.RO, cs.CV
__index_level_0__: 62,568
id: 2305.19953
title: Multi-Dataset Co-Training with Sharpness-Aware Optimization for Audio Anti-spoofing
abstract: Audio anti-spoofing for automatic speaker verification aims to safeguard users' identities from spoofing attacks. Although state-of-the-art spoofing countermeasure (CM) models perform well on specific datasets, they lack generalization when evaluated with different datasets. To address this limitation, previous studies have explored large pre-trained models, which require significant resources and time. We aim to develop a compact but well-generalizing CM model that can compete with large pre-trained models. Our approach involves multi-dataset co-training and sharpness-aware minimization, which has not been investigated in this domain. Extensive experiments reveal that the proposed method yields competitive results across various datasets while using 4,000 times fewer parameters than the large pre-trained models.
categories: cs.SD, cs.LG
__index_level_0__: 369,751
id: cs/0409046
title: A TCSP-like decidable constraint language generalising existing cardinal direction relations
abstract: We define a quantitative constraint language subsuming two calculi well-known in QSR (Qualitative Spatial Reasoning): Frank's cone-shaped and projection-based calculi of cardinal direction relations. We show how to solve a CSP (Constraint Satisfaction Problem) expressed in the language.
categories: cs.AI, Other
__index_level_0__: 538,342
id: 1903.01549
title: A Machine Learning-Based Detection Technique for Optical Fiber Nonlinearity Mitigation
abstract: We investigate the performance of a machine learning classification technique, called the Parzen window, to mitigate the fiber nonlinearity in the context of dispersion managed and dispersion unmanaged systems. The technique is applied for detection at the receiver side, and deals with the non-Gaussian nonlinear effects by designing improved decision boundaries. We also propose a two-stage mitigation technique using digital back propagation and Parzen window for dispersion unmanaged systems. In this case, digital back propagation compensates for the deterministic nonlinearity and the Parzen window deals with the stochastic nonlinear signal-noise interactions, which are not taken into account by digital back propagation. A performance improvement up to 0.4 dB in terms of Q factor is observed.
categories: cs.LG
__index_level_0__: 123,278
id: 1801.06934
title: On the Iteration Complexity Analysis of Stochastic Primal-Dual Hybrid Gradient Approach with High Probability
abstract: In this paper, we propose a stochastic Primal-Dual Hybrid Gradient (PDHG) approach for solving a wide spectrum of regularized stochastic minimization problems, where the regularization term is composite with a linear function. It has been recognized that solving this kind of problem is challenging since the closed-form solution of the proximal mapping associated with the regularization term is not available due to the imposed linear composition, and the per-iteration cost of computing the full gradient of the expected objective function is extremely high when the number of input data samples is considerably large. Our new approach overcomes these issues by exploiting the special structure of the regularization term and sampling a few data points at each iteration. Rather than analyzing the convergence in expectation, we provide a detailed iteration complexity analysis for the cases of both uniformly and non-uniformly averaged iterates with high probability. This strongly supports the good practical performance of the proposed approach. Numerical experiments demonstrate the efficiency of stochastic PDHG, which outperforms other competing algorithms, as expected from the high-probability convergence analysis.
categories: cs.LG
__index_level_0__: 88,701
id: 2303.08537
title: Graph-less Collaborative Filtering
abstract: Graph neural networks (GNNs) have shown their power in representation learning over graph-structured user-item interaction data for the collaborative filtering (CF) task. However, with their inherently recursive message propagation among neighboring nodes, existing GNN-based CF models may generate indistinguishable and inaccurate user (item) representations due to the over-smoothing and noise effect of low-pass Laplacian smoothing operators. In addition, the recursive information propagation with the stacked aggregators in the entire graph structures may result in poor scalability in practical applications. Motivated by these limitations, we propose a simple and effective collaborative filtering model (SimRec) that marries the power of knowledge distillation and contrastive learning. In SimRec, adaptive knowledge transfer is enabled between the teacher GNN model and a lightweight student network, to not only preserve the global collaborative signals, but also address the over-smoothing issue with representation recalibration. Empirical results on public datasets show that SimRec achieves better efficiency while maintaining superior recommendation performance compared with various strong baselines. Our implementations are publicly available at: https://github.com/HKUDS/SimRec.
categories: cs.IR
__index_level_0__: 351,680
id: 2207.12245
title: Decentralized digital twins of complex dynamical systems
abstract: In this paper, we introduce a decentralized digital twin (DDT) framework for dynamical systems and discuss the prospects of the DDT modeling paradigm in computational science and engineering applications. The DDT approach is built on a federated learning concept, a branch of machine learning that encourages knowledge sharing without sharing the actual data. This approach enables clients to collaboratively learn an aggregated model while keeping all the training data on each client. We demonstrate the feasibility of the DDT framework with various dynamical systems, which are often considered prototypes for modeling complex transport phenomena in spatiotemporally extended systems. Our results indicate that federated machine learning might be a key enabler for designing highly accurate decentralized digital twins in complex nonlinear spatiotemporal systems.
categories: cs.LG
__index_level_0__: 309,946
id: 1211.5643
title: Shadows and headless shadows: a worlds-based, autobiographical approach to reasoning
abstract: Many cognitive systems deploy multiple, closed, individually consistent models which can represent interpretations of the present state of the world, moments in the past, possible futures or alternate versions of reality. While they appear under different names, these structures can be grouped under the general term of worlds. The Xapagy architecture is a story-oriented cognitive system which relies exclusively on the autobiographical memory implemented as a raw collection of events organized into world-type structures called {\em scenes}. The system performs reasoning by shadowing current events with events from the autobiography. The shadows are then extrapolated into headless shadows corresponding to predictions, hidden events or inferred relations.
categories: cs.AI
__index_level_0__: 19,907
id: 1510.07394
title: Capacity of the Gaussian Two-Hop Full-Duplex Relay Channel with Residual Self-Interference
abstract: In this paper, we investigate the capacity of the Gaussian two-hop full-duplex (FD) relay channel with residual self-interference. This channel is comprised of a source, an FD relay, and a destination, where a direct source-destination link does not exist and the FD relay is impaired by residual self-interference. We adopt the worst-case linear self-interference model with respect to the channel capacity, and model the residual self-interference as a Gaussian random variable whose variance depends on the amplitude of the transmit symbol of the relay. For this channel, we derive the capacity and propose an explicit capacity-achieving coding scheme. Thereby, we show that the optimal input distribution at the source is Gaussian and its variance depends on the amplitude of the transmit symbol of the relay. On the other hand, the optimal input distribution at the relay is discrete or Gaussian, where the latter case occurs only when the relay-destination link is the bottleneck link. The derived capacity converges to the capacity of the two-hop ideal FD relay channel without self-interference and to the capacity of the two-hop half-duplex (HD) relay channel in the limiting cases when the residual self-interference is zero and infinite, respectively. Our numerical results show that significant performance gains are achieved with the proposed capacity-achieving coding scheme compared to the achievable rates of conventional HD relaying and/or conventional FD relaying.
categories: cs.IT
__index_level_0__: 48,202
id: 2212.05857
title: Automated ICD Coding using Extreme Multi-label Long Text Transformer-based Models
abstract: Background: Encouraged by the success of pretrained Transformer models in many natural language processing tasks, their use for International Classification of Diseases (ICD) coding tasks is now actively being explored. In this study, we investigate three types of Transformer-based models, aiming to address the extreme label set and long text classification challenges that are posed by automated ICD coding tasks. Methods: The Transformer-based model PLM-ICD achieved the current state-of-the-art (SOTA) performance on the ICD coding benchmark dataset MIMIC-III. It was chosen as our baseline model to be further optimised. XR-Transformer, the new SOTA model in the general extreme multi-label text classification domain, and XR-LAT, a novel adaptation of the XR-Transformer model, were also trained on the MIMIC-III dataset. XR-LAT is a recursively trained model chain on a predefined hierarchical code tree with label-wise attention, knowledge transfer and dynamic negative sampling mechanisms. Results: Our optimised PLM-ICD model, which was trained with longer total and chunk sequence lengths, significantly outperformed the current SOTA PLM-ICD model, and achieved the highest micro-F1 score of 60.8%. The XR-Transformer model, although SOTA in the general domain, did not perform well across all metrics. The best XR-LAT based model obtained results that were competitive with the current SOTA PLM-ICD model, including improving the macro-AUC by 2.1%. Conclusion: Our optimised PLM-ICD model is the new SOTA model for automated ICD coding on the MIMIC-III dataset, while our novel XR-LAT model performs competitively with the previous SOTA PLM-ICD model.
categories: cs.CL
__index_level_0__: 335,920
id: 1101.2985
title: Resequencing: A Method for Conforming to Conventions for Sharing Credits Among Multiple Authors
abstract: Devising an appropriate scheme that assigns the weights to share credits among multiple authors of a paper is a challenging task. This challenge comes from the fact that different types of conventions might be followed in different research disciplines or research groups. In this paper, we argue that, for the purpose of evaluating the quality of research produced by authors, one can resequence either authors or weights and can apply a weight assignment policy which the evaluator deems fit for the particular research discipline or research group.
categories: cs.IR, cs.CY, Other
__index_level_0__: 8,824
id: 2307.16579
title: Contrastive Conditional Latent Diffusion for Audio-visual Segmentation
abstract: We propose a latent diffusion model with contrastive learning for audio-visual segmentation (AVS) to extensively explore the contribution of audio. We interpret AVS as a conditional generation task, where audio is defined as the conditional variable for sound producer(s) segmentation. With our new interpretation, it is especially necessary to model the correlation between audio and the final segmentation map to ensure its contribution. We introduce a latent diffusion model to our framework to achieve semantic-correlated representation learning. Specifically, our diffusion model learns the conditional generation process of the ground-truth segmentation map, leading to ground-truth aware inference when we perform the denoising process at the test stage. As a conditional diffusion model, we argue it is essential to ensure that the conditional variable contributes to model output. We then introduce contrastive learning to our framework to learn audio-visual correspondence, which is proven consistent with maximizing the mutual information between model prediction and the audio data. In this way, our latent diffusion model via contrastive learning explicitly maximizes the contribution of audio for AVS. Experimental results on the benchmark dataset verify the effectiveness of our solution. Code and results are online via our project page: https://github.com/OpenNLPLab/DiffusionAVS.
categories: cs.SD, cs.CV, Other
__index_level_0__: 382,659
id: 2412.00532
title: ChemTEB: Chemical Text Embedding Benchmark, an Overview of Embedding Models Performance & Efficiency on a Specific Domain
abstract: Recent advancements in language models have started a new era of superior information retrieval and content generation, with embedding models playing an important role in optimizing data representation efficiency and performance. While benchmarks like the Massive Text Embedding Benchmark (MTEB) have standardized the evaluation of general domain embedding models, a gap remains in specialized fields such as chemistry, which require tailored approaches due to domain-specific challenges. This paper introduces a novel benchmark, the Chemical Text Embedding Benchmark (ChemTEB), designed specifically for the chemical sciences. ChemTEB addresses the unique linguistic and semantic complexities of chemical literature and data, offering a comprehensive suite of tasks on chemical domain data. Through the evaluation of 34 open-source and proprietary models using this benchmark, we illuminate the strengths and weaknesses of current methodologies in processing and understanding chemical information. Our work aims to equip the research community with a standardized, domain-specific evaluation framework, promoting the development of more precise and efficient NLP models for chemistry-related applications. Furthermore, it provides insights into the performance of generic models in a domain-specific context. ChemTEB comes with open-source code and data, contributing further to its accessibility and utility.
categories: cs.CL
__index_level_0__: 512,692
id: 2403.17103
title: Animal Avatars: Reconstructing Animatable 3D Animals from Casual Videos
abstract: We present a method to build animatable dog avatars from monocular videos. This is challenging as animals display a range of (unpredictable) non-rigid movements and have a variety of appearance details (e.g., fur, spots, tails). We develop an approach that links the video frames via a 4D solution that jointly solves for the animal's pose variation, and its appearance (in a canonical pose). To this end, we significantly improve the quality of template-based shape fitting by endowing the SMAL parametric model with Continuous Surface Embeddings, which brings image-to-mesh reprojection constraints that are denser, and thus stronger, than the previously used sparse semantic keypoint correspondences. To model appearance, we propose an implicit duplex-mesh texture that is defined in the canonical pose, but can be deformed using SMAL pose coefficients and later rendered to enforce a photometric compatibility with the input video frames. On the challenging CoP3D and APTv2 datasets, we demonstrate superior results (both in terms of pose estimates and predicted appearance) to existing template-free (RAC) and template-based approaches (BARC, BITE).
categories: cs.CV
__index_level_0__: 441,312
id: 2410.11977
title: Generative AI Policies under the Microscope: How CS Conferences Are Navigating the New Frontier in Scholarly Writing
abstract: This paper explores the current state of generative AI policies of computer science conferences and offers guidelines for policy adoption.
categories: cs.AI, cs.LG, cs.CY
__index_level_0__: 498,794
id: 2010.03848
title: Guided Curriculum Learning for Walking Over Complex Terrain
abstract: Reliable bipedal walking over complex terrain is a challenging problem; using a curriculum can help learning. Curriculum learning is the idea of starting with an achievable version of a task and increasing the difficulty as a success criterion is met. We propose a 3-stage curriculum to train Deep Reinforcement Learning policies for bipedal walking over various challenging terrains. In the first stage, the agent starts on an easy terrain and the terrain difficulty is gradually increased, while forces derived from a target policy are applied to the robot joints and the base. In the second stage, the guiding forces are gradually reduced to zero. Finally, in the third stage, random perturbations with increasing magnitude are applied to the robot base, so that the robustness of the policies is improved. In simulation experiments, we show that our approach is effective in learning separate walking policies for five terrain types: flat, hurdles, gaps, stairs, and steps. Moreover, we demonstrate that in the absence of human demonstrations, a simple hand-designed walking trajectory is a sufficient prior for learning to traverse complex terrain types. In ablation studies, we show that taking out any one of the three stages of the curriculum degrades the learning performance.
categories: cs.AI, cs.LG, cs.RO
__index_level_0__: 199,546
id: 2302.01532
title: INV: Towards Streaming Incremental Neural Videos
abstract: Recent works in spatiotemporal radiance fields can produce photorealistic free-viewpoint videos. However, they are inherently unsuitable for interactive streaming scenarios (e.g. video conferencing, telepresence) because they have an inevitable lag even if the training is instantaneous. This is because these approaches consume videos and thus have to buffer chunks of frames (often seconds) before processing. In this work, we take a step towards interactive streaming via a frame-by-frame approach naturally free of lag. Conventional wisdom believes that per-frame NeRFs are impractical due to prohibitive training costs and storage. We break this belief by introducing Incremental Neural Videos (INV), a per-frame NeRF that is efficiently trained and streamable. We designed INV based on two insights: (1) Our main finding is that MLPs naturally partition themselves into Structure and Color Layers, which store structural and color/texture information respectively. (2) We leverage this property to retain and improve upon knowledge from previous frames, thus amortizing training across frames and reducing redundant learning. As a result, with negligible changes to NeRF, INV can achieve good qualities (>28.6 dB) in 8 min/frame. It can also outperform the prior SOTA with 19% less training time. Additionally, our Temporal Weight Compression reduces the per-frame size to 0.3 MB/frame (6.6% of NeRF). More importantly, INV is free from buffer lag and is naturally fit for streaming. While this work does not achieve real-time training, it shows that incremental approaches like INV present new possibilities in interactive 3D streaming. Moreover, our discovery of natural information partition leads to a better understanding and manipulation of MLPs. Code and dataset will be released soon.
categories: cs.CV, Other
__index_level_0__: 343,641
id: 1912.07150
title: Single Image Deraining: From Model-Based to Data-Driven and Beyond
abstract: The goal of single-image deraining is to restore the rain-free background scenes of an image degraded by rain streaks and rain accumulation. The early single-image deraining methods employ a cost function, where various priors are developed to represent the properties of rain and background layers. Since 2017, single-image deraining methods have stepped into a deep-learning era, and exploit various types of networks, i.e. convolutional neural networks, recurrent neural networks, generative adversarial networks, etc., demonstrating impressive performance. Given the current rapid development, in this paper, we provide a comprehensive survey of deraining methods over the last decade. We summarize the rain appearance models, and discuss two categories of deraining approaches: model-based and data-driven approaches. For the former, we organize the literature based on their basic models and priors. For the latter, we discuss developed ideas related to architectures, constraints, loss functions, and training datasets. We present milestones of single-image deraining methods, review a broad selection of previous works in different categories, and provide insights on the historical development route from the model-based to data-driven methods. We also summarize performance comparisons quantitatively and qualitatively. Beyond discussing the technicality of deraining methods, we also discuss the future directions.
categories: cs.CV
__index_level_0__: 157,528
id: 2003.01197
title: Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method
abstract: Long-tail and rare event problems become crucial when autonomous driving algorithms are applied in the real world. For the purpose of evaluating systems in challenging settings, we propose a generative framework to create safety-critical scenarios for evaluating specific task algorithms. We first represent the traffic scenarios with a series of autoregressive building blocks and generate diverse scenarios by sampling from the joint distribution of these blocks. We then train the generative model as an agent (or a generator) to investigate the risky distribution parameters for a given driving algorithm being evaluated. We regard the task algorithm as an environment (or a discriminator) that returns a reward to the agent when a risky scenario is generated. Through the experiments conducted on several scenarios in the simulation, we demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods. Another advantage of this method is its adaptiveness to the routes and parameters.
categories: cs.LG, cs.RO
__index_level_0__: 166,565
id: 2209.12946
title: Investigation of Machine Learning-based Coarse-Grained Mapping Schemes for Organic Molecules
abstract: Due to the wide range of timescales that are present in macromolecular systems, hierarchical multiscale strategies are necessary for their computational study. Coarse-graining (CG) makes it possible to establish a link between different system resolutions and provides the backbone for the development of robust multiscale simulations and analyses. The CG mapping process is typically system- and application-specific, and it relies on chemical intuition. In this work, we explored the application of a Machine Learning strategy, based on Variational Autoencoders, for the development of suitable mapping schemes from the atomistic to the coarse-grained space of molecules with increasing chemical complexity. An extensive evaluation of the effect of the model hyperparameters on the training process and on the final output was performed, and an existing method was extended with the definition of different loss functions and the implementation of a selection criterion that ensures physical consistency of the output. The relationship between the input feature choice and the reconstruction accuracy was analyzed, supporting the need to introduce rotational invariance into the system. Strengths and limitations of the approach, both in the mapping and in the backmapping steps, are highlighted and critically discussed.
categories: cs.LG
__index_level_0__: 319,714
id: 2304.04207
title: Scalable Multiple Patterning Layout Decomposition Implemented by a Distribution Evolutionary Algorithm
abstract: As the feature size of semiconductor technology shrinks to 10 nm and beyond, multiple patterning lithography (MPL) attracts more attention from the industry. In this paper, we model the layout decomposition of MPL as a generalized graph coloring problem, which is addressed by a distribution evolutionary algorithm based on a population of probabilistic models (DEA-PPM). DEA-PPM can strike a balance between decomposition results and running time, being scalable for varied settings of mask number and lithography resolution. Due to the robustness of its decomposition results, this could be an alternative technique for multiple patterning layout decomposition in next-generation technology nodes.
categories: cs.NE
__index_level_0__: 357,131
1708.03429
Nearly Optimal Sparse Group Testing
Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests; or (b) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high probability recovery. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n, d, \gamma,$ and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal -- independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full text PDF.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
78,774
1811.00002
WaveGlow: A Flow-based Generative Network for Speech Synthesis
In this paper we propose WaveGlow: a flow-based network capable of generating high quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable. Our PyTorch implementation produces audio samples at a rate of more than 500 kHz on an NVIDIA V100 GPU. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation. All code will be made publicly available online.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
111,980
2302.03498
MAC: A unified framework boosting low resource automatic speech recognition
We propose a unified framework for low resource automatic speech recognition tasks named meta audio concatenation (MAC). It is easy to implement and can be carried out in extremely low resource environments. Mathematically, we give a clear description of the MAC framework from the perspective of Bayesian sampling. In this framework, we leverage a novel concatenative synthesis text-to-speech system to boost the low resource ASR task. By the concatenative synthesis text-to-speech system, we can integrate language pronunciation rules and adjust the TTS process. Furthermore, we propose a broad notion of meta audio set to meet the modeling needs of different languages and different scenes when using the system. Extensive experiments have demonstrated the great effectiveness of MAC on low resource ASR tasks. For CTC greedy search, CTC prefix, attention, and attention rescoring decode mode in Cantonese ASR task, Taiwanese ASR task, and Japanese ASR task the MAC method can reduce the CER by more than 15\%. Furthermore, in the ASR task, MAC beats wav2vec2 (with fine-tuning) on common voice datasets of Cantonese and gets really competitive results on common voice datasets of Taiwanese and Japanese. Among them, it is worth mentioning that we achieve a \textbf{10.9\%} character error rate (CER) on the common voice Cantonese ASR task, bringing about \textbf{30\%} relative improvement compared to the wav2vec2 (with fine-tuning).
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
344,360
1804.04803
Precise Temporal Action Localization by Evolving Temporal Proposals
Locating actions in long untrimmed videos has been a challenging problem in video content analysis. The performance of existing action localization approaches remains unsatisfactory in precisely determining the beginning and the end of an action. Imitating the human perception procedure with observations and refinements, we propose a novel three-phase action localization framework. Our framework is embedded with an Actionness Network to generate initial proposals through frame-wise similarity grouping, and then a Refinement Network to conduct boundary adjustment on these proposals. Finally, the refined proposals are sent to a Localization Network for further fine-grained location regression. The whole process can be deemed as multi-stage refinement using a novel non-local pyramid feature under various temporal granularities. We evaluate our framework on the THUMOS14 benchmark and obtain a significant improvement over the state-of-the-art approaches. Specifically, the performance gain is remarkable under precise localization with high IoU thresholds. Our proposed framework achieves mAP@IoU=0.5 of 34.2%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,935
2406.13720
On the Utility of Domain-Adjacent Fine-Tuned Model Ensembles for Few-shot Problems
Large Language Models (LLMs) have been observed to perform well on a wide range of downstream tasks when fine-tuned on domain-specific data. However, such data may not be readily available in many applications, motivating zero-shot or few-shot approaches using domain-adjacent models. While several fine-tuned models for various tasks are available, finding an appropriate domain-adjacent model for a given task is often not straightforward. In this paper, we study DAFT-E, a framework that utilizes an Ensemble of Domain-Adjacent Fine-Tuned Foundation Models for few-shot problems. We show that for zero-shot problems, this ensembling method provides an accuracy performance close to that of the single best model. With few-shot problems, this performance improves further, at which point DAFT-E can outperform any single domain-adjacent model while requiring much less data for domain-specific fine-tuning.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
465,972
1412.1023
Degrees-of-Freedom Regions for $K$-User MISO Time-Correlated Broadcast Channel
In this paper, we study the achievable degrees-of-freedom (DoF) regions of the $K$-user multiple-input-single-output (MISO) time correlated broadcast channel (BC). The time correlation induces knowledge of the current channel state information at transmitter (CSIT) with an estimation error $P^{-\alpha}$, where $P$ is the signal-to-noise ratio (SNR). We consider the following two scenarios: $(i)$ $K$-user with $K$-antenna base station (BS) and $(ii)$ $3$-user with $2$-antenna BS. In case of symmetric DoF tuples, where all the users obtain the same DoF, we derive the total DoF equal to $\frac{K(1-\alpha)}{1+\frac{1}{2}+\cdots+\frac{1}{K}}+K\alpha$ for the first scenario and $\frac{3+\alpha}{2}$ for the second one. In particular, we provide the achievability schemes for these two DoF tuples. Moreover, we also consider the asymmetric case where one of the users is guaranteed {\it one} DoF, and provide the achievability scheme. Notably, the consistency of the proposed DoF regions with an already published outer bound, as well as with the Maddah-Ali-Tse (MAT), which assumes only perfect delayed CSIT, and the ZF beamforming schemes (perfect current CSIT), attests to the optimality of the proposed achievability schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
38,066
2201.10335
Tracking and Planning with Spatial World Models
We introduce a method for real-time navigation and tracking with differentiably rendered world models. Learning models for control has led to impressive results in robotics and computer games, but this success has yet to be extended to vision-based navigation. To address this, we transfer advances in the emergent field of differentiable rendering to model-based control. We do this by planning in a learned 3D spatial world model, combined with a pose estimation algorithm previously used in the context of TSDF fusion, but now tailored to our setting and improved to incorporate agent dynamics. We evaluate over six simulated environments based on complex human-designed floor plans and provide quantitative results. We achieve up to 92% navigation success rate at a frequency of 15 Hz using only image and depth observations under stochastic, continuous dynamics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
276,957
2309.10926
Semi-Autoregressive Streaming ASR With Label Context
Non-autoregressive (NAR) modeling has gained significant interest in speech processing since these models achieve dramatically lower inference time than autoregressive (AR) models while also achieving good transcription accuracy. Since NAR automatic speech recognition (ASR) models must wait for the completion of the entire utterance before processing, some works explore streaming NAR models based on blockwise attention for low-latency applications. However, streaming NAR models significantly lag in accuracy compared to streaming AR and non-streaming NAR models. To address this, we propose a streaming "semi-autoregressive" ASR model that incorporates the labels emitted in previous blocks as additional context using a Language Model (LM) subnetwork. We also introduce a novel greedy decoding algorithm that addresses insertion and deletion errors near block boundaries while not significantly increasing the inference time. Experiments show that our method outperforms the existing streaming NAR model by 19% relative on Tedlium2, 16%/8% on Librispeech-100 clean/other test sets, and 19%/8% on the Switchboard(SWB)/Callhome(CH) test sets. It also reduced the accuracy gap with streaming AR and non-streaming NAR models while achieving 2.5x lower latency. We also demonstrate that our approach can effectively utilize external text data to pre-train the LM subnetwork to further improve streaming ASR accuracy.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
393,206
2001.03657
Dominance Move calculation using a MIP approach for comparison of multi and many-objective optimization solution sets
Dominance move (DoM) is a binary quality indicator that can be used in multiobjective optimization. It can compare solution sets while representing some important features such as convergence, spread, uniformity, and cardinality. DoM has an intuitive concept and considers the minimum move of one set needed to weakly Pareto dominate the other set. Despite the aforementioned properties, DoM is hard to calculate. The original formulation presents an efficient and exact method to calculate it in a biobjective case only. This work presents a new approach to calculate and extend DoM to deal with three or more objectives. The idea is to use a mixed integer programming (MIP) approach to calculate DoM. Some initial experiments, in the biobjective space, were done to verify the model correctness. Furthermore, other experiments, using three, five, and ten objective functions were done to show how the model behaves in higher dimensional cases. Algorithms such as IBEA, MOEAD, NSGAIII, NSGAII, and SPEA2 were used to generate the solution sets; however, any other algorithm could be used with the DoM indicator. The results have confirmed the effectiveness of the MIP DoM in problems with more than three objective functions. Final notes, considerations, and future research are discussed to exploit some particularities of the solution sets and improve the model and its use for other situations.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
160,026
1705.01583
VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e. it works for outdoor scenes, community videos, and low quality commodity RGB cameras.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
72,860
1705.05040
Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes
Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
73,421
1212.5981
Core organization of directed complex networks
The recursive removal of leaves (dead end vertices) and their neighbors from an undirected network results, when this pruning algorithm stops, in a so-called core of the network. This specific subgraph should be distinguished from $k$-cores, which are principally different subgraphs in networks. If the vertex mean degree of a network is sufficiently large, the core is a giant cluster containing a finite fraction of vertices. We find that generalization of this pruning algorithm to directed networks provides a significantly more complex picture of cores. By implementing a rate equation approach to this pruning procedure for directed uncorrelated networks, we identify a set of cores progressively embedded into each other in a network and describe their birth points and structure.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
20,606
2406.04103
Multistep Distillation of Diffusion Models via Moment Matching
We present a new method for making diffusion models faster to sample. The method distills many-step diffusion models into few-step models by matching conditional expectations of the clean data given noisy data along the sampling trajectory. Our approach extends recently proposed one-step methods to the multi-step case, and provides a new perspective by interpreting these approaches in terms of moment matching. By using up to 8 sampling steps, we obtain distilled models that outperform not only their one-step versions but also their original many-step teacher models, obtaining new state-of-the-art results on the ImageNet dataset. We also show promising results on a large text-to-image model where we achieve fast generation of high resolution images directly in image space, without needing autoencoders or upsamplers.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
461,521
2109.11043
Learning Predictive and Interpretable Timeseries Summaries from ICU Data
Machine learning models that utilize patient data across time (rather than just the most recent measurements) have increased performance for many risk stratification tasks in the intensive care unit. However, many of these models and their learned representations are complex and therefore difficult for clinicians to interpret, creating challenges for validation. Our work proposes a new procedure to learn summaries of clinical time-series that are both predictive and easily understood by humans. Specifically, our summaries consist of simple and intuitive functions of clinical data (e.g. falling mean arterial pressure). Our learned summaries outperform traditional interpretable model classes and achieve performance comparable to state-of-the-art deep learning models on an in-hospital mortality classification task.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
256,820
1810.06679
Understanding and Predicting the Memorability of Outdoor Natural Scenes
Memorability measures how easily an image is memorized after a glance, which may contribute to designing magazine covers, tourism publicity materials, and so forth. Recent works have shed light on the visual features that make generic images, object images or face photographs memorable. However, these methods are not able to effectively predict the memorability of outdoor natural scene images. To overcome this shortcoming of previous works, in this paper, we attempt to answer: "what exactly makes outdoor natural scenes memorable". To this end, we first establish a large-scale outdoor natural scene image memorability (LNSIM) database, containing 2,632 outdoor natural scene images with their ground truth memorability scores and the multi-label scene category annotations. Then, similar to previous works, we mine our database to investigate how low-, middle- and high-level handcrafted features affect the memorability of outdoor natural scenes. In particular, we find that the high-level feature of scene category is rather correlated with outdoor natural scene memorability, and the deep features learnt by a deep neural network (DNN) are also effective in predicting the memorability scores. Moreover, combining the deep features with the category feature can further boost the performance of memorability prediction. Therefore, we propose an end-to-end DNN based outdoor natural scene memorability (DeepNSM) predictor, which takes advantage of the learned category-related features. Then, the experimental results validate the effectiveness of our DeepNSM model, exceeding the state-of-the-art methods. Finally, we try to understand the reason for the good performance of our DeepNSM model, and also study the cases in which our DeepNSM model succeeds or fails to accurately predict the memorability of outdoor natural scenes. Code: github.com/JiaxinLu-home/Natural-Scene-Memorability-Dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
110,475
1610.07289
Levelized Cost of Energy Calculation for Energy Storage Systems
The levelized cost of energy (LCOE) presents the energy-normalized cost of a generation asset by considering all associated costs (investment and operation) and total generated energy over its life cycle. As LCOE is a levelized value, it provides a quick and easy measure to compare different energy resource technologies with different characteristics. The LCOE calculation for large-scale power plants and distributed generations (DGs) is extensively studied and can be found in the literature. The discussion of the LCOE calculation for energy storage systems, however, is limited. Although still relatively expensive compared to generation technologies, energy storage is gaining significant attention and has been deployed extensively during the past few years, conceivably due to its many benefits such as load shifting, energy arbitrage, and renewable coordination. Therefore, LCOE calculation of energy storage systems plays an important role in economic evaluation of power systems. This paper proposes a method for calculating the LCOE of energy storage, and further provides the sensitivity analysis with respect to changes in capacity, electricity market prices, and efficiency.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
62,767
1512.07876
An unsupervised spatiotemporal graphical modeling approach to anomaly detection in distributed CPS
Modern distributed cyber-physical systems (CPSs) encounter a large variety of physical faults and cyber anomalies and in many cases, they are vulnerable to catastrophic fault propagation scenarios due to strong connectivity among the sub-systems. This paper presents a new data-driven framework for system-wide anomaly detection for addressing such issues. The framework is based on a spatiotemporal feature extraction scheme built on the concept of symbolic dynamics for discovering and representing causal interactions among the subsystems of a CPS. The extracted spatiotemporal features are then used to learn system-wide patterns via a Restricted Boltzmann Machine (RBM). The results show that: (1) the RBM free energy in the off-nominal conditions is different from that in the nominal conditions and can be used for anomaly detection; (2) the framework can capture multiple nominal modes with one graphical model; (3) the case studies with simulated data and an integrated building system validate the proposed approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
50,463
2406.11854
Attributions toward Artificial Agents in a modified Moral Turing Test
Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen and colleagues' (2000) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
false
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
465,102
2107.10292
Predicting Power Electronics Device Reliability under Extreme Conditions with Machine Learning Algorithms
Power device reliability is a major concern during operation under extreme environments, as doing so reduces the operational lifetime of any power system or sensing infrastructure. Due to a potential for system failure, devices must be experimentally validated before implementation, which is expensive and time-consuming. In this paper, we have utilized machine learning algorithms to predict device reliability, significantly reducing the need for conducting experiments. To train the models, we have tested 224 power devices from 10 different manufacturers. First, we describe a method to process the data for modeling purposes. Based on the in-house testing data, we implemented various ML models and observed that computational models such as Gradient Boosting and LSTM encoder-decoder networks can predict power device failure with high accuracy.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
247,256
1502.00804
A Polya Urn Document Language Model for Improved Information Retrieval
The multinomial language model has been one of the most effective models of retrieval for over a decade. However, the multinomial distribution does not model one important linguistic phenomenon relating to term-dependency, that is the tendency of a term to repeat itself within a document (i.e. word burstiness). In this article, we model document generation as a random process with reinforcement (a multivariate Polya process) and develop a Dirichlet compound multinomial language model that captures word burstiness directly. We show that the new reinforced language model can be computed as efficiently as current retrieval models, and with experiments on an extensive set of TREC collections, we show that it significantly outperforms the state-of-the-art language model for a number of standard effectiveness metrics. Experiments also show that the tuning parameter in the proposed model is more robust than in the multinomial language model. Furthermore, we develop a constraint for the verbosity hypothesis and show that the proposed model adheres to the constraint. Finally, we show that the new language model essentially introduces a measure closely related to idf which gives theoretical justification for combining the term and document event spaces in tf-idf type schemes.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
39,877
2305.08175
An Optimal and Scalable Matrix Mechanism for Noisy Marginals under Convex Loss Functions
Noisy marginals are a common form of confidentiality-protecting data release and are useful for many downstream tasks such as contingency table analysis, construction of Bayesian networks, and even synthetic data generation. Privacy mechanisms that provide unbiased noisy answers to linear queries (such as marginals) are known as matrix mechanisms. We propose ResidualPlanner, a matrix mechanism for marginals with Gaussian noise that is both optimal and scalable. ResidualPlanner can optimize for many loss functions that can be written as a convex function of marginal variances (prior work was restricted to just one predefined objective function). ResidualPlanner can optimize the accuracy of marginals in large scale settings in seconds, even when the previous state of the art (HDMM) runs out of memory. It even runs on datasets with 100 attributes in a couple of minutes. Furthermore, ResidualPlanner can efficiently compute variance/covariance values for each marginal (prior methods quickly run out of memory, even for relatively small datasets).
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
364,190
2407.08150
Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding
Understanding of video creativity and content often varies among individuals, with differences in focal points and cognitive levels across different ages, experiences, and genders. There is currently a lack of research in this area, and most existing benchmarks suffer from several drawbacks: 1) a limited number of modalities and answers with restrictive length; 2) the content and scenarios within the videos are excessively monotonous, transmitting allegories and emotions that are overly simplistic. To bridge the gap to real-world applications, we introduce a large-scale Subjective Response Indicators for Advertisement Videos dataset, namely SRI-ADV. Specifically, we collected real changes in Electroencephalographic (EEG) and eye-tracking regions from different demographics while they viewed identical video content. Utilizing this multi-modal dataset, we developed tasks and protocols to analyze and evaluate the extent of cognitive understanding of video content among different users. Along with the dataset, we designed a Hypergraph Multi-modal Large Language Model (HMLLM) to explore the associations among different demographics, video elements, EEG, and eye-tracking indicators. HMLLM could bridge semantic gaps across rich modalities and integrate information beyond different modalities to perform logical reasoning. Extensive experimental evaluations on SRI-ADV and other additional video-based generative performance benchmarks demonstrate the effectiveness of our method. The codes and dataset will be released at https://github.com/mininglamp-MLLM/HMLLM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
472,034
1903.02167
Efficient Multi-Objective Optimization through Population-based Parallel Surrogate Search
Multi-Objective Optimization (MOO) is very difficult for expensive functions because most current MOO methods rely on a large number of function evaluations to get an accurate solution. We address this problem with surrogate approximation and parallel computation. We develop an MOO algorithm MOPLS-N for expensive functions that combines iteratively updated surrogate approximations of the objective functions with a structure for efficiently selecting a population of $N$ points so that the expensive objectives for all points are simultaneously evaluated on $N$ processors in each iteration. MOPLS incorporates Radial Basis Function (RBF) approximation, Tabu Search and local candidate search around multiple points to strike a balance between exploration, exploitation and diversification during each algorithm iteration. Eleven test problems (with 8 to 24 decision variables) and two real-world watershed problems are used to compare the performance of MOPLS to ParEGO, GOMORS, Borg, MOEA/D, and NSGA-III on a limited budget of evaluations with between 1 (serial) and 64 processors. MOPLS in serial is better than all non-RBF serial methods tested. Parallel speedup of MOPLS is higher than all other parallel algorithms with 16 and 64 processors. With both algorithms on 64 processors MOPLS is at least 2 times faster than NSGA-III on the watershed problems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
123,443
2402.06737
ExGRG: Explicitly-Generated Relation Graph for Self-Supervised Representation Learning
Self-supervised Learning (SSL) has emerged as a powerful technique in pre-training deep learning models without relying on expensive annotated labels, instead leveraging embedded signals in unlabeled data. While SSL has shown remarkable success in computer vision tasks through intuitive data augmentation, its application to graph-structured data poses challenges due to the semantic-altering and counter-intuitive nature of graph augmentations. Addressing this limitation, this paper introduces a novel non-contrastive SSL approach to Explicitly Generate a compositional Relation Graph (ExGRG) instead of relying solely on the conventional augmentation-based implicit relation graph. ExGRG offers a framework for incorporating prior domain knowledge and online extracted information into the SSL invariance objective, drawing inspiration from the Laplacian Eigenmap and Expectation-Maximization (EM). Employing an EM perspective on SSL, our E-step involves relation graph generation to identify candidates to guide the SSL invariance objective, and M-step updates the model parameters by integrating the derived relational information. Extensive experimentation on diverse node classification datasets demonstrates the superiority of our method over state-of-the-art techniques, affirming ExGRG as an effective adoption of SSL for graph representation learning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
428,417
2108.04936
Using Information Theory to Measure Psychophysical Performance
Most psychophysical experiments discard half the data collected. Specifically, experiments discard reaction time data, and use binary responses (e.g. yes/no) to measure performance. Here, Shannon's information theory is used to define Shannon competence $s'$, which depends on the mutual information between stimulus strength (e.g. luminance) and a combination of reaction times and binary responses. Mutual information is the entropy of the joint distribution of responses minus the residual entropy after a model has been fitted to these responses. Here, this model is instantiated as a proportional rate diffusion model, with the additional innovation that the full covariance structure of responses is taken into account. Results suggest information associated with reaction times is independent of (i.e. additional to) information associated with binary responses, and that reaction time and binary responses together provide substantially more than the sum of their individual contributions (i.e. they act synergistically). Consequently, the additional information supplied by reaction times suggests that using combined reaction time and binary responses requires fewer stimulus presentations, without loss of precision in psychophysical parameters. Finally, because $s'$ takes account of both reaction times and binary responses (in contrast to $d'$), $s'$ is immune to speed-accuracy trade-offs, which vary between observers and experimental designs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
250,155
2205.10155
Large-Signal Stability Guarantees for Cycle-by-Cycle Controlled DC-DC Converters
Stability guarantees are critical for cycle-by-cycle controlled dc-dc converters in consumer electronics and energy storage systems. Traditional stability analysis of cycle-by-cycle dc-dc converters is incomplete because the inductor current ramps are considered fixed; in fact, the inductor ramps are not fixed because they depend on the output voltage during large-signal transients. We demonstrate a new large-signal stability theory that treats cycle-by-cycle controlled dc-dc converters as a particular type of feedback interconnection system. An analytical and practical stability criterion is provided based on this system. The criterion indicates that the L/R and RC time constants are the design parameters that determine the amount of coupling between the current ramp and the output voltage.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
297,589
2304.11218
Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
Public attention towards explainability of artificial intelligence (AI) systems has been rising in recent years to offer methodologies for human oversight. This has translated into the proliferation of research outputs, such as from Explainable AI, to enhance transparency and control for system debugging and monitoring, and intelligibility of system process and output for user services. Yet, such outputs are difficult to adopt on a practical level due to a lack of a common regulatory baseline, and the contextual nature of explanations. Governmental policies are now attempting to tackle this exigency; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of this plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions and requirements for explanations. This might be due to the willingness to conciliate explanations foremost as a risk management tool for AI oversight, but also due to the lack of a consensus on what constitutes a valid algorithmic explanation, and how feasible the implementation and deployment of such explanations are across stakeholders of an organization. Informed by AI explainability research, we conduct a gap analysis of existing policies, leading us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as allocating accountability to explanation providers.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
359,723
2006.02231
A Multi-modal Neural Embeddings Approach for Detecting Mobile Counterfeit Apps: A Case Study on Google Play Store
Counterfeit apps impersonate existing popular apps in attempts to misguide users to install them for various reasons such as collecting personal information or spreading malware. Many counterfeits can be identified once installed; however, even a tech-savvy user may struggle to detect them before installation. To this end, this paper proposes to leverage the recent advances in deep learning methods to create image and text embeddings so that counterfeit apps can be efficiently identified when they are submitted for publication. We show that a novel approach of combining content embeddings and style embeddings outperforms the baseline methods for image similarity such as SIFT, SURF, and various image hashing methods. We first evaluate the performance of the proposed method on two well-known datasets for evaluating image similarity methods and show that content, style, and combined embeddings increase precision@k and recall@k by 10%-15% and 12%-25%, respectively when retrieving five nearest neighbours. Second, specifically for the app counterfeit detection problem, combined content and style embeddings achieve 12% and 14% increase in precision@k and recall@k, respectively compared to the baseline methods. Third, we present an analysis of approximately 1.2 million apps from Google Play Store and identify a set of potential counterfeits for top-10,000 popular apps. Under a conservative assumption, we were able to find 2,040 potential counterfeits that contain malware in a set of 49,608 apps that showed high similarity to one of the top-10,000 popular apps in Google Play Store. We also find 1,565 potential counterfeits requesting at least five more dangerous permissions than the original app and 1,407 potential counterfeits having at least five extra third-party advertisement libraries.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
true
false
false
179,988
2110.01269
PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds
Rigid registration of point clouds with partial overlaps is a longstanding problem usually solved in two steps: (a) finding correspondences between the point clouds; (b) filtering these correspondences to keep only the most reliable ones to estimate the transformation. Recently, several deep nets have been proposed to solve these steps jointly. We built upon these works and propose PCAM: a neural network whose key element is a pointwise product of cross-attention matrices that permits mixing both low-level geometric and high-level contextual information to find point correspondences. These cross-attention matrices also permit the exchange of context information between the point clouds, at each layer, allowing the network to construct better matching features within the overlapping regions. The experiments show that PCAM achieves state-of-the-art results among methods which, like us, solve steps (a) and (b) jointly via deep nets. Our code and trained models are available at https://github.com/valeoai/PCAM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
258,707
2501.09859
Empirical Evaluation of Embedding Models in the Context of Text Classification in Document Review in Construction Delay Disputes
Text embeddings are numerical representations of text data, where words, phrases, or entire documents are converted into vectors of real numbers. These embeddings capture semantic meanings and relationships between text elements in a continuous vector space. The primary goal of text embeddings is to enable the processing of text data by machine learning models, which require numerical input. Numerous embedding models have been developed for various applications. This paper presents our work in evaluating different embeddings through a comprehensive comparative analysis of four distinct models, focusing on their text classification efficacy. We employ both K-Nearest Neighbors (KNN) and Logistic Regression (LR) to perform binary classification tasks, specifically determining whether a text snippet is associated with 'delay' or 'not delay' within a labeled dataset. Our research explores the use of text snippet embeddings for training supervised text classification models to identify delay-related statements during the document review process of construction delay disputes. The results of this study highlight the potential of embedding models to enhance the efficiency and accuracy of document analysis in legal contexts, paving the way for more informed decision-making in complex investigative scenarios.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
525,298
1912.06719
Neural Network Surgery with Sets
The cost to train machine learning models has been increasing exponentially, making exploration and research into the correct features and architecture a costly or intractable endeavor at scale. However, using a technique named "surgery" OpenAI Five was continuously trained to play the game DotA 2 over the course of 10 months through 20 major changes in features and architecture. Surgery transfers trained weights from one network to another after a selection process to determine which sections of the model are unchanged and which must be re-initialized. In the past, the selection process relied on heuristics, manual labor, or pre-existing boundaries in the structure of the model, limiting the ability to salvage experiments after modifications of the feature set or input reorderings. We propose a solution to automatically determine which components of a neural network model should be salvaged and which require retraining. We achieve this by allowing the model to operate over discrete sets of features and use set-based operations to determine the exact relationship between inputs and outputs, and how they change across tweaks in model architecture. In this paper, we introduce the methodology for enabling neural networks to operate on sets, derive two methods for detecting feature-parameter interaction maps, and show their equivalence. We empirically validate that we can surgery weights across feature and architecture changes to the OpenAI Five model.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
157,405
2310.13452
Quadrotor Dead Reckoning with Multiple Inertial Sensors
Quadrotors are widely used for surveillance, mapping, and deliveries. In several scenarios the quadrotor operates in pure inertial navigation mode resulting in a navigation solution drift. To handle such situations and bind the navigation drift, the quadrotor dead reckoning (QDR) approach requires flying the quadrotor in a periodic trajectory. Then, using model or learning based approaches the quadrotor position vector can be estimated. We propose to use multiple inertial measurement units (MIMU) to improve the positioning accuracy of the QDR approach. Several methods to utilize MIMU data in a deep learning framework are derived and evaluated. Field experiments were conducted to validate the proposed approach and show its benefits.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
401,456
2405.09352
On the impact of the antenna radiation patterns in passive radio sensing
Electromagnetic (EM) body models based on the scalar diffraction theory make it possible to predict the impact of subject motions on the radio propagation channel without requiring a time-consuming full-wave approach. On the other hand, they are less effective in complex environments characterized by significant multipath effects. Recently, emerging radio sensing applications have proposed the adoption of smart antennas with non-isotropic radiation characteristics to improve coverage. This letter investigates the impact of antenna radiation patterns in passive radio sensing applications. Adaptations of diffraction-based EM models are proposed to account for antenna non-uniform angular filtering. Next, we quantify experimentally the impact of diffraction and multipath disturbance components on radio sensing accuracy in environments with smart antennas.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
454,375
2308.13415
An investigation into the impact of deep learning model choice on sex and race bias in cardiac MR segmentation
In medical imaging, artificial intelligence (AI) is increasingly being used to automate routine tasks. However, these algorithms can exhibit and exacerbate biases which lead to disparate performances between protected groups. We investigate the impact of model choice on how imbalances in subject sex and race in training datasets affect AI-based cine cardiac magnetic resonance image segmentation. We evaluate three convolutional neural network-based models and one vision transformer model. We find significant sex bias in three of the four models and racial bias in all of the models. However, the severity and nature of the bias varies between the models, highlighting the importance of model choice when attempting to train fair AI-based segmentation models for medical imaging tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
387,908
2102.06732
Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution
Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education. Most existing works decoupled this problem into several independent sub-tasks of text spotting (text detection and recognition) and information extraction, which completely ignored the high correlation among them during optimization. In this paper, we propose a robust visual information extraction system (VIES) towards real-world scenarios, which is a unified end-to-end trainable framework for simultaneous text detection, recognition and information extraction by taking a single document image as input and outputting the structured information. Specifically, the information extraction branch collects abundant visual and semantic representations from text spotting for multimodal feature fusion and, conversely, provides higher-level semantic clues to contribute to the optimization of text spotting. Moreover, regarding the shortage of public benchmarks, we construct a fully-annotated dataset called EPHOIE (https://github.com/HCIILAB/EPHOIE), which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances. Compared with the state-of-the-art methods, our VIES shows significantly superior performance on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used SROIE dataset under the end-to-end scenario.
false
false
false
false
true
true
true
false
false
false
false
true
false
false
false
false
false
true
219,848
1908.04381
Efficient Contraction of Large Tensor Networks for Weighted Model Counting through Graph Decompositions
Constrained counting is a fundamental problem in artificial intelligence. A promising new algebraic approach to constrained counting makes use of tensor networks, following a reduction from constrained counting to the problem of tensor-network contraction. Contracting a tensor network efficiently requires determining an efficient order to contract the tensors inside the network, which is itself a difficult problem. In this work, we apply graph decompositions to find contraction orders for tensor networks. We prove that finding an efficient contraction order for a tensor network is equivalent to the well-known problem of finding an optimal carving decomposition. Thus memory-optimal contraction orders for planar tensor networks can be found in cubic time. We show that tree decompositions can be used both to find carving decompositions and to factor tensor networks with high-rank, structured tensors. We implement these algorithms on top of state-of-the-art solvers for tree decompositions and show empirically that the resulting weighted model counter is quite effective and useful as part of a portfolio of counters.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
141,459
1811.10991
An explicit representation and enumeration for negacyclic codes of length $2^kn$ over $\mathbb{Z}_4+u\mathbb{Z}_4$
In this paper, an explicit representation and enumeration for negacyclic codes of length $2^kn$ over the local non-principal ideal ring $R=\mathbb{Z}_4+u\mathbb{Z}_4$ $(u^2=0)$ is provided, where $k, n$ are any positive integers and $n$ is odd. As a corollary, all distinct negacyclic codes of length $2^k$ over $R$ are listed precisely. Moreover, a mass formula for the number of negacyclic codes of length $2^kn$ over $R$ is given and a mistake in [Cryptogr. Commun. (2017) 9: 241--272] is corrected.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
114,655
1904.03943
Component-Wise Boosting of Targets for Multi-Output Prediction
Multi-output prediction deals with the prediction of several targets of possibly diverse types. One way to address this problem is the so called problem transformation method. This method is often used in multi-label learning, but can also be used for multi-output prediction due to its generality and simplicity. In this paper, we introduce an algorithm that uses the problem transformation method for multi-output prediction, while simultaneously learning the dependencies between target variables in a sparse and interpretable manner. In a first step, predictions are obtained for each target individually. Target dependencies are then learned via a component-wise boosting approach. We compare our new method with similar approaches in a benchmark using multi-label, multivariate regression and mixed-type datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
126,890
1908.00222
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Recently, there has been growing interest in developing learning-based methods to detect and utilize salient semi-global or global structures, such as junctions, lines, planes, cuboids, smooth surfaces, and all types of symmetries, for 3D scene modeling and understanding. However, the ground truth annotations are often obtained via human labor, which is particularly challenging and inefficient for such tasks due to the large number of 3D structure instances (e.g., line segments) and other factors such as viewpoints and occlusions. In this paper, we present a new synthetic dataset, Structured3D, with the aim of providing large-scale photo-realistic images with rich 3D structure annotations for a wide spectrum of structured 3D modeling tasks. We take advantage of the availability of professional interior designs and automatically extract 3D structures from them. We generate high-quality images with an industry-leading rendering engine. We use our synthetic dataset in combination with real images to train deep networks for room layout estimation and demonstrate improved performance on benchmark datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
140,462
1908.10489
Domain-Agnostic Learning with Anatomy-Consistent Embedding for Cross-Modality Liver Segmentation
Domain Adaptation (DA) has the potential to greatly help the generalization of deep learning models. However, the current literature usually assumes to transfer the knowledge from the source domain to a specific known target domain. Domain Agnostic Learning (DAL) proposes a new task of transferring knowledge from the source domain to data from multiple heterogeneous target domains. In this work, we propose the Domain-Agnostic Learning framework with Anatomy-Consistent Embedding (DALACE) that works on both domain-transfer and task-transfer to learn a disentangled representation, aiming to not only be invariant to different modalities but also preserve anatomical structures for the DA and DAL tasks in cross-modality liver segmentation. We validated and compared our model with state-of-the-art methods, including CycleGAN, Task Driven Generative Adversarial Network (TD-GAN), and Domain Adaptation via Disentangled Representations (DADR). For the DA task, our DALACE model outperformed CycleGAN, TD-GAN, and DADR with DSC of 0.847 compared to 0.721, 0.793 and 0.806. For the DAL task, our model improved the performance with DSC of 0.794 from 0.522, 0.719 and 0.742 by CycleGAN, TD-GAN, and DADR. Further, we visualized the success of disentanglement, which added human interpretability of the learned meaningful representations. Through ablation analysis, we specifically showed the concrete benefits of disentanglement for downstream tasks and the role of supervision for better disentangled representation with segmentation consistency to be invariant to domains with the proposed Domain-Agnostic Module (DAM) and to preserve anatomical information with the proposed Anatomy-Preserving Module (APM).
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
143,129
2407.09646
Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba
3D Hand reconstruction from a single RGB image is challenging due to the articulated motion, self-occlusion, and interaction with objects. Existing SOTA methods employ attention-based transformers to learn the 3D hand pose and shape, yet they do not fully achieve robust and accurate performance, primarily due to inefficiently modeling spatial relations between joints. To address this problem, we propose a novel graph-guided Mamba framework, named Hamba, which bridges graph learning and state space modeling. Our core idea is to reformulate Mamba's scanning into graph-guided bidirectional scanning for 3D reconstruction using a few effective tokens. This enables us to efficiently learn the spatial relationships between joints for improving reconstruction performance. Specifically, we design a Graph-guided State Space (GSS) block that learns the graph-structured relations and spatial sequences of joints and uses 88.5% fewer tokens than attention-based methods. Additionally, we integrate the state space features and the global features using a fusion module. By utilizing the GSS block and the fusion module, Hamba effectively leverages the graph-guided state space features and jointly considers global and local features to improve performance. Experiments on several benchmarks and in-the-wild tests demonstrate that Hamba significantly outperforms existing SOTAs, achieving the PA-MPVPE of 5.3mm and F@15mm of 0.992 on FreiHAND. At the time of this paper's acceptance, Hamba holds the top position, Rank 1 in two Competition Leaderboards on 3D hand reconstruction. Project Website: https://humansensinglab.github.io/Hamba/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
472,665
1904.02660
Recommendations for Datasets for Source Code Summarization
Source Code Summarization is the task of writing short, natural language descriptions of source code. The main use for these descriptions is in software documentation e.g. the one-sentence Java method descriptions in JavaDocs. Code summarization is rapidly becoming a popular research problem, but progress is restrained due to a lack of suitable datasets. In addition, a lack of community standards for creating datasets leads to confusing and unreproducible research results -- we observe swings in performance of more than 33% due only to changes in dataset design. In this paper, we make recommendations for these standards from experimental results. We release a dataset based on prior work of over 2.1m pairs of Java methods and one sentence method descriptions from over 28k Java projects. We describe the dataset and point out key differences from natural language data, to guide and support future researchers.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
126,488
2201.01677
A Versatile SPH Modeling Framework for Coupled Microfluid-Powder Dynamics in Additive Manufacturing: Binder Jetting, Material Jetting, Directed Energy Deposition and Powder Bed Fusion
Many additive manufacturing (AM) technologies rely on powder feedstock, which is fused to form the final part either by melting or by chemical binding with subsequent sintering. In both cases, process stability and resulting part quality depend on dynamic interactions between powder particles and a fluid phase, i.e., molten metal or liquid binder. The present work proposes a versatile computational modeling framework for simulating such coupled microfluid-powder dynamics problems involving thermo-capillary flow and reversible phase transitions. In particular, a liquid and a gas phase are interacting with a solid phase that consists of a substrate and mobile powder particles while simultaneously considering temperature-dependent surface tension and wetting effects. In case of laser-metal interactions, the effect of rapid evaporation is incorporated through additional mechanical and thermal interface fluxes. All phase domains are spatially discretized using smoothed particle hydrodynamics. The method's Lagrangian nature is beneficial in the context of dynamically changing interface topologies. Special care is taken in the formulation of phase transitions, which is crucial for the robustness of the computational scheme. While the underlying model equations are of a very general nature, the proposed framework is especially suitable for the mesoscale modeling of various AM processes. To this end, the generality and robustness of the computational modeling framework is demonstrated by several application-motivated examples representing the specific AM processes binder jetting, material jetting, directed energy deposition, and powder bed fusion. Among others, it is shown how the dynamic impact of droplets in binder jetting or the evaporation-induced recoil pressure in powder bed fusion leads to powder motion, distortion of the powder packing structure, and powder particle ejection.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
274,317
2404.14646
Exploring and Unleashing the Power of Large Language Models in Automated Code Translation
Code translation tools (transpilers) are developed for automatic source-to-source translation. Although learning-based transpilers have shown impressive enhancements over rule-based counterparts, owing to their task-specific pre-training on extensive monolingual corpora, their current performance still remains unsatisfactory for practical deployment, and the associated training resources are also prohibitively expensive. LLMs pre-trained on huge amounts of human-written code/text have shown remarkable performance in many code intelligence tasks due to their powerful generality, even without task-specific training. Thus, LLMs can potentially circumvent the above limitations, but they have not been exhaustively explored yet. This paper investigates diverse LLMs and learning-based transpilers for automated code translation tasks, finding that: although certain LLMs have outperformed current transpilers, they still have some accuracy issues, where most of the failures are induced by a lack of comprehension of source programs, missing clear instructions on I/O types in translation, and ignoring discrepancies between source and target programs. Enlightened by the above findings, we further propose UniTrans, a Unified code Translation framework, applicable to various LLMs, for unleashing their power in this field. Specifically, UniTrans first crafts a series of test cases for target programs with the assistance of source programs. Next, it harnesses the above auto-generated test cases to augment the code translation and then evaluate their correctness via execution. Afterward, UniTrans further (iteratively) repairs incorrectly translated programs prompted by test case execution results. Extensive experiments are conducted on six settings of translation datasets between Python, Java, and C++. Three recent LLMs of diverse sizes are tested with UniTrans, and all achieve substantial improvements.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
448,752
1904.12144
IsMo-GAN: Adversarial Learning for Monocular Non-Rigid 3D Reconstruction
The majority of the existing methods for non-rigid 3D surface regression from monocular 2D images require an object template or point tracks over multiple frames as an input, and are still far from real-time processing rates. In this work, we present the Isometry-Aware Monocular Generative Adversarial Network (IsMo-GAN) - an approach for direct 3D reconstruction from a single image, trained for the deformation model in an adversarial manner on a light-weight synthetic dataset. IsMo-GAN reconstructs surfaces from real images under varying illumination, camera poses, textures and shading at over 250 Hz. In multiple experiments, it consistently outperforms several approaches in the reconstruction accuracy, runtime, generalisation to unknown surfaces and robustness to occlusions. In comparison to the state-of-the-art, we reduce the reconstruction error by 10-30% including the textureless case and our surfaces evince fewer artefacts qualitatively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
129,025
2101.06917
Detection of Insider Attacks in Distributed Projected Subgradient Algorithms
The gossip-based distributed algorithms are widely used to solve decentralized optimization problems in various multi-agent applications, while they are generally vulnerable to data injection attacks by internal malicious agents, as each agent locally estimates its descent direction without authorized supervision. In this work, we explore the application of artificial intelligence (AI) technologies to detect internal attacks. We show that a general neural network is particularly suitable for detecting and localizing the malicious agents, as it can effectively explore the nonlinear relationships underlying the collected data. Moreover, we propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol, to facilitate training our neural network models by gossip exchanges. This advanced approach is expected to make our model more robust to challenges with insufficient training data, or mismatched test data. In our simulations, a least-squares problem is considered to verify the feasibility and effectiveness of AI-based methods. Simulation results demonstrate that the proposed AI-based methods are beneficial to improve the performance of detecting and localizing malicious agents over score-based methods, and the peer-to-peer neural network model is indeed robust to target issues.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
215,885
1509.06750
3D weak lensing with spin wavelets on the ball
We construct the spin flaglet transform, a wavelet transform to analyze spin signals in three dimensions. Spin flaglets can probe signal content localized simultaneously in space and frequency and, moreover, are separable so that their angular and radial properties can be controlled independently. They are particularly suited to the analysis of cosmological observations such as the weak gravitational lensing of galaxies. Such observations have a unique 3D geometrical setting since they are natively made on the sky, have spin angular symmetries, and are extended in the radial direction by additional distance or redshift information. Flaglets are constructed in the harmonic space defined by the Fourier-Laguerre transform, previously defined for scalar functions and extended here to signals with spin symmetries. Thanks to various sampling theorems, both the Fourier-Laguerre and flaglet transforms are theoretically exact when applied to bandlimited signals. In other words, in numerical computations the only loss of information is due to the finite representation of floating point numbers. We develop a 3D framework relating the weak lensing power spectrum to covariances of flaglet coefficients. We suggest that the resulting novel flaglet weak lensing estimator offers a powerful alternative to common 2D and 3D approaches to accurately capture cosmological information. While standard weak lensing analyses focus on either real or harmonic space representations (i.e., correlation functions or Fourier-Bessel power spectra, respectively), a wavelet approach inherits the advantages of both techniques, where both complicated sky coverage and uncertainties associated with the physical modeling of small scales can be handled effectively. Our codes to compute the Fourier-Laguerre and flaglet transforms are made publicly available.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,186
2208.00800
GANDSE: Generative Adversarial Network based Design Space Exploration for Neural Network Accelerator Design
With the popularity of deep learning, hardware implementation platforms for deep learning have received increasing interest. Unlike general-purpose devices, e.g., CPUs or GPUs, where deep learning algorithms are executed at the software level, neural network hardware accelerators execute the algorithms directly to achieve both higher energy efficiency and better performance. However, as deep learning algorithms evolve frequently, the engineering effort and cost of designing hardware accelerators have greatly increased. To improve design quality while saving cost, design automation for neural network accelerators was proposed, where design space exploration algorithms automatically search for an optimized accelerator design within a design space. Nevertheless, the increasing complexity of neural network accelerators increases the dimensionality of the design space. As a result, previous design space exploration algorithms are no longer effective enough to find an optimized design. In this work, we propose a neural network accelerator design automation framework named GANDSE, in which we rethink the problem of design space exploration and propose a novel approach based on the generative adversarial network (GAN) to support optimized exploration of high-dimensional, large design spaces. The experiments show that GANDSE is able to find more optimized designs in negligible time compared with approaches including the multilayer perceptron and deep reinforcement learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
310,976
2402.02513
Early stopping by correlating online indicators in neural networks
In order to minimize the generalization error in neural networks, a novel technique to identify overfitting phenomena when training the learner is formally introduced. This enables support of a reliable and trustworthy early stopping condition, thus improving the predictive power of that type of modeling. Our proposal exploits the correlation over time in a collection of online indicators, namely characteristic functions for indicating if a set of hypotheses are met, associated with a range of independent stopping conditions built from a canary judgment to evaluate the presence of overfitting. That way, we provide a formal basis for decision making in terms of interrupting the learning process. As opposed to previous approaches focused on a single criterion, we take advantage of subsidiarities between independent assessments, thus seeking both a wider operating range and greater diagnostic reliability. With a view to illustrating the effectiveness of the halting condition described, we choose to work in the sphere of natural language processing, an operational continuum increasingly based on machine learning. As a case study, we focus on parser generation, one of the most demanding and complex tasks in the domain. The selection of cross-validation as a canary function enables an actual comparison with the most representative early stopping conditions based on overfitting identification, pointing to a promising start toward an optimal bias and variance control.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
true
false
false
426,595
1202.3185
Improving News Ranking by Community Tweets
Users frequently express their information needs by means of short and general queries that are difficult for ranking algorithms to interpret correctly. However, users' social contexts can offer important additional information about their information needs which can be leveraged by ranking algorithms to provide augmented, personalized results. Existing methods mostly rely on users' individual behavioral data such as clickstream and log data, but as a result suffer from data sparsity and privacy issues. Here, we propose a Community Tweets Voting Model (CTVM) to re-rank Google and Yahoo news search results on the basis of open, large-scale Twitter community data. Experimental results show that CTVM outperforms baseline rankings from Google and Yahoo for certain online communities. We propose an application scenario of CTVM and provide an agenda for further research.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
14,334
1412.7713
Fronthaul Compression and Precoding Design for C-RANs over Ergodic Fading Channel
This work investigates the joint design of fronthaul compression and precoding for the downlink of Cloud Radio Access Networks (C-RANs). In a C-RAN, a central unit (CU) performs the baseband processing for a cluster of radio units (RUs) that receive compressed baseband samples from the CU through low-latency fronthaul links. Most previous works on the design of fronthaul compression and precoding assume constant channels and instantaneous channel state information (CSI) at the CU. This work, in contrast, concentrates on a more practical scenario with block-ergodic channels and considers either instantaneous or stochastic CSI at the CU. Moreover, the analysis encompasses both the Compression-After-Precoding (CAP) and the Compression-Before-Precoding (CBP) schemes. With the CAP approach, which is the standard C-RAN solution, the CU performs channel coding and precoding, and then compresses and forwards the resulting baseband signals on the fronthaul links to the RUs. With the CBP scheme, instead, the CU does not perform precoding but rather separately forwards the information messages of a subset of mobile stations (MSs), along with the compressed precoding matrices, to each RU, which then performs precoding. Optimization algorithms for fronthaul compression and precoding under both CAP and CBP are proposed, based on a stochastic successive upper-bound minimization approach. Via numerical results, the relative merits of the two strategies under either instantaneous or stochastic CSI are evaluated as a function of system parameters such as fronthaul capacity and channel coherence time.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
38,833
2201.06592
Proactive Query Expansion for Streaming Data Using External Source
Query expansion is the process of reformulating the original query by adding relevant words. Choosing which terms to add in order to improve the performance of query expansion methods, or to enhance the quality of the retrieved results, is an important aspect of any information retrieval system. Adding words that positively impact the quality of the search query, or that are informative enough to help return relevant documents covering a certain topic, can improve the efficiency of the information retrieval system. Typically, query expansion techniques add or substitute words in a given search query to collect relevant data. In this paper, we design and implement an automated query expansion pipeline. We outline several tools using different methods to expand the query. Our methods depend on targeting emergent events in streaming data over time and finding the hidden topics in targeted documents using probabilistic topic models. We employ Dynamic Eigenvector Centrality to trigger the emergent events, and Latent Dirichlet Allocation to discover the topics. Also, we use an external data source as a secondary stream to supplement the primary stream with relevant words and expand the query using words from both the primary and secondary streams. An experimental study is performed on Twitter data (the primary stream) related to the events that happened during the protests in Baltimore in 2015. The quality of the retrieved results was measured using quality indicators of the streaming data: tweet count, hashtag count, and hashtag clustering.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
275,764
1811.00052
Some New Layer Architectures for Graph CNN
While convolutional neural networks (CNNs) have recently made great strides in supervised classification of data structured on a grid (e.g. images composed of pixel grids), in several interesting datasets, the relations between features can be better represented as a general graph instead of a regular grid. Although recent algorithms that adapt CNNs to graphs have shown promising results, they mostly neglect learning explicit operations for edge features while focusing on vertex features alone. We propose new formulations for convolutional, pooling, and fully connected layers for neural networks that make more comprehensive use of the information available in multi-dimensional graphs. Using these layers led to an improvement in classification accuracy over the state-of-the-art methods on benchmark graph datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,985
2409.09429
Real-Time Adaptive Industrial Robots: Improving Safety And Comfort In Human-Robot Collaboration
Industrial robots are becoming increasingly prevalent, resulting in a growing need for intuitive, comfortable human-robot collaboration. We present a user-aware robotic system that adapts to operator behavior in real time while non-intrusively monitoring physiological signals to create a more responsive and empathetic environment. Our prototype dynamically adjusts robot speed and movement patterns while measuring operator pupil dilation and proximity. Our user study compares this adaptive system to a non-adaptive counterpart and demonstrates that the adaptive system significantly reduces both perceived and physiologically measured cognitive load while enhancing usability. Participants reported increased feelings of comfort, safety, and trust, and a stronger sense of collaboration when working with the adaptive robot. This highlights the potential of integrating real-time physiological data into human-robot interaction paradigms. This novel approach creates more intuitive and collaborative industrial environments where robots effectively 'read' and respond to human cognitive states, and we release all data and code for future use.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
488,315
2401.02718
Calibration Attacks: A Comprehensive Study of Adversarial Attacks on Model Confidence
In this work, we highlight and perform a comprehensive study of calibration attacks, a form of adversarial attack that aims to trap victim models into being heavily miscalibrated without altering their predicted labels, hence endangering the trustworthiness of the models and any follow-up decision making based on their confidence. We propose four typical forms of calibration attacks: underconfidence, overconfidence, maximum miscalibration, and random confidence attacks, conducted in both black-box and white-box setups. We demonstrate that the attacks are highly effective on both convolutional and attention-based models: with a small number of queries, they seriously skew confidence without changing the predictive performance. Given the potential danger, we further investigate the effectiveness of a wide range of adversarial defence and recalibration methods, including our proposed defences specifically designed for calibration attacks, to mitigate the harm. From the ECE and KS scores, we observe that there are still significant limitations in handling calibration attacks. To the best of our knowledge, this is the first dedicated study that provides a comprehensive investigation of calibration-focused attacks. We hope this study helps attract more attention to these types of attacks and thus hampers their potentially serious damage. To this end, this work also provides detailed analyses to understand the characteristics of the attacks. Our code is available at https://github.com/PhenetOs/CalibrationAttack
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
419,822
2309.12761
S3TC: Spiking Separated Spatial and Temporal Convolutions with Unsupervised STDP-based Learning for Action Recognition
Video analysis is a major computer vision task that has received a lot of attention in recent years. The current state-of-the-art performance for video analysis is achieved with Deep Neural Networks (DNNs), which have high computational costs and need large amounts of labeled data for training. Spiking Neural Networks (SNNs) have significantly lower computational costs (thousands of times) than regular non-spiking networks when implemented on neuromorphic hardware. They have been used for video analysis with methods like 3D Convolutional Spiking Neural Networks (3D CSNNs). However, these networks have a significantly larger number of parameters compared with 2D CSNNs. This not only increases the computational costs but also makes these networks more difficult to implement on neuromorphic hardware. In this work, we use CSNNs trained in an unsupervised manner with the Spike Timing-Dependent Plasticity (STDP) rule, and we introduce, for the first time, Spiking Separated Spatial and Temporal Convolutions (S3TCs) to reduce the number of parameters required for video analysis. This unsupervised learning has the advantage of not needing large amounts of labeled data for training. Factorizing a single spatio-temporal spiking convolution into a spatial and a temporal spiking convolution decreases the number of parameters of the network. We test our network on the KTH, Weizmann, and IXMAS datasets, and we show that S3TCs successfully extract spatio-temporal information from videos, while increasing the output spiking activity and outperforming spiking 3D convolutions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
393,925
1801.06161
Topic Lifecycle on Social Networks: Analyzing the Effects of Semantic Continuity and Social Communities
Topic lifecycle analysis on Twitter, a branch of study that investigates Twitter topics from their birth through lifecycle to death, has gained immense mainstream research popularity. In the literature, topics are often treated as one of (a) hashtags (independent from other hashtags), (b) a burst of keywords in a short time span or (c) a latent concept space captured by advanced text analysis methodologies, such as Latent Dirichlet Allocation (LDA). The first two approaches are not capable of recognizing topics where different users use different hashtags to express the same concept (semantically related), while the third approach misses out the user's explicit intent expressed via hashtags. In our work, we use a word embedding based approach to cluster different hashtags together, along with the temporal concurrency of the hashtag usages, thus forming topics (semantically and temporally related groups of hashtags). We present a novel analysis of topic lifecycles with respect to communities. We characterize the participation of social communities in the topic clusters, and analyze the lifecycle of topic clusters with respect to such participation. We derive first-of-its-kind novel insights with respect to the complex evolution of topics over communities and time: the temporal morphing of topics over hashtags within communities, how hashtags die in some communities but morph into other hashtags in other communities (that is, it is a community-level phenomenon), and how specific communities adapt to specific hashtags. Our work is fundamental in the space of topic lifecycle modeling and understanding in communities: it redefines our understanding of topic lifecycles and shows that the social boundaries of topic lifecycles are deeply ingrained with community behavior.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
88,565
2305.07079
The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation
Dataset obfuscation refers to techniques in which random noise is added to the entries of a given dataset, prior to its public release, to protect against leakage of private information. In this work, dataset obfuscation under two objectives is considered: i) rank-preservation: to preserve the row ordering in the obfuscated dataset induced by a given rank function, and ii) anonymity: to protect user anonymity under fingerprinting attacks. The first objective, rank-preservation, is of interest in applications such as the design of search engines and recommendation systems, feature matching, and social network analysis. Fingerprinting attacks, considered in evaluating the anonymity objective, are privacy attacks where an attacker constructs a fingerprint of a victim based on its observed activities, such as online web activities, and compares this fingerprint with information extracted from a publicly released obfuscated dataset to identify the victim. By evaluating the performance limits of a class of obfuscation mechanisms over asymptotically large datasets, a fundamental trade-off is quantified between rank-preservation and user anonymity. Single-letter obfuscation mechanisms are considered, where each entry in the dataset is perturbed by independent noise, and their fundamental performance limits are characterized by leveraging large deviation techniques. The optimal obfuscating test-channel, optimizing the privacy-utility tradeoff, is characterized in the form of a convex optimization problem which can be solved efficiently. Numerical simulations of various scenarios are provided to verify the theoretical derivations.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
true
false
363,763
2211.03115
Data-driven Emergency Frequency Control for Multi-Infeed Hybrid AC-DC System
With the continuous development of large-scale complex hybrid AC-DC grids, the fast adjustability of HVDC systems is required by the grid to provide frequency regulation services. This paper develops a fully data-driven linear quadratic regulator (LQR) for the HVDC to provide temporary frequency support. The main technical challenge is the complexity and nonlinearity of multi-infeed hybrid AC-DC (MIDC) system dynamics, which make the LQR intractable. Based on Koopman operator (KO) theory, a Koopman eigenpair construction method is developed to fit a globally linear dynamic model of MIDC systems. Once a globally linear representation of the uncontrolled system dynamics is obtained offline, the control term is constituted by the gradient of the identified eigenfunctions and the control matrix $\mathbf{B}$. In case $\mathbf{B}$ is unknown, we propose a method to identify it based on the verified Koopman eigenfunctions. The active power reference is optimized online for the LCC-HVDC in a moving horizon fashion to provide frequency support, with only local frequency and transmission power measurements. The robustness of the proposed control method against approximation errors of the linear representation in eigenfunction coordinates is analyzed. Simulation results show the effectiveness, robustness and adaptability of the proposed emergency control strategy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
328,831
1702.06235
Learning to generate one-sentence biographies from Wikidata
We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
68,558
2401.13481
How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas: Evidence From a Large, Dynamic Experiment
Exposure to large language model output is rapidly increasing. How will seeing AI-generated ideas affect human ideas? We conducted an experiment (800+ participants, 40+ countries) where participants viewed creative ideas that were from ChatGPT or prior experimental participants and then brainstormed their own idea. We varied the number of AI-generated examples (none, low, or high exposure) and if the examples were labeled as 'AI' (disclosure). Our dynamic experiment design -- ideas from prior participants in an experimental condition are used as stimuli for future participants in the same experimental condition -- speaks to the interdependent process of cultural creation: creative ideas are built upon prior ideas. Hence, we capture the compounding effects of having LLMs 'in the culture loop'. We find that high AI exposure (but not low AI exposure) did not affect the creativity of individual ideas but did increase the average amount and rate of change of collective idea diversity. AI made ideas different, not better. There were no main effects of disclosure. We also found that self-reported creative people were less influenced by knowing an idea was from AI and that participants may knowingly adopt AI ideas when the task is difficult. Our findings suggest that introducing AI ideas may increase collective diversity but not individual creativity.
true
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
423,742
2411.06175
Clustering Algorithms and RAG Enhancing Semi-Supervised Text Classification with Large LLMs
This paper proposes a Clustering, Labeling, then Augmenting framework that significantly enhances performance in Semi-Supervised Text Classification (SSTC) tasks, effectively addressing the challenge of vast datasets with limited labeled examples. Unlike traditional SSTC approaches that rely on a predefined small set of labeled data to generate pseudo-labels for the unlabeled data, this framework innovatively employs clustering to select representative "landmarks" for labeling. These landmarks subsequently act as intermediaries in an ensemble of augmentation techniques, including Retrieval-Augmented Generation (RAG), Large Language Model (LLM)-based rewriting, and synonym substitution, to generate synthetic labeled data without making pseudo-labels for the unlabeled data. Empirical results show that even in complex text document classification scenarios involving over 100 categories, our method achieves state-of-the-art accuracies of 95.41% on the Reuters dataset and 82.43% on the Web of Science dataset. Our approach significantly reduces the reliance on human labeling efforts and the associated expenses, while simultaneously ensuring high data quality and minimizing privacy risks. The fine-tuning results further show the efficiency of fine-tuning LLMs for text classification tasks, highlighting a robust solution for leveraging limited labeled data.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
506,998
2009.04185
Small-floating Target Detection in Sea Clutter via Visual Feature Classifying in the Time-Doppler Spectra
It is challenging to detect small floating objects in sea clutter with a surface radar. In this paper, we observe that the backscatters from the target break the continuity of the underlying motion of the sea surface in the time-Doppler spectra (TDS) images. Following this visual clue, we exploit the local binary pattern (LBP) to measure the variations of texture in the TDS images. It is shown that the radar returns containing a target and those containing only clutter are separable in the LBP feature space. An unsupervised one-class support vector machine (SVM) is then utilized to detect deviations from the LBP histogram of the clutter. The outlier of the detector is classified as the target. On the real-life IPIX radar data sets, our visual-feature-based detector shows a favorable detection rate compared to three other existing approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
194,984
2306.07176
Slicing Unbalanced Optimal Transport
Optimal transport (OT) is a powerful framework to compare probability measures, a fundamental task in many statistical and machine learning problems. Substantial advances have been made in designing OT variants which are either computationally and statistically more efficient or robust. Among them, sliced OT distances have been extensively used to mitigate optimal transport's cubic algorithmic complexity and curse of dimensionality. In parallel, unbalanced OT was designed to allow comparisons of more general positive measures, while being more robust to outliers. In this paper, we bridge the gap between those two concepts and develop a general framework for efficiently comparing positive measures. We notably formulate two different versions of sliced unbalanced OT, and study the associated topology and statistical properties. We then develop a GPU-friendly Frank-Wolfe like algorithm to compute the corresponding loss functions, and show that the resulting methodology is modular as it encompasses and extends prior related work. We finally conduct an empirical analysis of our loss functions and methodology on both synthetic and real datasets, to illustrate their computational efficiency, relevance and applicability to real-world scenarios including geophysical data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
372,909
1401.3882
Probabilistic Relational Planning with First Order Decision Diagrams
Dynamic programming algorithms have been successfully applied to propositional stochastic planning problems by using compact representations, in particular algebraic decision diagrams, to capture domain dynamics and value functions. Work on symbolic dynamic programming lifted these ideas to first order logic using several representation schemes. Recent work introduced a first order variant of decision diagrams (FODD) and developed a value iteration algorithm for this representation. This paper develops several improvements to the FODD algorithm that make the approach practical. These include, new reduction operators that decrease the size of the representation, several speedup techniques, and techniques for value approximation. Incorporating these, the paper presents a planning system, FODD-Planner, for solving relational stochastic planning problems. The system is evaluated on several domains, including problems from the recent international planning competition, and shows competitive performance with top ranking systems. This is the first demonstration of feasibility of this approach and it shows that abstraction through compact representation is a promising approach to stochastic planning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
29,995
2407.14720
Downstream-Pretext Domain Knowledge Traceback for Active Learning
Active learning (AL) is designed to construct a high-quality labeled dataset by iteratively selecting the most informative samples. Such sampling heavily relies on data representation, while recently pre-training has become popular for robust feature learning. However, as pre-training utilizes low-level pretext tasks that lack annotation, directly using pre-trained representations in AL is inadequate for determining the sampling score. To address this problem, we propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance to select diverse and instructive samples near the decision boundary. DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator. The diversity indicator constructs two feature spaces based on the pre-training pretext model and the downstream knowledge from annotation, by which it locates the neighbors of unlabeled data from the downstream space in the pretext space to explore the interaction of samples. With this mechanism, DOKT unifies the data relations of low-level and high-level representations to estimate traceback diversity. Next, in the uncertainty estimator, domain mixing is designed to apply perceptual perturbations to unlabeled samples with similar visual patches in the pretext space. Then the divergence of the perturbed samples is measured to estimate the domain uncertainty. As a result, DOKT selects the most diverse and important samples based on these two modules. The experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods and generalizes well to various application scenarios such as semantic segmentation and image captioning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
474,876
2407.06390
JANET: Joint Adaptive predictioN-region Estimation for Time-series
Conformal prediction provides machine learning models with prediction sets that offer theoretical guarantees, but the underlying assumption of exchangeability limits its applicability to time series data. Furthermore, existing approaches struggle to handle multi-step ahead prediction tasks, where uncertainty estimates across multiple future time points are crucial. We propose JANET (Joint Adaptive predictioN-region Estimation for Time-series), a novel framework for constructing conformal prediction regions that are valid for both univariate and multivariate time series. JANET generalises the inductive conformal framework and efficiently produces joint prediction regions with controlled K-familywise error rates, enabling flexible adaptation to specific application needs. Our empirical evaluation demonstrates JANET's superior performance in multi-step prediction tasks across diverse time series datasets, highlighting its potential for reliable and interpretable uncertainty quantification in sequential data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
471,378
2402.13891
Overcoming Saturation in Density Ratio Estimation by Iterated Regularization
Estimating the ratio of two probability densities from finitely many samples is a central task in machine learning and statistics. In this work, we show that a large class of kernel methods for density ratio estimation suffers from error saturation, which prevents algorithms from achieving fast error convergence rates on highly regular learning problems. To resolve saturation, we introduce iterated regularization in density ratio estimation to achieve fast error rates. Our methods outperform their non-iteratively regularized versions on benchmarks for density ratio estimation as well as on large-scale evaluations for importance-weighted ensembling of deep unsupervised domain adaptation models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
431,450
2409.12202
ScaleFlow++: Robust and Accurate Estimation of 3D Motion from Video
Perceiving and understanding 3D motion is a core technology in fields such as autonomous driving, robotics, and motion prediction. This paper proposes a 3D motion perception method called ScaleFlow++ that is easy to generalize. With just a pair of RGB images, ScaleFlow++ can robustly estimate optical flow and motion-in-depth (MID). Most existing methods directly regress MID from two RGB frames or optical flow, resulting in inaccurate and unstable results. Our key insight is cross-scale matching, which extracts deep motion clues by matching objects in pairs of images at different scales. Unlike previous methods, ScaleFlow++ integrates optical flow and MID estimation into a unified architecture, estimating optical flow and MID end-to-end based on feature matching. Moreover, we also propose modules such as a global initialization network, a global iterative optimizer, and a hybrid training pipeline to integrate global motion information, reduce the number of iterations, and prevent overfitting during training. On KITTI, ScaleFlow++ achieved the best monocular scene flow estimation performance, reducing SF-all from 6.21 to 5.79. The evaluation of MID even surpasses RGBD-based methods. In addition, ScaleFlow++ has achieved strong zero-shot generalization performance in both rigid and nonrigid scenes. Code is available at \url{https://github.com/HanLingsgjk/CSCV}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
489,488
2412.19616
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training
Large Language Models (LLMs) have shown remarkable performance across various tasks, but the escalating demands on computational resources pose significant challenges, particularly in the extensive utilization of full fine-tuning for downstream tasks. To address this, parameter-efficient fine-tuning (PEFT) methods have been developed, but they often underperform compared to full fine-tuning and struggle with memory efficiency. In this work, we introduce Gradient Weight-Normalized Low-Rank Projection (GradNormLoRP), a novel approach that enhances both parameter and memory efficiency while maintaining comparable performance to full fine-tuning. GradNormLoRP normalizes the weight matrix to improve gradient conditioning, facilitating better convergence during optimization. Additionally, it applies low-rank approximations to the weight and gradient matrices, significantly reducing memory usage during training. Extensive experiments demonstrate that our 8-bit GradNormLoRP reduces optimizer memory usage by up to 89.5% and enables the pre-training of large LLMs, such as LLaMA 7B, on consumer-level GPUs like the NVIDIA RTX 4090, without additional inference costs. Moreover, GradNormLoRP outperforms existing low-rank methods in fine-tuning tasks. For instance, when fine-tuning the RoBERTa model on all GLUE tasks with a rank of 8, GradNormLoRP achieves an average score of 80.65, surpassing LoRA's score of 79.23. These results underscore GradNormLoRP as a promising alternative for efficient LLM pre-training and fine-tuning. Source code: https://github.com/Jhhuangkay/Gradient-Weight-normalized-Low-rank-Projection-for-Efficient-LLM-Training
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
520,914
1412.2958
Physical forces between humans and how humans attract and repel each other based on their social interactions in an online world
Physical interactions between particles are the result of the exchange of gauge bosons. Human interactions are mediated by the exchange of messages, goods, money, promises, hostilities, etc. While in the physical world interactions and their associated forces have immediate dynamical consequences (Newton's law) the situation is not clear for human interactions. Here we study the acceleration between humans who interact through the exchange of messages, goods and hostilities in a massive multiplayer online game. For this game we have complete information about all interactions (exchange events) between about 1/2 million players, and about their trajectories (positions) in a metric space of the game universe at any point in time. We derive the interaction potentials for communication, trade and attacks and show that they are harmonic in nature. Individuals who exchange messages and trade goods generally attract each other and start to separate immediately after exchange events stop. The interaction potential for attacks mirrors the usual "hit-and-run" tactics of aggressive players. By measuring interaction intensities as a function of distance, velocity and acceleration, we show that "forces" between players are directly related to the number of exchange events. The power-law of the likelihood for interactions vs. distance is in accordance with previous real world empirical work. We show that the obtained potentials can be understood with a simple model assuming an exchange-driven force in combination with a distance dependent exchange rate.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
38,247
2312.01938
DSText V2: A Comprehensive Video Text Spotting Dataset for Dense and Small Text
Recently, video text detection, tracking, and recognition in natural scenes have become very popular in the computer vision community. However, most existing algorithms and benchmarks focus on common text cases (e.g., normal size, density) and single scenarios, while ignoring extreme video text challenges, i.e., dense and small text in various scenarios. In this paper, we establish a video text reading benchmark, named DSText V2, which focuses on dense and small text reading challenges in videos with various scenarios. Compared with previous datasets, the proposed dataset mainly includes three new challenges: 1) Dense video texts, a new challenge for video text spotters to track and read. 2) High-proportioned small texts, coupled with the blurriness and distortion in the video, which will bring further challenges. 3) Various new scenarios, e.g., Game, Sports, etc. The proposed DSText V2 includes 140 video clips from 7 open scenarios, supporting three tasks, i.e., video text detection (Task 1), video text tracking (Task 2), and end-to-end video text spotting (Task 3). In this article, we describe detailed statistical information of the dataset, tasks, evaluation protocols, and the results summaries. Most importantly, a thorough investigation and analysis targeting three unique challenges derived from our dataset are provided, aiming to provide new insights. Moreover, we hope the benchmark will promote video text research in the community. DSText V2 is built upon DSText V1, which was previously introduced to organize the ICDAR 2023 competition for dense and small video text.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
412,635
2102.12685
A Local Method for Identifying Causal Relations under Markov Equivalence
Causality is important for designing interpretable and robust methods in artificial intelligence research. We propose a local approach to identify whether a variable is a cause of a given target under the framework of causal graphical models of directed acyclic graphs (DAGs). In general, the causal relation between two variables may not be identifiable from observational data, as many causal DAGs encoding different causal relations are Markov equivalent. In this paper, we first introduce a sufficient and necessary graphical condition to check the existence of a causal path from a variable to a target in every Markov equivalent DAG. Next, we provide local criteria for identifying whether a variable is a cause/non-cause of a target based only on the local structure instead of the entire graph. Finally, we propose a local learning algorithm for this causal query via learning the local structure of the variable and some additional statistical independence tests related to the target. Simulation studies show that our local algorithm is efficient and effective, compared with other state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
221,816
2006.03548
Graphon Neural Networks and the Transferability of Graph Neural Networks
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data. These graph convolutions combine information from adjacent nodes using coefficients that are shared across all nodes. Since these coefficients are shared and do not depend on the graph, one can envision using the same coefficients to define a GNN on another graph. This motivates analyzing the transferability of GNNs across graphs. In this paper we introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon-NN. This bound vanishes with growing number of nodes if the graph convolutional filters are bandlimited in the graph spectral domain. This result establishes a tradeoff between discriminability and transferability of GNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
180,347
2402.05563
Neural Multigrid Architectures
We propose a convenient matrix-free neural architecture for the multigrid method. The architecture is simple enough to be implemented in less than fifty lines of code, yet it encompasses a large number of distinct multigrid solvers. We argue that a fixed neural network without dense layers cannot realize an efficient iterative method. Because of that, standard training protocols do not lead to competitive solvers. To overcome this difficulty, we use parameter sharing and serialization of layers. The resulting network can be trained on linear problems with thousands of unknowns and retains its efficiency on problems with millions of unknowns. From the point of view of numerical linear algebra, the network's training corresponds to finding optimal smoothers for the geometric multigrid method. We demonstrate our approach on a few second-order elliptic equations. For the tested linear systems, we obtain a spectral radius of the error propagation matrix that is two to five times smaller compared to a basic linear multigrid with a Jacobi smoother.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
427,908
2303.15245
Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks
Convolutional neural networks (CNNs) are widely used in various applications due to their effectiveness in extracting features from data. However, the performance of a CNN heavily depends on its architecture and training process. In this study, we propose a layer-to-layer training method and compare its performance with the conventional training method. In the layer-to-layer training approach, we treat a portion of the early layers as a student network and the later layers as a teacher network. During each training step, we incrementally train the student network to learn from the output of the teacher network, and vice versa. We evaluate this approach on VGG16, ResNext, and DenseNet networks without pre-trained ImageNet weights and a regular CNN model. Our experiments show that the layer-to-layer training method outperforms the conventional training method for both models. Specifically, we achieve higher accuracy on the test set for the VGG16, ResNext, and DenseNet networks and the CNN model using layer-to-layer training compared to the conventional training method. Overall, our study highlights the importance of layer-wise training in CNNs and suggests that layer-to-layer training can be a promising approach for improving the accuracy of CNNs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
354,423