id
stringlengths
9
16
title
stringlengths
4
278
abstract
stringlengths
3
4.08k
cs.HC
bool
2 classes
cs.CE
bool
2 classes
cs.SD
bool
2 classes
cs.SI
bool
2 classes
cs.AI
bool
2 classes
cs.IR
bool
2 classes
cs.LG
bool
2 classes
cs.RO
bool
2 classes
cs.CL
bool
2 classes
cs.IT
bool
2 classes
cs.SY
bool
2 classes
cs.CV
bool
2 classes
cs.CR
bool
2 classes
cs.CY
bool
2 classes
cs.MA
bool
2 classes
cs.NE
bool
2 classes
cs.DB
bool
2 classes
Other
bool
2 classes
__index_level_0__
int64
0
541k
2412.14650
Permutation recovery of spikes in noisy high-dimensional tensor estimation
We study the dynamics of gradient flow in high dimensions for the multi-spiked tensor problem, where the goal is to estimate $r$ unknown signal vectors (spikes) from noisy Gaussian tensor observations. Specifically, we analyze the maximum likelihood estimation procedure, which involves optimizing a highly nonconvex random function. We determine the sample complexity required for gradient flow to efficiently recover all spikes, without imposing any assumptions on the separation of the signal-to-noise ratios (SNRs). More precisely, our results provide the sample complexity required to guarantee recovery of the spikes up to a permutation. Our work builds on our companion paper [Ben Arous, Gerbelot, Piccolo 2024], which studies Langevin dynamics and determines the sample complexity and separation conditions for the SNRs necessary for ensuring exact recovery of the spikes (where the recovered permutation matches the identity). During the recovery process, the correlations between the estimators and the hidden vectors increase in a sequential manner. The order in which these correlations become significant depends on their initial values and the corresponding SNRs, which ultimately determines the permutation of the recovered spikes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
518,807
2402.17983
3MVRD: Multimodal Multi-task Multi-teacher Visually-Rich Form Document Understanding
This paper presents a groundbreaking multimodal, multi-task, multi-teacher joint-grained knowledge distillation model for visually-rich form document understanding. The model is designed to leverage insights from both fine-grained and coarse-grained levels by facilitating a nuanced correlation between token and entity representations, addressing the complexities inherent in form documents. Additionally, we introduce new inter-grained and cross-grained loss functions to further refine diverse multi-teacher knowledge distillation transfer process, presenting distribution gaps and a harmonised understanding of form documents. Through a comprehensive evaluation across publicly available form document understanding datasets, our proposed model consistently outperforms existing baselines, showcasing its efficacy in handling the intricate structures and content of visually complex form documents.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
433,230
1206.5260
Reasoning at the Right Time Granularity
Most real-world dynamic systems are composed of different components that often evolve at very different rates. In traditional temporal graphical models, such as dynamic Bayesian networks, time is modeled at a fixed granularity, generally selected based on the rate at which the fastest component evolves. Inference must then be performed at this fastest granularity, potentially at significant computational cost. Continuous Time Bayesian Networks (CTBNs) avoid time-slicing in the representation by modeling the system as evolving continuously over time. The expectation-propagation (EP) inference algorithm of Nodelman et al. (2005) can then vary the inference granularity over time, but the granularity is uniform across all parts of the system, and must be selected in advance. In this paper, we provide a new EP algorithm that utilizes a general cluster graph architecture where clusters contain distributions that can overlap in both space (set of variables) and time. This architecture allows different parts of the system to be modeled at very different time granularities, according to their current rate of evolution. We also provide an information-theoretic criterion for dynamically re-partitioning the clusters during inference to tune the level of approximation to the current rate of evolution. This avoids the need to hand-select the appropriate granularity, and allows the granularity to adapt as information is transmitted across the network. We present experiments demonstrating that this approach can result in significant computational savings.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
16,798
2007.14294
A High Probability Analysis of Adaptive SGD with Momentum
Stochastic Gradient Descent (SGD) and its variants are the most used algorithms in machine learning applications. In particular, SGD with adaptive learning rates and momentum is the industry standard to train deep networks. Despite the enormous success of these methods, our theoretical understanding of these variants in the nonconvex setting is not complete, with most of the results only proving convergence in expectation and with strong assumptions on the stochastic gradients. In this paper, we present a high probability analysis for adaptive and momentum algorithms, under weak assumptions on the function, stochastic gradients, and learning rates. We use it to prove for the first time the convergence of the gradients to zero in high probability in the smooth nonconvex setting for Delayed AdaGrad with momentum.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
189,362
2310.07756
Self-supervised Representation Learning From Random Data Projectors
Self-supervised representation learning~(SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations. While augmentation-based SSRL algorithms push the boundaries of performance in computer vision and natural language processing, they are often not directly applicable to other data modalities, and can conflict with application-specific data augmentation constraints. This paper presents an SSRL approach that can be applied to any data modality and network architecture because it does not rely on augmentations or masking. Specifically, we show that high-quality data representations can be learned by reconstructing random data projections. We evaluate the proposed approach on a wide range of representation learning tasks that span diverse modalities and real-world applications. We show that it outperforms multiple state-of-the-art SSRL baselines. Due to its wide applicability and strong empirical results, we argue that learning from randomness is a fruitful research direction worthy of attention and further study.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,112
2408.12100
A Unified Plug-and-Play Algorithm with Projected Landweber Operator for Split Convex Feasibility Problems
In recent years Plug-and-Play (PnP) methods have achieved state-of-the-art performance in inverse imaging problems by replacing proximal operators with denoisers. Based on the proximal gradient method, some theoretical results of PnP have appeared, where appropriate step size is crucial for convergence analysis. However, in practical applications, applying PnP methods with theoretically guaranteed step sizes is difficult, and these algorithms are limited to Gaussian noise. In this paper,from a perspective of split convex feasibility problems (SCFP), an adaptive PnP algorithm with Projected Landweber Operator (PnP-PLO) is proposed to address these issues. Numerical experiments on image deblurring, super-resolution, and compressed sensing MRI experiments illustrate that PnP-PLO with theoretical guarantees outperforms state-of-the-art methods such as RED and RED-PRO.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
482,583
2309.13287
Approximating Queries on Probabilistic Graphs
Query evaluation over probabilistic databases is notoriously intractable -- not only in combined complexity, but often in data complexity as well. This motivates the study of approximation algorithms, and particularly of combined FPRASes, with runtime polynomial in both the query and instance size. In this paper, we focus on tuple-independent probabilistic databases over binary signatures, i.e., probabilistic graphs, and study when we can devise combined FPRASes for probabilistic query evaluation. We settle the complexity of this problem for a variety of query and instance classes, by proving both approximability results and (conditional) inapproximability results doubled with (unconditional) DNNF provenance circuit size lower bounds. This allows us to deduce many corollaries of possible independent interest. For example, we show how the results of Arenas et al. on counting fixed-length strings accepted by an NFA imply the existence of an FPRAS for the two-terminal network reliability problem on directed acyclic graphs: this was an open problem until now. We also show that one cannot extend a recent result of van Bremen and Meel that gives a combined FPRAS for self-join-free conjunctive queries of bounded hypertree width on probabilistic databases: neither the bounded-hypertree-width condition nor the self-join-freeness hypothesis can be relaxed. We last show how our methods can give insights on the evaluation and approximability of regular path queries (RPQs) on probabilistic graphs in the data complexity perspective, showing in particular that some of them are (conditionally) inapproximable.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
394,142
1512.05457
In a World That Counts: Clustering and Detecting Fake Social Engagement at Scale
How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner --- with the objective of searching for individuals that have similar pattern of behavior as the known seeds --- based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving an manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time. Leas is actively in use at Google, searching for daily deceptive practices on YouTube's engagement graph spanning over a billion users.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
50,229
2102.01938
How good is Good-Turing for Markov samples?
The Good-Turing (GT) estimator for the missing mass (i.e., total probability of missing symbols) in $n$ samples is the number of symbols that appeared exactly once divided by $n$. For i.i.d. samples, the bias and squared-error risk of the GT estimator can be shown to fall as $1/n$ by bounding the expected error uniformly over all symbols. In this work, we study convergence of the GT estimator for missing stationary mass (i.e., total stationary probability of missing symbols) of Markov samples on an alphabet $\mathcal{X}$ with stationary distribution $[\pi_x:x \in \mathcal{X}]$ and transition probability matrix (t.p.m.) $P$. This is an important and interesting problem because GT is widely used in applications with temporal dependencies such as language models assigning probabilities to word sequences, which are modelled as Markov. We show that convergence of GT depends on convergence of $(P^{\sim x})^n$, where $P^{\sim x}$ is $P$ with the $x$-th column zeroed out. This, in turn, depends on the Perron eigenvalue $\lambda^{\sim x}$ of $P^{\sim x}$ and its relationship with $\pi_x$ uniformly over $x$. For randomly generated t.p.ms and t.p.ms derived from New York Times and Charles Dickens corpora, we numerically exhibit such uniform-over-$x$ relationships between $\lambda^{\sim x}$ and $\pi_x$. This supports the observed success of GT in language models and practical text data scenarios. For Markov chains with rank-2, diagonalizable t.p.ms having spectral gap $\beta$, we show minimax rate upper and lower bounds of $1/(n\beta^5)$ and $1/(n\beta)$, respectively, for the estimation of stationary missing mass. This theoretical result extends the $1/n$ minimax rate for i.i.d. or rank-1 t.p.ms to rank-2 Markov, and is a first such minimax rate result for missing mass of Markov samples.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
218,275
2311.18068
ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames. Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality. To overcome the inherent challenges of online methods, we make two main contributions. First, to effectively extract information from the input RGB-D video stream, we jointly estimate geometry and semantic labels per frame in 3D. A key focus of our approach is to reason about semantic entities both in the 2D input and the local 3D domain to leverage differences in spatial context and network architectures. Our method predicts 2D features using an off-the-shelf segmentation network. The extracted 2D features are refined by a lightweight 3D network to enable reasoning about the local 3D structure. Second, to efficiently deal with an infinite stream of input RGB-D frames, a subsequent network serves as a temporal expert predicting the incremental scene updates by leveraging 2D, 3D, and past information in a learned manner. These updates are then integrated into a global scene representation. Using these main contributions, our method can enable scenarios with real-time constraints and can scale to arbitrary scene sizes by processing and updating the scene only in a local region defined by the new measurement. Our experiments demonstrate improved results compared to existing online methods that purely operate in local regions and show that complementary sources of information can boost the performance. We provide a thorough ablation study on the benefits of different architectural as well as algorithmic design decisions. Our method yields competitive results on the popular ScanNet benchmark and SceneNN dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
411,542
2210.11248
OCR-VQGAN: Taming Text-within-Image Generation
Synthetic image generation has recently experienced significant improvements in domains such as natural image or art generation. However, the problem of figure and diagram generation remains unexplored. A challenging aspect of generating figures and diagrams is effectively rendering readable texts within the images. To alleviate this problem, we present OCR-VQGAN, an image encoder, and decoder that leverages OCR pre-trained features to optimize a text perceptual loss, encouraging the architecture to preserve high-fidelity text and diagram structure. To explore our approach, we introduce the Paper2Fig100k dataset, with over 100k images of figures and texts from research papers. The figures show architecture diagrams and methodologies of articles available at arXiv.org from fields like artificial intelligence and computer vision. Figures usually include text and discrete objects, e.g., boxes in a diagram, with lines and arrows that connect them. We demonstrate the effectiveness of OCR-VQGAN by conducting several experiments on the task of figure reconstruction. Additionally, we explore the qualitative and quantitative impact of weighting different perceptual metrics in the overall loss function. We release code, models, and dataset at https://github.com/joanrod/ocr-vqgan.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,242
1706.03498
On the covariance of X in AX = XB
Hand-eye calibration, which consists in identifying the rigid- body transformation between a camera mounted on the robot end-effector and the end-effector itself, is a fundamental problem in robot vision. Mathematically, this problem can be formulated as: solve for X in AX = XB. In this paper, we provide a rigorous derivation of the covariance of the solution X, when A and B are randomly perturbed matrices. This fine-grained information is critical for applications that require a high degree of perception precision. Our approach consists in applying covariance propagation methods in SE(3). Experiments involving synthetic and real calibration data confirm that our approach can predict the covariance of the hand-eye transformation with excellent precision.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
75,181
2412.16378
REFA: Reference Free Alignment for multi-preference optimization
We introduce REFA, a family of reference-free alignment methods that optimize over multiple user preferences while enforcing fine-grained length control. Our approach integrates deviation-based weighting to emphasize high-quality responses more strongly, length normalization to prevent trivial short-response solutions, and an EOS-probability regularizer to mitigate dataset-induced brevity biases. Theoretically, we show that under the Uncertainty Reduction with Sequence Length Assertion (URSLA), naive length normalization can still incentivize length-based shortcuts. By contrast, REFA corrects these subtle incentives, guiding models toward genuinely more informative and higher-quality outputs. Empirically, REFA sets a new state-of-the-art among reference-free alignment methods, producing richer responses aligned more closely with human preferences. Compared to a base supervised fine-tuned (SFT) mistral-7b model that achieves 8.4% length-controlled win rate (LC-WR) and 6.2% win rate (WR), our best REFA configuration attains 21.62% LC-WR and 19.87% WR on the AlpacaEval v2 benchmark. This represents a substantial improvement over both the strongest multi-preference baseline, InfoNCA (16.82% LC-WR, 10.44% WR), and the strongest reference-free baseline, SimPO (20.01% LC-WR, 17.65% WR)
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
519,487
2303.14027
Poincar\'e ResNet
This paper introduces an end-to-end residual network that operates entirely on the Poincar\'e ball model of hyperbolic space. Hyperbolic learning has recently shown great potential for visual understanding, but is currently only performed in the penultimate layer(s) of deep networks. All visual representations are still learned through standard Euclidean networks. In this paper we investigate how to learn hyperbolic representations of visual data directly from the pixel-level. We propose Poincar\'e ResNet, a hyperbolic counterpart of the celebrated residual network, starting from Poincar\'e 2D convolutions up to Poincar\'e residual connections. We identify three roadblocks for training convolutional networks entirely in hyperbolic space and propose a solution for each: (i) Current hyperbolic network initializations collapse to the origin, limiting their applicability in deeper networks. We provide an identity-based initialization that preserves norms over many layers. (ii) Residual networks rely heavily on batch normalization, which comes with expensive Fr\'echet mean calculations in hyperbolic space. We introduce Poincar\'e midpoint batch normalization as a faster and equally effective alternative. (iii) Due to the many intermediate operations in Poincar\'e layers, we lastly find that the computation graphs of deep learning libraries blow up, limiting our ability to train on deep hyperbolic networks. We provide manual backward derivations of core hyperbolic operations to maintain manageable computation graphs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
353,921
cs/9604103
Iterative Optimization and Simplification of Hierarchical Clusterings
Clustering is often used for discovering structure in data. Clustering systems differ in the objective function used to evaluate clustering quality and the control strategy used to search the space of clusterings. Ideally, the search strategy should consistently construct clusterings of high quality, but be computationally inexpensive as well. In general, we cannot have it both ways, but we can partition the search so that a system inexpensively constructs a `tentative' clustering for initial examination, followed by iterative optimization, which continues to search in background for improved clusterings. Given this motivation, we evaluate an inexpensive strategy for creating initial clusterings, coupled with several control strategies for iterative optimization, each of which repeatedly modifies an initial clustering in search of a better one. One of these methods appears novel as an iterative optimization strategy in clustering contexts. Once a clustering has been constructed it is judged by analysts -- often according to task-specific criteria. Several authors have abstracted these criteria and posited a generic performance task akin to pattern completion, where the error rate over completed patterns is used to `externally' judge clustering utility. Given this performance task, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus promising to ease post-clustering analysis. Finally, we propose a number of objective functions, based on attribute-selection measures for decision-tree induction, that might perform well on the error rate and simplicity dimensions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,336
2212.03513
Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Automated Machine Learning-based systems' integration into a wide range of tasks has expanded as a result of their performance and speed. Although there are numerous advantages to employing ML-based systems, if they are not interpretable, they should not be used in critical, high-risk applications where human lives are at risk. To address this issue, researchers and businesses have been focusing on finding ways to improve the interpretability of complex ML systems, and several such methods have been developed. Indeed, there are so many developed techniques that it is difficult for practitioners to choose the best among them for their applications, even when using evaluation metrics. As a result, the demand for a selection tool, a meta-explanation technique based on a high-quality evaluation metric, is apparent. In this paper, we present a local meta-explanation technique which builds on top of the truthfulness metric, which is a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the concepts and through experimentation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
335,151
2302.13292
Discovering Top-k Structural Hole Spanners in Dynamic Networks
Structural Hole (SH) theory states that the node which acts as a connecting link among otherwise disconnected communities gets positional advantages in the network. These nodes are called Structural Hole Spanners (SHS). Numerous solutions are proposed to discover SHSs; however, most of the solutions are only applicable to static networks. Since real-world networks are dynamic networks; consequently, in this study, we aim to discover SHSs in dynamic networks. Discovering SHSs is an NP-hard problem, due to which, instead of discovering exact k SHSs, we adopt a greedy approach to discover Top-k SHSs. We first propose an efficient Tracking-SHS algorithm for updating SHSs in dynamic networks. Our algorithm reuses the information obtained during the initial runs of the static algorithm and avoids the recomputations for the nodes unaffected by the updates. Besides, motivated from the success of Graph Neural Networks (GNNs) on various graph mining problems, we also design a Graph Neural Network-based model, GNN-SHS, to discover SHSs in dynamic networks, aiming to reduce the computational cost while achieving high accuracy. We provide a theoretical analysis of the Tracking-SHS algorithm, and our theoretical results prove that for a particular type of graphs, such as Preferential Attachment graphs [1], Tracking-SHS algorithm achieves 1.6 times of speedup compared with the static algorithm. We perform extensive experiments, and our results demonstrate that the Tracking-SHS algorithm attains a minimum of 3.24 times speedup over the static algorithm. Also, the proposed second model GNN-SHS is on an average 671.6 times faster than the Tracking-SHS algorithm.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
347,895
2006.05926
Separable Four Points Fundamental Matrix
We present a novel approach for RANSAC-based computation of the fundamental matrix based on epipolar homography decomposition. We analyze the geometrical meaning of the decomposition-based representation and show that it directly induces a consecutive sampling strategy of two independent sets of correspondences. We show that our method guarantees a minimal number of evaluated hypotheses with respect to current minimal approaches, on the condition that there are four correspondences on an image line. We validate our approach on real-world image pairs, providing fast and accurate results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,250
2408.11829
FAKER: Full-body Anonymization with Human Keypoint Extraction for Real-time Video Deidentification
In the contemporary digital era, protection of personal information has become a paramount issue. The exponential growth of the media industry has heightened concerns regarding the anonymization of individuals captured in video footage. Traditional methods, such as blurring or pixelation, are commonly employed, while recent advancements have introduced generative adversarial networks (GAN) to redraw faces in videos. In this study, we propose a novel approach that employs a significantly smaller model to achieve real-time full-body anonymization of individuals in videos. Unlike conventional techniques that often fail to effectively remove personal identification information such as skin color, clothing, accessories, and body shape while our method successfully eradicates all such details. Furthermore, by leveraging pose estimation algorithms, our approach accurately represents information regarding individuals' positions, movements, and postures. This algorithm can be seamlessly integrated into CCTV or IP camera systems installed in various industrial settings, functioning in real-time and thus facilitating the widespread adoption of full-body anonymization technology.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
482,457
2210.08159
Dynamics-aware Adversarial Attack of Adaptive Neural Networks
In this paper, we investigate the dynamics-aware adversarial attack problem of adaptive neural networks. Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed adaptive neural networks, which adaptively deactivate unnecessary execution units based on inputs to improve computational efficiency. It results in a serious issue of lagged gradient, making the learned attack at the current step ineffective due to the architecture change afterward. To address this issue, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of the potential dynamic changes of network architectures, so that the learned attack better "leads" the next step than the dynamics-unaware methods when network architecture changes dynamically. Extensive experiments on representative types of adaptive neural networks for both 2D images and 3D point clouds show that our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods. Code is available at https://github.com/antao97/LGM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,014
1511.05911
Behavior Query Discovery in System-Generated Temporal Graphs
Computer system monitoring generates huge amounts of logs that record the interaction of system entities. How to query such data to better understand system behaviors and identify potential system risks and malicious behaviors becomes a challenging task for system administrators due to the dynamics and heterogeneity of the data. System monitoring data are essentially heterogeneous temporal graphs with nodes being system entities and edges being their interactions over time. Given the complexity of such graphs, it becomes time-consuming for system administrators to manually formulate useful queries in order to examine abnormal activities, attacks, and vulnerabilities in computer systems. In this work, we investigate how to query temporal graphs and treat query formulation as a discriminative temporal graph pattern mining problem. We introduce TGMiner to mine discriminative patterns from system logs, and these patterns can be taken as templates for building more complex queries. TGMiner leverages temporal information in graphs to prune graph patterns that share similar growth trend without compromising pattern quality. Experimental results on real system data show that TGMiner is 6-32 times faster than baseline methods. The discovered patterns were verified by system experts; they achieved high precision (97%) and recall (91%).
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
true
false
49,118
2306.08862
Hyperbolic Convolution via Kernel Point Aggregation
Learning representations according to the underlying geometry is of vital importance for non-Euclidean data. Studies have revealed that the hyperbolic space can effectively embed hierarchical or tree-like data. In particular, the few past years have witnessed a rapid development of hyperbolic neural networks. However, it is challenging to learn good hyperbolic representations since common Euclidean neural operations, such as convolution, do not extend to the hyperbolic space. Most hyperbolic neural networks do not embrace the convolution operation and ignore local patterns. Others either only use non-hyperbolic convolution, or miss essential properties such as equivariance to permutation. We propose HKConv, a novel trainable hyperbolic convolution which first correlates trainable local hyperbolic features with fixed kernel points placed in the hyperbolic space, then aggregates the output features within a local neighborhood. HKConv not only expressively learns local features according to the hyperbolic geometry, but also enjoys equivariance to permutation of hyperbolic points and invariance to parallel transport of a local neighborhood. We show that neural networks with HKConv layers advance state-of-the-art in various tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
373,576
1509.03977
Optimization of anemia treatment in hemodialysis patients via reinforcement learning
Objective: Anemia is a frequent comorbidity in hemodialysis patients that can be successfully treated by administering erythropoiesis-stimulating agents (ESAs). ESAs dosing is currently based on clinical protocols that often do not account for the high inter- and intra-individual variability in the patient's response. As a result, the hemoglobin level of some patients oscillates around the target range, which is associated with multiple risks and side-effects. This work proposes a methodology based on reinforcement learning (RL) to optimize ESA therapy. Methods: RL is a data-driven approach for solving sequential decision-making problems that are formulated as Markov decision processes (MDPs). Computing optimal drug administration strategies for chronic diseases is a sequential decision-making problem in which the goal is to find the best sequence of drug doses. MDPs are particularly suitable for modeling these problems due to their ability to capture the uncertainty associated with the outcome of the treatment and the stochastic nature of the underlying process. The RL algorithm employed in the proposed methodology is fitted Q iteration, which stands out for its ability to make an efficient use of data. Results: The experiments reported here are based on a computational model that describes the effect of ESAs on the hemoglobin level. The performance of the proposed method is evaluated and compared with the well-known Q-learning algorithm and with a standard protocol. Simulation results show that the performance of Q-learning is substantially lower than FQI and the protocol. Conclusion: Although prospective validation is required, promising results demonstrate the potential of RL to become an alternative to current protocols.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
46,884
1905.04423
Optimizing Routerless Network-on-Chip Designs: An Innovative Learning-Based Framework
Machine learning applied to architecture design presents a promising opportunity with broad applications. Recent deep reinforcement learning (DRL) techniques, in particular, enable efficient exploration in vast design spaces where conventional design strategies may be inadequate. This paper proposes a novel deep reinforcement framework, taking routerless networks-on-chip (NoC) as an evaluation case study. The new framework successfully resolves problems with prior design approaches being either unreliable due to random searches or inflexible due to severe design space restrictions. The framework learns (near-)optimal loop placement for routerless NoCs with various design constraints. A deep neural network is developed using parallel threads that efficiently explore the immense routerless NoC design space with a Monte Carlo search tree. Experimental results show that, compared with conventional mesh, the proposed deep reinforcement learning (DRL) routerless design achieves a 3.25x increase in throughput, 1.6x reduction in packet latency, and 5x reduction in power. Compared with the state-of-the-art routerless NoC, DRL achieves a 1.47x increase in throughput, 1.18x reduction in packet latency, and 1.14x reduction in average hop count albeit with slightly more power overhead.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
130,459
1906.07304
Neurally-Guided Structure Inference
Most structure inference methods either rely on exhaustive search or are purely data-driven. Exhaustive search robustly infers the structure of arbitrarily complex data, but it is slow. Data-driven methods allow efficient inference, but do not generalize when test data have more complex structures than training data. In this paper, we propose a hybrid inference algorithm, the Neurally-Guided Structure Inference (NG-SI), keeping the advantages of both search-based and data-driven methods. The key idea of NG-SI is to use a neural network to guide the hierarchical, layer-wise search over the compositional space of structures. We evaluate our algorithm on two representative structure inference tasks: probabilistic matrix decomposition and symbolic program parsing. It outperforms data-driven and search-based alternatives on both tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
135,559
2007.08970
Compositional Generalization in Semantic Parsing: Pre-training vs. Specialized Architectures
While mainstream machine learning methods are known to have limited ability to compositionally generalize, new architectures and techniques continue to be proposed to address this limitation. We investigate state-of-the-art techniques and architectures in order to assess their effectiveness in improving compositional generalization in semantic parsing tasks based on the SCAN and CFQ datasets. We show that masked language model (MLM) pre-training rivals SCAN-inspired architectures on primitive holdout splits. On a more complex compositional task, we show that pre-training leads to significant improvements in performance vs. comparable non-pre-trained models, whereas architectures proposed to encourage compositional generalization on SCAN or in the area of algorithm learning fail to lead to significant improvements. We establish a new state of the art on the CFQ compositional generalization benchmark using MLM pre-training together with an intermediate representation.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
187,799
2006.07309
Multiple-Vehicle Tracking in the Highway Using Appearance Model and Visual Object Tracking
In recent decades, due to the groundbreaking improvements in machine vision, many daily tasks are performed by computers. One of these tasks is multiple-vehicle tracking, which is widely used in different areas such as video surveillance and traffic monitoring. This paper focuses on introducing an efficient novel approach with acceptable accuracy. This is achieved through an efficient appearance and motion model based on the features extracted from each object. For this purpose, two different approaches have been used to extract features, i.e. features extracted from a deep neural network, and traditional features. Then the results from these two approaches are compared with state-of-the-art trackers. The results are obtained by executing the methods on the UA-DETRACK benchmark. The first method led to 58.9% accuracy while the second method caused up to 15.9%. The proposed methods can still be improved by extracting more distinguishable features.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,753
2310.07082
Taking the human out of decomposition-based optimization via artificial intelligence: Part II. Learning to initialize
The repeated solution of large-scale optimization problems arises frequently in process systems engineering tasks. Decomposition-based solution methods have been widely used to reduce the corresponding computational time, yet their implementation has multiple steps that are difficult to configure. We propose a machine learning approach to learn the optimal initialization of such algorithms which minimizes the computational time. Active and supervised learning is used to learn a surrogate model that predicts the computational performance for a given initialization. We apply this approach to the initialization of Generalized Benders Decomposition for the solution of mixed integer model predictive control problems. The surrogate models are used to find the optimal number of initial cuts that should be added in the master problem. The results show that the proposed approach can lead to a significant reduction in solution time, and active learning can reduce the data required for learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
398,819
1106.0256
Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic
This paper presents an implemented system for recognizing the occurrence of events described by simple spatial-motion verbs in short image sequences. The semantics of these verbs is specified with event-logic expressions that describe changes in the state of force-dynamic relations between the participants of the event. An efficient finite representation is introduced for the infinite sets of intervals that occur when describing liquid and semi-liquid events. Additionally, an efficient procedure using this representation is presented for inferring occurrences of compound events, described with event-logic expressions, from occurrences of primitive events. Using force dynamics and event logic to specify the lexical semantics of events allows the system to be more robust than prior systems based on motion profile.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
10,663
1903.09746
Residual Pyramid Learning for Single-Shot Semantic Segmentation
Pixel-level semantic segmentation is a challenging task with a huge amount of computation, especially if the size of input is large. In the segmentation model, apart from the feature extraction, the extra decoder structure is often employed to recover spatial information. In this paper, we put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main and residuals of segmentation by decomposing the label at different levels of residual blocks. Specifically speaking, we use the residual features to learn the edges and details, and the identity features to learn the main part of targets. At testing time, the predicted residuals are used to enhance the details of the top-level prediction. Residual learning blocks split the network into several shallow sub-networks which facilitates the training of the RPNet. We then evaluate the proposed method and compare it with recent state-of-the-art methods on CamVid and Cityscapes. The proposed single-shot segmentation based on RPNet achieves impressive results with high efficiency on pixel-level segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
125,115
2410.22952
Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation
A common strategy for Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViTs) involves adapting the model to downstream tasks by learning a low-rank adaptation matrix. This matrix is decomposed into a product of down-projection and up-projection matrices, with the bottleneck dimensionality being crucial for reducing the number of learnable parameters, as exemplified by prevalent methods like LoRA and Adapter. However, these low-rank strategies typically employ a fixed bottleneck dimensionality, which limits their flexibility in handling layer-wise variations. To address this limitation, we propose a novel PEFT approach inspired by Singular Value Decomposition (SVD) for representing the adaptation matrix. SVD decomposes a matrix into the product of a left unitary matrix, a diagonal matrix of scaling values, and a right unitary matrix. We utilize Householder transformations to construct orthogonal matrices that efficiently mimic the unitary matrices, requiring only a vector. The diagonal values are learned in a layer-wise manner, allowing them to flexibly capture the unique properties of each layer. This approach enables the generation of adaptation matrices with varying ranks across different layers, providing greater flexibility in adapting pre-trained models. Experiments on standard downstream vision tasks demonstrate that our method achieves promising fine-tuning performance.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
503,844
2208.12460
Nuclei & Glands Instance Segmentation in Histology Images: A Narrative Review
Instance segmentation of nuclei and glands in the histology images is an important step in computational pathology workflow for cancer diagnosis, treatment planning and survival analysis. With the advent of modern hardware, the recent availability of large-scale quality public datasets and the community organized grand challenges have seen a surge in automated methods focusing on domain specific challenges, which is pivotal for technology advancements and clinical translation. In this survey, 126 papers illustrating the AI based methods for nuclei and glands instance segmentation published in the last five years (2017-2022) are deeply analyzed, the limitations of current approaches and the open challenges are discussed. Moreover, the potential future research direction is presented and the contribution of state-of-the-art methods is summarized. Further, a generalized summary of publicly available datasets and a detailed insights on the grand challenges illustrating the top performing methods specific to each challenge is also provided. Besides, we intended to give the reader current state of existing research and pointers to the future directions in developing methods that can be used in clinical practice enabling improved diagnosis, grading, prognosis, and treatment planning of cancer. To the best of our knowledge, no previous work has reviewed the instance segmentation in histology images focusing towards this direction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
314,732
2204.08205
A Greedy and Optimistic Approach to Clustering with a Specified Uncertainty of Covariates
In this study, we examine a clustering problem in which the covariates of each individual element in a dataset are associated with an uncertainty specific to that element. More specifically, we consider a clustering approach in which a pre-processing applying a non-linear transformation to the covariates is used to capture the hidden data structure. To this end, we approximate the sets representing the propagated uncertainty for the pre-processed features empirically. To exploit the empirical uncertainty sets, we propose a greedy and optimistic clustering (GOC) algorithm that finds better feature candidates over such sets, yielding more condensed clusters. As an important application, we apply the GOC algorithm to synthetic datasets of the orbital properties of stars generated through our numerical simulation mimicking the formation process of the Milky Way. The GOC algorithm demonstrates an improved performance in finding sibling stars originating from the same dwarf galaxy. These realistic datasets have also been made publicly available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
292,006
2106.13367
SeaNet -- Towards A Knowledge Graph Based Autonomic Management of Software Defined Networks
Automatic network management driven by Artificial Intelligent technologies has been heatedly discussed over decades. However, current reports mainly focus on theoretic proposals and architecture designs, works on practical implementations on real-life networks are yet to appear. This paper proposes our effort toward the implementation of knowledge graph driven approach for autonomic network management in software defined networks (SDNs), termed as SeaNet. Driven by the ToCo ontology, SeaNet is reprogrammed based on Mininet (a SDN emulator). It consists three core components, a knowledge graph generator, a SPARQL engine, and a network management API. The knowledge graph generator represents the knowledge in the telecommunication network management tasks into formally represented ontology driven model. Expert experience and network management rules can be formalized into knowledge graph and by automatically inferenced by SPARQL engine, Network management API is able to packet technology-specific details and expose technology-independent interfaces to users. The Experiments are carried out to evaluate proposed work by comparing with a commercial SDN controller Ryu implemented by the same language Python. The evaluation results show that SeaNet is considerably faster in most circumstances than Ryu and the SeaNet code is significantly more compact. Benefit from RDF reasoning, SeaNet is able to achieve O(1) time complexity on different scales of the knowledge graph while the traditional database can achieve O(nlogn) at its best. With the developed network management API, SeaNet enables researchers to develop semantic-intelligent applications on their own SDNs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
243,050
1505.04785
Advances in Bioinformatics and Computational Biology: Don't take them too seriously anyway
In the last few decades or so, we witness a paradigm shift in our nature studies - from a data-processing based computational approach to an information-processing based cognitive approach. The process is restricted and often misguided by the lack of a clear understanding about what information is and how it should be treated in research applications (in general) and in biological studies (in particular). The paper intend to provide some remedies for this bizarre situation.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
43,225
2407.12057
NinjaLLM: Fast, Scalable and Cost-effective RAG using Amazon SageMaker and AWS Trainium and Inferentia2
Retrieval-augmented generation (RAG) techniques are widely used today to retrieve and present information in a conversational format. This paper presents a set of enhancements to traditional RAG techniques, focusing on large language models (LLMs) fine-tuned and hosted on AWS Trainium and Inferentia2 AI chips via SageMaker. These chips are characterized by their elasticity, affordability, and efficient performance for AI compute tasks. Besides enabling deployment on these chips, this work aims to improve tool usage, add citation capabilities, and mitigate the risks of hallucinations and unsafe responses due to context bias. We benchmark our RAG system's performance on the Natural Questions and HotPotQA datasets, achieving an accuracy of 62% and 59% respectively, exceeding other models such as DBRX and Mixtral Instruct.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
473,748
2404.02235
Is Exploration All You Need? Effective Exploration Characteristics for Transfer in Reinforcement Learning
In deep reinforcement learning (RL) research, there has been a concerted effort to design more efficient and productive exploration methods while solving sparse-reward problems. These exploration methods often share common principles (e.g., improving diversity) and implementation details (e.g., intrinsic reward). Prior work found that non-stationary Markov decision processes (MDPs) require exploration to efficiently adapt to changes in the environment with online transfer learning. However, the relationship between specific exploration characteristics and effective transfer learning in deep RL has not been characterized. In this work, we seek to understand the relationships between salient exploration characteristics and improved performance and efficiency in transfer learning. We test eleven popular exploration algorithms on a variety of transfer types -- or ``novelties'' -- to identify the characteristics that positively affect online transfer learning. Our analysis shows that some characteristics correlate with improved performance and efficiency across a wide range of transfer tasks, while others only improve transfer performance with respect to specific environment changes. From our analysis, make recommendations about which exploration algorithm characteristics are best suited to specific transfer situations.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
443,773
2303.03876
Organelle-specific segmentation, spatial analysis, and visualization of volume electron microscopy datasets
Volume electron microscopy is the method of choice for the in-situ interrogation of cellular ultrastructure at the nanometer scale. Recent technical advances have led to a rapid increase in large raw image datasets that require computational strategies for segmentation and spatial analysis. In this protocol, we describe a practical and annotation-efficient pipeline for organelle-specific segmentation, spatial analysis, and visualization of large volume electron microscopy datasets using freely available, user-friendly software tools that can be run on a single standard workstation. We specifically target researchers in the life sciences with limited computational expertise, who face the following tasks within their volume electron microscopy projects: i) How to generate 3D segmentation labels for different types of cell organelles while minimizing manual annotation efforts, ii) how to analyze the spatial interactions between organelle instances, and iii) how to best visualize the 3D segmentation results. To meet these demands we give detailed guidelines for choosing the most efficient segmentation tools for the specific cell organelle. We furthermore provide easily executable components for spatial analysis and 3D rendering and bridge compatibility issues between freely available open-source tools, such that others can replicate our full pipeline starting from a raw dataset up to the final plots and rendered images. We believe that our detailed description can serve as a valuable reference for similar projects requiring special strategies for single- or multiple organelle analysis which can be achieved with computational resources commonly available to single-user setups.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
349,876
2107.05368
A Three Phase Semantic Web Matchmaker
Since using environments that are made according to the service oriented architecture, we have more effective and dynamic applications. Semantic matchmaking process is finding valuable service candidates for substitution. It is a very important aspect of using semantic Web Services. Our proposed matchmaker algorithm performs semantic matching of Web Services on the basis of input and output descriptions of semantic Web Services matching. This technique takes advantages from a graph structure and flow networks. Our novel approach is assigning matchmaking scores to semantics of the inputs and outputs parameters and their types. It makes a flow network in which the weights of the edges are these scores, using FordFulkerson algorithm, we find matching rate of two web services. So, all services should be described in the same Ontology Web Language. Among these candidates, best one is chosen for substitution in the case of an execution failure. Our approach uses the algorithm that has the least running time among all others that can be used for bipartite matching. The importance of problem is that in real systems, many fundamental problems will occur by late answering. So system`s service should always be on and if one of them crashes, it would be replaced fast. Semantic web matchmaker eases this process.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
245,757
2308.00262
The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution
This work presents our solutions to the Algonauts Project 2023 Challenge. The primary objective of the challenge revolves around employing computational models to anticipate brain responses captured during participants' observation of intricate natural visual scenes. The goal is to predict brain responses across the entire visual brain, as it is the region where the most reliable responses to images have been observed. We constructed an image-based brain encoder through a two-step training process to tackle this challenge. Initially, we created a pretrained encoder using data from all subjects. Next, we proceeded to fine-tune individual subjects. Each step employed different training strategies, such as different loss functions and objectives, to introduce diversity. Ultimately, our solution constitutes an ensemble of multiple unique encoders. The code is available at https://github.com/uark-cviu/Algonauts2023
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
382,877
2101.08382
ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation
We propose ParaSCI, the first large-scale paraphrase dataset in the scientific field, including 33,981 paraphrase pairs from ACL (ParaSCI-ACL) and 316,063 pairs from arXiv (ParaSCI-arXiv). Digging into characteristics and common patterns of scientific papers, we construct this dataset though intra-paper and inter-paper methods, such as collecting citations to the same paper or aggregating definitions by scientific terms. To take advantage of sentences paraphrased partially, we put up PDBERT as a general paraphrase discovering method. The major advantages of paraphrases in ParaSCI lie in the prominent length and textual diversity, which is complementary to existing paraphrase datasets. ParaSCI obtains satisfactory results on human evaluation and downstream tasks, especially long paraphrase generation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
216,300
2405.15655
HiddenSpeaker: Generate Imperceptible Unlearnable Audios for Speaker Verification System
In recent years, the remarkable advancements in deep neural networks have brought tremendous convenience. However, the training process of a highly effective model necessitates a substantial quantity of samples, which brings huge potential threats, like unauthorized exploitation with privacy leakage. In response, we propose a framework named HiddenSpeaker, embedding imperceptible perturbations within the training speech samples and rendering them unlearnable for deep-learning-based speaker verification systems that employ large-scale speakers for efficient training. The HiddenSpeaker utilizes a simplified error-minimizing method named Single-Level Error-Minimizing (SLEM) to generate specific and effective perturbations. Additionally, a hybrid objective function is employed for human perceptual optimization, ensuring the perturbation is indistinguishable from human listeners. We conduct extensive experiments on multiple state-of-the-art (SOTA) models in the speaker verification domain to evaluate HiddenSpeaker. Our results demonstrate that HiddenSpeaker not only deceives the model with unlearnable samples but also enhances the imperceptibility of the perturbations, showcasing strong transferability across different models.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
457,035
2412.19280
Parametrizations of All Stable Closed-loop Responses: From Theory to Neural Network Control Design
The complexity of modern control systems necessitates architectures that achieve high performance while ensuring robust stability, particularly for nonlinear systems. In this work, we tackle the challenge of designing optimal output-feedback controllers to boost the performance of $\ell_p$-stable discrete-time nonlinear systems while preserving closed-loop stability from external disturbances to input and output channels. Leveraging operator theory and neural network representations, we parametrize the achievable closed-loop maps for a given system and propose novel parametrizations of all $\ell_p$-stabilizing controllers, unifying frameworks such as nonlinear Youla and Internal Model Control. Contributing to a rapidly growing research line, our approach enables unconstrained optimization exclusively over stabilizing output-feedback controllers and provides sufficient conditions to ensure robustness against model mismatch. Additionally, our methods reveal that stronger notions of stability can be imposed on the closed-loop maps if disturbance realizations are available after one time step. Last, our approaches are compatible with the design of nonlinear distributed controllers. Numerical experiments on cooperative robotics demonstrate the flexibility of our framework, allowing cost functions to be freely designed for achieving complex behaviors while preserving stability.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
520,781
1605.08834
Information Measure as Time Complexity
The concept of Shannon entropy of random variables was generalized to measurable functions in general, and to simple functions with finite values in particular. It is shown that the information measure of a function is related to the time complexity of search problems concerning the functions in question. Formally, given a Turing reduction from a search problem f(x)=y to another function g(x), the amount of information about f(x)=y provided by querying g(x) is exactly equal to the average mutual information I(f;g). As a result, the average number of queries is I(f=y)/I(f;g), where I(f) is amount of self-information about the event {f=y}. In the idea case, if I(f=y)/I(f;g) is polynomial in the size of input and the function g(x) can be computed in polynomial time, then the problem f(x)=y also has polynomial-time algorithm. As it turns out, our information-based complexity estimation is a natural setting in which to study the power of randomized or probabilistic algorithms. Applying to decision problems, our result provides a strong evidence that P=RP=BPP. Using brute-force search as a benchmark for efficiency, our results also support that P!=PP. Further, applying an argument similar to Carnot's work on measuring the efficiency of an ideal heat engine in thermodynamics, it is shown that I(f=y)/H(f) is a lower bound on query complexity of solving f(x)=y. According to Markov's inequality, if I(f=y)/H(f) is exponential in the size of input, then the probability of f(x)=y having polynomial-time algorithm is a negative exponential in the size of input. This result enables us to reduce the problem of proving FP!=FNP to the computation of the entropy of certain simple functions, such as the subset sum function. As a result, P!=NP is provable, answering a long-standing open problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
56,482
2107.09947
Preventing dataset shift from breaking machine-learning biomarkers
Machine learning brings the hope of finding new biomarkers extracted from cohorts with rich biomedical measurements. A good biomarker is one that gives reliable detection of the corresponding condition. However, biomarkers are often extracted from a cohort that differs from the target population. Such a mismatch, known as a dataset shift, can undermine the application of the biomarker to new individuals. Dataset shifts are frequent in biomedical research, e.g. because of recruitment biases. When a dataset shift occurs, standard machine-learning techniques do not suffice to extract and validate biomarkers. This article provides an overview of when and how dataset shifts breaks machine-learning extracted biomarkers, as well as detection and correction strategies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,172
2012.05002
Persuading Voters in District-based Elections
We focus on the scenario in which an agent can exploit his information advantage to manipulate the outcome of an election. In particular, we study district-based elections with two candidates, in which the winner of the election is the candidate that wins in the majority of the districts. District-based elections are adopted worldwide (e.g., UK and USA) and are a natural extension of widely studied voting mechanisms (e.g., k-voting and plurality voting). We resort to the Bayesian persuasion framework, where the manipulator (sender) strategically discloses information to the voters (receivers) that update their beliefs rationally. We study both private signaling, in which the sender can use a private communication channel per receiver, and public signaling, in which the sender can use a single communication channel for all the receivers. Furthermore, for the first time, we introduce semi-public signaling in which the sender can use a single communication channel per district. We show that there is a sharp distinction between private and (semi-)public signaling. In particular, optimal private signaling schemes can provide an arbitrarily better probability of victory than (semi-)public ones and can be computed efficiently, while optimal (semi-)public signaling schemes cannot be approximated to within any factor in polynomial time unless P=NP. However, we show that reasonable relaxations allow the design of multi-criteria PTASs for optimal (semi-)public signaling schemes. In doing so, we introduce a novel property, namely comparative stability, and we design a bi-criteria PTAS for public signaling in general Bayesian persuasion problems beyond elections when the sender's utility function is state-dependent.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
210,647
1705.08942
Joint PoS Tagging and Stemming for Agglutinative Languages
The number of word forms in agglutinative languages is theoretically infinite and this variety in word forms introduces sparsity in many natural language processing tasks. Part-of-speech tagging (PoS tagging) is one of these tasks that often suffers from sparsity. In this paper, we present an unsupervised Bayesian model using Hidden Markov Models (HMMs) for joint PoS tagging and stemming for agglutinative languages. We use stemming to reduce sparsity in PoS tagging. Two tasks are jointly performed to provide a mutual benefit in both tasks. Our results show that joint POS tagging and stemming improves PoS tagging scores. We present results for Turkish and Finnish as agglutinative languages and English as a morphologically poor language.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
74,112
1612.02498
Discrete Schroedinger Transform For Texture Recognition
This work presents a new procedure to extract features of grey-level texture images based on the discrete Schroedinger transform. This is a non-linear transform where the image is mapped as the initial probability distribution of a wave function and such distribution evolves in time following the Schroedinger equation from Quantum Mechanics. The features are provided by statistical moments of the distribution measured at different times. The proposed method is applied to the classification of three databases of textures used for benchmark and compared to other well-known texture descriptors in the literature, such as textons, local binary patterns, multifractals, among others. All of them are outperformed by the proposed method in terms of percentage of images correctly classified. The proposal is also applied to the identification of plant species using scanned images of leaves and again it outperforms other texture methods. A test with images affected by Gaussian and "salt \& pepper" noise is also carried out, also with the best performance achieved by the Schroedinger descriptors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
65,235
1704.07943
Reward Maximization Under Uncertainty: Leveraging Side-Observations on Networks
We study the stochastic multi-armed bandit (MAB) problem in the presence of side-observations across actions that occur as a result of an underlying network structure. In our model, a bipartite graph captures the relationship between actions and a common set of unknowns such that choosing an action reveals observations for the unknowns that it is connected to. This models a common scenario in online social networks where users respond to their friends' activity, thus providing side information about each other's preferences. Our contributions are as follows: 1) We derive an asymptotic lower bound (with respect to time) as a function of the bi-partite network structure on the regret of any uniformly good policy that achieves the maximum long-term average reward. 2) We propose two policies - a randomized policy; and a policy based on the well-known upper confidence bound (UCB) policies - both of which explore each action at a rate that is a function of its network position. We show, under mild assumptions, that these policies achieve the asymptotic lower bound on the regret up to a multiplicative factor, independent of the network structure. Finally, we use numerical examples on a real-world social network and a routing example network to demonstrate the benefits obtained by our policies over other existing policies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
72,446
2005.13756
The SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion
In this paper, we describe the findings of the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion (SIGMORPHON 2020 Task 2), a novel task in the field of inflectional morphology. Participants were asked to submit systems which take raw text and a list of lemmas as input, and output all inflected forms, i.e., the entire morphological paradigm, of each lemma. In order to simulate a realistic use case, we first released data for 5 development languages. However, systems were officially evaluated on 9 surprise languages, which were only revealed a few days before the submission deadline. We provided a modular baseline system, which is a pipeline of 4 components. 3 teams submitted a total of 7 systems, but, surprisingly, none of the submitted systems was able to improve over the baseline on average over all 9 test languages. Only on 3 languages did a submitted system obtain the best results. This shows that unsupervised morphological paradigm completion is still largely unsolved. We present an analysis here, so that this shared task will ground further research on the topic.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
179,091
2401.04323
Divergent Characteristics of Biomedical Research across Publication Types: A Quantitative Analysis on the Aging-related Research
This paper investigates differences in characteristics across publication types for aging-related genetic research. We utilized bibliometric data for five model species retrieved from authoritative databases including PubMed. Publications are classified into types according to PubMed. Results indicate substantial divergence across publication types in attention paid to aging-related research, scopes of studied genes, and topical preferences. For instance, comparative studies and meta-analyses show a greater focus on aging than validation studies. Reviews concentrate more on cell biology while clinical studies emphasize translational topics. Publication types also manifest variations in highly studied genes, like APOE for reviews versus GH1 for clinical studies. Despite differences, top genes like insulin are universally emphasized. Publication types demonstrate similar levels of imbalance in research efforts to genes. Differences also exist in bibliometrics like authorship numbers, citation counts, etc. Publication types show distinct preferences for journals of certain topical specialties and scope of readership. Overall, findings showcase distinct characteristics of publication types in studying aging-related genetics, owing to their unique nature and objectives. This study is the first endeavor to systematically depict the inherent structure of a biomedical research field from the perspective of publication types and provides insights into knowledge production and evaluation patterns across biomedical communities.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
420,390
1705.04051
Nash Region of the Linear Deterministic Interference Channel with Noisy Output Feedback
In this paper, the $\eta$-Nash equilibrium ($\eta$-NE) region of the two-user linear deterministic interference channel (IC) with noisy channel-output feedback is characterized for all $\eta > 0$. The $\eta$-NE region, a subset of the capacity region, contains the set of all achievable information rate pairs that are stable in the sense of an $\eta$-NE. More specifically, given an $\eta$-NE coding scheme, there does not exist an alternative coding scheme for either transmitter-receiver pair that increases the individual rate by more than $\eta$ bits per channel use. Existing results such as the $\eta$-NE region of the linear deterministic IC without feedback and with perfect output feedback are obtained as particular cases of the result presented in this paper.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
73,274
2405.06354
KeepOriginalAugment: Single Image-based Better Information-Preserving Data Augmentation Approach
Advanced image data augmentation techniques play a pivotal role in enhancing the training of models for diverse computer vision tasks. Notably, SalfMix and KeepAugment have emerged as popular strategies, showcasing their efficacy in boosting model performance. However, SalfMix reliance on duplicating salient features poses a risk of overfitting, potentially compromising the model's generalization capabilities. Conversely, KeepAugment, which selectively preserves salient regions and augments non-salient ones, introduces a domain shift that hinders the exchange of crucial contextual information, impeding overall model understanding. In response to these challenges, we introduce KeepOriginalAugment, a novel data augmentation approach. This method intelligently incorporates the most salient region within the non-salient area, allowing augmentation to be applied to either region. Striking a balance between data diversity and information preservation, KeepOriginalAugment enables models to leverage both diverse salient and non-salient regions, leading to enhanced performance. We explore three strategies for determining the placement of the salient region minimum, maximum, or random and investigate swapping perspective strategies to decide which part (salient or non-salient) undergoes augmentation. Our experimental evaluations, conducted on classification datasets such as CIFAR-10, CIFAR-100, and TinyImageNet, demonstrate the superior performance of KeepOriginalAugment compared to existing state-of-the-art techniques.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
453,269
2304.12331
USTEP: Structuration des logs en flux gr{\^a}ce {\`a} un arbre de recherche {\'e}volutif
Logs record valuable system information at runtime. They are widely used by data-driven approaches for development and monitoring purposes. Parsing log messages to structure their format is a classic preliminary step for log-mining tasks. As they appear upstream, parsing operations can become a processing time bottleneck for downstream applications. The quality of parsing also has a direct influence on their efficiency. Here, we propose USTEP, an online log parsing method based on an evolving tree structure. Evaluation results on a wide panel of datasets coming from different real-world systems demonstrate USTEP superiority in terms of both effectiveness and robustness when compared to other online methods.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
360,178
2106.04385
Property-Aware Robot Object Manipulation: a Generative Approach
When transporting an object, we unconsciously adapt our movement to its properties, for instance by slowing down when the item is fragile. The most relevant features of an object are immediately revealed to a human observer by the way the handling occurs, without any need for verbal description. It would greatly facilitate collaboration to enable humanoid robots to perform movements that convey similar intuitive cues to the observers. In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects, such as their weight and fragility. We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object. The use of a generative approach allows us to create new and consistent motion patterns, without the need of collecting a large number of recorded human-led demonstrations. Besides, the informative content of the actions is preserved. Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions, which result effectively modulated as a function of the object weight and the carefulness required in its handling.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
239,700
1703.04421
Guetzli: Perceptually Guided JPEG Encoder
Guetzli is a new JPEG encoder that aims to produce visually indistinguishable images at a lower bit-rate than other common JPEG encoders. It optimizes both the JPEG global quantization tables and the DCT coefficient values in each JPEG block using a closed-loop optimizer. Guetzli uses Butteraugli, our perceptual distance metric, as the source of feedback in its optimization process. We reach a 29-45% reduction in data size for a given perceptual distance, according to Butteraugli, in comparison to other compressors we tried. Guetzli's computation is currently extremely slow, which limits its applicability to compressing static content and serving as a proof- of-concept that we can achieve significant reductions in size by combining advanced psychovisual models with lossy compression techniques.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,902
2009.14088
Task-Based Analog-to-Digital Converters
Obtaining digital representations of multivariate continuous-time (CT) signals is a challenge encountered in many signal processing systems. In practice, these signals are often acquired to extract some underlying information, i.e., for a specific task. Employing conventional task-agnostic analog-to-digital converters (ADCs), typically designed to minimize the mean squared error (MSE) in reconstructing the CT input signal, can be costly and energy-inefficient in such cases. In this work, we study task-based ADCs, which are designed to obtain a digital representation of a multivariate CT input process with the goal of recovering an underlying statistically related parameter vector, referred to as \emph{the task}. The proposed system employs analog filtering, uniform sampling, and scalar uniform quantization of the input process before recovering the task vector using a linear digital recovery filter. We optimize the analog and digital filters and derive closed-form expressions for the achievable MSE in recovering the task vector from a set of analog signals when utilizing ADCs with a fixed sampling rate and amplitude resolution. Based on our derivation, we provide guidelines for designing practical acquisition systems subject to a constraint on the bit rate. Our analysis proves that the intuitive approaches of either recovering the task vector solely in digital or designing the analog filter to estimate the task vector are inferior to the proposed joint design. We then consider the recovery of a set of matched filter outputs under a rate budget. We numerically verify our theoretical observations and demonstrate that task-based ADCs substantially outperform analog matched filtering as well as applying the matched filter solely in the digital domain. [...]
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
197,932
1909.07375
Extending and Automating Basic Probability Theory with Propositional Computability Logic
Classical probability theory is formulated using sets. In this paper, we extend classical probability theory with propositional computability logic. Unlike other formalisms, computability logic is built on the notion of events/games, which is central to probability theory. The probability theory based on CoL is therefore useful for {\it automating} uncertainty reasoning. We describe some basic properties of this new probability theory. We also discuss a novel isomorphism between the set operations and computability logic operations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
145,654
2411.13608
Integrating Dynamic Correlation Shifts and Weighted Benchmarking in Extreme Value Analysis
This paper presents an innovative approach to Extreme Value Analysis (EVA) by introducing the Extreme Value Dynamic Benchmarking Method (EVDBM). EVDBM integrates extreme value theory to detect extreme events and is coupled with the novel Dynamic Identification of Significant Correlation (DISC)-Thresholding algorithm, which enhances the analysis of key variables under extreme conditions. By integrating return values predicted through EVA into the benchmarking scores, we are able to transform these scores to reflect anticipated conditions more accurately. This provides a more precise picture of how each case is projected to unfold under extreme conditions. As a result, the adjusted scores offer a forward-looking perspective, highlighting potential vulnerabilities and resilience factors for each case in a way that static historical data alone cannot capture. By incorporating both historical and probabilistic elements, the EVDBM algorithm provides a comprehensive benchmarking framework that is adaptable to a range of scenarios and contexts. The methodology is applied to real PV data, revealing critical low - production scenarios and significant correlations between variables, which aid in risk management, infrastructure design, and long-term planning, while also allowing for the comparison of different production plants. The flexibility of EVDBM suggests its potential for broader applications in other sectors where decision-making sensitivity is crucial, offering valuable insights to improve outcomes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
509,857
2208.13166
Influence Maximization (IM) in Complex Networks with Limited Visibility Using Statistical Methods
A social network (SN) is a social structure consisting of a group representing the interaction between them. SNs have recently been widely used and, subsequently, have become suitable and popular platforms for product promotion and information diffusion. People in an SN directly influence each other's interests and behavior. One of the most important problems in SNs is to find people who can have the maximum influence on other nodes in the network in a cascade manner if they are chosen as the seed nodes of a network diffusion scenario. Influential diffusers are people who, if they are chosen as the seed set in a publishing issue in the network, that network will have the most people who have learned about that diffused entity. This is a well-known problem in literature known as influence maximization (IM) problem. Although it has been proven that this is an NP-complete problem and does not have a solution in polynomial time, it has been argued that it has the properties of sub modular functions and, therefore, can be solved using a greedy algorithm. Most of the methods proposed to improve this complexity are based on the assumption that the entire graph is visible. However, this assumption does not hold for many real-world graphs. This study is conducted to extend current maximization methods with link prediction techniques to pseudo-visibility graphs. To this end, a graph generation method called the exponential random graph model (ERGM) is used for link prediction. The proposed method is tested using the data from the Snap dataset of Stanford University. According to the experimental tests, the proposed method is efficient on real-world graphs.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
314,978
1411.1784
Conditional Generative Adversarial Nets
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
37,366
2012.10870
Computer Vision based Accident Detection for Autonomous Vehicles
Numerous Deep Learning and sensor-based models have been developed to detect potential accidents with an autonomous vehicle. However, a self-driving car needs to be able to detect accidents between other vehicles in its path and take appropriate actions such as to slow down or stop and inform the concerned authorities. In this paper, we propose a novel support system for self-driving cars that detects vehicular accidents through a dashboard camera. The system leverages the Mask R-CNN framework for vehicle detection and a centroid tracking algorithm to track the detected vehicle. Additionally, the framework calculates various parameters such as speed, acceleration, and trajectory to determine whether an accident has occurred between any of the tracked vehicles. The framework has been tested on a custom dataset of dashcam footage and achieves a high accident detection rate while maintaining a low false alarm rate.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,464
2108.10029
Modeling time evolving COVID-19 uncertainties with density dependent asymptomatic infections and social reinforcement
The COVID-19 pandemic has posed significant challenges in modeling its complex epidemic transmissions, infection and contagion, which are very different from known epidemics. The challenges in quantifying COVID-19 complexities include effectively modeling its process and data uncertainties. The uncertainties are embedded in implicit and high-proportional undocumented infections, asymptomatic contagion, social reinforcement of infections, and various quality issues in the reported data. These uncertainties become even more apparent in the first two months of the COVID-19 pandemic, when the relevant knowledge, case reporting and testing were all limited. Here we introduce a novel hybrid approach Susceptible-Undocumented infected-Documented infected-Recovered (SUDR) model. First, SUDR (1) characterizes and distinguishes Undocumented (U) and Documented (D) infections commonly seen during COVID-19 incubation periods and asymptomatic infections. Second, SUDR characterizes the probabilistic density of infections by capturing exogenous processes. Lastly, SUDR approximates the density likelihood of COVID-19 prevalence over time by incorporating Bayesian inference into SUDR. Different from existing COVID-19 models, SUDR characterizes the undocumented infections during unknown transmission processes. To capture the uncertainties of temporal transmission and social reinforcement during COVID-19 contagion, the transmission rate is modeled by a time-varying density function of undocumented infectious cases. By sampling from the mean-field posterior distribution with reasonable priors, SUDR handles the randomness, noise and sparsity of COVID-19 observations widely seen in the public COVID-19 case data. The results demonstrate a deeper quantitative understanding of the above uncertainties, in comparison with classic SIR, time-dependent SIR, and probabilistic SIR models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
251,776
1804.06546
Deep Generative Networks For Sequence Prediction
This thesis investigates unsupervised time series representation learning for sequence prediction problems, i.e. generating nice-looking input samples given a previous history, for high dimensional input sequences by decoupling the static input representation from the recurrent sequence representation. We introduce three models based on Generative Stochastic Networks (GSN) for unsupervised sequence learning and prediction. Experimental results for these three models are presented on pixels of sequential handwritten digit (MNIST) data, videos of low-resolution bouncing balls, and motion capture data. The main contribution of this thesis is to provide evidence that GSNs are a viable framework to learn useful representations of complex sequential input data, and to suggest a new framework for deep generative models to learn complex sequences by decoupling static input representations from dynamic time dependency representations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
95,329
1611.09176
Simulation of clustering algorithms in OODBs in order to evaluate their performances
A good object clustering is critical to the performance of object-oriented databases. However, it always involves some kind of overhead for the system. The aim of this paper is to propose a modelling methodology in order to evaluate the performances of different clustering policies. This methodology has been used to compare the performances of three clustering algorithms found in the literature (Cactis, CK and ORION) that we considered representative of the current research in the field of object clustering. The actual performance evaluation was performed using simulation. Simulation experiments showed that the Cactis algorithm is better than the ORION algorithm and that the CK algorithm totally outperforms both other algorithms in terms of response time and clustering overhead.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
64,631
2311.02215
Towards model-free RL algorithms that scale well with unstructured data
Conventional reinforcement learning (RL) algorithms exhibit broad generality in their theoretical formulation and high performance on several challenging domains when combined with powerful function approximation. However, developing RL algorithms that perform well across problems with unstructured observations at scale remains challenging because most function approximation methods rely on externally provisioned knowledge about the structure of the input for good performance (e.g. convolutional networks, graph neural networks, tile-coding). A common practice in RL is to evaluate algorithms on a single problem, or on problems with limited variation in the observation scale. RL practitioners lack a systematic way to study how well a single RL algorithm performs when instantiated across a range of problem scales, and they lack function approximation techniques that scale well with unstructured observations. We address these limitations by providing environments and algorithms to study scaling for unstructured observation vectors and flat action spaces. We introduce a family of combinatorial RL problems with an exponentially large state space and high-dimensional dynamics but where linear computation is sufficient to learn a (nonlinear) value function estimate for performant control. We provide an algorithm that constructs reward-relevant general value function (GVF) questions to find and exploit predictive structure directly from the experience stream. In an empirical evaluation of the approach on synthetic problems, we observe a sample complexity that scales linearly with the observation size. The proposed algorithm reliably outperforms a conventional deep RL algorithm on these scaling problems, and they exhibit several desirable auxiliary properties. These results suggest new algorithmic mechanisms by which algorithms can learn at scale from unstructured data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
405,340
2202.01354
Dissecting BFT Consensus: In Trusted Components we Trust!
The growing interest in reliable multi-party applications has fostered widespread adoption of Byzantine Fault-Tolerant (BFT) consensus protocols. Existing BFT protocols need f more replicas than Paxos-style protocols to prevent equivocation attacks. Trust-BFT protocols instead seek to minimize this cost by making use of trusted components at replicas. This paper makes two contributions. First, we analyze the design of existing Trust-BFT protocols and uncover three fundamental limitations that preclude most practical deployments. Some of these limitations are fundamental, while others are linked to the state of trusted components today. Second, we introduce a novel suite of consensus protocols, FlexiTrust, that attempts to sidestep these issues. We show that our FlexiTrust protocols achieve up to 185% more throughput than their Trust-BFT counterparts.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
true
278,457
2312.16909
A GAN-based Semantic Communication for Text without CSI
Recently, semantic communication (SC) has been regarded as one of the potential paradigms of 6G. Current SC frameworks require channel state information (CSI) to handle severe signal distortion induced by channel fading. Since the channel estimation overhead for obtaining CSI cannot be neglected, we therefore propose a generative adversarial network (GAN) based SC framework (Ti-GSC) that doesn't require CSI. In Ti-GSC, two main modules, i.e., an autoencoder-based encoder-decoder module (AEDM) and a GAN-based signal distortion suppression module (GSDSM) are included where AEDM first encodes the data at the source before transmission, and then GSDSM suppresses the distortion of the received signals in both syntactic and semantic dimensions at the destination. At last, AEDM decodes the distortion-suppressed signal at the destination. To measure signal distortion, syntactic distortion and semantic distortion terms are newly added to the total loss function. To achieve better training results, joint optimization-based training (JOT) and alternating optimization-based training (AOT) are designed for the proposed Ti-GSC. Experimental results show that JOT is more efficient for Ti-GSC. Moreover, without CSI, bilingual evaluation understudy (BLEU) score achieved by Ti-GSC is about 40% and 62% higher than that achieved by existing SC frameworks in Rician and Rayleigh fading, respectively. (*Due to the notification of arXiv "The Abstract field cannot be longer than 1,920 characters", the appeared Abstract is shortened. For the full Abstract, please download the Article.)
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
418,561
2409.02334
YoloTag: Vision-based Robust UAV Navigation with Fiducial Markers
By harnessing fiducial markers as visual landmarks in the environment, Unmanned Aerial Vehicles (UAVs) can rapidly build precise maps and navigate spaces safely and efficiently, unlocking their potential for fluent collaboration and coexistence with humans. Existing fiducial marker methods rely on handcrafted feature extraction, which sacrifices accuracy. On the other hand, deep learning pipelines for marker detection fail to meet real-time runtime constraints crucial for navigation applications. In this work, we propose YoloTag -a real-time fiducial marker-based localization system. YoloTag uses a lightweight YOLO v8 object detector to accurately detect fiducial markers in images while meeting the runtime constraints needed for navigation. The detected markers are then used by an efficient perspective-n-point algorithm to estimate UAV states. However, this localization system introduces noise, causing instability in trajectory tracking. To suppress noise, we design a higher-order Butterworth filter that effectively eliminates noise through frequency domain analysis. We evaluate our algorithm through real-robot experiments in an indoor environment, comparing the trajectory tracking performance of our method against other approaches in terms of several distance metrics.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
485,653
2406.11844
Prompting the E-Brushes: Users as Authors in Generative AI
Since its introduction in 2022, Generative AI has significantly impacted the art world, from winning state art fairs to creating complex videos from simple prompts. Amid this renaissance, a pivotal issue emerges: should users of Generative AI be recognized as authors eligible for copyright protection? The Copyright Office, in its March 2023 Guidance, argues against this notion. By comparing the prompts to clients' instructions for commissioned art, the Office denies users authorship due to their limited role in the creative process. This Article challenges this viewpoint and advocates for the recognition of Generative AI users who incorporate these tools into their creative endeavors. It argues that the current policy fails to consider the intricate and dynamic interaction between Generative AI users and the models, where users actively influence the output through a process of adjustment, refinement, selection, and arrangement. Rather than dismissing the contributions generated by AI, this Article suggests a simplified and streamlined registration process that acknowledges the role of AI in creation. This approach not only aligns with the constitutional goal of promoting the progress of science and useful arts but also encourages public engagement in the creative process, which contributes to the pool of training data for AI. Moreover, it advocates for a flexible framework that evolves alongside technological advancements while ensuring safety and public interest. In conclusion, by examining text-to-image generators and addressing misconceptions about Generative AI and user interaction, this Article calls for a regulatory framework that adapts to technological developments and safeguards public interests
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
465,097
2311.03083
Quantifying the value of information transfer in population-based SHM
Population-based structural health monitoring (PBSHM), seeks to address some of the limitations associated with data scarcity that arise in traditional SHM. A tenet of the population-based approach to SHM is that information can be shared between sufficiently-similar structures in order to improve predictive models. Transfer learning techniques, such as domain adaptation, have been shown to be a highly-useful technology for sharing information between structures when developing statistical classifiers for PBSHM. Nonetheless, transfer-learning techniques are not without their pitfalls. In some circumstances, for example if the data distributions associated with the structures within a population are dissimilar, applying transfer-learning methods can be detrimental to classification performance -- this phenomenon is known as negative transfer. Given the potentially-severe consequences of negative transfer, it is prudent for engineers to ask the question `when, what, and how should one transfer between structures?'. The current paper aims to demonstrate a transfer-strategy decision process for a classification task for a population of simulated structures in the context of a representative SHM maintenance problem, supported by domain adaptation. The transfer decision framework is based upon the concept of expected value of information transfer. In order to compute the expected value of information transfer, predictions must be made regarding the classification (and decision performance) in the target domain following information transfer. In order to forecast the outcome of transfers, a probabilistic regression is used here to predict classification performance from a proxy for structural similarity based on the modal assurance criterion.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
405,709
2409.13511
An Efficient Multi-Robot Arm Coordination Strategy for Pick-and-Place Tasks using Reinforcement Learning
We introduce a novel strategy for multi-robot sorting of waste objects using Reinforcement Learning. Our focus lies on finding optimal picking strategies that facilitate an effective coordination of a multi-robot system, subject to maximizing the waste removal potential. We realize this by formulating the sorting problem as an OpenAI gym environment and training a neural network with a deep reinforcement learning algorithm. The objective function is set up to optimize the picking rate of the robotic system. In simulation, we draw a performance comparison to an intuitive combinatorial game theory-based approach. We show that the trained policies outperform the latter and achieve up to 16% higher picking rates. Finally, the respective algorithms are validated on a hardware setup consisting of a two-robot sorting station able to process incoming waste objects through pick-and-place operations.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
490,024
2312.05688
NLLG Quarterly arXiv Report 09/23: What are the most influential current AI Papers?
Artificial Intelligence (AI) has witnessed rapid growth, especially in the subfields Natural Language Processing (NLP), Machine Learning (ML) and Computer Vision (CV). Keeping pace with this rapid progress poses a considerable challenge for researchers and professionals in the field. In this arXiv report, the second of its kind, which covers the period from January to September 2023, we aim to provide insights and analysis that help navigate these dynamic areas of AI. We accomplish this by 1) identifying the top-40 most cited papers from arXiv in the given period, comparing the current top-40 papers to the previous report, which covered the period January to June; 2) analyzing dataset characteristics and keyword popularity; 3) examining the global sectoral distribution of institutions to reveal differences in engagement across geographical areas. Our findings highlight the continued dominance of NLP: while only 16% of all submitted papers have NLP as primary category (more than 25% have CV and ML as primary category), 50% of the most cited papers have NLP as primary category, 90% of which target LLMs. Additionally, we show that i) the US dominates among both top-40 and top-9k papers, followed by China; ii) Europe clearly lags behind and is hardly represented in the top-40 most cited papers; iii) US industry is largely overrepresented in the top-40 most influential papers.
false
false
false
false
true
false
true
false
true
false
false
true
false
true
false
false
false
true
414,188
2010.13228
Unified Gradient Reweighting for Model Biasing with Applications to Source Separation
Recent deep learning approaches have shown great improvement in audio source separation tasks. However, the vast majority of such work is focused on improving average separation performance, often neglecting to examine or control the distribution of the results. In this paper, we propose a simple, unified gradient reweighting scheme, with a lightweight modification to bias the learning process of a model and steer it towards a certain distribution of results. More specifically, we reweight the gradient updates of each batch, using a user-specified probability distribution. We apply this method to various source separation tasks, in order to shift the operating point of the models towards different objectives. We demonstrate different parameterizations of our unified reweighting scheme can be used towards addressing several real-world problems, such as unreliable separation estimates. Our framework enables the user to control a robustness trade-off between worst and average performance. Moreover, we experimentally show that our unified reweighting scheme can also be used in order to shift the focus of the model towards being more accurate for user-specified sound classes or even towards easier examples in order to enable faster convergence.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
203,044
2412.16182
Decoding Poultry Vocalizations -- Natural Language Processing and Transformer Models for Semantic and Emotional Analysis
Deciphering the acoustic language of chickens offers new opportunities in animal welfare and ecological informatics. Their subtle vocal signals encode health conditions, emotional states, and dynamic interactions within ecosystems. Understanding the semantics of these calls provides a valuable tool for interpreting their functional vocabulary and clarifying how each sound serves a specific purpose in social and environmental contexts. We apply advanced Natural Language Processing and transformer based models to translate bioacoustic data into meaningful insights. Our method integrates Wave2Vec 2.0 for raw audio feature extraction with a fine tuned Bidirectional Encoder Representations from Transformers model, pretrained on a broad corpus of animal sounds and adapted to poultry tasks. This pipeline decodes poultry vocalizations into interpretable categories including distress calls, feeding signals, and mating vocalizations, revealing emotional nuances often overlooked by conventional analyses. Achieving 92 percent accuracy in classifying key vocalization types, our approach demonstrates the feasibility of real time automated monitoring of flock health and stress. By tracking this functional vocabulary, farmers can respond proactively to environmental or behavioral changes, improving poultry welfare, reducing stress related productivity losses, and supporting more sustainable farm management. Beyond agriculture, this research enhances our understanding of computational ecology. Accessing the semantic foundation of animal calls may indicate biodiversity, environmental stressors, and species interactions, informing integrative ecosystem level decision making.
false
false
true
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
519,395
1908.03087
A second-order face-centred finite volume method for elliptic problems
A second-order face-centred finite volume method (FCFV) is proposed. Contrary to the more popular cell-centred and vertex-centred finite volume (FV) techniques, the proposed method defines the solution on the faces of the mesh (edges in two dimensions). The method is based on a mixed formulation and therefore considers the solution and its gradient as independent unknowns. They are computed solving an element-by-element problem after the solution at the faces is determined. The proposed approach avoids the need of reconstructing the solution gradient, as required by cell-centred and vertex-centred FV methods. This strategy leads to a method that is insensitive to mesh distortion and stretching. The current method is second-order and requires the solution of a global system of equations of identical size and identical number of non-zero elements when compared to the recently proposed first-order FCFV. The formulation is presented for Poisson and Stokes problems. Numerical examples are used to illustrate the approximation properties of the method as well as to demonstrate its potential in three dimensional problems with complex geometries. The integration of a mesh adaptive procedure in the FCFV solution algorithm is also presented.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
141,158
2204.14155
Autonomous Crosslink Radionavigation for a Lunar CubeSat Mission
This study presents an autonomous orbit determination system based on crosslink radiometric measurements applied to a future lunar CubeSat mission to clearly highlight its advantages with respect to existing ground-based navigation strategies. This work is based on the Linked Autonomous Interplanetary Satellite Orbit Navigation (LiAISON) method which provides an autonomous navigation solution solely using satellite-to-satellite measurements, such as range and/or range-rate, to estimate absolute spacecraft states when at least one of the involved spacecraft has an orbit with a unique size, shape, and orientation. The lunar vicinity is a perfect candidate for this type of application due to the asymmetrical gravity field: the selected lunar mission, an Earth-Moon L2 (EML2) Halo orbiter, has an inter-satellite link between a lunar elliptical frozen orbiter. Simulation results show that, even in case of high-measurement errors (in the order of 100 m, 1 sigma, the navigation filter estimates the true states of spacecraft at EML2 with an error in the order of 500 m for position, and 2 mm/s for velocity, respectively and the elliptical lunar frozen orbiter states can be estimated in the order of 100 m for position and 1 cm/s for velocity, respectively. This study shows that range-only measurements provide better state estimation than range-rate-only measurements for this specific situation. Different bias handling strategies are also investigated. It has been found that even a less accurate ranging method, such as data-aided ranging, provides a sufficient orbit determination solution. This would simplify the communication system design for the selected CubeSat mission. The most observable states are found to be position states of the lunar orbiter via the observability analysis. In addition, the best tracking windows are also investigated for the selected mission scenario.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
294,073
1902.06467
Stable Topological Summaries for Analyzing the Organization of Cells in a Packed Tissue
We use Topological Data Analysis tools for studying the inner organization of cells in segmented images of epithelial tissues. More specifically, for each segmented image, we compute different persistence barcodes, which codify lifetime of homology classes (persistent homology) along different filtrations (increasing nested sequences of simplicial complexes) that are built from the regions representing the cells in the tissue. We use a complete and well-grounded set of numerical variables over those persistence barcodes, also known as topological summaries. A novel combination of normalization methods for both, the set of input segmented images and the produced barcodes, allows to prove stability results for those variables with respect to small changes in the input, as well as invariance to image scale. Our study provides new insights to this problem, such as a possible novel indicator for the development of the drosophila wing disc tissue or the importance of centroids distribution to differentiate some tissues from their CVT-path counterpart (a mathematical model of epithelia based on Voronoi diagrams). We also show how the use of topological summaries may improve the classification accuracy of epithelial images using Random Forests algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,773
1908.02288
BCN20000: Dermoscopic Lesions in the Wild
This article summarizes the BCN20000 dataset, composed of 19424 dermoscopic images of skin lesions captured from 2010 to 2016 in the facilities of the Hospital Cl\'inic in Barcelona. With this dataset, we aim to study the problem of unconstrained classification of dermoscopic images of skin cancer, including lesions found in hard-to-diagnose locations (nails and mucosa), large lesions which do not fit in the aperture of the dermoscopy device, and hypo-pigmented lesions. The BCN20000 will be provided to the participants of the ISIC Challenge 2019, where they will be asked to train algorithms to classify dermoscopic images of skin cancer automatically.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
140,966
2412.18859
Few-shot Metric Domain Adaptation: Practical Learning Strategies for an Automated Plant Disease Diagnosis
Numerous studies have explored image-based automated systems for plant disease diagnosis, demonstrating impressive diagnostic capabilities. However, recent large-scale analyses have revealed a critical limitation: that the diagnostic capability suffers significantly when validated on images captured in environments (domains) differing from those used during training. This shortfall stems from the inherently limited dataset size and the diverse manifestation of disease symptoms, combined with substantial variations in cultivation environments and imaging conditions, such as equipment and composition. These factors lead to insufficient variety in training data, ultimately constraining the system's robustness and generalization. To address these challenges, we propose Few-shot Metric Domain Adaptation (FMDA), a flexible and effective approach for enhancing diagnostic accuracy in practical systems, even when only limited target data is available. FMDA reduces domain discrepancies by introducing a constraint to the diagnostic model that minimizes the "distance" between feature spaces of source (training) data and target data with limited samples. FMDA is computationally efficient, requiring only basic feature distance calculations and backpropagation, and can be seamlessly integrated into any machine learning (ML) pipeline. In large-scale experiments, involving 223,015 leaf images across 20 fields and 3 crop species, FMDA achieved F1 score improvements of 11.1 to 29.3 points compared to cases without target data, using only 10 images per disease from the target domain. Moreover, FMDA consistently outperformed fine-tuning methods utilizing the same data, with an average improvement of 8.5 points.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
520,608
1202.0372
Analog Network Coding in General SNR Regime
The problem of maximum rate achievable with analog network coding for a unicast communication over a layered wireless relay network with directed links is considered. A relay node performing analog network coding scales and forwards the signals received at its input. Recently this problem has been considered under two assumptions: (A) each relay node scales its received signal to the upper bound of its transmit power constraint, (B) the relay nodes in specific subsets of the network operate in the high-SNR regime. We establish that assumption (A), in general, leads to suboptimal end-to-end rate. We also characterize the performance of analog network coding in class of symmetric layered networks without assumption (B). The key contribution of this work is a lemma that states that a globally optimal set of scaling factors for the nodes in a layered relay network that maximizes the end-to-end rate can be computed layer-by-layer. Specifically, a rate-optimal set of scaling factors for the nodes in a layer is the one that maximizes the sum-rate of the nodes in the next layer. This critical insight allows us to characterize analog network coding performance in network scenarios beyond those that can be analyzed using the existing approaches. We illustrate this by computing the maximum rate achievable with analog network coding in one particular layered network, in various communication scenarios.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
14,077
2412.10432
Imitate Before Detect: Aligning Machine Stylistic Preference for Machine-Revised Text Detection
Large Language Models (LLMs) have revolutionized text generation, making detecting machine-generated text increasingly challenging. Although past methods have achieved good performance on detecting pure machine-generated text, those detectors have poor performance on distinguishing machine-revised text (rewriting, expansion, and polishing), which can have only minor changes from its original human prompt. As the content of text may originate from human prompts, detecting machine-revised text often involves identifying distinctive machine styles, e.g., worded favored by LLMs. However, existing methods struggle to detect machine-style phrasing hidden within the content contributed by humans. We propose the "Imitate Before Detect" (ImBD) approach, which first imitates the machine-style token distribution, and then compares the distribution of the text to be tested with the machine-style distribution to determine whether the text has been machine-revised. To this end, we introduce style preference optimization (SPO), which aligns a scoring LLM model to the preference of text styles generated by machines. The aligned scoring model is then used to calculate the style-conditional probability curvature (Style-CPC), quantifying the log probability difference between the original and conditionally sampled texts for effective detection. We conduct extensive comparisons across various scenarios, encompassing text revisions by six LLMs, four distinct text domains, and three machine revision types. Compared to existing state-of-the-art methods, our method yields a 13% increase in AUC for detecting text revised by open-source LLMs, and improves performance by 5% and 19% for detecting GPT-3.5 and GPT-4o revised text, respectively. Notably, our method surpasses the commercially trained GPT-Zero with just $1,000$ samples and five minutes of SPO, demonstrating its efficiency and effectiveness.
false
false
false
false
true
false
false
false
true
false
false
false
true
false
false
false
false
false
516,923
1607.05993
Closed-Loop Estimation of Oscillator g-Sensitivity in a GNSS/IMU System
We propose a simple method for estimating crystal oscillator g-sensitivity in inertially aided Global Navigation Satellite System (GNSS) receivers. It does not require any specific equipment, like GNSS signal simulators or rate tables. The method is based on analyzing closed-loop phase tracking errors. This enables us to utilize the actual GNSS signal as the frequency reference, despite the presence of an unknown Doppler shift in it. The method has been successfully applied to the calibration of an oven-controlled crystal oscillator.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
58,828
2208.03666
See What You See: Self-supervised Cross-modal Retrieval of Visual Stimuli from Brain Activity
Recent studies demonstrate the use of a two-stage supervised framework to generate images that depict human perception to visual stimuli from EEG, referring to EEG-visual reconstruction. They are, however, unable to reproduce the exact visual stimulus, since it is the human-specified annotation of images, not their data, that determines what the synthesized images are. Moreover, synthesized images often suffer from noisy EEG encodings and unstable training of generative models, making them hard to recognize. Instead, we present a single-stage EEG-visual retrieval paradigm where data of two modalities are correlated, as opposed to their annotations, allowing us to recover the exact visual stimulus for an EEG clip. We maximize the mutual information between the EEG encoding and associated visual stimulus through optimization of a contrastive self-supervised objective, leading to two additional benefits. One, it enables EEG encodings to handle visual classes beyond seen ones during training, since learning is not directed at class annotations. In addition, the model is no longer required to generate every detail of the visual stimulus, but rather focuses on cross-modal alignment and retrieves images at the instance level, ensuring distinguishable model output. Empirical studies are conducted on the largest single-subject EEG dataset that measures brain activities evoked by image stimuli. We demonstrate the proposed approach completes an instance-level EEG-visual retrieval task which existing methods cannot. We also examine the implications of a range of EEG and visual encoder structures. Furthermore, for a mostly studied semantic-level EEG-visual classification task, despite not using class annotations, the proposed method outperforms state-of-the-art supervised EEG-visual reconstruction approaches, particularly on the capability of open class recognition.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
311,863
2301.05785
Efficient Activation Function Optimization through Surrogate Modeling
Carefully designed activation functions can improve the performance of neural networks in many machine learning tasks. However, it is difficult for humans to construct optimal activation functions, and current activation function search algorithms are prohibitively expensive. This paper aims to improve the state of the art through three steps: First, the benchmark datasets Act-Bench-CNN, Act-Bench-ResNet, and Act-Bench-ViT were created by training convolutional, residual, and vision transformer architectures from scratch with 2,913 systematically generated activation functions. Second, a characterization of the benchmark space was developed, leading to a new surrogate-based method for optimization. More specifically, the spectrum of the Fisher information matrix associated with the model's predictive distribution at initialization and the activation function's output distribution were found to be highly predictive of performance. Third, the surrogate was used to discover improved activation functions in several real-world tasks, with a surprising finding: a sigmoidal design that outperformed all other activation functions was discovered, challenging the status quo of always using rectifier nonlinearities in deep learning. Each of these steps is a contribution in its own right; together they serve as a practical and theoretical foundation for further research on activation function optimization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
340,443
2410.13565
SDI-Paste: Synthetic Dynamic Instance Copy-Paste for Video Instance Segmentation
Data augmentation methods such as Copy-Paste have been studied as effective ways to expand training datasets while incurring minimal costs. While such methods have been extensively implemented for image level tasks, we found no scalable implementation of Copy-Paste built specifically for video tasks. In this paper, we leverage the recent growth in video fidelity of generative models to explore effective ways of incorporating synthetically generated objects into existing video datasets to artificially expand object instance pools. We first procure synthetic video sequences featuring objects that morph dynamically with time. Our carefully devised pipeline automatically segments then copy-pastes these dynamic instances across the frames of any target background video sequence. We name our video data augmentation pipeline Synthetic Dynamic Instance Copy-Paste, and test it on the complex task of Video Instance Segmentation which combines detection, segmentation and tracking of object instances across a video sequence. Extensive experiments on the popular Youtube-VIS 2021 dataset using two separate popular networks as baselines achieve strong gains of +2.9 AP (6.5%) and +2.1 AP (4.9%). We make our code and models publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
499,583
2502.04809
Humans Co-exist, So Must Embodied Artificial Agents
Modern embodied artificial agents excel in static, predefined tasks but fall short in dynamic and long-term interactions with humans. On the other hand, humans can adapt and evolve continuously, exploiting the situated knowledge embedded in their environment and other agents, thus contributing to meaningful interactions. We introduce the concept of co-existence for embodied artificial agents and argues that it is a prerequisite for meaningful, long-term interaction with humans. We take inspiration from biology and design theory to understand how human and non-human organisms foster entities that co-exist within their specific niches. Finally, we propose key research directions for the machine learning community to foster co-existing embodied agents, focusing on the principles, hardware and learning methods responsible for shaping them.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
531,330
1002.0019
Regularized Modified BPDN for Noisy Sparse Reconstruction with Partial Erroneous Support and Signal Value Knowledge
We study the problem of sparse reconstruction from noisy undersampled measurements when the following two things are available. (1) We are given partial, and partly erroneous, knowledge of the signal's support, denoted by $T$. (2) We are also given an erroneous estimate of the signal values on $T$, denoted by $(\hat{\mu})_T$. In practice, both these may be available from available prior knowledge. Alternatively, in recursive reconstruction applications, like real-time dynamic MRI, one can use the support estimate and the signal value estimate from the previous time instant as $T$ and $(\hat{\mu})_T$. In this work, we introduce regularized modified-BPDN (reg-mod-BPDN) and obtain computable bounds on its reconstruction error. Reg-mod-BPDN tries to find the signal that is sparsest outside the set $T$, while being "close enough" to $(\hat{\mu})_T$ on $T$ and while satisfying the data constraint. Corresponding results for modified-BPDN and BPDN follow as direct corollaries. A second key contribution is an approach to obtain computable error bounds that hold without any sufficient conditions. This makes it easy to compare the bounds for the various approaches. Empirical reconstruction error comparisons with many existing approaches are also provided.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,563
1909.00669
Ultra-Low Energy and High Speed LIF Neuron using Silicon Bipolar Impact Ionization MOSFET for Spiking Neural Networks
Silicon bipolar impact ionization MOSFET offers the potential for realization of leaky integrated fire (LIF) neuron due to the presence of parasitic BJT in the floating body. In this work, we have proposed an L shaped gate bipolar impact ionization MOS (L-BIMOS), with reduced breakdown voltage ($V_{B}$ = 1.68 V) and demonstrated the functioning of LIF neuron based on positive feedback mechanism of parasitic BJT. Using 2-D TCAD simulations, we manifest that the proposed L-BIMOS exhibits a low threshold voltage (0.2 V) for firing a spike, and the minimum energy required to fire a single spike for L-BIMOS is calculated to be 0.18 pJ, which makes proposed device $194\times$ more energy efficient than PD-SOI MOSFET silicon neuron (MOSFET silicon neuron) and $5\times10^{3}$ times more energy efficient than analog/digital circuit based conventional neuron. Furthermore, the proposed L-BIMOS silicon neuron exhibits spiking frequency in the GHz range, when the drain is biased at $V_{DG}$ = 2.0 V.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
143,689
2501.14495
BILLNET: A Binarized Conv3D-LSTM Network with Logic-gated residual architecture for hardware-efficient video inference
Long Short-Term Memory (LSTM) and 3D convolution (Conv3D) show impressive results for many video-based applications but require large memory and intensive computing. Motivated by recent works on hardware-algorithmic co-design towards efficient inference, we propose a compact binarized Conv3D-LSTM model architecture called BILLNET, compatible with a highly resource-constrained hardware. Firstly, BILLNET proposes to factorize the costly standard Conv3D by two pointwise convolutions with a grouped convolution in-between. Secondly, BILLNET enables binarized weights and activations via a MUX-OR-gated residual architecture. Finally, to efficiently train BILLNET, we propose a multi-stage training strategy enabling to fully quantize LSTM layers. Results on Jester dataset show that our method can obtain high accuracy with extremely low memory and computational budgets compared to existing Conv3D resource-efficient models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
527,142
2204.02164
Joint Learning of Feature Extraction and Cost Aggregation for Semantic Correspondence
Establishing dense correspondences across semantically similar images is one of the challenging tasks due to the significant intra-class variations and background clutters. To solve these problems, numerous methods have been proposed, focused on learning feature extractor or cost aggregation independently, which yields sub-optimal performance. In this paper, we propose a novel framework for jointly learning feature extraction and cost aggregation for semantic correspondence. By exploiting the pseudo labels from each module, the networks consisting of feature extraction and cost aggregation modules are simultaneously learned in a boosting fashion. Moreover, to ignore unreliable pseudo labels, we present a confidence-aware contrastive loss function for learning the networks in a weakly-supervised manner. We demonstrate our competitive results on standard benchmarks for semantic correspondence.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
289,847
1904.00724
GAN You Do the GAN GAN?
Generative Adversarial Networks (GANs) have become a dominant class of generative models. In recent years, GAN variants have yielded especially impressive results in the synthesis of a variety of forms of data. Examples include compelling natural and artistic images, textures, musical sequences, and 3D object files. However, one obvious synthesis candidate is missing. In this work, we answer one of deep learning's most pressing questions: GAN you do the GAN GAN? That is, is it possible to train a GAN to model a distribution of GANs? We release the full source code for this project under the MIT license.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
125,946
2408.12158
Could Bibliometrics Reveal Top Science and Technology Achievements and Researchers? The Case for Evaluatology-based Science and Technology Evaluation
By utilizing statistical methods to analyze bibliographic data, bibliometrics faces inherent limitations in identifying the most significant science and technology achievements and researchers. To overcome this challenge, we present an evaluatology-based science and technology evaluation methodology. At the heart of this approach lies the concept of an extended evaluation condition, encompassing eight crucial components derived from a field. We define four relationships that illustrate the connections among various achievements based on their mapped extended EC components, as well as their temporal and citation links. Within a relationship under an extended evaluation condition, evaluators can effectively compare these achievements by carefully addressing the influence of confounding variables. We establish a real-world evaluation system encompassing an entire collection of achievements, each of which is mapped to several components of an extended EC. Within a specific field like chip technology or open source, we construct a perfect evaluation model that can accurately trace the evolution and development of all achievements in terms of four relationships based on the real-world evaluation system. Building upon the foundation of the perfect evaluation model, we put forth four-round rules to eliminate non-significant achievements by utilizing four relationships. This process allows us to establish a pragmatic evaluation model that effectively captures the essential achievements, serving as a curated collection of the top N achievements within a specific field during a specific timeframe. We present a case study on the top 100 Chip achievements which highlights its practical application and efficacy in identifying significant achievements and researchers that otherwise can not be identified by using bibliometrics.
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
482,618
2401.15969
Routers in Vision Mixture of Experts: An Empirical Study
Mixture-of-Experts (MoE) models are a promising way to scale up model capacity without significantly increasing computational cost. A key component of MoEs is the router, which decides which subset of parameters (experts) process which feature embeddings (tokens). In this paper, we present a comprehensive study of routers in MoEs for computer vision tasks. We introduce a unified MoE formulation that subsumes different MoEs with two parametric routing tensors. This formulation covers both sparse MoE, which uses a binary or hard assignment between experts and tokens, and soft MoE, which uses a soft assignment between experts and weighted combinations of tokens. Routers for sparse MoEs can be further grouped into two variants: Token Choice, which matches experts to each token, and Expert Choice, which matches tokens to each expert. We conduct head-to-head experiments with 6 different routers, including existing routers from prior work and new ones we introduce. We show that (i) many routers originally developed for language modeling can be adapted to perform strongly in vision tasks, (ii) in sparse MoE, Expert Choice routers generally outperform Token Choice routers, and (iii) soft MoEs generally outperform sparse MoEs with a fixed compute budget. These results provide new insights regarding the crucial role of routers in vision MoE models.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
424,668
1807.01364
Visual Pattern-Driven Exploration of Big Data
Pattern extraction algorithms are enabling insights into the ever-growing amount of today's datasets by translating reoccurring data properties into compact representations. Yet, a practical problem arises: With increasing data volumes and complexity also the number of patterns increases, leaving the analyst with a vast result space. Current algorithmic and especially visualization approaches often fail to answer central overview questions essential for a comprehensive understanding of pattern distributions and support, their quality, and relevance to the analysis task. To address these challenges, we contribute a visual analytics pipeline targeted on the pattern-driven exploration of result spaces in a semi-automatic fashion. Specifically, we combine image feature analysis and unsupervised learning to partition the pattern space into interpretable, coherent chunks, which should be given priority in a subsequent in-depth analysis. In our analysis scenarios, no ground-truth is given. Thus, we employ and evaluate novel quality metrics derived from the distance distributions of our image feature vectors and the derived cluster model to guide the feature selection process. We visualize our results interactively, allowing the user to drill down from overview to detail into the pattern space and demonstrate our techniques in a case study on biomedical genomic data.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
102,045
2501.03020
AC-aware Optimization Framework for Under-Frequency Load Shedding
Under-frequency load shedding (UFLS) prevents system collapse during large disturbances. Increased penetration of distributed energy resources (DERs) and reduced system inertia makes it challenging to design a static UFLS scheme, which relies on preset frequency thresholds and load shed fractions to meet design criteria across all possible operating conditions. Due to non-linearity and traceability issues, previous adaptive UFLS schemes use simplified tractable frequency models that overlook AC network effects such as voltage-dependent load/generation. This paper leverages model order reduction techniques to obtain a higher fidelity low-order model of system frequency dynamics that captures AC network effects while incorporating turbine governor action and their associated limits. The model is then used in a new AC-aware predictive optimization framework to adapt UFLS setpoints periodically based on current operating conditions while minimizing load shed. Validated on a 1,648-bus system with PSS/E simulations, the proposed method meets design criteria under various operating conditions and disturbance scenarios. Furthermore, the framework outperforms conventional static UFLS schemes and adaptive UFLS schemes based on simplified dynamic models.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
522,732
2104.09667
Manipulating SGD with Data Ordering Attacks
Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors.
false
false
false
false
true
false
true
false
false
false
false
true
true
false
false
false
false
false
231,309
1710.11458
Breaking the Interference Barrier in Dense Wireless Networks with Interference Alignment
A fundamental problem arising in dense wireless networks is the high co-channel interference. Interference alignment (IA) was recently proposed as an effective way to combat interference in wireless networks. The concept of IA, though, is originated by the capacity study of interference channels and as such, its performance is mainly gauged under ideal assumptions, such as instantaneous and perfect channel state information (CSI) at all nodes, and homogeneous signal-to-noise ratio (SNR) users, i.e., each user has the same average SNR. Consequently, the performance of IA under realistic conditions has not been completely investigated yet. In this paper, we aim at filling this gap by providing a performance assessment of spatial IA in practical systems. Specifically, we derive a closed-form expression for the IA average sum-rate when CSI is acquired through training and users have heterogeneous SNR. A main insight from our analysis is that IA can indeed provide significant spectral efficiency gains over traditional approaches in a wide range of dense network scenarios. To demonstrate this, we consider the examples of linear, grid and random network topologies.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
83,603
cs/0003016
Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective
Diagnostic reasoning has been characterized logically as consistency-based reasoning or abductive reasoning. Previous analyses in the literature have shown, on the one hand, that choosing the (in general more restrictive) abductive definition may be appropriate or not, depending on the content of the knowledge base [Console&Torasso91], and, on the other hand, that, depending on the choice of the definition the same knowledge should be expressed in different form [Poole94]. Since in Model-Based Diagnosis a major problem is finding the right way of abstracting the behavior of the system to be modeled, this paper discusses the relation between modeling, and in particular abstraction in the model, and the notion of diagnosis.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
537,030
1303.5760
A Non-Numeric Approach to Multi-Criteria/Multi-Expert Aggregation Based on Approximate Reasoning
We describe a technique that can be used for the fusion of multiple sources of information as well as for the evaluation and selection of alternatives under multi-criteria. Three important properties contribute to the uniqueness of the technique introduced. The first is the ability to do all necessary operations and aggregations with information that is of a nonnumeric linguistic nature. This facility greatly reduces the burden on the providers of information, the experts. A second characterizing feature is the ability assign, again linguistically, differing importance to the criteria or in the case of information fusion to the individual sources of information. A third significant feature of the approach is its ability to be used as method to find a consensus of the opinion of multiple experts on the issue of concern. The techniques used in this approach are base on ideas developed from the theory of approximate reasoning. We illustrate the approach with a problem of project selection.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,208