Dataset schema (one record per paper):

  id                  string, 9-16 chars
  title               string, 4-278 chars
  abstract            string, 3-4.08k chars
  cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR  cs.LG  cs.RO  cs.CL
  cs.IT  cs.SY  cs.CV  cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                      bool, 2 classes each (multi-label subject flags)
  __index_level_0__   int64, 0 to 541k

Each record below gives the id, title, and abstract, followed by
"labels" (the subject flags set to true) and "index" (the
__index_level_0__ value).
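Since the flattened one-hot boolean columns are awkward to read row by row, a small helper that collapses them into a per-record label list can be useful. The sketch below is a minimal, hypothetical Python example: the column names come from the schema above, but the record layout (one dict per row) and the helper name extract_labels are our assumptions, not part of the dataset's tooling.

```python
# Collapse the one-hot boolean category columns into a label list.
# Assumes each record has been loaded as a dict keyed by the column
# names in the schema above (an assumption for illustration).

CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def extract_labels(record: dict) -> list[str]:
    """Return the names of the category columns that are True."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Example on the first record below (missing columns default to False):
example = {
    "id": "2011.10214",
    "cs.CE": True,
    "Other": True,
}
assert extract_labels(example) == ["cs.CE", "Other"]
```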
2011.10214
PIFE-PIC: Parallel Immersed-Finite-Element Particle-In-Cell For 3-D Kinetic Simulations of Plasma-Material Interactions
This paper presents PIFE-PIC, a recently developed code package implementing a novel three-dimensional (3-D) Parallel Immersed-Finite-Element (IFE) Particle-in-Cell (PIC) model for particle simulations of plasma-material interactions. This framework is based on the recently developed non-homogeneous electrostatic IFE-PIC algorithm, which is designed to handle complex plasma-material interface conditions associated with irregular geometries using a Cartesian-mesh-based PIC. Three-dimensional domain decomposition is utilized for both the electrostatic field solver with IFE and the particle operations in PIC to distribute the computation among multiple processors. A simulation of the orbital-motion-limited (OML) sheath of a dielectric sphere immersed in a stationary plasma is carried out to validate PIFE-PIC and profile the parallel performance of the code package. Furthermore, a large-scale simulation of plasma charging at a lunar crater, containing 2 million PIC cells (10 million FE/IFE cells) and about 520 million particles and running for 20,000 PIC steps in about 109 wall-clock hours, is presented to demonstrate the high-performance computing capability of PIFE-PIC.
labels: cs.CE, Other
index: 207,440
2403.18038
TGGLinesPlus: A robust topological graph-guided computer vision algorithm for line detection from images
Line detection is a classic and essential problem in image processing, computer vision and machine intelligence. It has many important applications, including image vectorization (e.g., document recognition and art design), indoor mapping, and important societal challenges (e.g., sea ice fracture line extraction from satellite imagery). Many line detection algorithms and methods have been developed, but robust and intuitive methods are still lacking. In this paper, we propose and implement a topological graph-guided algorithm, named TGGLinesPlus, for line detection. Our experiments on images from a wide range of domains demonstrate the flexibility of the TGGLinesPlus algorithm. We benchmarked our algorithm against five classic and state-of-the-art line detection methods and evaluated the results qualitatively and quantitatively; the results demonstrate the robustness of TGGLinesPlus.
labels: cs.CV
index: 441,733
1809.03577
Using Image Fairness Representations in Diversity-Based Re-ranking for Recommendations
The trade-off between relevance and fairness in personalized recommendations has been explored in recent works, with the goal of minimizing learned discrimination towards certain demographics while still producing relevant results. We present a fairness-aware variation of the Maximal Marginal Relevance (MMR) re-ranking method which uses representations of demographic groups computed using a labeled dataset. This method is intended to incorporate fairness with respect to these demographic groups. We perform an experiment on a stock photo dataset and examine the trade-off between relevance and fairness against a well-known baseline, MMR, by using human judgment to examine the results of the re-ranking when using different fractions of a labeled dataset, and by performing a quantitative analysis on the ranked results of a set of query images. We show that our proposed method can incorporate fairness in the ranked results while obtaining higher precision than the baseline; our case study shows that even a limited amount of labeled data can be used to compute the representations needed to obtain fairness. This method can be used as a post-processing step for recommender systems and search.
labels: cs.IR
index: 107,357
2104.08626
Mixed Gibbs Sampling Detector in High-Order Modulation Large-Scale MIMO Systems
A neighborhood-restricted Mixed Gibbs Sampling (MGS) based approach is proposed for low-complexity, high-order modulation, large-scale Multiple-Input Multiple-Output (LS-MIMO) detection. The proposed LS-MIMO detector applies a neighborhood limitation (NL) on the noisy solution from the MGS at a distance d - thus named d-simplified MGS (d-sMGS) - in order to mitigate its impact, which can be harmful when a high-order modulation is considered. Numerical simulation results considering 64-QAM demonstrate that the proposed detection method can substantially improve the convergence of the MGS algorithm, while requiring no extra computational complexity per iteration. The proposed d-sMGS-based detector suitable for high-order modulation LS-MIMO further exhibits an improved performance vs. complexity tradeoff when the system loading is high, i.e., when K >= 0.75N. Also, as the number of dimensions increases, i.e., with more antennas and/or a higher modulation order, a smaller restriction of 2-sMGS was shown to be a more interesting choice than 1-sMGS.
labels: cs.IT
index: 230,881
cs/0612019
On Finite Memory Universal Data Compression and Classification of Individual Sequences
Consider the case where consecutive blocks of N letters of a semi-infinite individual sequence X over a finite alphabet are being compressed into binary sequences by some one-to-one mapping. No a priori information about X is available at the encoder, which must therefore adopt a universal data-compression algorithm. It is known that if the universal LZ77 data compression algorithm is successively applied to N-blocks then the best error-free compression for the particular individual sequence X is achieved as N tends to infinity. The best possible compression that may be achieved by any universal data compression algorithm for finite N-blocks is discussed. It is demonstrated that context tree coding essentially achieves it. Next, consider a device called a classifier (or discriminator) that observes an individual training sequence X. The classifier's task is to examine individual test sequences of length N and decide whether the test N-sequence has the same features as those that are captured by the training sequence X, or is sufficiently different, according to some appropriate criterion. Here again, it is demonstrated that a particular universal context classifier with a storage-space complexity that is linear in N is essentially optimal. This may contribute a theoretical "individual sequence" justification for the Probabilistic Suffix Tree (PST) approach in learning theory and in computational biology.
labels: cs.IT
index: 539,939
1610.03705
Structure Properties of Koch Networks Based on Networks Dynamical Systems
We introduce an informative labeling algorithm for the vertices of a family of Koch networks. Each label consists of two parts: the precise position and the time at which the vertex is added to the network. The shortest-path routing between any two vertices is determined solely on the basis of their labels, and the route is calculated with only a few computations. Rigorous solutions for the betweenness centrality of every node and edge are also derived with the help of the labels. Furthermore, the community structure in Koch networks is studied via the current and voltage characteristics of the corresponding resistor networks.
labels: cs.SI
index: 62,287
1802.00167
Consensus-based Distributed Quickest Detection of Attacks with Unknown Parameters
Sequential attack detection in a distributed estimation system is considered, where each sensor successively produces one-bit quantized samples of a desired deterministic scalar parameter corrupted by additive noise. The unknown parameters in the pre-attack and post-attack models, namely the desired parameter to be estimated and the malicious data injected at the attacked sensors, pose a significant challenge for designing a computationally efficient scheme for each sensor to detect the occurrence of attacks using only local communication with neighboring sensors. The generalized Cumulative Sum (GCUSUM) algorithm is considered, which replaces the unknown parameters with their maximum likelihood estimates in the CUSUM test statistic. For the problem under consideration, a sufficient condition is provided under which the expected false alarm period of the GCUSUM can be guaranteed to be larger than any given value. Next, we consider the distributed implementation of the GCUSUM. We first propose an alternative test statistic which is asymptotically equivalent to that of the GCUSUM. Then, based on the proposed alternative test statistic and running consensus algorithms, we propose a distributed approximate GCUSUM algorithm which significantly reduces the prohibitively high computational complexity of the centralized GCUSUM. Numerical results show that the distributed approximate GCUSUM algorithm can provide a performance that is comparable to the centralized GCUSUM.
labels: cs.IT
index: 89,360
1808.04126
Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering
Most approaches to Knowledge Base Question Answering are based on semantic parsing. In this paper, we address the problem of learning vector representations for complex semantic parses that consist of multiple entities and relations. Previous work largely focused on selecting the correct semantic relations for a question and disregarded the structure of the semantic parse: the connections between entities and the directions of the relations. We propose to use Gated Graph Neural Networks to encode the graph structure of the semantic parse. We show on two data sets that the graph networks outperform all baseline models that do not explicitly model the structure. The error analysis confirms that our approach can successfully process complex semantic parses.
labels: cs.CL
index: 105,072
2202.09481
TransDreamer: Reinforcement Learning with Transformer World Models
The Dreamer agent provides various benefits of Model-Based Reinforcement Learning (MBRL) such as sample efficiency, reusable knowledge, and safe planning. However, its world model and policy networks inherit the limitations of recurrent neural networks and thus an important question is how an MBRL framework can benefit from the recent advances of transformers and what the challenges are in doing so. In this paper, we propose a transformer-based MBRL agent, called TransDreamer. We first introduce the Transformer State-Space Model, a world model that leverages a transformer for dynamics predictions. We then share this world model with a transformer-based policy network and obtain stability in training a transformer-based RL agent. In experiments, we apply the proposed model to 2D visual RL and 3D first-person visual RL tasks both requiring long-range memory access for memory-based reasoning. We show that the proposed model outperforms Dreamer in these complex tasks.
labels: cs.LG
index: 281,209
2001.00391
Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation
Target speech separation refers to extracting the target speaker's speech from mixed signals. Despite the recent advances in deep learning based close-talk speech separation, real-world applications are still an open issue. Two main challenges are the complex acoustic environment and the real-time processing requirement. To address these challenges, we propose a temporal-spatial neural filter, which directly estimates the target speech waveform from a multi-speaker mixture in reverberant environments, assisted with directional information of the speaker(s). Firstly, to cope with variations brought by the complex environment, the key idea is to increase the completeness of the acoustic representation through joint modeling of the temporal, spectral and spatial discriminability between the target and interference sources. Specifically, temporal, spectral and spatial features, along with the designed directional features, are integrated to create a joint acoustic representation. Secondly, to reduce latency, we design a fully-convolutional autoencoder framework, which is purely end-to-end and single-pass. All the feature computation is implemented by the network layers and operations to speed up the separation procedure. Evaluation is conducted on the simulated reverberant datasets WSJ0-2mix and WSJ0-3mix under the speaker-independent scenario. Experimental results demonstrate that the proposed method outperforms state-of-the-art deep learning based multi-channel approaches with fewer parameters and faster processing speed. Furthermore, the proposed temporal-spatial neural filter can handle mixtures with a varying and unknown number of speakers and exhibits persistent performance even in the presence of a direction estimation error. Codes and models will be released soon.
labels: cs.SD, cs.LG
index: 159,204
2201.13320
BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized machine learning applications in multi-agent or federated environments. To tackle the communication bottleneck, there have been many efforts to design communication-compressed algorithms for decentralized nonconvex optimization, where the clients are only allowed to communicate a small amount of quantized information (aka bits) with their neighbors over a predefined graph topology. Despite significant efforts, the state-of-the-art algorithm in the nonconvex setting still suffers from a slower rate of convergence $O((G/T)^{2/3})$ compared with their uncompressed counterpart, where $G$ measures the data heterogeneity across different clients, and $T$ is the number of communication rounds. This paper proposes BEER, which adopts communication compression with gradient tracking, and shows it converges at a faster rate of $O(1/T)$. This significantly improves over the state-of-the-art rate, by matching the rate without compression even under arbitrary data heterogeneity. Numerical experiments are also provided to corroborate our theory and confirm the practical superiority of BEER in the data heterogeneous regime.
labels: cs.LG, Other
index: 277,948
2208.10099
Recent Advances in Text-to-SQL: A Survey of What We Have and What We Expect
Text-to-SQL has attracted attention from both the natural language processing and database communities because of its ability to convert the semantics in natural language into SQL queries and its practical application in building natural language interfaces to database systems. The major challenges in text-to-SQL lie in encoding the meaning of natural utterances, decoding to SQL queries, and translating the semantics between these two forms. These challenges have been addressed to different extents by the recent advances. However, there is still a lack of comprehensive surveys for this task. To this end, we review recent progress on text-to-SQL for datasets, methods, and evaluation and provide this systematic survey, addressing the aforementioned challenges and discussing potential future directions. We hope that this survey can serve as quick access to existing work and motivate future research.
labels: cs.CL
index: 313,940
2405.17240
Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth
The absence of real targets to guide the model training is one of the main problems with the makeup transfer task. Most existing methods tackle this problem by synthesizing pseudo ground truths (PGTs). However, the generated PGTs are often sub-optimal and their imprecision will eventually lead to performance degradation. To alleviate this issue, in this paper, we propose a novel Content-Style Decoupled Makeup Transfer (CSD-MT) method, which works in a purely unsupervised manner and thus eliminates the negative effects of generating PGTs. Specifically, based on the frequency characteristics analysis, we assume that the low-frequency (LF) component of a face image is more associated with its makeup style information, while the high-frequency (HF) component is more related to its content details. This assumption allows CSD-MT to decouple the content and makeup style information in each face image through the frequency decomposition. After that, CSD-MT realizes makeup transfer by maximizing the consistency of these two types of information between the transferred result and input images, respectively. Two newly designed loss functions are also introduced to further improve the transfer performance. Extensive quantitative and qualitative analyses show the effectiveness of our CSD-MT method. Our code is available at https://github.com/Snowfallingplum/CSD-MT.
labels: cs.CV
index: 457,820
1908.07968
Assessing the Impact of a User-Item Collaborative Attack on Class of Users
Collaborative Filtering (CF) models lie at the core of most recommendation systems due to their state-of-the-art accuracy. They are commonly adopted in e-commerce and online services for their impact on sales volume and/or diversity, and their impact on companies' outcomes. However, CF models are only as good as the interaction data they work with. As these models rely on outside sources of information, counterfeit data such as user ratings or reviews can be injected by attackers to manipulate the underlying data and alter the impact of resulting recommendations, thus implementing a so-called shilling attack. While previous works have focused on evaluating shilling attack strategies from a global perspective, paying particular attention to the effect of the size of attacks and the attacker's knowledge, in this work we explore the effectiveness of shilling attacks under novel aspects. First, we investigate the effect of attack strategies crafted on a target user in order to push the recommendation of a low-ranking item to a higher position, referred to as a user-item attack. Second, we evaluate the effectiveness of attacks in altering the impact of different CF models by considering the class of the target user, from the perspective of the richness of her profile (i.e., cold vs. warm user). Finally, similar to previous work, we consider the size of the attack (i.e., the number of fake profiles injected) in examining its success. The results of experiments on two widely used datasets in the business and movie domains, namely Yelp and MovieLens, suggest that warm and cold users exhibit contrasting behaviors in datasets with different characteristics.
labels: cs.IR, cs.LG, cs.CR
index: 142,429
1909.12127
Intensity-Free Learning of Temporal Point Processes
Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals. The standard way of learning in such models is by estimating the conditional intensity function. However, parameterizing the intensity function usually incurs several trade-offs. We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. We draw on the literature on normalizing flows to design models that are flexible and efficient. We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data.
labels: cs.LG
index: 147,032
2110.02153
Inference and De-Noising of Non-Gaussian Particle Distribution Functions: A Generative Modeling Approach
The particle-in-cell numerical method of plasma physics balances a trade-off between computational cost and intrinsic noise. Inference on data produced by these simulations generally consists of binning the data to recover the particle distribution function, from which physical processes may be investigated. In addition to containing noise, the distribution function is temporally dynamic and can be non-Gaussian and multi-modal, making the task of modeling it difficult. Here we demonstrate the use of normalizing flows to learn a smooth, tractable approximation to the noisy particle distribution function. We demonstrate that the resulting data-driven likelihood conserves relevant physics and may be extended to encapsulate the temporal evolution of the distribution function.
labels: cs.LG
index: 259,021
2311.06292
Towards a data-driven debt collection strategy based on an advanced machine learning framework
The European debt purchase market, as measured by the total book value of purchased debt, approached 25 billion euros in 2020 and was growing at double-digit rates. This illustrates how large the debt collection and debt purchase industry has grown and the important impact it has on the financial sector. However, in order to ensure an adequate return during the debt collection process, a good estimation of the propensity to pay and/or the expected cashflow is crucial. These estimations can be employed, for instance, to create different strategies during amicable collection to maximize quality standards and revenues, and also to prioritize the cases in which a legal process is necessary because debtors are unreachable for an amicable negotiation. This work offers a solution for these estimations. Specifically, a new machine learning modelling pipeline is presented and shown to outperform current strategies employed in the sector. The solution contains a pre-processing pipeline and a model selector based on the best model calibration. Performance is validated with real historical data from the debt industry.
labels: cs.LG
index: 406,884
2501.05109
EquiBoost: An Equivariant Boosting Approach to Molecular Conformation Generation
Molecular conformation generation plays key roles in computational drug design. Recently developed deep learning methods, particularly diffusion models, have reached competitive performance over traditional cheminformatics approaches. However, these methods are often time-consuming or require extra support from traditional methods. We propose EquiBoost, a boosting model that stacks several equivariant graph transformers as weak learners to iteratively refine the 3D conformations of molecules. Without relying on diffusion techniques, EquiBoost balances accuracy and efficiency more effectively than diffusion-based methods. Notably, compared to the previous state-of-the-art diffusion method, EquiBoost improves generation quality and preserves diversity, achieving considerably better precision of Average Minimum RMSD (AMR) on the GEOM datasets. This work rejuvenates boosting and sheds light on its potential to be a robust alternative to diffusion models in certain scenarios.
labels: cs.LG
index: 523,472
1811.08306
Study of Multi-Step Knowledge-Aided Iterative Nested MUSIC for Direction Finding
In this work, we propose a subspace-based algorithm for direction-of-arrival (DOA) estimation applied to the signals impinging on a two-level nested array, referred to as the multi-step knowledge-aided iterative nested MUSIC method (MS-KAI-Nested-MUSIC), which significantly improves the accuracy of the original Nested-MUSIC. Unlike existing knowledge-aided methods applied to uniform linear arrays (ULAs), which make use of available known DOAs to improve the estimation of the covariance matrix of the input data, the proposed MS-KAI-Nested-MUSIC employs knowledge of the structure of the augmented sample covariance matrix, obtained by exploiting the difference co-array covariance structure, together with its perturbation terms and the gradual incorporation of prior knowledge acquired online. The effectiveness of the proposed technique is demonstrated by simulations focusing on uncorrelated, closely-spaced sources.
labels: cs.IT
index: 114,010
2209.01121
Back-to-Bones: Rediscovering the Role of Backbones in Domain Generalization
Domain Generalization (DG) studies the capability of a deep learning model to generalize to out-of-training distributions. In the last decade, the literature has been massively filled with training methodologies that claim to obtain more abstract and robust data representations to tackle domain shifts. Recent research has provided a reproducible benchmark for DG, pointing out the effectiveness of naive empirical risk minimization (ERM) over existing algorithms. Nevertheless, researchers persist in using the same outdated feature extractors, and no attention has been given to the effects of different backbones yet. In this paper, we go back to the backbones, proposing a comprehensive analysis of their intrinsic generalization capabilities, which so far have been ignored by the research community. We evaluate a wide variety of feature extractors, from standard residual solutions to transformer-based architectures, finding an evident linear correlation between large-scale single-domain classification accuracy and DG capability. Our extensive experimentation shows that by adopting competitive backbones in conjunction with effective data augmentation, plain ERM outperforms recent DG solutions and achieves state-of-the-art accuracy. Moreover, our additional qualitative studies reveal that novel backbones give more similar representations to same-class samples, separating different domains in the feature space. This boost in generalization capabilities leaves marginal room for DG algorithms and suggests a new paradigm for investigating the problem, placing backbones in the spotlight and encouraging the development of consistent algorithms on top of them. The code is available at https://github.com/PIC4SeR/Back-to-Bones.
labels: cs.LG, cs.CV
index: 315,783
2308.08538
Proprioceptive Learning with Soft Polyhedral Networks
Proprioception is the "sixth sense" that detects limb postures with motor neurons. It requires a natural integration between the musculoskeletal systems and sensory receptors, which is challenging among modern robots that aim for lightweight, adaptive, and sensitive designs at a low cost. Here, we present the Soft Polyhedral Network with an embedded vision for physical interactions, capable of adaptive kinesthesia and viscoelastic proprioception by learning kinetic features. This design enables passive adaptations to omni-directional interactions, visually captured by a miniature high-speed motion tracking system embedded inside for proprioceptive learning. The results show that the soft network can infer real-time 6D forces and torques with accuracies of 0.25/0.24/0.35 N and 0.025/0.034/0.006 Nm in dynamic interactions. We also incorporate viscoelasticity in proprioception during static adaptation by adding a creep and relaxation modifier to refine the predicted results. The proposed soft network combines simplicity in design, omni-adaptation, and proprioceptive sensing with high accuracy, making it a versatile solution for robotics at a low cost with more than 1 million use cycles for tasks such as sensitive and competitive grasping, and touch-based geometry reconstruction. This study offers new insights into vision-based proprioception for soft robots in adaptive grasping, soft manipulation, and human-robot interaction.
labels: cs.LG, cs.RO
index: 385,944
2111.05940
A Novel Corpus of Discourse Structure in Humans and Computers
We present a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural discourse modes. The corpus covers both formal and informal discourse, and contains documents generated using fine-tuned GPT-2 (Zellers et al., 2019) and GPT-3 (Brown et al., 2020). We showcase the usefulness of this corpus for detailed discourse analysis of text generation by providing preliminary evidence that less numerous, shorter and more often incoherent clause relations are associated with lower perceived quality of computer-generated narratives and arguments.
labels: cs.CL
index: 265,919
2202.01551
Isometries and MacWilliams Extension Property for Weighted Poset Metric
Let $\mathbf{H}$ be the cartesian product of a family of left modules over a ring $S$, indexed by a finite set $\Omega$. We are concerned with the $(\mathbf{P},\omega)$-weight on $\mathbf{H}$, where $\mathbf{P}=(\Omega,\preccurlyeq_{\mathbf{P}})$ is a poset and $\omega:\Omega\longrightarrow\mathbb{R}^{+}$ is a weight function. We characterize the group of $(\mathbf{P},\omega)$-weight isometries of $\mathbf{H}$, and give a canonical decomposition for semi-simple subcodes of $\mathbf{H}$ when $\mathbf{P}$ is hierarchical. We then study the MacWilliams extension property (MEP) for $(\mathbf{P},\omega)$-weight. We show that the MEP implies the unique decomposition property (UDP) of $(\mathbf{P},\omega)$, which further implies that $\mathbf{P}$ is hierarchical if $\omega$ is identically $1$. For the case that either $\mathbf{P}$ is hierarchical or $\omega$ is identically $1$, we show that the MEP for $(\mathbf{P},\omega)$-weight can be characterized in terms of the MEP for Hamming weight, and give necessary and sufficient conditions for $\mathbf{H}$ to satisfy the MEP for $(\mathbf{P},\omega)$-weight when $S$ is an Artinian simple ring (either finite or infinite). When $S$ is a finite field, in the context of $(\mathbf{P},\omega)$-weight, we compare the MEP with other coding theoretic properties including the MacWilliams identity, Fourier-reflexivity of partitions and the UDP, and show that the MEP is strictly stronger than all the rest among them.
labels: cs.IT
index: 278,518
1906.11437
Hard Pixel Mining for Depth Privileged Semantic Segmentation
Semantic segmentation has achieved remarkable progress but remains challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as a depth map to help RGB based semantic segmentation because the depth map could provide complementary geometric cues. However, due to the inaccessibility of depth sensors, depth information is usually unavailable for the test images. In this paper, we leverage only the depth of training images as the privileged information to mine the hard pixels in semantic segmentation, in which depth information is only available for training images but not available for test images. Specifically, we propose a novel Loss Weight Module, which outputs a loss weight map by employing two depth-related measurements of hard pixels: Depth Prediction Error and Depth-aware Segmentation Error. The loss weight map is then applied to the segmentation loss, with the goal of learning a more robust model by paying more attention to the hard pixels. Besides, we also explore a curriculum learning strategy based on the loss weight map. Meanwhile, to fully mine the hard pixels on different scales, we apply our loss weight module to multi-scale side outputs. Our hard pixel mining method achieves state-of-the-art results on two benchmark datasets, and even outperforms methods which need depth input during testing.
labels: cs.CV
index: 136,668
1911.10088
Optimizing Data Usage via Differentiable Rewards
To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems. Similarly, a machine learning model could potentially be trained better with a scorer that "adapts" to its current learning state and estimates the importance of each training data instance. Training such an adaptive scorer efficiently is a challenging problem; in order to precisely quantify the effect of a data instance at a given time during the training, it is typically necessary to first complete the entire training process. To efficiently optimize data usage, we propose a reinforcement learning approach called Differentiable Data Selection (DDS). In DDS, we formulate a scorer network as a learnable function of the training data, which can be efficiently updated along with the main model being trained. Specifically, DDS updates the scorer with an intuitive reward signal: it should up-weigh the data that has a similar gradient with a dev set upon which we would finally like to perform well. Without significant computing overhead, DDS delivers strong and consistent improvements over several strong baselines on two very different tasks of machine translation and image classification.
labels: cs.LG, cs.CL
index: 154,721
2312.17263
TACIT: A Target-Agnostic Feature Disentanglement Framework for Cross-Domain Text Classification
Cross-domain text classification aims to transfer models from label-rich source domains to label-poor target domains, giving it a wide range of practical applications. Many approaches promote cross-domain generalization by capturing domain-invariant features. However, these methods rely on unlabeled samples provided by the target domains, which renders the model ineffective when the target domain is agnostic. Furthermore, the models are easily disturbed by shortcut learning in the source domain, which also hinders the improvement of domain generalization ability. To solve the aforementioned issues, this paper proposes TACIT, a target-domain-agnostic feature disentanglement framework which adaptively decouples robust and unrobust features using Variational Auto-Encoders. Additionally, to encourage the separation of unrobust features from robust features, we design a feature distillation task that compels the unrobust features to approximate the output of the teacher. The teacher model is trained on a few easy samples that are likely to carry potential unknown shortcuts. Experimental results verify that our framework achieves results comparable to state-of-the-art baselines while utilizing only source domain data.
labels: cs.CL
index: 418,680
1610.06809
The Anatomy of Brexit Debate on Facebook
Nowadays users get informed and shape their opinions through social media. However, disintermediated access to content does not guarantee quality of information. Selective exposure and confirmation bias, indeed, have been shown to play a pivotal role in content consumption and information spreading. Users tend to select information adhering to (and reinforcing) their worldview and to ignore dissenting information. This pattern elicits the formation of polarized groups -- i.e., echo chambers -- where the interaction with like-minded people might even reinforce polarization. In this work we address news consumption around Brexit in the UK on Facebook. In particular, we perform a massive analysis of more than 1 million users interacting with Brexit-related posts from the main news providers between January and July 2016. We show that consumption patterns elicit the emergence of two distinct communities of news outlets. Furthermore, to better characterize inner group dynamics, we introduce a new technique which combines automatic topic extraction and sentiment analysis. We compare how the same topics are presented in posts and the related emotional responses in comments, finding significant differences in both echo chambers and showing that polarization influences the perception of topics. Our results provide important insights into the determinants of polarization and the evolution of core narratives in online debates.
labels: cs.SI
index: 62,699
1807.03234
Bayesian Sequential Joint Detection and Estimation
Joint detection and estimation refers to deciding between two or more hypotheses and, depending on the test outcome, simultaneously estimating the unknown parameters of the underlying distribution. This problem is investigated in a sequential framework under mild assumptions on the underlying random process. We formulate an unconstrained sequential decision problem, whose cost function is the weighted sum of the expected run-length and the detection/estimation errors. Then, a strong connection between the derivatives of the cost function with respect to the weights, which can be interpreted as Lagrange multipliers, and the detection/estimation errors of the underlying scheme is shown. This property is used to characterize the solution of a closely related sequential decision problem, whose objective function is the expected run-length under constraints on the average detection/estimation errors. We show that the solution of the constrained problem coincides with the solution of the unconstrained problem with suitably chosen weights. These weights are characterized as the solution of a linear program, which can be solved using efficient off-the-shelf solvers. The theoretical results are illustrated with two example problems, for which optimal sequential schemes are designed numerically and whose performance is validated via Monte Carlo simulations.
labels: cs.IT
index: 102,478
1503.07706
Pain Intensity Estimation by a Self-Taught Selection of Histograms of Topographical Features
Pain assessment through observational pain scales is necessary for special categories of patients such as neonates, patients with dementia, critically ill patients, etc. The recently introduced Prkachin-Solomon score allows pain assessment directly from facial images, opening the path for multiple assistive applications. In this paper, we introduce the Histograms of Topographical (HoT) features, which are a generalization of the topographical primal sketch, for the description of the face parts contributing to the mentioned score. We propose a semi-supervised, clustering-oriented self-taught learning procedure developed on the emotion-oriented Cohn-Kanade database. We use this procedure to improve the discrimination between different pain intensity levels and the generalization with respect to the monitored persons, while testing on the UNBC McMaster Shoulder Pain database.
labels: cs.CV
index: 41,505
2303.06799
Gaussian Process on the Product of Directional Manifolds
We present a principled study on defining Gaussian processes (GPs) with inputs on the product of directional manifolds. A circular kernel is first presented according to the von Mises distribution. Based thereon, the hypertoroidal von Mises (HvM) kernel is proposed to establish GPs on hypertori with consideration of correlated circular components. The proposed HvM kernel is demonstrated with multi-output GP regression for learning vector-valued functions on hypertori using the intrinsic coregionalization model. Analytic derivatives for hyperparameter optimization are provided for runtime-critical applications. For evaluation, we synthesize a ranging-based sensor network and employ the HvM-based GPs for data-driven recursive localization. Numerical results show that the HvM-based GP achieves superior tracking accuracy compared to parametric model and GPs of conventional kernel designs.
labels: cs.LG
index: 350,988
2204.07120
Exploring Dual Encoder Architectures for Question Answering
Dual encoders have been used for question-answering (QA) and information retrieval (IR) tasks with good results. Previous research focuses on two major types of dual encoders: the Siamese Dual Encoder (SDE), with parameters shared across two encoders, and the Asymmetric Dual Encoder (ADE), with two distinctly parameterized encoders. In this work, we explore different ways in which the dual encoder can be structured, and evaluate how these differences can affect their efficacy in terms of QA retrieval tasks. By evaluating on MS MARCO, open-domain NQ and the MultiReQA benchmarks, we show that SDE performs significantly better than ADE. We further propose three different improved versions of ADEs by sharing or freezing parts of the architectures between the two encoder towers. We find that sharing parameters in projection layers enables ADEs to perform competitively with or outperform SDEs. We further explore and explain why parameter sharing in the projection layer significantly improves the efficacy of the dual encoders, by directly probing the embedding spaces of the two encoder towers with the t-SNE algorithm.
labels: cs.IR, cs.LG, cs.CL
index: 291,568
2408.17169
Secure Transmission in Cell-Free Massive MIMO under Active Eavesdropping
We study secure communications in cell-free massive multiple-input multiple-output (CF-mMIMO) systems with multi-antenna access points (APs) and protective partial zero-forcing (PPZF) precoding. In particular, we consider an active eavesdropping attack, where an eavesdropper contaminates the uplink channel estimation phase by sending a pilot sequence identical to that of a legitimate user of interest. We formulate an optimization problem for maximizing the received signal-to-interference-plus-noise ratio (SINR) at the legitimate user, subject to a maximum allowable SINR at the eavesdropper and maximum transmit power at each AP, while guaranteeing specific SINR requirements for the other legitimate users. The optimization problem is solved using a path-following algorithm. We also propose a large-scale-based greedy AP selection scheme to improve the secrecy spectral efficiency (SSE). Finally, we propose a simple method for identifying the presence of an eavesdropper within the system. Our findings show that PPZF can substantially outperform the conventional maximum-ratio transmission (MRT) scheme, providing around a 2-fold improvement in the SSE compared to the MRT scheme. More importantly, for the PPZF precoding scheme, our proposed AP selection can achieve a remarkable SSE gain of up to 220%, while our power optimization approach can provide an additional gain of up to 55% compared with a CF-mMIMO system with equal power allocation.
labels: cs.IT
index: 484,607
2310.17974
FaultSeg Swin-UNETR: Transformer-Based Self-Supervised Pretraining Model for Fault Recognition
This paper introduces an approach to enhance seismic fault recognition through self-supervised pretraining. Seismic fault interpretation holds great significance in the fields of geophysics and geology. However, conventional methods for seismic fault recognition encounter various issues, including dependence on data quality and quantity, as well as susceptibility to interpreter subjectivity. Currently, automated fault recognition methods proposed based on small synthetic datasets experience performance degradation when applied to actual seismic data. To address these challenges, we have introduced the concept of self-supervised learning, utilizing a substantial amount of relatively easily obtainable unlabeled seismic data for pretraining. Specifically, we have employed the Swin Transformer model as the core network and employed the SimMIM pretraining task to capture unique features related to discontinuities in seismic data. During the fine-tuning phase, inspired by edge detection techniques, we have also refined the structure of the Swin-UNETR model, enabling multiscale decoding and fusion for more effective fault detection. Experimental results demonstrate that our proposed method attains state-of-the-art performance on the Thebe dataset, as measured by the OIS and ODS metrics.
labels: cs.CV
index: 403,359
2406.20095
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
Vision Language Models (VLMs) have recently been leveraged to generate robotic actions, forming Vision-Language-Action (VLA) models. However, directly adapting a pretrained VLM for robotic control remains challenging, particularly when constrained by a limited number of robot demonstrations. In this work, we introduce LLaRA: Large Language and Robotics Assistant, a framework that formulates robot action policy as visuo-textual conversations and enables an efficient transfer of a pretrained VLM into a powerful VLA, motivated by the success of visual instruction tuning in Computer Vision. First, we present an automated pipeline to generate conversation-style instruction tuning data for robots from existing behavior cloning datasets, aligning robotic actions with image pixel coordinates. Further, we enhance this dataset in a self-supervised manner by defining six auxiliary tasks, without requiring any additional action annotations. We show that a VLM finetuned with a limited amount of such datasets can produce meaningful action decisions for robotic control. Through experiments across multiple simulated and real-world tasks, we demonstrate that LLaRA achieves state-of-the-art performance while preserving the generalization capabilities of large language models. The code, datasets, and pretrained models are available at https://github.com/LostXine/LLaRA.
labels: cs.AI, cs.LG, cs.RO, cs.CL, cs.CV
index: 468,668
2105.15101
Anchor Nodes Positioning for Self-localization in Wireless Sensor Networks using Belief Propagation and Evolutionary Algorithms
Locating each node in a wireless sensor network is essential for starting the monitoring job and sending information about the area. One method that has been used in harsh and inaccessible environments is to scatter the nodes randomly over the area. To reduce the cost of using GPS at every node, only some nodes (anchors) are equipped with GPS; the remaining nodes are then located using the belief propagation algorithm. The number of anchor nodes must be kept small since they are expensive, and the placement of these nodes affects the algorithm's performance. Using multi-objective optimization, an algorithm is introduced in this paper that minimizes both the estimated location error and the number of anchor nodes. According to simulation results, this algorithm proposes a set of solutions with less energy consumption and less error than similar algorithms.
labels: cs.AI, Other
index: 237,919
2102.03012
A Serverless Cloud-Fog Platform for DNN-Based Video Analytics with Incremental Learning
DNN-based video analytics have empowered many new applications (e.g., automated retail). Meanwhile, the proliferation of fog devices provides developers with more design options to improve performance and save cost. To the best of our knowledge, this paper presents the first serverless system that takes full advantage of the client-fog-cloud synergy to better serve the DNN-based video analytics. Specifically, the system aims to achieve two goals: 1) Provide the optimal analytics results under the constraints of lower bandwidth usage and shorter round-trip time (RTT) by judiciously managing the computational and bandwidth resources deployed in the client, fog, and cloud environment. 2) Free developers from tedious administration and operation tasks, including DNN deployment, cloud and fog's resource management. To this end, we implement a holistic cloud-fog system referred to as VPaaS (Video-Platform-as-a-Service). VPaaS adopts serverless computing to enable developers to build a video analytics pipeline by simply programming a set of functions (e.g., model inference), which are then orchestrated to process videos through carefully designed modules. To save bandwidth and reduce RTT, VPaaS provides a new video streaming protocol that only sends low-quality video to the cloud. The state-of-the-art (SOTA) DNNs deployed at the cloud can identify regions of video frames that need further processing at the fog ends. At the fog ends, misidentified labels in these regions can be corrected using a light-weight DNN model. To address the data drift issues, we incorporate limited human feedback into the system to verify the results and adopt incremental learning to improve our system continuously. The evaluation demonstrates that VPaaS is superior to several SOTA systems: it maintains high accuracy while reducing bandwidth usage by up to 21%, RTT by up to 62.5%, and cloud monetary cost by up to 50%.
labels: cs.AI, Other
index: 218,599
2409.19007
Rephrase and Contrast: Fine-Tuning Language Models for Enhanced Understanding of Communication and Computer Networks
Large language models (LLMs) are being widely researched across various disciplines, with significant recent efforts focusing on adapting LLMs for understanding of how communication networks operate. However, over-reliance on prompting techniques hinders the full exploitation of the generalization ability of these models, and the lack of efficient fine-tuning methods prevents the full realization of lightweight LLMs' potential. This paper addresses these challenges by introducing our Rephrase and Contrast (RaC) framework, an efficient fine-tuning framework. RaC enhances LLMs' comprehension and critical thinking abilities by incorporating question reformulation and contrastive analysis of correct and incorrect answers during the fine-tuning process. Experimental results demonstrate a 63.73% accuracy improvement over the foundational model when tested on a comprehensive networking problem set. Moreover, to efficiently construct the dataset for RaC fine-tuning, we develop a GPT-assisted data mining method for generating high-quality question-answer (QA) pairs; furthermore, we introduce ChoiceBoost, a data augmentation technique that expands dataset size while reducing answer-order bias. Apart from these technical innovations, we contribute to the networking community by open-sourcing valuable research resources, including: 1) the fine-tuned networking model referred to as RaC-Net, 2) the training dataset used for fine-tuning the model, 3) three testing problem sets of different difficulties to serve as benchmarks for future research, and 4) code associated with the above resources.
labels: cs.CL
index: 492,503
1603.07475
Fine-scale Surface Normal Estimation using a Single NIR Image
We present surface normal estimation using a single near infrared (NIR) image. We are focusing on fine-scale surface geometry captured with an uncalibrated light source. To tackle this ill-posed problem, we adopt a generative adversarial network which is effective in recovering a sharp output, which is also essential for fine-scale surface normal estimation. We incorporate angular error and integrability constraint into the objective function of the network to make estimated normals physically meaningful. We train and validate our network on a recent NIR dataset, and also evaluate the generality of our trained model by using new external datasets which are captured with a different camera under different environment.
labels: cs.CV
index: 53,634
2411.17720
MAS-Attention: Memory-Aware Stream Processing for Attention Acceleration on Resource-Constrained Edge Devices
The advent of foundation models has revolutionized various fields, enabling unprecedented task accuracy and flexibility in computational linguistics, computer vision and other domains. The attention mechanism has become an essential component of foundation models due to its superb capability of capturing correlations in a sequence. However, attention results in quadratic complexity in memory and compute as the context length grows. Although many fusion-based exact attention acceleration algorithms have been developed for datacenter-grade GPUs and accelerators, leveraging multi-core parallelism and data locality, it remains a significant challenge to accelerate attention on resource-constrained edge neural accelerators with limited compute units and stringent on-chip caches. In this paper, we propose a scheme for exact attention inference acceleration on memory-constrained edge accelerators, by parallelizing the utilization of heterogeneous compute units, i.e., vector processing units and matrix processing units. Our method involves scheduling workloads onto these different compute units in a multi-tiered tiling scheme to process tiled vector workloads and matrix workloads in attention as two streams, respecting the workload dependencies. We search for tiling factors to maximize the parallelization of both compute units while considering I/O overhead, and propose a proactive cache overwrite strategy to avoid undesirable cache spills in practice. Extensive results based on open-sourced simulation frameworks show up to 2.75x speedup and 54% reduction in energy consumption as compared to the state-of-the-art attention fusion method (FLAT) in the edge computing scenario. Further experiments on a real-world edge neural processing unit demonstrate speedup of up to 1.76x for attention as compared to FLAT, without affecting model output accuracy.
labels: cs.AI, Other
index: 511,566
2102.09954
A bi-level encoding scheme for the clustered shortest-path tree problem in multifactorial optimization
The Clustered Shortest-Path Tree Problem (CluSPT) plays an important role in various types of optimization problems in real life. Recently, some Multifactorial Evolutionary Algorithms (MFEAs) have been introduced to deal with the CluSPT; however, these approaches still have shortcomings, such as evolutionary operators that only work on complete graphs and huge resource consumption when searching large solution spaces. To overcome these limitations, this paper describes an MFEA-based approach to solve the CluSPT. The proposed algorithm utilizes Dijkstra's algorithm to construct the spanning trees within clusters while using evolutionary operators for building the spanning tree connecting clusters. This approach takes advantage of both exact and approximate algorithms, enabling the algorithm to function efficiently on complete and sparse graphs alike. Furthermore, evolutionary operators such as individual encoding and decoding methods are also designed with great consideration regarding performance and memory usage. We include a proof of the repairing method's efficacy in ensuring that all solutions are valid. We have conducted tests on various types of Euclidean instances to assess the effectiveness of the proposed algorithm and methods. Experimental results point out the effectiveness of the proposed algorithm over existing heuristic algorithms in most of the test cases. The impact of the proposed MFEA was analyzed, and a possible influential factor that may be useful for further study was also pointed out.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
220,935
1902.05880
Robot Co-design: Beyond the Monotone Case
Recent advances in 3D printing and manufacturing of miniaturized robotic hardware and computing are paving the way to build inexpensive and disposable robots. This will have a large impact on several applications including scientific discovery (e.g., hurricane monitoring), search-and-rescue (e.g., operation in confined spaces), and entertainment (e.g., nano drones). The need for inexpensive and task-specific robots clashes with the current practice, where human experts are in charge of designing hardware and software aspects of the robotic platform. This makes the robot design process expensive and time-consuming, and ultimately unsuitable for small-volume, low-cost applications. This paper considers the computational robot co-design problem, which aims to create an automatic algorithm that selects the best robotic modules (sensing, actuation, computing) in order to maximize the performance on a task, while satisfying given specifications (e.g., maximum cost of the resulting design). We propose a binary optimization formulation of the co-design problem and show that such formulation generalizes previous work based on strong modeling assumptions. We show that the proposed formulation can solve relatively large co-design problems in seconds and with minimal human intervention. We demonstrate the proposed approach in two applications: the co-design of an autonomous drone racing platform and the co-design of a multi-robot system.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
121,635
2410.03190
Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization
Recent advancements in timestep-distilled diffusion models have enabled high-quality image generation that rivals non-distilled multi-step models, but with significantly fewer inference steps. While such models are attractive for applications due to the low inference cost and latency, fine-tuning them with a naive diffusion objective would result in degraded and blurry outputs. An intuitive alternative is to repeat the diffusion distillation process with a fine-tuned teacher model, which produces good results but is cumbersome and computationally intensive; the distillation training usually requires orders of magnitude more training compute than fine-tuning for specific image styles. In this paper, we present an algorithm named pairwise sample optimization (PSO), which enables the direct fine-tuning of an arbitrary timestep-distilled diffusion model. PSO introduces additional reference images sampled from the current time-step distilled model, and increases the relative likelihood margin between the training images and reference images. This enables the model to retain its few-step generation ability, while allowing for fine-tuning of its output distribution. We also demonstrate that PSO is a generalized formulation which can be flexibly extended to both offline-sampled and online-sampled pairwise data, covering various popular objectives for diffusion model preference optimization. We evaluate PSO in both preference optimization and other fine-tuning tasks, including style transfer and concept customization. We show that PSO can directly adapt distilled models to human-preferred generation with both offline and online-generated pairwise preference image data. PSO also demonstrates effectiveness in style transfer and concept customization by directly tuning timestep-distilled diffusion models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
494,670
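The PSO record above hinges on increasing the relative likelihood margin between training images and reference images sampled from the frozen distilled model. The abstract does not spell out the objective, so the PyTorch sketch below is only a generic DPO-style pairwise margin loss under assumed names (all four logp_* inputs and beta are placeholders standing in for per-sample log-likelihood surrogates, e.g., negative diffusion losses), meant to illustrate the shape of such an objective rather than the paper's exact formulation.

import torch.nn.functional as F

def pairwise_margin_loss(logp_train, logp_train_ref, logp_gen, logp_gen_ref, beta=0.1):
    # Push the tuned model's likelihood of target training images above that
    # of reference images sampled from the frozen distilled model; measuring
    # both terms relative to the frozen model keeps the update from drifting
    # far from the few-step teacher.
    margin = (logp_train - logp_train_ref) - (logp_gen - logp_gen_ref)
    return -F.logsigmoid(beta * margin).mean()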
2211.15144
Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on the lessons from these works, we re-examine previous design choices and find that with appropriate choices: ResNets, cross-entropy based distributional backups, and feature normalization, offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using networks of up to 80 million parameters, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
333,134
2112.07209
ACE-BERT: Adversarial Cross-modal Enhanced BERT for E-commerce Retrieval
Nowadays, on E-commerce platforms, products are presented to customers with multiple modalities. These modalities are significant for a retrieval system when presenting attractive products to customers. Therefore, how to take those multiple modalities into account simultaneously to boost retrieval performance is crucial. This problem is challenging for the following reasons: (1) extracting patch features with a pre-trained image model (e.g., a CNN-based model) carries strong inductive bias, making it difficult to capture effective information from product images in E-commerce; (2) the heterogeneity of multimodal data makes it challenging to construct representations of the query text and the product (including title and image) in a common subspace. We propose a novel Adversarial Cross-modal Enhanced BERT (ACE-BERT) for efficient E-commerce retrieval. In detail, ACE-BERT leverages patch features and pixel features as the image representation, so the Transformer architecture can be applied directly to raw image sequences. With the pre-trained enhanced BERT as the backbone network, ACE-BERT further adopts adversarial learning by adding a domain classifier to ensure the distribution consistency of different modality representations, narrowing the representation gap between query and product. Experimental results demonstrate that ACE-BERT outperforms state-of-the-art approaches on the retrieval task. Remarkably, ACE-BERT has already been deployed in our E-commerce search engine, leading to a 1.46% increase in revenue.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
271,409
2005.12841
Applying Evolutionary Metaheuristics for Parameter Estimation of Individual-Based Models
Individual-based models are complex and usually have a large number of input parameters, which must be tuned to reproduce the observed population data or experimental results as accurately as possible. Thus, one of the weakest points of this modelling approach is that the modeler rarely has enough information about the correct values, or even the acceptable ranges, of the input parameters. Consequently, several parameter combinations must be tried to find an acceptable set of input factors minimizing the deviation between the simulated and the reference datasets. In practice, it is usually computationally infeasible to traverse the complete search space, trying every possible combination, to find the best set of parameters. That is precisely an instance of a combinatorial problem which is suitable for metaheuristics and evolutionary computation techniques. In this work, we introduce EvoPER, an R package for simplifying parameter estimation using evolutionary computation methods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
178,838
2306.05791
Enabling Robot Manipulation of Soft and Rigid Objects with Vision-based Tactile Sensors
Endowing robots with tactile capabilities opens up new possibilities for their interaction with the environment, including the ability to handle fragile and/or soft objects. In this work, we equip the robot gripper with low-cost vision-based tactile sensors and propose a manipulation algorithm that adapts to both rigid and soft objects without requiring any knowledge of their properties. The algorithm relies on a touch and slip detection method, which considers the variation in the tactile images with respect to reference ones. We validate the approach on seven different objects, with different properties in terms of rigidity and fragility, to perform unplugging and lifting tasks. Furthermore, to enhance applicability, we combine the manipulation algorithm with a grasp sampler for the task of finding and picking a grape from a bunch without damaging it.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
372,339
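The tactile-manipulation record above bases touch and slip detection on the variation of tactile images with respect to reference ones. A minimal sketch of that idea follows; the mean-absolute-difference statistic and both thresholds are assumptions for illustration, not the paper's calibrated detectors.

import numpy as np

def detect_touch(frame, reference, touch_thresh=8.0):
    # Contact: the tactile image deviates enough from the no-contact reference.
    return np.abs(frame.astype(float) - reference.astype(float)).mean() > touch_thresh

def detect_slip(frame, prev_frame, slip_thresh=4.0):
    # Slip: consecutive tactile frames change rapidly while in contact.
    return np.abs(frame.astype(float) - prev_frame.astype(float)).mean() > slip_thresh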
1904.08303
Usage of Decision Support Systems for Conflicts Modelling during Information Operations Recognition
The application of decision support systems for conflict modeling in information operations recognition is presented. An information operation is considered as a complex, weakly structured system. A model of conflict between two subjects is proposed based on the second-order rank reflexive model. A method is described for constructing the design pattern for knowledge bases of decision support systems. Finally, a methodology is proposed for using decision support systems to model conflicts in information operations recognition, based on expert knowledge and content monitoring.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
128,023
2006.11280
Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
Many real-world applications have to tackle the Positive-Unlabeled (PU) learning problem, i.e., learning binary classifiers from a large amount of unlabeled data and a few labeled positive examples. While current state-of-the-art methods employ importance reweighting to design various risk estimators, they ignore the learning capability of the model itself, which could provide reliable supervision. This motivates us to propose a novel Self-PU learning framework, which seamlessly integrates PU learning and self-training. Self-PU highlights three "self"-oriented building blocks: a self-paced training algorithm that adaptively discovers and augments confident positive/negative examples as the training proceeds; a self-calibrated instance-aware loss; and a self-distillation scheme that introduces teacher-student learning as an effective regularization for PU learning. We demonstrate the state-of-the-art performance of Self-PU on common PU learning benchmarks (MNIST and CIFAR-10), comparing favorably against the latest competitors. Moreover, we study a real-world application of PU learning, i.e., classifying brain images of Alzheimer's Disease. Self-PU obtains significantly improved results on the renowned Alzheimer's Disease Neuroimaging Initiative (ADNI) database over existing methods. The code is publicly available at: https://github.com/TAMU-VITA/Self-PU.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,163
2208.06187
Stabilizer quantum codes defined by trace-depending polynomials
Quantum error-correcting codes with good parameters can be constructed by evaluating polynomials at the roots of the polynomial trace. In this paper, we propose to evaluate polynomials at the roots of trace-depending polynomials (given by a constant plus the trace of a polynomial) and show that this procedure gives rise to stabilizer quantum error-correcting codes with a wider range of lengths than in other papers involving roots of the trace and with excellent parameters. Namely, we are able to provide new binary records and non-binary codes improving the ones available in the literature.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
312,636
1108.0027
Revisiting Degree Distribution Models for Social Graph Analysis
Degree distribution models are incredibly important tools for analyzing and understanding the structure and formation of social networks, and can help guide the design of efficient graph algorithms. In particular, the Power-law degree distribution has long been used to model the structure of online social networks, and is the basis for algorithms and heuristics in graph applications such as influence maximization and social search. Along with recent measurement results, our interest in this topic was sparked by our own experimental results on social graphs that deviated significantly from those predicted by a Power-law model. In this work, we seek a deeper understanding of these deviations, and propose an alternative model with significant implications on graph algorithms and applications. We start by quantifying this artifact using a variety of real social graphs, and show that their structures cannot be accurately modeled using elementary distributions including the Power-law. Instead, we propose the Pareto-Lognormal (PLN) model, verify its goodness-of-fit using graphical and statistical methods, and present an analytical study of its asymptotical differences with the Power-law. To demonstrate the quantitative benefits of the PLN model, we compare the results of three wide-ranging graph applications on real social graphs against those on synthetic graphs generated using the PLN and Power-law models. We show that synthetic graphs generated using PLN are much better predictors of degree distributions in real graphs, and produce experimental results with errors that are orders-of-magnitude smaller than those produced by the Power-law model.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
11,516
2205.06631
A Polar Subcode Approach to Belief Propagation List Decoding
Permutation decoding gained recent interest as it can exploit the symmetries of a code in a parallel fashion. Moreover, it has been shown that by viewing permuted polar codes as polar subcodes, the set of usable permutations in permutation decoding can be increased. We extend this idea to pre-transformed polar codes, such as cyclic redundancy check (CRC)-aided polar codes, which previously could not be decoded using permutations due to their lack of automorphisms. Using belief propagation (BP)-based subdecoders, we showcase a performance close to CRC-aided SCL (CA-SCL) decoding. The proposed algorithm outperforms the previously best performing iterative CRC-aided belief propagation list (CA-BPL) decoder both in error-rate performance and decoding latency.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
296,303
2409.17754
Byzantine-Robust Aggregation for Securing Decentralized Federated Learning
Federated Learning (FL) emerges as a distributed machine learning approach that addresses privacy concerns by training AI models locally on devices. Decentralized Federated Learning (DFL) extends the FL paradigm by eliminating the central server, thereby enhancing scalability and robustness through the avoidance of a single point of failure. However, DFL faces significant challenges in optimizing security, as most Byzantine-robust algorithms proposed in the literature are designed for centralized scenarios. In this paper, we present a novel Byzantine-robust aggregation algorithm to enhance the security of Decentralized Federated Learning environments, coined WFAgg. This proposal simultaneously handles the adverse conditions of dynamic decentralized topologies and strengthens robustness by employing multiple filters to identify and mitigate Byzantine attacks. Experimental results demonstrate the effectiveness of the proposed algorithm in maintaining model accuracy and convergence in the presence of various Byzantine attack scenarios, outperforming state-of-the-art centralized Byzantine-robust aggregation schemes (such as Multi-Krum or Clustering). These algorithms are evaluated on an IID image classification problem in both centralized and decentralized scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
491,966
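The decentralized-FL record above benchmarks against Multi-Krum and Clustering; WFAgg itself is not specified in the abstract. As a reference point, here is a sketch of the classic (single-winner) Krum rule it is compared with, operating on flattened parameter-update vectors; f is the assumed number of Byzantine clients.

import numpy as np

def krum(updates, f):
    # Score each update by the summed squared distances to its n - f - 2
    # nearest neighbours; keep the update with the lowest score.
    n = len(updates)
    k = n - f - 2
    assert k >= 1, "Krum needs n > f + 2"
    U = np.stack(updates)
    d2 = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    scores = [np.sort(d2[i][np.arange(n) != i])[:k].sum() for i in range(n)]
    return updates[int(np.argmin(scores))]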
2004.02589
Software Defect Prediction Based On Deep Learning Models: Performance Study
In recent years, defect prediction, one of the major software engineering problems, has been a focus of researchers since it has a pivotal role in estimating software errors and faulty modules. With the goal of improving prediction accuracy, researchers have developed many models for software defect prediction. However, a number of critical conditions and theoretical problems must be addressed in order to achieve better results. In this paper, two deep learning models, Stack Sparse Auto-Encoder (SSAE) and Deep Belief Network (DBN), are deployed to classify NASA datasets, which are unbalanced and have insufficient samples. According to the conducted experiment, accuracy is enhanced for the datasets with sufficient samples, and besides this, the SSAE model achieves better results than the DBN model on the majority of evaluation metrics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
171,277
2502.08636
PulseCheck457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models
Although large multimodal models (LMMs) have demonstrated remarkable capabilities in visual scene interpretation and reasoning, their capacity for complex and precise 3-dimensional spatial reasoning remains uncertain. Existing benchmarks focus predominantly on 2D spatial understanding and lack a framework to comprehensively evaluate 6D spatial reasoning across varying complexities. To address this limitation, we present PulseCheck457, a scalable and unbiased synthetic dataset designed with 4 key capabilities for spatial reasoning: multi-object recognition, 2D location, 3D location, and 3D orientation. We develop a cascading evaluation structure, constructing 7 question types across 5 difficulty levels that range from basic single-object recognition to our newly proposed complex 6D spatial reasoning tasks. We evaluated various large multimodal models (LMMs) on PulseCheck457, observing a general decline in performance as task complexity increases, particularly in 3D reasoning and 6D spatial tasks. To quantify these challenges, we introduce the Relative Performance Dropping Rate (RPDR), highlighting key weaknesses in 3D reasoning capabilities. Leveraging the unbiased attribute design of our dataset, we also uncover prediction biases across different attributes, with similar patterns observed in real-world image settings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
533,090
1902.00746
On the bias, risk and consistency of sample means in multi-armed bandits
The sample mean is among the most well studied estimators in statistics, having many desirable properties such as unbiasedness and consistency. However, when analyzing data collected using a multi-armed bandit (MAB) experiment, the sample mean is biased and much remains to be understood about its properties. For example, when is it consistent, how large is its bias, and can we bound its mean squared error? This paper delivers a thorough and systematic treatment of the bias, risk and consistency of MAB sample means. Specifically, we identify four distinct sources of selection bias (sampling, stopping, choosing and rewinding) and analyze them both separately and together. We further demonstrate that a new notion of \emph{effective sample size} can be used to bound the risk of the sample mean under suitable loss functions. We present several carefully designed examples to provide intuition on the different sources of selection bias we study. Our treatment is nonparametric and algorithm-agnostic, meaning that it is not tied to a specific algorithm or goal. In a nutshell, our proofs combine variational representations of information-theoretic divergences with new martingale concentration inequalities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,494
2410.09588
A Game-Theoretic Perspective for Efficient Modern Random Access
Modern random access mechanisms combine packet repetitions with multi-user detection mechanisms at the receiver to maximize the throughput and reliability in massive Internet of Things (IoT) scenarios. However, optimizing the access policy, which selects the number of repetitions, is a complicated problem, and failing to do so can lead to an inefficient use of resources and, potentially, to increased congestion. In this paper, we follow a game-theoretic approach for optimizing the access policies of selfish users in modern random access mechanisms. Our goal is to find adequate values for the rewards given after a success to achieve a Nash equilibrium (NE) that optimizes the throughput of the system while considering the cost of transmission. Our results show that a mixed strategy, where repetitions are selected according to the irregular repetition slotted ALOHA (IRSA) protocol, attains a NE that maximizes the throughput in the special case with two users. In this scenario, our method increases the throughput by 30% when compared to framed ALOHA. Furthermore, we present three methods to attain a NE with near-optimal throughput for general modern random access scenarios, which exceed the throughput of framed ALOHA by up to 34%.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
497,666
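The random-access record above builds on irregular repetition slotted ALOHA (IRSA). The sketch below simulates one IRSA frame with successive interference cancellation; the degree distribution is a commonly cited IRSA choice, while the load and frame size are illustrative rather than the paper's game-theoretic setup.

import numpy as np

rng = np.random.default_rng(1)

def irsa_frame(n_users, n_slots, degree_dist={2: 0.5, 3: 0.28, 8: 0.22}):
    # Each user transmits d replicas of its packet in distinct random slots.
    degrees = rng.choice(list(degree_dist), size=n_users, p=list(degree_dist.values()))
    slots = [set() for _ in range(n_slots)]
    for user, d in enumerate(degrees):
        for s in rng.choice(n_slots, size=int(d), replace=False):
            slots[s].add(user)
    # Receiver: repeatedly decode singleton slots and cancel (remove) the
    # decoded user's replicas elsewhere -- successive interference cancellation.
    decoded, progress = set(), True
    while progress:
        progress = False
        for s in range(n_slots):
            live = slots[s] - decoded
            if len(live) == 1:
                decoded |= live
                progress = True
    return len(decoded) / n_slots  # throughput in packets per slot

print(np.mean([irsa_frame(40, 50) for _ in range(200)]))  # vs ~0.37 peak for framed ALOHA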
2004.10360
Breaking Down Memory Walls: Adaptive Memory Management in LSM-based Storage Systems (Extended Version)
Log-Structured Merge-trees (LSM-trees) have been widely used in modern NoSQL systems. Due to their out-of-place update design, LSM-trees have introduced memory walls among the memory components of multiple LSM-trees and between the write memory and the buffer cache. Optimal memory allocation among these regions is non-trivial because it is highly workload-dependent. Existing LSM-tree implementations instead adopt static memory allocation schemes due to their simplicity and robustness, sacrificing performance. In this paper, we attempt to break down these memory walls in LSM-based storage systems. We first present a memory management architecture that enables adaptive memory management. We then present a partitioned memory component structure with new flush policies to better exploit the write memory to minimize the write cost. To break down the memory wall between the write memory and the buffer cache, we further introduce a memory tuner that tunes the memory allocation between these two regions. We have conducted extensive experiments in the context of Apache AsterixDB using the YCSB and TPC-C benchmarks and we present the results here.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
173,615
2311.16673
Large Language Models Meet Computer Vision: A Brief Survey
Recently, the intersection of Large Language Models (LLMs) and Computer Vision (CV) has emerged as a pivotal area of research, driving significant advancements in the field of Artificial Intelligence (AI). As transformers have become the backbone of many state-of-the-art models in both Natural Language Processing (NLP) and CV, understanding their evolution and potential enhancements is crucial. This survey paper delves into the latest progressions in the domain of transformers and their subsequent successors, emphasizing their potential to revolutionize Vision Transformers (ViTs) and LLMs. This survey also presents a comparative analysis, juxtaposing the performance metrics of several leading paid and open-source LLMs, shedding light on their strengths and areas of improvement, as well as a literature review on how LLMs are being used to tackle vision-related tasks. Furthermore, the survey presents a comprehensive collection of datasets employed to train LLMs, offering insights into the diverse data available to achieve high performance in various pre-training and downstream tasks of LLMs. The survey concludes by highlighting open directions in the field, suggesting potential avenues for future research and development. This survey aims to underscore the profound intersection of LLMs and CV, leading to a new era of integrated and advanced AI models.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
411,012
2311.01073
Fourier Analysis of Signals on Directed Acyclic Graphs (DAG) Using Graph Zero-Padding
Directed acyclic graphs (DAGs) are used for modeling causal relationships, dependencies, and flows in various systems. However, spectral analysis becomes impractical in this setting because the eigendecomposition of the adjacency matrix yields all eigenvalues equal to zero. This inherent property of DAGs results in an inability to differentiate between frequency components of signals on such graphs. This problem can be addressed by altering the Fourier basis or adding edges in a DAG. However, these approaches change the physics of the considered problem. To address this limitation, we propose a graph zero-padding approach. This approach involves augmenting the original DAG with additional vertices that are connected to the existing structure. The added vertices are characterized by signal values set to zero. The proposed technique enables the spectral evaluation of system outputs on DAGs (in almost all cases), that is, the computation of vertex-domain convolution without the adverse effects of aliasing due to changes in a graph structure, with the ultimate goal of preserving the output of the system on a graph as if the changes in the graph structure were not performed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
404,911
2202.02686
PuzzleBots: Physical Coupling of Robot Swarms
Robot swarms have been shown to improve the ability of individual robots by inter-robot collaboration. In this paper, we present the PuzzleBots - a low-cost robotic swarm system where robots can physically couple with each other to form functional structures with minimum energy consumption while maintaining individual mobility to navigate within the environment. Each robot has knobs and holes along the sides of its body so that the robots can couple by inserting the knobs into the holes. We present the characterization of knob design and the result of gap-crossing behavior with up to nine robots. We show with hardware experiments that the robots are able to couple with each other to cross gaps and decouple to perform individual tasks. We anticipate the PuzzleBots will be useful in unstructured environments as individuals and coupled systems in real-world applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
278,910
1604.06635
Bridging LSTM Architecture and the Neural Dynamics during Reading
Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks. The LSTM architecture consists of a memory cell and three gates, which looks similar to the neuronal networks in the brain. However, there is still a lack of evidence for the cognitive plausibility of the LSTM architecture and its working mechanism. In this paper, we study the cognitive plausibility of LSTM by aligning its internal architecture with the brain activity observed via fMRI when the subjects read a story. Experiment results show that the artificial memory vector in LSTM can accurately predict the observed sequential brain activities, indicating the correlation between LSTM architecture and the cognitive process of story reading.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
true
false
false
54,970
2110.10136
Activation Landscapes as a Topological Summary of Neural Network Performance
We use topological data analysis (TDA) to study how data transforms as it passes through successive layers of a deep neural network (DNN). We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes. The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning. We observe that the topological complexity often increases with training and that the topological complexity does not decrease with each layer.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
262,048
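The TDA record above summarizes each layer's persistent homology with persistence landscapes. Computing a landscape from a persistence diagram is simple enough to sketch in NumPy; the diagram itself would come from a persistent-homology library and is assumed given here as (birth, death) pairs.

import numpy as np

def persistence_landscape(intervals, grid, k=3):
    # Each (birth, death) pair contributes the tent function
    # max(0, min(t - birth, death - t)); the k-th landscape lambda_k(t)
    # is the k-th largest tent value at each grid point t.
    tents = np.array([np.clip(np.minimum(grid - b, d - grid), 0, None)
                      for b, d in intervals])          # (n_intervals, n_grid)
    tents = -np.sort(-tents, axis=0)                   # descending per column
    return tents[:k]                                   # (min(k, n), n_grid)

grid = np.linspace(0.0, 4.0, 201)
lams = persistence_landscape([(0.0, 2.0), (0.5, 3.0), (1.0, 1.5)], grid)

Feeding the stacked landscape values of each layer into a standard kernel (e.g., an RBF on the vectorized grid) then gives the statistical-analysis pathway the abstract mentions.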
2303.04384
SEMv2: Table Separation Line Detection Based on Instance Segmentation
Table structure recognition is an indispensable element for enabling machines to comprehend tables. Its primary purpose is to identify the internal structure of a table. Nevertheless, due to the complexity and diversity of their structure and style, it is highly challenging to parse the tabular data into a structured format that machines can comprehend. In this work, we adhere to the principle of the split-and-merge based methods and propose an accurate table structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge). Unlike the previous works in the ``split'' stage, we aim to address the table separation line instance-level discrimination problem and introduce a table separation line detection strategy based on conditional convolution. Specifically, we design the ``split'' in a top-down manner that detects the table separation line instance first and then dynamically predicts the table separation line mask for each instance. The final table separation line shape can be accurately obtained by processing the table separation line mask in a row-wise/column-wise manner. To comprehensively evaluate the SEMv2, we also present a more challenging dataset for table structure recognition, dubbed iFLYTAB, which encompasses multiple style tables in various scenarios such as photos, scanned documents, etc. Extensive experiments on publicly available datasets (e.g. SciTSR, PubTabNet and iFLYTAB) demonstrate the efficacy of our proposed approach. The code and iFLYTAB dataset are available at https://github.com/ZZR8066/SEMv2.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
350,068
1703.07909
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
While modern day web applications aim to create impact at the civilization level, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and can originate from anywhere. The increasing scale and sophistication of attacks has prompted the need for a data driven solution, with machine learning forming the core of many cybersecurity systems. Machine learning was not designed with security in mind, and the essential assumption of stationarity, requiring that the training and testing data follow similar distributions, is violated in an adversarial domain. In this paper, an adversary's viewpoint of a classification-based system is presented. Based on a formal adversarial model, the Seed-Explore-Exploit framework is presented for simulating the generation of data driven and reverse engineering attacks on classifiers. Experimental evaluation, on 10 real world datasets and using the Google Cloud Prediction Platform, demonstrates the innate vulnerability of classifiers and the ease with which evasion can be carried out, without any explicit information about the classifier type, the training data or the application domain. The proposed framework, algorithms and empirical evaluation serve as a white hat analysis of the vulnerabilities, and aim to foster the development of secure machine learning frameworks.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
70,474
2202.12226
Probing BERT's priors with serial reproduction chains
Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT. The MLM objective yields a dependency network with no guarantee of consistent conditional distributions, posing a problem for naive approaches. Drawing from theories of iterated learning in cognitive science, we explore the use of serial reproduction chains to sample from BERT's priors. In particular, we observe that a unique and consistent estimator of the ground-truth joint distribution is given by a Generative Stochastic Network (GSN) sampler, which randomly selects which token to mask and reconstruct on each step. We show that the lexical and syntactic statistics of sentences from GSN chains closely match the ground-truth corpus distribution and perform better than other methods in a large corpus of naturalness judgments. Our findings establish a firmer theoretical foundation for bottom-up probing and highlight richer deviations from human priors.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
282,152
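The serial-reproduction record above describes its GSN sampler concretely: repeatedly pick a random position, mask it, and resample that token from BERT's conditional distribution. A short sketch with the Hugging Face transformers API; the chain length and seed sentence are arbitrary.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
ids = tok("the quick brown fox jumps over the lazy dog", return_tensors="pt")["input_ids"]

with torch.no_grad():
    for _ in range(500):  # chain length is illustrative; burn-in matters in practice
        pos = torch.randint(1, ids.shape[1] - 1, (1,)).item()  # skip [CLS]/[SEP]
        masked = ids.clone()
        masked[0, pos] = tok.mask_token_id
        probs = model(input_ids=masked).logits[0, pos].softmax(-1)
        ids[0, pos] = torch.multinomial(probs, 1).item()

print(tok.decode(ids[0], skip_special_tokens=True))  # one (correlated) chain sample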
2312.04874
Interpretable Underwater Diver Gesture Recognition
In recent years, the usage and applications of Autonomous Underwater Vehicles (AUVs) have grown rapidly. Interaction of divers with the AUVs remains an integral part of their usage for various applications and makes building robust and efficient underwater gesture recognition systems extremely important. In this paper, we propose an underwater gesture recognition system trained on the Cognitive Autonomous Diving Buddy underwater gesture dataset using deep learning that achieves 98.01% accuracy on the dataset, which to the best of our knowledge is the best performance achieved on this dataset at the time of writing this paper. We also improve the interpretability of the gesture recognition system by using XAI techniques to visualize the model's predictions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
413,867
1706.00842
Neural Network-Based Automatic Liver Tumor Segmentation With Random Forest-Based Candidate Filtering
We present a fully automatic method employing convolutional neural networks based on the 2D U-net architecture and random forest classifier to solve the automatic liver lesion segmentation problem of the ISBI 2017 Liver Tumor Segmentation Challenge (LiTS). In order to constrain the ROI in which the tumors could be located, a liver segmentation is performed first. For the organ segmentation, an ensemble of convolutional networks is trained to segment a liver using a set of 179 liver CT datasets from liver surgery planning. Inside of the liver ROI a neural network, trained using 127 challenge training datasets, identifies tumor candidates, which are subsequently filtered with a random forest classifier yielding the final tumor segmentation. The evaluation on the 70 challenge test cases resulted in a mean Dice coefficient of 0.65, ranking our method in the second place.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
74,694
2110.03680
Burst Image Restoration and Enhancement
Modern handheld devices can acquire burst image sequences in quick succession. However, the individual acquired frames suffer from multiple degradations and are misaligned due to camera shake and object motions. The goal of Burst Image Restoration is to effectively combine complementary cues across multiple burst frames to generate high-quality outputs. Towards this goal, we develop a novel approach by solely focusing on the effective information exchange between burst frames, such that the degradations get filtered out while the actual scene details are preserved and enhanced. Our central idea is to create a set of pseudo-burst features that combine complementary information from all the input burst frames to seamlessly exchange information. However, the pseudo-burst cannot be successfully created unless the individual burst frames are properly aligned to discount inter-frame movements. Therefore, our approach initially extracts pre-processed features from each burst frame and matches them using an edge-boosting burst alignment module. The pseudo-burst features are then created and enriched using multi-scale contextual information. Our final step is to adaptively aggregate information from the pseudo-burst features to progressively increase resolution in multiple stages while merging the pseudo-burst features. In comparison to existing works that usually follow a late fusion scheme with single-stage upsampling, our approach performs favorably, delivering state-of-the-art performance on burst superresolution, burst low-light image enhancement, and burst denoising tasks. The source code and pre-trained models are available at https://github.com/akshaydudhane16/BIPNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
259,590
1704.04912
Pseudorehearsal in actor-critic agents
Catastrophic forgetting has a serious impact in reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase performance of an actor-critic agent with neural-network based policy selection and function approximation in a pole balancing task and compare different pseudorehearsal approaches. We expect that pseudorehearsal assists learning even in such very simple problems, given proper initialization of the rehearsal parameters.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
71,916
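The pseudorehearsal record above relies on a simple mechanism: sample random pseudo-inputs, record the network's current outputs as pseudo-targets, and rehearse those pairs alongside new data so old mappings are not overwritten. The sketch below illustrates it in a generic supervised loop; the actor-critic specifics are omitted, and the input range, pseudo-item count, and equal loss weighting are assumptions.

import torch
import torch.nn as nn

def make_pseudoitems(net, n_items, in_dim):
    # Random pseudo-inputs paired with the network's *current* outputs;
    # rehearsing them approximately preserves previously learned behavior.
    with torch.no_grad():
        x = torch.rand(n_items, in_dim) * 2 - 1  # assumed input range [-1, 1]
        return x, net(x)

net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
mse = nn.MSELoss()
stream = [(torch.rand(16, 4), torch.rand(16, 2)) for _ in range(100)]  # stand-in for new experience

px, py = make_pseudoitems(net, 64, 4)  # snapshot before learning from the new data
for new_x, new_y in stream:
    loss = mse(net(new_x), new_y) + mse(net(px), py)  # new task + pseudorehearsal term
    opt.zero_grad()
    loss.backward()
    opt.step()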
2411.00270
Unsupervised Feature Selection Algorithm Based on Graph Filtering and Self-representation
Aiming at the problem that existing methods cannot fully capture the intrinsic structure of data because they do not consider the higher-order neighborhood information of the data, we proposed an unsupervised feature selection algorithm based on graph filtering and self-representation. Firstly, a higher-order graph filter was applied to the data to obtain its smooth representation, and a regularizer was designed to combine the higher-order graph information with the self-representation matrix learning to capture the intrinsic structure of the data. Secondly, the l2,1 norm was imposed on the reconstruction error term and the feature selection matrix to enhance the robustness and row sparsity of the model and to select discriminative features. Finally, an iterative algorithm was applied to effectively solve the proposed objective function, and simulation experiments were carried out to verify the effectiveness of the proposed algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
504,515
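Two ingredients of the feature-selection record above are easy to make concrete: the higher-order low-pass graph filter that produces the smooth representation, and the l2,1 norm that enforces row sparsity. The filter form (I - L/2)^k is a common low-pass choice in this line of work and is an assumption here, since the abstract does not pin it down.

import numpy as np

def smooth_representation(X, A, k=2):
    # Apply a k-th order low-pass graph filter to the data matrix X
    # (rows = samples), with L the symmetric normalized Laplacian of
    # the affinity matrix A.
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    filt = np.eye(len(A)) - 0.5 * L
    return np.linalg.matrix_power(filt, k) @ X

def l21_norm(M):
    # Sum of row-wise l2 norms: zeroing a whole row drops a whole feature,
    # which is what makes l2,1 regularization suitable for feature selection.
    return np.linalg.norm(M, axis=1).sum()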
2406.15955
Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects
Though vision transformers (ViTs) have achieved state-of-the-art performance in a variety of settings, they exhibit surprising failures when performing tasks involving visual relations. This begs the question: how do ViTs attempt to perform tasks that require computing visual relations between objects? Prior efforts to interpret ViTs tend to focus on characterizing relevant low-level visual features. In contrast, we adopt methods from mechanistic interpretability to study the higher-level visual algorithms that ViTs use to perform abstract visual reasoning. We present a case study of a fundamental, yet surprisingly difficult, relational reasoning task: judging whether two visual entities are the same or different. We find that pretrained ViTs fine-tuned on this task often exhibit two qualitatively different stages of processing despite having no obvious inductive biases to do so: 1) a perceptual stage wherein local object features are extracted and stored in a disentangled representation, and 2) a relational stage wherein object representations are compared. In the second stage, we find evidence that ViTs can learn to represent somewhat abstract visual relations, a capability that has long been considered out of reach for artificial neural networks. Finally, we demonstrate that failures at either stage can prevent a model from learning a generalizable solution to our fairly simple tasks. By understanding ViTs in terms of discrete processing stages, one can more precisely diagnose and rectify shortcomings of existing and future models.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
466,933
1910.00370
Sub-Architecture Ensemble Pruning in Neural Architecture Search
Neural architecture search (NAS) is gaining more and more attention in recent years due to its flexibility and remarkable capability to reduce the burden of neural network design. To achieve better performance, however, the searching process usually costs massive computations that might not be affordable for researchers and practitioners. While recent attempts have employed ensemble learning methods to mitigate the enormous computational cost, they neglect a key property of ensemble methods, namely diversity, which leads to collecting more similar sub-architectures with potential redundancy in the final design. To tackle this problem, we propose a pruning method for NAS ensembles called Sub-Architecture Ensemble Pruning in Neural Architecture Search (SAEP). It aims to leverage diversity and to achieve sub-ensemble architectures of smaller size with performance comparable to ensemble architectures that are not pruned. Three possible solutions are proposed to decide which sub-architectures to prune during the searching process. Experimental results exhibit the effectiveness of the proposed method by largely reducing the number of sub-architectures without degrading the performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
147,646
2311.10236
Latent Feature-based Data Splits to Improve Generalisation Evaluation: A Hate Speech Detection Case Study
With the ever-growing presence of social media platforms comes the increased spread of harmful content and the need for robust hate speech detection systems. Such systems easily overfit to specific targets and keywords, and evaluating them without considering distribution shifts that might occur between train and test data overestimates their benefit. We challenge hate speech models via new train-test splits of existing datasets that rely on the clustering of models' hidden representations. We present two split variants (Subset-Sum-Split and Closest-Split) that, when applied to two datasets using four pretrained models, reveal how models catastrophically fail on blind spots in the latent space. This result generalises when developing a split with one model and evaluating it on another. Our analysis suggests that there is no clear surface-level property of the data split that correlates with the decreased performance, which underscores that task difficulty is not always humanly interpretable. We recommend incorporating latent feature-based splits in model development and release two splits via the GenBench benchmark.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
408,456
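The hate-speech record above builds its challenge splits by clustering models' hidden representations and separating whole clusters between train and test. A simplified sketch of that recipe with scikit-learn; the actual Subset-Sum-Split and Closest-Split select clusters more carefully, and the cluster count and test fraction here are placeholders.

import numpy as np
from sklearn.cluster import KMeans

def latent_cluster_split(hidden, test_frac=0.2, n_clusters=20, seed=0):
    # hidden: (n_examples, dim) hidden representations from a pretrained model.
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(hidden)
    order = np.random.default_rng(seed).permutation(n_clusters)
    test_idx, target = [], int(test_frac * len(hidden))
    for c in order:                       # move whole clusters to the test side
        test_idx.extend(np.flatnonzero(labels == c))
        if len(test_idx) >= target:
            break
    test_mask = np.zeros(len(hidden), dtype=bool)
    test_mask[test_idx] = True
    return ~test_mask, test_mask          # boolean train / test masks

Because entire latent regions are held out, a model that merely memorizes surface keywords inside its training clusters is exposed at test time.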
1804.07612
Revisiting Small Batch Training for Deep Neural Networks
Modern deep neural network training is typically based on mini-batch stochastic gradient optimization. While the use of large mini-batches increases the available computational parallelism, small batch training has been shown to provide improved generalization performance and allows a significantly smaller memory footprint, which might also be exploited to improve machine throughput. In this paper, we review common assumptions on learning rate scaling and training duration, as a basis for an experimental comparison of test performance for different mini-batch sizes. We adopt a learning rate that corresponds to a constant average weight update per gradient calculation (i.e., per unit cost of computation), and point out that this results in a variance of the weight updates that increases linearly with the mini-batch size $m$. The collected experimental results for the CIFAR-10, CIFAR-100 and ImageNet datasets show that increasing the mini-batch size progressively reduces the range of learning rates that provide stable convergence and acceptable test performance. On the other hand, small mini-batch sizes provide more up-to-date gradient calculations, which yields more stable and reliable training. The best performance has been consistently obtained for mini-batch sizes between $m = 2$ and $m = 32$, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
95,558
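The learning-rate rule in the small-batch record above can be written out explicitly. With per-example gradients g_i (mean g-bar, covariance Sigma_g) and the base rate scaled linearly with batch size m, a short LaTeX derivation:

\Delta\theta = -\frac{\eta}{m}\sum_{i=1}^{m} g_i = -\tilde{\eta}\sum_{i=1}^{m} g_i
\qquad (\eta = m\,\tilde{\eta}),
\qquad
\mathbb{E}[\Delta\theta] = -m\,\tilde{\eta}\,\bar{g},
\qquad
\operatorname{Var}[\Delta\theta] = m\,\tilde{\eta}^{2}\,\Sigma_g .

The mean update per gradient calculation, E[Delta theta]/m = -eta-tilde g-bar, is then independent of m, while the variance of each weight update grows linearly with m, which is exactly the linear growth the abstract points out (independence of the g_i within a batch is assumed).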
2406.09411
MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding
We introduce MuirBench, a comprehensive benchmark that focuses on robust multi-image understanding capabilities of multimodal LLMs. MuirBench consists of 12 diverse multi-image tasks (e.g., scene understanding, ordering) that involve 10 categories of multi-image relations (e.g., multiview, temporal relations). Comprising 11,264 images and 2,600 multiple-choice questions, MuirBench is created in a pairwise manner, where each standard instance is paired with an unanswerable variant that has minimal semantic differences, to allow a reliable assessment. Evaluated on 20 recent multimodal LLMs, our results reveal that even the best-performing models like GPT-4o and Gemini Pro find it challenging to solve MuirBench, achieving 68.0% and 49.3% in accuracy. Open-source multimodal LLMs trained on single images can hardly generalize to multi-image questions, hovering below 33.3% in accuracy. These results highlight the importance of MuirBench in encouraging the community to develop multimodal LLMs that can look beyond a single image, suggesting potential pathways for future improvements.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
463,930
1603.01882
Composing inference algorithms as program transformations
Probabilistic inference procedures are usually coded painstakingly from scratch, for each target model and each inference algorithm. We reduce this effort by generating inference procedures from models automatically. We make this code generation modular by decomposing inference algorithms into reusable program-to-program transformations. These transformations perform exact inference as well as generate probabilistic programs that compute expectations, densities, and MCMC samples. The resulting inference procedures are about as accurate and fast as other probabilistic programming systems on real-world problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
52,949
2410.12216
Learning Differentiable Tensegrity Dynamics using Graph Neural Networks
Tensegrity robots are composed of rigid struts and flexible cables. They constitute an emerging class of hybrid rigid-soft robotic systems and are promising systems for a wide array of applications, ranging from locomotion to assembly. They are difficult to control and model accurately, however, due to their compliance and high number of degrees of freedom. To address this issue, prior work has introduced a differentiable physics engine designed for tensegrity robots based on first principles. In contrast, this work proposes the use of graph neural networks to model contact dynamics over a graph representation of tensegrity robots, which leverages their natural graph-like cable connectivity between end caps of rigid rods. This learned simulator can accurately model 3-bar and 6-bar tensegrity robot dynamics in simulation-to-simulation experiments where MuJoCo is used as the ground truth. It can also achieve higher accuracy than the previous differentiable engine for a real 3-bar tensegrity robot, for which the robot state is only partially observable. When compared against direct applications of recent mesh-based graph neural network simulators, the proposed approach is computationally more efficient, both for training and inference, while achieving higher accuracy. Code and data are available at https://github.com/nchen9191/tensegrity_gnn_simulator_public
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
498,909
1812.06298
Residual Policy Learning
We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements. We study RPL in six challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. For initial controllers, we consider both hand-designed policies and model-predictive controllers with known or learned transition models. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently. Video and code at https://k-r-allen.github.io/residual-policy-learning/.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
116,582
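The RPL record above is explicit about its core construction: the policy is the fixed controller plus a learned correction, pi(s) = pi_0(s) + f_theta(s), and only f_theta is trained with model-free RL. A minimal PyTorch sketch; network sizes are placeholders, and zero-initializing the last layer (so training starts exactly at the base controller) is a natural choice rather than something stated in the abstract.

import torch
import torch.nn as nn

class ResidualPolicy(nn.Module):
    def __init__(self, base_controller, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.base = base_controller  # black-box, possibly nondifferentiable; never trained
        self.residual = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim))
        nn.init.zeros_(self.residual[-1].weight)  # start as exactly the base policy
        nn.init.zeros_(self.residual[-1].bias)

    def forward(self, obs):
        with torch.no_grad():
            base_action = torch.as_tensor(self.base(obs), dtype=torch.float32)
        return base_action + self.residual(obs)

# e.g. policy = ResidualPolicy(lambda o: my_pd_controller(o), obs_dim=8, act_dim=2),
# then train policy.residual with any actor-critic / policy-gradient method.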
1911.01509
Understanding racial bias in health using the Medical Expenditure Panel Survey data
Over the years, several studies have demonstrated that there exist significant disparities in health indicators in the United States population across various groups. Healthcare expense is used as a proxy for health in algorithms that drive healthcare systems and this exacerbates the existing bias. In this work, we focus on the presence of racial bias in health indicators in the publicly available, and nationally representative Medical Expenditure Panel Survey (MEPS) data. We show that predictive models for care management trained using this data inherit this bias. Finally, we demonstrate that this inherited bias can be reduced significantly using simple mitigation techniques.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
152,117
1812.00888
A Consolidated Approach to Convolutional Neural Networks and the Kolmogorov Complexity
The ability to precisely quantify similarity between various entities has been a fundamental challenge in various problem spaces, specifically in the classification of cellular images. Contemporary similarity measures applied in the domain of image processing proposed by the scientific community are mainly pursued in supervised settings. In this work, we will explore the innovative algorithmic normalized compression distance metric based on the information-theoretic concept of Kolmogorov Complexity. Additionally, we will observe its possible implementation in Convolutional Neural Networks to facilitate and automate the classification of Retinal Pigment Epithelial cell cultures for use in Age-Related Macular Degeneration stem cell therapy in an unsupervised setting.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
115,375
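The record above rests on the normalized compression distance (NCD), the standard computable stand-in for the (uncomputable) Kolmogorov-complexity distance. It is short enough to state in full, here with zlib as the compressor:

import zlib

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # where C(.) is the compressed length under a real compressor.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar byte streams compress well together, giving a smaller distance:
a = b"abcabcabc" * 50
print(ncd(a, b"abcabcabc" * 49 + b"xyz"), ncd(a, bytes(range(256)) * 2))

For images, the bytes would be the serialized pixel data; pairwise NCDs can then drive unsupervised grouping, in the spirit of the abstract.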
2312.11723
Improving Uniquely Decodable Codes in Binary Adder Channels
We present a general method to modify existing uniquely decodable codes in the $T$-user binary adder channel. If at least one of the original constituent codes does not have average weight exactly half of the dimension, then our method produces a new set of constituent codes in a higher dimension, with a strictly higher rate. Using our method we improve the highest known rate for the $T$-user binary adder channel for all $T \geq 2$. This information theory problem is equivalent to co-Sidon problems initiated by Lindstr{\"o}m in the 1960s, and also the multi-set union-free problem. Our results improve the known lower bounds in these settings as well.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
416,690
2310.08571
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
Machine Learning as a Service (MLaaS) APIs provide ready-to-use and high-utility encoders that generate vector representations for given inputs. Since these encoders are very costly to train, they become lucrative targets for model stealing attacks during which an adversary leverages query access to the API to replicate the encoder locally at a fraction of the original training costs. We propose Bucks for Buckets (B4B), the first active defense that prevents stealing while the attack is happening without degrading representation quality for legitimate API users. Our defense relies on the observation that the representations returned to adversaries who try to steal the encoder's functionality cover a significantly larger fraction of the embedding space than representations of legitimate users who utilize the encoder to solve a particular downstream task. B4B leverages this to adaptively adjust the utility of the returned representations according to a user's coverage of the embedding space. To prevent adaptive adversaries from eluding our defense by simply creating multiple user accounts (sybils), B4B also individually transforms each user's representations. This prevents the adversary from directly aggregating representations over multiple accounts to create their stolen encoder copy. Our active defense opens a new path towards securely sharing and democratizing encoders over public APIs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,437
1604.03222
Fuzzy Logic Trajectory Tracking Controller for a Tanker
This paper proposes a fuzzy logic controller for the design of a ship autopilot. Triangular membership functions have been used for fuzzification and the centroid method for defuzzification. A nonlinear mathematical model of an oil tanker has been considered whose parameters vary with the depth of water. The performance of the proposed controller has been tested under both course-changing and trajectory-keeping modes of operation. It has been demonstrated that the performance is robust in shallow as well as deep waters.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
54,460
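The autopilot record above names its two fixed ingredients: triangular membership functions for fuzzification and centroid defuzzification. A toy one-input sketch of both follows; the rule base, universes, and Mamdani-style product-max aggregation are illustrative, since the paper's actual rule base for the tanker is not given in the abstract.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid(universe, aggregated):
    # Centre-of-gravity defuzzification of the aggregated output curve.
    return (universe * aggregated).sum() / aggregated.sum()

rudder = np.linspace(-35.0, 35.0, 201)  # rudder angle universe, degrees (illustrative)
err = 0.4                               # crisp, normalized heading error
w_neg = tri(err, -1.0, -0.5, 0.0)       # firing strength of "error negative"
w_pos = tri(err, 0.0, 0.5, 1.0)         # firing strength of "error positive"
# Rules: error negative -> rudder positive; error positive -> rudder negative.
out = np.maximum(w_neg * tri(rudder, 0.0, 17.5, 35.0),
                 w_pos * tri(rudder, -35.0, -17.5, 0.0))
print(centroid(rudder, out))            # crisp rudder command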
2308.09267
GraphReason: Enhancing Reasoning Capabilities of Large Language Models through A Graph-Based Verification Approach
Large Language Models (LLMs) have showcased impressive reasoning capabilities, particularly when guided by specifically designed prompts in complex reasoning tasks such as math word problems. These models typically solve tasks using a chain-of-thought approach, which not only bolsters their reasoning abilities but also provides valuable insights into their problem-solving process. However, there is still significant room for enhancing the reasoning abilities of LLMs. Some studies suggest that the integration of an LLM output verifier can boost reasoning accuracy without necessitating additional model training. In this paper, we follow these studies and introduce a novel graph-based method to further augment the reasoning capabilities of LLMs. We posit that multiple solutions to a reasoning task, generated by an LLM, can be represented as a reasoning graph due to the logical connections between intermediate steps from different reasoning paths. Therefore, we propose the Reasoning Graph Verifier (GraphReason) to analyze and verify the solutions generated by LLMs. By evaluating these graphs, models can yield more accurate and reliable results. Our experimental results show that our graph-based verification method not only significantly enhances the reasoning abilities of LLMs but also outperforms existing verifier methods in terms of improving these models' reasoning performance.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
386,213
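One way to picture the reasoning-graph idea: merge the intermediate steps of all sampled solutions into a shared graph so that well-supported sub-derivations accumulate weight, then score each final answer by the support of its paths. The counting heuristic below stands in for the paper's learned verifier, and the string-based step normalization is an assumption.

    from collections import defaultdict

    def verify_by_graph(solutions):
        # solutions: list of (steps, answer); steps is a list of strings.
        # Pass 1 merges all paths into one graph (identical steps collapse
        # into one node); pass 2 scores each answer by its edges' support.
        edge_count = defaultdict(int)
        for steps, _ in solutions:
            norm = [s.strip().lower() for s in steps]
            for a, b in zip(norm, norm[1:]):
                edge_count[(a, b)] += 1
        answer_score = defaultdict(float)
        for steps, answer in solutions:
            norm = [s.strip().lower() for s in steps]
            pairs = list(zip(norm, norm[1:]))
            if pairs:
                answer_score[answer] += sum(edge_count[p] for p in pairs) / len(pairs)
        return max(answer_score, key=answer_score.get)

    sols = [(["2+3=5", "5*4=20"], "20"),
            (["2+3=5", "5*4=20"], "20"),
            (["2+3=6", "6*4=24"], "24")]
    print(verify_by_graph(sols))  # "20": its reasoning steps are shared by two paths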
2103.07925
An SPH framework for fluid-solid and contact interaction problems including thermo-mechanical coupling and reversible phase transitions
The present work proposes an approach for fluid-solid and contact interaction problems including thermo-mechanical coupling and reversible phase transitions. The solid field is assumed to consist of several arbitrarily-shaped, undeformable but mobile rigid bodies that are evolved in time individually and allowed to come into mechanical contact with each other. The fluid field generally consists of multiple liquid or gas phases. All fields are spatially discretized using the method of smoothed particle hydrodynamics (SPH). This approach is especially suitable in the context of continually changing interface topologies and dynamic phase transitions without the need for additional methodological and computational effort for interface tracking as compared to mesh- or grid-based methods. Proposing a concept for the parallelization of the computational framework, in particular concerning a computationally efficient evaluation of rigid body motion, is an essential part of this work. Finally, the accuracy and robustness of the proposed framework is demonstrated by several numerical examples in two and three dimensions, involving multiple rigid bodies, two-phase flow, and reversible phase transitions, with a focus on two potential application scenarios in the fields of engineering and biomechanics: powder bed fusion additive manufacturing (PBFAM) and disintegration of food boluses in the human stomach. The efficiency of the parallel computational framework is demonstrated by a strong scaling analysis.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
224,737
2302.00242
The Parametric Stability of Well-separated Spherical Gaussian Mixtures
We quantify the parameter stability of a spherical Gaussian Mixture Model (sGMM) under small perturbations in distribution space. Namely, we derive the first explicit bound showing that, for a mixture of spherical Gaussians $P$ (sGMM) in a pre-defined model class, every other sGMM in this model class that is close to $P$ in total variation distance also has a small parameter distance to $P$. Further, this upper bound only depends on $P$. The motivation for this work lies in providing guarantees for fitting Gaussian mixtures; with this aim in mind, all the constants involved are well defined, and the conditions for fitting mixtures of spherical Gaussians are distribution-free. Our results tighten considerably the existing computable bounds and asymptotically match the known sharp thresholds for this problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
343,154
2208.00313
Untargeted Region of Interest Selection for GC-MS Data using a Pseudo F-Ratio Moving Window ($\psi$FRMV)
There are many challenges associated with analysing gas chromatography - mass spectrometry (GC-MS) data. Many of these challenges stem from the fact that electron ionisation can make it difficult to recover molecular information due to the high degree of fragmentation with concomitant loss of molecular ion signal. With GC-MS data there are often many common fragment ions shared among closely-eluting peaks, necessitating sophisticated methods for analysis. Some of these methods are fully automated, but make some assumptions about the data which can introduce artifacts during the analysis. Chemometric methods such as Multivariate Curve Resolution, or Parallel Factor Analysis are particularly attractive, since they are flexible and make relatively few assumptions about the data - ideally resulting in fewer artifacts. These methods do require expert user intervention to determine the most relevant regions of interest and an appropriate number of components, $k$, for each region. Automated region of interest selection is needed to permit automated batch processing of chromatographic data with advanced signal deconvolution. Here, we propose a new method for automated, untargeted region of interest selection that accounts for the multivariate information present in GC-MS data to select regions of interest based on the ratio of the squared first and second singular values from the Singular Value Decomposition of a window that moves across the chromatogram. Assuming that the first singular value accounts largely for signal, and that the second singular value accounts largely for noise, it is possible to interpret the relationship between these two values as a probabilistic distribution of Fisher Ratios. The sensitivity of the algorithm was tested by investigating the concentration at which the algorithm can no longer pick out chromatographic regions known to contain signal.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
310,809
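The core computation is easy to state: slide a window along the chromatogram, take an SVD, and form the ratio of the squared first and second singular values. A sketch under assumed preprocessing (the window width and per-channel mean-centering are illustrative choices, not the paper's settings):

    import numpy as np

    def psi_frmv(X, window=30, step=1):
        # X: chromatogram as a (scans x m/z channels) matrix. Returns the
        # pseudo F-ratio s1^2 / s2^2 for each moving window; high values
        # suggest a window dominated by one chemical signal over noise.
        ratios = []
        for start in range(0, X.shape[0] - window + 1, step):
            W = X[start:start + window]
            W = W - W.mean(axis=0)                  # remove per-channel offset
            s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
            ratios.append(s[0] ** 2 / max(s[1] ** 2, 1e-12))
        return np.array(ratios)

    # Synthetic check: inject a rank-1 "peak" into noise and locate it.
    X = np.random.default_rng(1).standard_normal((500, 80))
    X[200:240] += np.outer(np.hanning(40), 5 * np.random.default_rng(2).random(80))
    print(int(psi_frmv(X).argmax()))  # window index near the injected peak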
2212.13221
A Combined Synchronization Index for Grassroots Activism on Social Media
Social media has provided a citizen voice, giving rise to grassroots collective action, where users deploy a concerted effort to disseminate online narratives and even carry out offline protests. Sometimes these collective actions are aided by inorganic synchronization, which arises from bot actors. It is thus important to identify the synchronicity of emerging discourse on social media and the indications of organic/inorganic activity within the conversations. This provides a way of profiling an event for the possibility of offline protests and violence. In this study, we build on past definitions of synchronous activity on social media -- simultaneous user action -- and develop a Combined Synchronization Index (CSI) which adopts a hierarchical approach to measuring user synchronicity. We apply this index to six political and social activism events on Twitter and analyze three action types: synchronicity by hashtag, URL, and @mention. The CSI provides an overall quantification of synchronization across all action types within an event, which allows ranking of a spectrum of synchronicity across the six events. Human users have higher synchronization scores than bot users in most events, and bot-human pairs exhibit the most synchronized activities across all events compared to other pairs (i.e., bot-bot and human-human). We further rely on the harmony and dissonance of CSI-Network scores with network centrality metrics to observe the presence of organic/inorganic synchronization. We hope this work aids in investigating synchronized action within social media in a collective manner.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
338,250
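A toy building block for this kind of index: bucket user actions on the same token into shared time windows and count pairwise co-occurrences. This illustrates synchronicity-by-hashtag/URL/@mention counting only, not the paper's full hierarchical CSI; the window length and names are assumptions.

    from collections import defaultdict

    def pairwise_sync_scores(actions, window=60):
        # actions: list of (user, timestamp_seconds, token), where token is a
        # hashtag, URL, or @mention. Two users are "synchronized" on a token
        # if they post it within the same time window.
        buckets = defaultdict(set)  # (token, window index) -> users
        for user, ts, token in actions:
            buckets[(token, int(ts // window))].add(user)
        scores = defaultdict(int)
        for users in buckets.values():
            us = sorted(users)
            for i in range(len(us)):
                for j in range(i + 1, len(us)):
                    scores[(us[i], us[j])] += 1
        return scores

    acts = [("a", 10, "#tag"), ("b", 30, "#tag"), ("c", 500, "#tag")]
    print(dict(pairwise_sync_scores(acts)))  # {("a", "b"): 1}: a, b share window 0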
2309.05192
Towards Viewpoint Robustness in Bird's Eye View Segmentation
Autonomous vehicles (AV) require that neural networks used for perception be robust to different viewpoints if they are to be deployed across many types of vehicles without the repeated cost of data collection and labeling for each. AV companies typically focus on collecting data from diverse scenarios and locations, but not camera rig configurations, due to cost. As a result, only a small number of rig variations exist across most fleets. In this paper, we study how AV perception models are affected by changes in camera viewpoint and propose a way to scale them across vehicle types without repeated data collection and labeling. Using bird's eye view (BEV) segmentation as a motivating task, we find through extensive experiments that existing perception models are surprisingly sensitive to changes in camera viewpoint. When trained with data from one camera rig, small changes to pitch, yaw, depth, or height of the camera at inference time lead to large drops in performance. We introduce a technique for novel view synthesis and use it to transform collected data to the viewpoint of target rigs, allowing us to train BEV segmentation models for diverse target rigs without any additional data collection or labeling cost. To analyze the impact of viewpoint changes, we leverage synthetic data to mitigate other gaps (content, ISP, etc.). Our approach is then trained on real data and evaluated on synthetic data, enabling evaluation on diverse target rigs. We release all data for use in future work. Our method is able to recover an average of 14.7% of the IoU that is otherwise lost when deploying to new rigs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
390,992
2110.14709
Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis
Existing deep learning-based approaches for histopathology image analysis require large annotated training sets to achieve good performance, but annotating histopathology images is slow and resource-intensive. Conditional generative adversarial networks have been applied to generate synthetic histopathology images to alleviate this issue, but current approaches fail to generate clear contours for overlapping and touching nuclei. In this study, we propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images. The proposed network uses a normalized nucleus distance map rather than a binary mask to encode nuclei contour information. The proposed sharpness loss enhances the contrast of nuclei contour pixels. The proposed method is evaluated using four image quality metrics and segmentation results on two public datasets. Both quantitative and qualitative results demonstrate that the proposed approach can generate realistic histopathology images with clear nuclei contours.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
263,606
2209.06264
High-resolution semantically-consistent image-to-image translation
Deep learning has become one of remote sensing scientists' most efficient computer vision tools in recent years. However, the lack of training labels for remote sensing datasets means that scientists need to solve the domain adaptation problem to narrow the discrepancy between satellite image datasets. As a result, image segmentation models that are then trained could better generalize and use an existing set of labels instead of requiring new ones. This work proposes an unsupervised domain adaptation model that preserves semantic consistency and per-pixel quality for the images during the style-transferring phase. This paper's major contribution is proposing an improved architecture of the SemI2I model, which significantly boosts the proposed model's performance and makes it competitive with the state-of-the-art CyCADA model. A second contribution is testing the CyCADA model on remote sensing multi-band datasets such as WorldView-2 and SPOT-6. The semantic segmentation model, trained on the adapted images, shows a substantial performance gain compared to the SemI2I model and reaches results similar to those of the state-of-the-art CyCADA model. Future development of the proposed method could include ecological domain transfer, {\em a priori} evaluation of dataset quality in terms of data distribution, or exploration of the inner architecture of the domain adaptation model.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
317,336
cs/0305027
Managing Inconsistent Intelligence
In this paper we demonstrate that it is possible to manage intelligence in constant time, as a pre-process to information fusion, through a series of processes dealing with issues such as clustering reports, ranking reports with respect to importance, extracting prototypes from clusters, and immediately classifying newly arriving intelligence reports. These methods are used when intelligence reports arrive that concern different events which should be handled independently, and it is not known a priori to which event each intelligence report is related. We use clustering that runs as a back-end process to partition the intelligence into subsets representing the events and, in parallel, a fast classification that runs as a front-end process in order to put newly arriving intelligence into its correct information fusion process.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
537,843
2107.12589
Cross-modal Consensus Network for Weakly Supervised Temporal Action Localization
Weakly supervised temporal action localization (WS-TAL) is a challenging task that aims to localize action instances in a given video with only video-level categorical supervision. Both appearance and motion features are used in previous works, but they are not utilized in a proper way, with only simple concatenation or score-level fusion applied. In this work, we argue that the features extracted from a pretrained extractor, e.g., I3D, are not WS-TAL task-specific features, and thus feature re-calibration is needed to reduce task-irrelevant information redundancy. Therefore, we propose a cross-modal consensus network (CO2-Net) to tackle this problem. In CO2-Net, we mainly introduce two identical cross-modal consensus modules (CCM) that design a cross-modal attention mechanism to filter out task-irrelevant information redundancy using the global information from the main modality and the cross-modal local information of the auxiliary modality. Moreover, we treat the attention weights derived from each CCM as the pseudo targets of the attention weights derived from the other CCM to maintain consistency between the predictions derived from the two CCMs, forming a mutual learning manner. Finally, we conduct extensive experiments on two commonly used temporal action localization datasets, THUMOS14 and ActivityNet1.2, to verify our method, and we achieve state-of-the-art results. The experimental results show that our proposed cross-modal consensus module can produce more representative features for temporal action localization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
247,941
2311.15439
Efficient Encoding of Graphics Primitives with Simplex-based Structures
Grid-based structures are commonly used to encode explicit features for graphics primitives such as images, signed distance functions (SDF), and neural radiance fields (NeRF) due to their simple implementation. However, in $n$-dimensional space, calculating the value of a sampled point requires interpolating the values of its $2^n$ neighboring vertices. The exponential scaling with dimension leads to significant computational overheads. To address this issue, we propose a simplex-based approach for encoding graphics primitives. The number of vertices in a simplex-based structure increases linearly with dimension, making it a more efficient and generalizable alternative to grid-based representations. Using the non-axis-aligned simplicial structure property, we derive and prove a coordinate transformation, simplicial subdivision, and barycentric interpolation scheme for efficient sampling, which resembles transformation procedures in the simplex noise algorithm. Finally, we use hash tables to store multiresolution features of all interest points in the simplicial grid, which are passed into a tiny fully connected neural network to parameterize graphics primitives. We implemented a detailed simplex-based structure encoding algorithm in C++ and CUDA using the methods outlined in our approach. In the 2D image fitting task, the proposed method is capable of fitting a giga-pixel image with 9.4% less time compared to the baseline method proposed by instant-ngp, while maintaining the same quality and compression rate. In the volumetric rendering setup, we observe a maximum 41.2% speedup when the samples are dense enough.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
410,513
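In 2-D, the described pipeline (skew transform, triangle selection, barycentric blend of hashed vertex features) touches 3 vertices instead of a square grid's 4. The sketch below follows the classic simplex-noise constants; the spatial hash and feature-table layout are illustrative assumptions, not the paper's C++/CUDA implementation.

    import numpy as np

    F2 = 0.5 * (np.sqrt(3.0) - 1.0)   # skew factor for 2-D simplex grids
    G2 = (3.0 - np.sqrt(3.0)) / 6.0   # unskew factor

    def corner_pos(ci, cj):
        # Position of lattice corner (ci, cj) back in unskewed space.
        t = (ci + cj) * G2
        return np.array([ci - t, cj - t])

    def simplex_encode(x, y, table):
        # Blend hashed per-vertex features of the containing triangle with
        # barycentric weights: 3 lookups in 2-D instead of 2**2 = 4.
        s = (x + y) * F2                              # skew into square lattice
        i, j = int(np.floor(x + s)), int(np.floor(y + s))
        p = np.array([x, y])
        x0, y0 = p - corner_pos(i, j)                 # offset from cell origin
        di, dj = (1, 0) if x0 > y0 else (0, 1)        # which triangle of the cell
        corners = [(i, j), (i + di, j + dj), (i + 1, j + 1)]
        P0, P1, P2 = (corner_pos(ci, cj) for (ci, cj) in corners)
        # Barycentric weights of p in triangle (P0, P1, P2).
        v0, v1, v2 = P1 - P0, P2 - P0, p - P0
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        w1 = (d11 * d20 - d01 * d21) / denom
        w2 = (d00 * d21 - d01 * d20) / denom
        w = np.array([1.0 - w1 - w2, w1, w2])
        feats = [table[hash(c) % len(table)] for c in corners]  # toy spatial hash
        return sum(wk * fk for wk, fk in zip(w, feats))

    table = np.random.default_rng(0).standard_normal((2 ** 14, 4))  # feature table
    print(simplex_encode(0.7, 1.3, table))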
1704.06825
Deep Learning for Medical Image Processing: Overview, Challenges and Future
The healthcare sector is totally different from other industries. It is a high-priority sector, and people expect the highest level of care and services regardless of cost. It has not achieved social expectations even though it consumes a huge percentage of the budget. Mostly, the interpretation of medical data is done by medical experts. Image interpretation by human experts is quite limited due to its subjectivity, the complexity of the images, the extensive variations that exist across different interpreters, and fatigue. After the success of deep learning in other real-world applications, it is also providing exciting solutions with good accuracy for medical imaging and is seen as a key method for future applications in the health sector. In this chapter, we discuss state-of-the-art deep learning architectures and their optimization as used for medical image segmentation and classification. In the last section, we discuss the challenges of deep-learning-based methods for medical imaging and open research issues.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
72,227
1106.1803
Improving the Efficiency of Inductive Logic Programming Through the Use of Query Packs
Inductive logic programming, or relational learning, is a powerful paradigm for machine learning or data mining. However, in order for ILP to become practically useful, the efficiency of ILP systems must improve substantially. To this end, the notion of a query pack is introduced: it structures sets of similar queries. Furthermore, a mechanism is described for executing such query packs. A complexity analysis shows that considerable efficiency improvements can be achieved through the use of this query pack execution mechanism. This claim is supported by empirical results obtained by incorporating support for query pack execution in two existing learning systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
10,785
2104.01857
Fast Channel Estimation in the Transformed Spatial Domain for Analog Millimeter Wave Systems
Fast channel estimation in millimeter-wave (mmWave) systems is a fundamental enabler of high-gain beamforming, which boosts coverage and capacity. The channel estimation stage typically involves an initial beam training process where a subset of the possible beam directions at the transmitter and receiver is scanned along a predefined codebook. Unfortunately, the high number of transmit and receive antennas deployed in mmWave systems increases the complexity of the beam selection and channel estimation tasks. In this work, we tackle the channel estimation problem in analog systems from a different perspective than used by previous works. In particular, we propose to move the channel estimation problem from the angular domain into the transformed spatial domain, in which estimating the angles of arrivals and departures corresponds to estimating the angular frequencies of paths constituting the mmWave channel. The proposed approach, referred to as the transformed spatial domain channel estimation (TSDCE) algorithm, exhibits robustness to additive white Gaussian noise by combining low-rank approximations and sample autocorrelation functions for each path in the transformed spatial domain. Numerical results evaluate the mean square error of the channel estimation and the direction of arrival estimation capability. TSDCE significantly reduces the former, while exhibiting a remarkably low computational complexity compared with well-known benchmarking schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
228,506
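The transformed-spatial-domain view turns angle estimation into angular-frequency estimation, for which the lag-1 sample autocorrelation gives a simple noise-robust estimator. A single-path illustration of that idea only; the paper's algorithm additionally uses low-rank approximations and handles multiple paths.

    import numpy as np

    def estimate_angular_frequency(r):
        # Estimate the angular frequency of a single complex exponential from
        # the phase of the averaged lag-1 autocorrelation product; averaging
        # suppresses additive white Gaussian noise.
        return np.angle(np.mean(r[1:] * np.conj(r[:-1])))

    # Example: one path with angular frequency 0.3 rad/sample in noise.
    rng = np.random.default_rng(0)
    n = np.arange(256)
    r = np.exp(1j * 0.3 * n) + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    print(estimate_angular_frequency(r))  # ~0.3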
2106.01560
CitationIE: Leveraging the Citation Graph for Scientific Information Extraction
Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress. Prior work has considered extracting document-level entity clusters and relations end-to-end from raw scientific text, which can improve literature search and help identify methods and materials for a given problem. Despite the importance of this task, most existing works on scientific information extraction (SciIE) consider extraction solely based on the content of an individual paper, without considering the paper's place in the broader literature. In contrast to prior work, we augment our text representations by leveraging a complementary source of document context: the citation graph of referential links between citing and cited papers. On a test set of English-language scientific documents, we show that simple ways of utilizing the structure and content of the citation graph can each lead to significant gains in different scientific information extraction tasks. When these tasks are combined, we observe a sizable improvement in end-to-end information extraction over the state-of-the-art, suggesting the potential for future work along this direction. We release software tools to facilitate citation-aware SciIE development.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
238,540
1809.05454
Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown Probabilities
In this paper we provide a method to obtain tight lower bounds on the minimum redundancy achievable by a Huffman code when the probability distribution underlying an alphabet is only partially known. In particular, we address the case where the occurrence probabilities are unknown for some of the symbols in an alphabet. Bounds can be obtained for alphabets of a given size, for alphabets of up to a given size, and for alphabets of arbitrary size. The method operates on a Computer Algebra System, yielding closed-form numbers for all results. Finally, we show the potential of the proposed method to shed some light on the structure of the minimum redundancy achievable by the Huffman code.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
107,792
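For a fully known distribution, the quantity being bounded is straightforward to evaluate: build the Huffman code and subtract the entropy from the expected codeword length. A sketch of that known-probability baseline (the paper's contribution is bounding the case where some probabilities are unknown):

    import heapq, math

    def huffman_redundancy(probs):
        # Redundancy of a Huffman code: expected length minus entropy (bits).
        heap = [(p, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for sym in s1 + s2:      # each merge adds one bit to these symbols
                lengths[sym] += 1
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        avg_len = sum(p * l for p, l in zip(probs, lengths))
        entropy = -sum(p * math.log2(p) for p in probs if p > 0)
        return avg_len - entropy

    print(huffman_redundancy([0.5, 0.25, 0.125, 0.125]))  # 0.0 for a dyadic source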
0804.2057
Comparing and Combining Methods for Automatic Query Expansion
Query expansion is a well known method to improve the performance of information retrieval systems. In this work we have tested different approaches to extract the candidate query terms from the top ranked documents returned by the first-pass retrieval. One of them is the cooccurrence approach, based on measures of cooccurrence of the candidate and the query terms in the retrieved documents. The other one, the probabilistic approach, is based on the probability distribution of terms in the collection and in the top ranked set. We compare the retrieval improvement achieved by expanding the query with terms obtained with different methods belonging to both approaches. Besides, we have developed a na\"ive combination of both kinds of method, with which we have obtained results that improve those obtained with any of them separately. This result confirms that the information provided by each approach is of a different nature and, therefore, can be used in a combined manner.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
1,577
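A sketch of the co-occurrence side of this comparison: score candidate terms from the top-ranked documents by their overlap with the query terms. A Tanimoto-style measure is used here as one plausible instance; the paper compares several co-occurrence measures, and all names below are illustrative.

    from collections import Counter

    def cooccurrence_expansion(query_terms, top_docs, k=10):
        # Rank candidate expansion terms by how often they co-occur with the
        # query terms in the top-ranked documents from first-pass retrieval.
        df = Counter()  # in how many top docs each term occurs
        co = Counter()  # (candidate, query term) document co-occurrence
        q = set(query_terms)
        for doc in top_docs:  # doc: iterable of tokens
            terms = set(doc)
            for t in terms:
                df[t] += 1
            for qt in q & terms:
                for t in terms - q:
                    co[(t, qt)] += 1
        def score(t):
            # Tanimoto-style overlap, summed over query terms.
            return sum(co[(t, qt)] / (df[t] + df[qt] - co[(t, qt)])
                       for qt in q if df[qt] > 0)
        candidates = {t for t in df if t not in q}
        return sorted(candidates, key=score, reverse=True)[:k]

    docs = [["cheap", "flight", "ticket"], ["flight", "booking"], ["cheap", "hotel"]]
    print(cooccurrence_expansion(["flight"], docs, k=3))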