id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2501.16050 | Skeleton-Guided-Translation: A Benchmarking Framework for Code Repository Translation with Fine-Grained Quality Evaluation | The advancement of large language models has intensified the need to modernize enterprise applications and migrate legacy systems to secure, versatile languages. However, existing code translation benchmarks primarily focus on individual functions, overlooking the complexities involved in translating entire repositories, such as maintaining inter-module coherence and managing dependencies. While some recent repository-level translation benchmarks attempt to address these challenges, they still face limitations, including poor maintainability and overly coarse evaluation granularity, which make them less developer-friendly. We introduce Skeleton-Guided-Translation, a framework for repository-level Java to C# code translation with fine-grained quality evaluation. It uses a two-step process: first translating the repository's structural "skeletons", then translating the full repository guided by these skeletons. Building on this, we present TRANSREPO-BENCH, a benchmark of high quality open-source Java repositories and their corresponding C# skeletons, including matching unit tests and build configurations. Our unit tests are fixed and can be applied across multiple or incremental translations without manual adjustments, enhancing automation and scalability in evaluations. Additionally, we develop fine-grained evaluation metrics that assess translation quality at the individual test case level, addressing traditional binary metrics' inability to distinguish when build failures cause all tests to fail. Evaluations using TRANSREPO-BENCH highlight key challenges and advance more accurate repository level code translation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 527,800 |
2103.14066 | Beyond permutation equivariance in graph networks | In this draft paper, we introduce a novel architecture for graph networks which is equivariant to the Euclidean group in $n$-dimensions. The model is designed to work with graph networks in their general form and can be shown to include particular variants as special cases. Thanks to its equivariance properties, we expect the proposed model to be more data efficient with respect to classical graph architectures and also intrinsically equipped with a better inductive bias. We defer investigating this matter to future work. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 226,710 |
2411.08003 | Can adversarial attacks by large language models be attributed? | Attributing outputs from Large Language Models (LLMs) in adversarial settings-such as cyberattacks and disinformation-presents significant challenges that are likely to grow in importance. We investigate this attribution problem using formal language theory, specifically language identification in the limit as introduced by Gold and extended by Angluin. By modeling LLM outputs as formal languages, we analyze whether finite text samples can uniquely pinpoint the originating model. Our results show that due to the non-identifiability of certain language classes, under some mild assumptions about overlapping outputs from fine-tuned models it is theoretically impossible to attribute outputs to specific LLMs with certainty. This holds also when accounting for expressivity limitations of Transformer architectures. Even with direct model access or comprehensive monitoring, significant computational hurdles impede attribution efforts. These findings highlight an urgent need for proactive measures to mitigate risks posed by adversarial LLM use as their influence continues to expand. | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | true | 507,745 |
2206.06706 | An analysis of retracted papers in Computer Science | Context: The retraction of research papers, for whatever reason, is a growing phenomenon. However, although retracted paper information is publicly available via publishers, it is somewhat distributed and inconsistent. Objective: The aim is to assess: (i) the extent and nature of retracted research in Computer Science (CS) (ii) the post-retraction citation behaviour of retracted works and (iii) the potential impact on systematic reviews and mapping studies. Method: We analyse the Retraction Watch database and take citation information from the Web of Science and Google scholar. Results: We find that of the 33,955 entries in the Retraction watch database (16 May 2022), 2,816 are classified as CS, i.e., approximately 8.3%. For CS, 56% of retracted papers, provide little or no information as to the reasons. This contrasts with 26% for other disciplines. There is also a remarkable disparity between different publishers, a tendency for multiple versions of a retracted paper over and above the Version of Record (VoR), and for new citations long after a paper is officially retracted. Conclusions: Unfortunately retraction seems to be a sufficiently common outcome for a scientific paper that we as a research community need to take it more seriously, e.g., standardising procedures and taxonomies across publishers and the provision of appropriate research tools. Finally, we recommend particular caution when undertaking secondary analyses and meta-analyses which are at risk of becoming contaminated by these problem primary studies. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 302,468 |
2408.16899 | Network-aware Recommender System via Online Feedback Optimization | Personalized content on social platforms can exacerbate negative phenomena such as polarization, partly due to the feedback interactions between recommendations and the users. In this paper, we present a control-theoretic recommender system that explicitly accounts for this feedback loop to mitigate polarization. Our approach extends online feedback optimization - a control paradigm for steady-state optimization of dynamical systems - to develop a recommender system that trades off user engagement and polarization reduction, while relying solely on online click data. We establish theoretical guarantees for optimality and stability of the proposed design and validate its effectiveness via numerical experiments with a user population governed by Friedkin-Johnsen dynamics. Our results show these "network-aware" recommendations can significantly reduce polarization while maintaining high levels of user engagement. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 484,495 |
2312.00914 | Optimizing Information Freshness over a Channel that Wears Out | A sensor samples and transmits status updates to a destination through a wireless channel that wears out over time and with every use. At each time slot, the sensor can decide to sample and transmit a fresh status update, restore the initial quality of the channel, or remain silent. The actions impose different costs on the operation of the system, and we study the problem of optimally selecting the actions at the transmitter so as to maximize the freshness of the information at the receiver, while minimizing the communication cost. Freshness is measured by the age of information (AoI). The problem is addressed using dynamic programming, and numerical results are presented to provide insights into the optimal transmission policy. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 412,230 |
2003.11420 | Fast and resilient manipulation planning for target retrieval in clutter | This paper presents a task and motion planning (TAMP) framework for a robotic manipulator in order to retrieve a target object from clutter. We consider a configuration of objects in a confined space with a high density so no collision-free path to the target exists. The robot must relocate some objects to retrieve the target without collisions. For fast completion of object rearrangement, the robot aims to optimize the number of pick-and-place actions which often determines the efficiency of a TAMP framework. We propose a task planner incorporating motion planning to generate executable plans which aims to minimize the number of pick-and-place actions. In addition to fully known and static environments, our method can deal with uncertain and dynamic situations incurred by occluded views. Our method is shown to reduce the number of pick-and-place actions compared to baseline methods (e.g., at least 28.0% of reduction in a known static environment with 20 objects). | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 169,598 |
1911.00171 | PODNet: A Neural Network for Discovery of Plannable Options | Learning from demonstration has been widely studied in machine learning but becomes challenging when the demonstrated trajectories are unstructured and follow different objectives. This short-paper proposes PODNet, Plannable Option Discovery Network, addressing how to segment an unstructured set of demonstrated trajectories for option discovery. This enables learning from demonstration to perform multiple tasks and plan high-level trajectories based on the discovered option labels. PODNet combines a custom categorical variational autoencoder, a recurrent option inference network, option-conditioned policy network, and option dynamics model in an end-to-end learning architecture. Due to the concurrently trained option-conditioned policy network and option dynamics model, the proposed architecture has implications in multi-task and hierarchical learning, explainable and interpretable artificial intelligence, and applications where the agent is required to learn only from observations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,744 |
1606.07829 | Unsupervised Topic Modeling Approaches to Decision Summarization in Spoken Meetings | We present a token-level decision summarization framework that utilizes the latent topic structures of utterances to identify "summary-worthy" words. Concretely, a series of unsupervised topic models is explored and experimental results show that fine-grained topic models, which discover topics at the utterance-level rather than the document-level, can better identify the gist of the decision-making process. Moreover, our proposed token-level summarization approach, which is able to remove redundancies within utterances, outperforms existing utterance ranking based summarization methods. Finally, context information is also investigated to add additional relevant information to the summary. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 57,788 |
2401.03500 | Quadrotor Stabilization with Safety Guarantees: A Universal Formula Approach | Safe stabilization is a significant challenge for quadrotors, which involves reaching a goal position while avoiding obstacles. Most of the existing solutions for this problem rely on optimization-based methods, demanding substantial onboard computational resources. This paper introduces a novel approach to address this issue and provides a solution that offers fast computational capabilities tailored for onboard execution. Drawing inspiration from Sontag's universal formula, we propose an analytical control strategy that incorporates the conditions of control Lyapunov functions (CLFs) and control barrier functions (CBFs), effectively avoiding the need for solving optimization problems onboard. Moreover, we extend our approach by incorporating the concepts of input-to-state stability (ISS) and input-to-state safety (ISSf), enhancing the universal formula's capacity to effectively manage disturbances. Furthermore, we present a projection-based approach to ensure that the universal formula remains effective even when faced with control input constraints. The basic idea of this approach is to project the control input derived from the universal formula onto the closest point within the control input domain. Through comprehensive simulations and experimental results, we validate the efficacy and highlight the advantages of our methodology. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 420,129 |
1805.04690 | New Embedded Representations and Evaluation Protocols for Inferring Transitive Relations | Beyond word embeddings, continuous representations of knowledge graph (KG) components, such as entities, types and relations, are widely used for entity mention disambiguation, relation inference and deep question answering. Great strides have been made in modeling general, asymmetric or antisymmetric KG relations using Gaussian, holographic, and complex embeddings. None of these directly enforce transitivity inherent in the is-instance-of and is-subtype-of relations. A recent proposal, called order embedding (OE), demands that the vector representing a subtype elementwise dominates the vector representing a supertype. However, the manner in which such constraints are asserted and evaluated have some limitations. In this short research note, we make three contributions specific to representing and inferring transitive relations. First, we propose and justify a significant improvement to the OE loss objective. Second, we propose a new representation of types as hyper-rectangular regions, that generalize and improve on OE. Third, we show that some current protocols to evaluate transitive relation inference can be misleading, and offer a sound alternative. Rather than use black-box deep learning modules off-the-shelf, we develop our training networks using elementary geometric considerations. | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 97,289 |
2501.15165 | A* Based Algorithm for Reduced Complexity ML Decoding of Tailbiting Codes | The A* algorithm is a graph search algorithm which has shown good results in terms of computational complexity for Maximum Likelihood (ML) decoding of tailbiting convolutional codes. The decoding of tailbiting codes with this algorithm is performed in two phases. In the first phase, a typical Viterbi decoding is employed to collect information regarding the trellis. The A* algorithm is then applied in the second phase, using the information obtained in the first one to calculate the heuristic function. The improvements proposed in this work decrease the computational complexity of the A* algorithm using further information from the first phase of the algorithm. This information is used for obtaining a more accurate heuristic function and finding early terminating conditions for the A* algorithm. Simulation results show that the proposed modifications decrease the complexity of ML decoding with the A* algorithm in terms of the performed number of operations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 527,433 |
1805.06197 | A Structural Representation Learning for Multi-relational Networks | Most of the existing multi-relational network embedding methods, e.g., TransE, are formulated to preserve pair-wise connectivity structures in the networks. With the observations that significant triangular connectivity structures and parallelogram connectivity structures found in many real multi-relational networks are often ignored and that a hard-constraint commonly adopted by most of the network embedding methods is inaccurate by design, we propose a novel representation learning model for multi-relational networks which can alleviate both fundamental limitations. Scalable learning algorithms are derived using the stochastic gradient descent algorithm and negative sampling. Extensive experiments on real multi-relational network datasets of WordNet and Freebase demonstrate the efficacy of the proposed model when compared with the state-of-the-art embedding methods. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 97,555 |
2202.03874 | Combining Intra-Risk and Contagion Risk for Enterprise Bankruptcy Prediction Using Graph Neural Networks | Predicting the bankruptcy risk of small and medium-sized enterprises (SMEs) is an important step for financial institutions when making decisions about loans. Existing studies in both finance and AI research fields, however, tend to only consider either the intra-risk or contagion risk of enterprises, ignoring their interactions and combinatorial effects. This study for the first time considers both types of risk and their joint effects in bankruptcy prediction. Specifically, we first propose an enterprise intra-risk encoder based on statistically significant enterprise risk indicators for its intra-risk learning. Then, we propose an enterprise contagion risk encoder based on enterprise relation information from an enterprise knowledge graph for its contagion risk embedding. In particular, the contagion risk encoder includes both the newly proposed Hyper-Graph Neural Networks and Heterogeneous Graph Neural Networks, which can model contagion risk in two different aspects, i.e. common risk factors based on hyperedges and direct diffusion risk from neighbors, respectively. To evaluate the model, we collect real-world multi-sources data on SMEs and build a novel benchmark dataset called SMEsD. We provide open access to the dataset, which is expected to further promote research on financial risk analysis. Experiments on SMEsD against twelve state-of-the-art baselines demonstrate the effectiveness of the proposed model for bankruptcy prediction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,373 |
2012.02757 | Playing Text-Based Games with Common Sense | Text-based games are simulations in which an agent interacts with the world purely through natural language. They typically consist of a number of puzzles interspersed with interactions with common everyday objects and locations. Deep reinforcement learning agents can learn to solve these puzzles. However, the everyday interactions with the environment, while trivial for human players, present as additional puzzles to agents. We explore two techniques for incorporating commonsense knowledge into agents: inferring possibly hidden aspects of the world state with either a commonsense inference model (COMET) or a language model (BERT), and biasing an agent's exploration according to common patterns recognized by a language model. We test our technique in the 9to05 game, which is an extreme version of a text-based game that requires numerous interactions with common, everyday objects in common, everyday scenarios. We conclude that agents that augment their beliefs about the world state with commonsense inferences are more robust to observational errors and omissions of common elements from text descriptions. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 209,870 |
cs/0509071 | CP-nets and Nash equilibria | We relate here two formalisms that are used for different purposes in reasoning about multi-agent systems. One of them is strategic games, which are used to capture the idea that agents interact with each other while pursuing their own interest. The other is CP-nets, which were introduced to express qualitative and conditional preferences of the users and which aim at facilitating the process of preference elicitation. To relate these two formalisms we introduce a natural, qualitative extension of the notion of a strategic game. We show then that the optimal outcomes of a CP-net are exactly the Nash equilibria of an appropriately defined strategic game in the above sense. This allows us to use the techniques of game theory to search for optimal outcomes of CP-nets and, vice versa, to use techniques developed for CP-nets to search for Nash equilibria of the considered games. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 538,972 |
2407.10003 | A Dynamic Algorithm for Weighted Submodular Cover Problem | We initiate the study of the submodular cover problem in dynamic setting where the elements of the ground set are inserted and deleted. In the classical submodular cover problem, we are given a monotone submodular function $f : 2^{V} \to \mathbb{R}^{\ge 0}$ and the goal is to obtain a set $S \subseteq V$ that minimizes the cost subject to the constraint $f(S) = f(V)$. This is a classical problem in computer science and generalizes the Set Cover problem, 2-Set Cover, and dominating set problem among others. We consider this problem in a dynamic setting where there are updates to our set $V$, in the form of insertions and deletions of elements from a ground set $\mathcal{V}$, and the goal is to maintain an approximately optimal solution with low query complexity per update. For this problem, we propose a randomized algorithm that, in expectation, obtains a $(1-O(\epsilon), O(\epsilon^{-1}))$-bicriteria approximation using polylogarithmic query complexity per update. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 472,803 |
2407.13437 | FREST: Feature RESToration for Semantic Segmentation under Multiple Adverse Conditions | Robust semantic segmentation under adverse conditions is crucial in real-world applications. To address this challenging task in practical scenarios where labeled normal condition images are not accessible in training, we propose FREST, a novel feature restoration framework for source-free domain adaptation (SFDA) of semantic segmentation to adverse conditions. FREST alternates two steps: (1) learning the condition embedding space that only separates the condition information from the features and (2) restoring features of adverse condition images on the learned condition embedding space. By alternating these two steps, FREST gradually restores features where the effect of adverse conditions is reduced. FREST achieved a state of the art on two public benchmarks (i.e., ACDC and RobotCar) for SFDA to adverse conditions. Moreover, it shows superior generalization ability on unseen datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,381 |
2403.13627 | Efficient exploration of high-Tc superconductors by a gradient-based composition design | We propose a material design method via gradient-based optimization on compositions, overcoming the limitations of traditional methods: exhaustive database searches and conditional generation models. It optimizes inputs via backpropagation, aligning the model's output closely with the target property and facilitating the discovery of unlisted materials and precise property determination. Our method is also capable of adaptive optimization under new conditions without retraining. Applying to exploring high-Tc superconductors, we identified potential compositions beyond existing databases and discovered new hydrogen superconductors via conditional optimization. This method is versatile and significantly advances material design by enabling efficient, extensive searches and adaptability to new constraints. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 439,705 |
2406.16416 | Multilingual Knowledge Editing with Language-Agnostic Factual Neurons | Multilingual knowledge editing (MKE) aims to simultaneously update factual knowledge across multiple languages within large language models (LLMs). Previous research indicates that the same knowledge across different languages within LLMs exhibits a degree of shareability. However, most existing MKE methods overlook the connections of the same knowledge between different languages, resulting in knowledge conflicts and limited edit performance. To address this issue, we first investigate how LLMs process multilingual factual knowledge and discover that the same factual knowledge in different languages generally activates a shared set of neurons, which we call language-agnostic factual neurons (LAFNs). These neurons represent the same factual knowledge shared across languages and imply the semantic connections among multilingual knowledge. Inspired by this finding, we propose a new MKE method by Locating and Updating Language-Agnostic Factual Neurons (LU-LAFNs) to edit multilingual knowledge simultaneously, which avoids knowledge conflicts and thus improves edit performance. Experimental results on Bi-ZsRE and MzsRE benchmarks demonstrate that our method achieves the best edit performance, indicating the effectiveness and importance of modeling the semantic connections among multilingual knowledge. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 467,121 |
1803.04842 | A Learning-Based Visual Saliency Prediction Model for Stereoscopic 3D Video (LBVS-3D) | Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to want to design visual attention models for 3D content. Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, as well as high-level cues such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, size and compactness of the salient regions, and emphasizing only a few salient objects in a scene. A new fovea-based model of spatial distance between the image regions is adopted for considering local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method to one single saliency map that is highly correlated with the eye-fixation data, a random forest based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment, which involved 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database as well as the eye-tracking data are publicly available along with this paper. Experiment results show that the proposed saliency prediction method achieves competitive performance compared to the state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 92,529 |
2404.01049 | A Novel Sector-Based Algorithm for an Optimized Star-Galaxy Classification | This paper introduces a novel sector-based methodology for star-galaxy classification, leveraging the latest Sloan Digital Sky Survey data (SDSS-DR18). By strategically segmenting the sky into sectors aligned with SDSS observational patterns and employing a dedicated convolutional neural network (CNN), we achieve state-of-the-art performance for star galaxy classification. Our preliminary results demonstrate a promising pathway for efficient and precise astronomical analysis, especially in real-time observational settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 443,226 |
2312.09434 | Task Tree Retrieval For Robotic Cooking | This paper is based on developing different algorithms, which generate the task tree planning for the given goal node (recipe). The knowledge representation of the dishes is called FOON. It contains the different objects and the relationships between them with respect to the motion nodes. The graphical representation of FOON is made by noticing the change in the state of an object with respect to the human manipulators. We will explore how the FOON is created for different recipes by the robots. Task planning contains difficulties in exploring unknown problems, as its knowledge is limited to the FOON. To get the task tree planning for a given recipe, the robot will retrieve the information of different functional units from the knowledge retrieval process called FOON. Thus the generated subgraphs will allow the robot to cook the required dish, and the robot can cook the given recipe by following the sequence of instructions. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 415,718 |
2306.07850 | Exact Mean Square Linear Stability Analysis for SGD | The dynamical stability of optimization methods at the vicinity of minima of the loss has recently attracted significant attention. For gradient descent (GD), stable convergence is possible only to minima that are sufficiently flat w.r.t. the step size, and those have been linked with favorable properties of the trained model. However, while the stability threshold of GD is well-known, to date, no explicit expression has been derived for the exact threshold of stochastic GD (SGD). In this paper, we derive such a closed-form expression. Specifically, we provide an explicit condition on the step size that is both necessary and sufficient for the linear stability of SGD in the mean square sense. Our analysis sheds light on the precise role of the batch size $B$. In particular, we show that the stability threshold is monotonically non-decreasing in the batch size, which means that reducing the batch size can only decrease stability. Furthermore, we show that SGD's stability threshold is equivalent to that of a mixture process which takes in each iteration a full batch gradient step w.p. $1-p$, and a single sample gradient step w.p. $p$, where $p \approx 1/B $. This indicates that even with moderate batch sizes, SGD's stability threshold is very close to that of GD's. We also prove simple necessary conditions for linear stability, which depend on the batch size, and are easier to compute than the precise threshold. Finally, we derive the asymptotic covariance of the dynamics around the minimum, and discuss its dependence on the learning rate. We validate our theoretical findings through experiments on the MNIST dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 373,166 |
2211.02592 | A Large-Scale Study of a Sleep Tracking and Improving Device with Closed-loop and Personalized Real-time Acoustic Stimulation | Various intervention therapies ranging from pharmaceutical to hi-tech tailored solutions have been available to treat difficulty in falling asleep commonly caused by insomnia in modern life. However, current techniques largely remain ill-suited, ineffective, and unreliable due to their lack of precise real-time sleep tracking, in-time feedback on the therapies, an ability to keep people asleep during the night, and a large-scale effectiveness evaluation. Here, we introduce a novel sleep aid system, called Earable, that can continuously sense multiple head-based physiological signals and simultaneously enable closed-loop auditory stimulation to entrain brain activities in time for effective sleep promotion. We develop the system in a lightweight, comfortable, and user-friendly headband with a comprehensive set of algorithms and dedicated own-designed audio stimuli. We conducted multiple protocols from 883 sleep studies on 377 subjects (241 women, 119 men) wearing either a gold-standard device (PSG), Earable, or both concurrently. We demonstrate that our system achieves (1) a strong correlation (0.89 +/- 0.03) between the physiological signals acquired by Earable and those from the gold-standard PSG, (2) an 87.8 +/- 5.3% agreement on sleep scoring using our automatic real-time sleep staging algorithm with the consensus scored by three sleep technicians, and (3) a successful non-pharmacological stimulation alternative to effectively shorten the duration of sleep falling by 24.1 +/- 0.1 minutes. These results show that the efficacy of Earable exceeds existing techniques in intentions to promote fast falling asleep, track sleep state accurately, and achieve high social acceptance for real-time closed-loop personalized neuromodulation-based home sleep care. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 328,624 |
2102.03739 | Infinite-channel deep stable convolutional neural networks | The interplay between infinite-width neural networks (NNs) and classes of Gaussian processes (GPs) is well known since the seminal work of Neal (1996). While numerous theoretical refinements have been proposed in the recent years, the interplay between NNs and GPs relies on two critical distributional assumptions on the NN's parameters: A1) finite variance; A2) independent and identical distribution (iid). In this paper, we consider the problem of removing A1 in the general context of deep feed-forward convolutional NNs. In particular, we assume iid parameters distributed according to a stable distribution and we show that the infinite-channel limit of a deep feed-forward convolutional NNs, under suitable scaling, is a stochastic process with multivariate stable finite-dimensional distributions. Such a limiting distribution is then characterized through an explicit backward recursion for its parameters over the layers. Our contribution extends results of Favaro et al. (2020) to convolutional architectures, and it paves the way to expand exciting recent lines of research that rely on classes of GP limits. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 218,850 |
1511.03703 | Embedded Ensemble Propagation for Improving Performance, Portability and Scalability of Uncertainty Quantification on Emerging Computational Architectures | Quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan). | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 48,786 |
2402.13250 | Video ReCap: Recursive Captioning of Hour-Long Videos | Most video captioning models are designed to process short video clips of few seconds and output text describing low-level visual concepts (e.g., objects, scenes, atomic actions). However, most real-world videos last for minutes or hours and have a complex hierarchical structure spanning different temporal granularities. We propose Video ReCap, a recursive video captioning model that can process video inputs of dramatically different lengths (from 1 second to 2 hours) and output video captions at multiple hierarchy levels. The recursive video-language architecture exploits the synergy between different video hierarchies and can process hour-long videos efficiently. We utilize a curriculum learning training scheme to learn the hierarchical structure of videos, starting from clip-level captions describing atomic actions, then focusing on segment-level descriptions, and concluding with generating summaries for hour-long videos. Furthermore, we introduce Ego4D-HCap dataset by augmenting Ego4D with 8,267 manually collected long-range video summaries. Our recursive model can flexibly generate captions at different hierarchy levels while also being useful for other complex video understanding tasks, such as VideoQA on EgoSchema. Data, code, and models are available at: https://sites.google.com/view/vidrecap | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 431,172 |
2105.11088 | Towards Book Cover Design via Layout Graphs | Book covers are intentionally designed and provide an introduction to a book. However, they typically require professional skills to design and produce the cover images. Thus, we propose a generative neural network that can produce book covers based on an easy-to-use layout graph. The layout graph contains objects such as text, natural scene objects, and solid color spaces. This layout graph is embedded using a graph convolutional neural network and then used with a mask proposal generator and a bounding-box generator and filled using an object proposal generator. Next, the objects are compiled into a single image and the entire network is trained using a combination of adversarial training, perceptual training, and reconstruction. Finally, a Style Retention Network (SRNet) is used to transfer the learned font style onto the desired text. Using the proposed method allows for easily controlled and unique book covers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 236,591 |
2201.05026 | Fantastic Data and How to Query Them | It is commonly acknowledged that the availability of the huge amount of (training) data is one of the most important factors for many recent advances in Artificial Intelligence (AI). However, datasets are often designed for specific tasks in narrow AI sub areas and there is no unified way to manage and access them. This not only creates unnecessary overheads when training or deploying Machine Learning models but also limits the understanding of the data, which is very important for data-centric AI. In this paper, we present our vision about a unified framework for different datasets so that they can be integrated and queried easily, e.g., using standard query languages. We demonstrate this in our ongoing work to create a framework for datasets in Computer Vision and show its advantages in different scenarios. Our demonstration is available at https://vision.semkg.org. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | true | false | 275,261 |
2407.19660 | A Causally Informed Pretraining Approach for Multimodal Foundation Models: Applications in Remote Sensing | Self-supervised learning has emerged as a powerful paradigm for pretraining foundation models using large-scale data. Existing pretraining approaches predominantly rely on masked reconstruction or next-token prediction strategies, demonstrating strong performance across various downstream tasks, including geoscience applications. However, these approaches do not fully capture the causal interplay between different geospatial and environmental variables. To address this limitation, we propose Causally Informed Variable-Step Forecasting (CI-VSF), a novel pretraining task that models forecasting as a conditional generation task, where driver variables (e.g., weather) inform the prediction of response variables (e.g., satellite imagery). We demonstrate that pretraining in such a fashion leads to enhanced performance when finetuned on both prediction (e.g., crop mapping, missing image prediction, soil moisture estimation) and forecasting (e.g., future image forecasting, soil moisture forecasting) downstream tasks when compared to other pretraining approaches. While we use remote sensing as our main application to demonstrate the efficacy of our proposed pretraining strategy over existing paradigms, it is applicable to any domain that involves known causal relationships amongst a set of variables. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 476,870 |
1204.2035 | Wireless Information Transfer with Opportunistic Energy Harvesting | Energy harvesting is a promising solution to prolong the operation of energy-constrained wireless networks. In particular, scavenging energy from ambient radio signals, namely wireless energy harvesting (WEH), has recently drawn significant attention. In this paper, we consider a point-to-point wireless link over the narrowband flat-fading channel subject to time-varying co-channel interference. It is assumed that the receiver has no fixed power supplies and thus needs to replenish energy opportunistically via WEH from the unintended interference and/or the intended signal sent by the transmitter. We further assume a single-antenna receiver that can only decode information or harvest energy at any time due to the practical circuit limitation. Therefore, it is important to investigate when the receiver should switch between the two modes of information decoding (ID) and energy harvesting (EH), based on the instantaneous channel and interference condition. In this paper, we derive the optimal mode switching rule at the receiver to achieve various trade-offs between wireless information transfer and energy harvesting. Specifically, we determine the minimum transmission outage probability for delay-limited information transfer and the maximum ergodic capacity for no-delay-limited information transfer versus the maximum average energy harvested at the receiver, which are characterized by the boundary of so-called "outage-energy" region and "rate-energy" region, respectively. Moreover, for the case when the channel state information (CSI) is known at the transmitter, we investigate the joint optimization of transmit power control, information and energy transfer scheduling, and the receiver's mode switching. Our results provide useful guidelines for the efficient design of emerging wireless communication systems powered by opportunistic WEH. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,376 |
1201.1192 | Formalization of semantic network of image constructions in electronic content | A formal theory based on a binary operator of directional associative relation is constructed in the article and an understanding of an associative normal form of image constructions is introduced. A model of a commutative semigroup, which provides a presentation of a sentence as three components of an interrogative linguistic image construction, is considered. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 13,698 |
2006.04451 | Novel Adaptive Binary Search Strategy-First Hybrid Pyramid- and Clustering-Based CNN Filter Pruning Method without Parameters Setting | Pruning redundant filters in CNN models has received growing attention. In this paper, we propose an adaptive binary search-first hybrid pyramid- and clustering-based (ABSHPC-based) method for pruning filters automatically. In our method, for each convolutional layer, initially a hybrid pyramid data structure is constructed to store the hierarchical information of each filter. Given a tolerant accuracy loss, without parameters setting, we begin from the last convolutional layer to the first layer; for each considered layer with less or equal pruning rate relative to its previous layer, our ABSHPC-based process is applied to optimally partition all filters to clusters, where each cluster is thus represented by the filter with the median root mean of the hybrid pyramid, leading to maximal removal of redundant filters. Based on the practical dataset and the CNN models, with higher accuracy, the thorough experimental results demonstrated the significant parameters and floating-point operations reduction merits of the proposed filter pruning method relative to the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 180,697 |
2405.02538 | AdaFPP: Adapt-Focused Bi-Propagating Prototype Learning for Panoramic Activity Recognition | Panoramic Activity Recognition (PAR) aims to identify multi-granularity behaviors performed by multiple persons in panoramic scenes, including individual activities, group activities, and global activities. Previous methods 1) heavily rely on manually annotated detection boxes in training and inference, hindering further practical deployment; or 2) directly employ normal detectors to detect multiple persons with varying size and spatial occlusion in panoramic scenes, blocking the performance gain of PAR. To this end, we consider learning a detector adapting varying-size occluded persons, which is optimized along with the recognition module in the all-in-one framework. Therefore, we propose a novel Adapt-Focused bi-Propagating Prototype learning (AdaFPP) framework to jointly recognize individual, group, and global activities in panoramic activity scenes by learning an adapt-focused detector and multi-granularity prototypes as the pretext tasks in an end-to-end way. Specifically, to accommodate the varying sizes and spatial occlusion of multiple persons in crowed panoramic scenes, we introduce a panoramic adapt-focuser, achieving the size-adapting detection of individuals by comprehensively selecting and performing fine-grained detections on object-dense sub-regions identified through original detections. In addition, to mitigate information loss due to inaccurate individual localizations, we introduce a bi-propagation prototyper that promotes closed-loop interaction and informative consistency across different granularities by facilitating bidirectional information propagation among the individual, group, and global levels. Extensive experiments demonstrate the significant performance of AdaFPP and emphasize its powerful applicability for PAR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 451,800 |
2102.07158 | Distributed Second Order Methods with Fast Rates and Compressed Communication | We develop several new communication-efficient second-order methods for distributed optimization. Our first method, NEWTON-STAR, is a variant of Newton's method from which it inherits its fast local quadratic rate. However, unlike Newton's method, NEWTON-STAR enjoys the same per iteration communication cost as gradient descent. While this method is impractical as it relies on the use of certain unknown parameters characterizing the Hessian of the objective function at the optimum, it serves as the starting point which enables us design practical variants thereof with strong theoretical guarantees. In particular, we design a stochastic sparsification strategy for learning the unknown parameters in an iterative fashion in a communication efficient manner. Applying this strategy to NEWTON-STAR leads to our next method, NEWTON-LEARN, for which we prove local linear and superlinear rates independent of the condition number. When applicable, this method can have dramatically superior convergence behavior when compared to state-of-the-art methods. Finally, we develop a globalization strategy using cubic regularization which leads to our next method, CUBIC-NEWTON-LEARN, for which we prove global sublinear and linear convergence rates, and a fast superlinear rate. Our results are supported with experimental results on real datasets, and show several orders of magnitude improvement on baseline and state-of-the-art methods in terms of communication complexity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 220,013 |
2403.13632 | Extremality of stabilizer states | We investigate the extremality of stabilizer states to reveal their exceptional role in the space of all $n$-qubit/qudit states. We establish uncertainty principles for the characteristic function and the Wigner function of states, respectively. We find that only stabilizer states achieve saturation in these principles. Furthermore, we prove a general theorem that stabilizer states are extremal for convex information measures invariant under local unitaries. We explore this extremality in the context of various quantum information and correlation measures, including entanglement entropy, conditional entropy and other entanglement measures. Additionally, leveraging the recent discovery that stabilizer states are the limit states under quantum convolution, we establish the monotonicity of the entanglement entropy and conditional entropy under quantum convolution. These results highlight the remarkable information-theoretic properties of stabilizer states. Their extremality provides valuable insights into their ability to capture information content and correlations, paving the way for further exploration of their potential in quantum information processing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 439,707 |
2406.04129 | LenslessFace: An End-to-End Optimized Lensless System for Privacy-Preserving Face Verification | Lensless cameras, innovatively replacing traditional lenses for ultra-thin, flat optics, encode light directly onto sensors, producing images that are not immediately recognizable. This compact, lightweight, and cost-effective imaging solution offers inherent privacy advantages, making it attractive for privacy-sensitive applications like face verification. Typical lensless face verification adopts a two-stage process of reconstruction followed by verification, incurring privacy risks from reconstructed faces and high computational costs. This paper presents an end-to-end optimization approach for privacy-preserving face verification directly on encoded lensless captures, ensuring that the entire software pipeline remains encoded with no visible faces as intermediate results. To achieve this, we propose several techniques to address unique challenges from the lensless setup which precludes traditional face detection and alignment. Specifically, we propose a face center alignment scheme, an augmentation curriculum to build robustness against variations, and a knowledge distillation method to smooth optimization and enhance performance. Evaluations under both simulation and real environment demonstrate our method outperforms two-stage lensless verification while enhancing privacy and efficiency. Project website: \url{lenslessface.github.io}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 461,531 |
2306.03761 | Generalised Impedance Model of Wireless Links Assisted by Reconfigurable Intelligent Surfaces | We devise an end-to-end communication channel model that describes the performance of RIS-assisted MIMO wireless links. The model borrows the impedance (interaction) matrix formalism from the Method of Moments and provides a physics-based communication model. In configurations where the transmit and receive antenna arrays are distant from the RIS beyond a wavelength, a reduced model provides accurate results for arbitrary RIS unit cell geometry. Importantly, the simplified model configures as a cascaded channel transfer matrix whose mathematical structure is compliant with widely accepted, but less accurate, system level RIS models. A numerical validation of the communication model is presented for the design of binary RIS structures with scatterers of canonical geometry. Attained results are consistent with path-loss models: For obstructed line-of-sight between transmitter and receiver, the channel capacity of the (optimised) RIS-assisted link scales as $R^{-2}$, with $R$ RIS-receiver distance at fixed transmitter position. Our results shows that the applicability of communication models based on mutual impedance matrices is not restricted to canonical minimum scattering RIS unit cells. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 371,467 |
2402.01796 | Speech foundation models in healthcare: Effect of layer selection on pathological speech feature prediction | Accurately extracting clinical information from speech is critical to the diagnosis and treatment of many neurological conditions. As such, there is interest in leveraging AI for automatic, objective assessments of clinical speech to facilitate diagnosis and treatment of speech disorders. We explore transfer learning using foundation models, focusing on the impact of layer selection for the downstream task of predicting pathological speech features. We find that selecting an optimal layer can greatly improve performance (~15.8% increase in balanced accuracy per feature as compared to worst layer, ~13.6% increase as compared to final layer), though the best layer varies by predicted feature and does not always generalize well to unseen data. A learned weighted sum offers comparable performance to the average best layer in-distribution (only ~1.2% lower) and had strong generalization for out-of-distribution data (only 1.5% lower than the average best layer). | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 426,228 |
2206.11970 | Learning quantum symmetries with interactive quantum-classical variational algorithms | A symmetry of a state $\vert \psi \rangle$ is a unitary operator of which $\vert \psi \rangle$ is an eigenvector. When $\vert \psi \rangle$ is an unknown state supplied by a black-box oracle, the state's symmetries provide key physical insight into the quantum system; symmetries also boost many crucial quantum learning techniques. In this paper, we develop a variational hybrid quantum-classical learning scheme to systematically probe for symmetries of $\vert \psi \rangle$ with no a priori assumptions about the state. This procedure can be used to learn various symmetries at the same time. In order to avoid re-learning already known symmetries, we introduce an interactive protocol with a classical deep neural net. The classical net thereby regularizes against repetitive findings and allows our algorithm to terminate empirically with all possible symmetries found. Our scheme can be implemented efficiently on average with non-local SWAP gates; we also give a less efficient algorithm with only local operations, which may be more appropriate for current noisy quantum devices. We simulate our algorithm on representative families of states, including cluster states and ground states of Rydberg and Ising Hamiltonians. We also find that the numerical query complexity scales well with qubit size. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 304,431 |
1904.01987 | Hybrid Cosine Based Convolutional Neural Networks | Convolutional neural networks (CNNs) have demonstrated their capability to solve different kind of problems in a very huge number of applications. However, CNNs are limited for their computational and storage requirements. These limitations make difficult to implement these kind of neural networks on embedded devices such as mobile phones, smart cameras or advanced driving assistance systems. In this paper, we present a novel layer named Hybrid Cosine Based Convolution that replaces standard convolutional layers using cosine basis to generate filter weights. The proposed layers provide several advantages: faster convergence in training, the receptive field can be increased at no cost and substantially reduce the number of parameters. We evaluate our proposed layers on three competitive classification tasks where our proposed layers can achieve similar (and in some cases better) performances than VGG and ResNet architectures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 126,298 |
1801.01552 | Asymptotic bounds for spherical codes | The set of all error-correcting codes C over a fixed finite alphabet F of cardinality q determines the set of code points in the unit square with coordinates (R(C), delta (C)):= (relative transmission rate, relative minimal distance). The central problem of the theory of such codes consists in maximizing simultaneously the transmission rate of the code and the relative minimum Hamming distance between two different code words. The classical approach to this problem explored in vast literature consists in the inventing explicit constructions of "good codes" and comparing new classes of codes with earlier ones. Less classical approach studies the geometry of the whole set of code points (R,delta) (with q fixed), at first independently of its computability properties, and only afterwords turning to the problems of computability, analogies with statistical physics etc. The main purpose of this article consists in extending this latter strategy to domain of spherical codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 87,745 |
2403.18870 | SugarcaneNet: An Optimized Ensemble of LASSO-Regularized Pre-trained Models for Accurate Disease Classification | Sugarcane, a key crop for the world's sugar industry, is prone to several diseases that have a substantial negative influence on both its yield and quality. To effectively manage and implement preventative initiatives, diseases must be detected promptly and accurately. In this study, we present a unique model called sugarcaneNet2024 that outperforms previous methods for automatically and quickly detecting sugarcane disease through leaf image processing. Our proposed model consolidates an optimized weighted average ensemble of seven customized and LASSO-regularized pre-trained models, particularly InceptionV3, InceptionResNetV2, DenseNet201, DenseNet169, Xception, and ResNet152V2. Initially, we added three more dense layers with 0.0001 LASSO regularization, three 30% dropout layers, and three batch normalizations with renorm enabled at the bottom of these pre-trained models to improve the performance. The accuracy of sugarcane leaf disease classification was greatly increased by this addition. Following this, several comparative studies between the average ensemble and individual models were carried out, indicating that the ensemble technique performed better. The average ensemble of all modified pre-trained models produced outstanding outcomes: 100%, 99%, 99%, and 99.45% for f1 score, precision, recall, and accuracy, respectively. Performance was further enhanced by the implementation of an optimized weighted average ensemble technique incorporated with grid search. This optimized sugarcaneNet2024 model performed the best for detecting sugarcane diseases, having achieved accuracy, precision, recall, and F1 score of 99.67%, 100%, 100%, and 100%, respectively. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 442,113 |
2006.08131 | An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks | With the widespread use of deep neural networks (DNNs) in high-stake applications, the security problem of the DNN models has received extensive attention. In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers. We propose a training-free attack approach which is different from previous work, in which trojaned behaviors are injected by retraining model on a poisoned dataset. Specifically, we do not change parameters in the original model but insert a tiny trojan module (TrojanNet) into the target model. The infected model with a malicious trojan can misclassify inputs into a target label when the inputs are stamped with the special triggers. The proposed TrojanNet has several nice properties including (1) it activates by tiny trigger patterns and keeps silent for other signals, (2) it is model-agnostic and could be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training efforts comparing to conventional trojan attack methods. The experimental results show that TrojanNet can inject the trojan into all labels simultaneously (all-label trojan attack) and achieves 100% attack success rate without affecting model accuracy on original tasks. Experimental analysis further demonstrates that state-of-the-art trojan detection algorithms fail to detect TrojanNet attack. The code is available at https://github.com/trx14/TrojanNet. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 182,075 |
2206.05618 | Synthetic PET via Domain Translation of 3D MRI | Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development, without the need for acquiring hundreds of patient exams, in this paper we demonstrate a deep learning technique to generate synthetic but realistic whole-body PET sinograms from abundantly-available whole-body MRI. Specifically, we use a dataset of 56 $^{18}$F-FDG-PET/MRI exams to train a 3D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training we implemented a balanced loss function to generate realistic uptake across a large dynamic range and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET time-of-flight (ToF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiologic $^{18}$F-FDG uptake, e.g. high uptake localized to the brain and bladder, as well as uptake in liver, kidneys, heart and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this synthetic PET data can be used interchangeably with real PET data for the PET quantification task of comparing CT and MR-based attenuation correction methods, achieving $\leq 7.6\%$ error in mean-SUV compared to using real data. These results together show that the proposed synthetic PET data pipeline can be reasonably used for development, evaluation, and validation of PET/MRI reconstruction methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 302,069 |
2404.02912 | Probabilistic Generating Circuits -- Demystified | Zhang et al. (ICML 2021, PMLR 139, pp. 12447-12457) introduced probabilistic generating circuits (PGCs) as a probabilistic model to unify probabilistic circuits (PCs) and determinantal point processes (DPPs). At a first glance, PGCs store a distribution in a very different way: they compute the probability generating polynomial instead of the probability mass function, and it seems that this is the main reason why PGCs are more powerful than PCs or DPPs. However, PGCs also allow for negative weights, whereas classical PCs assume that all weights are nonnegative. One of the main insights of our paper is that the negative weights are responsible for the power of PGCs, not the different representation. PGCs are PCs in disguise; in particular, we show how to transform any PGC into a PC with negative weights with only polynomial blowup. PGCs were defined by Zhang et al. only for binary random variables. As our second main result, we show that there is a good reason for this: we prove that PGCs for categorical variables with larger image size do not support tractable marginalization unless NP = P. On the other hand, we show that we can model categorical variables with larger image size as PCs with negative weights computing set-multilinear polynomials. These allow for tractable marginalization. In this sense, PCs with negative weights strictly subsume PGCs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 444,049 |
2006.07064 | Indexing Data on the Web: A Comparison of Schema-level Indices for Data
Search -- Extended Technical Report | Indexing the Web of Data offers many opportunities, in particular, to find and explore data sources. One major design decision when indexing the Web of Data is to find a suitable index model, i.e., how to index and summarize data. Various efforts have been conducted to develop specific index models for a given task. With each index model designed, implemented, and evaluated independently, it remains difficult to judge whether an approach generalizes well to another task, set of queries, or dataset. In this work, we empirically evaluate six representative index models with unique feature combinations. Among them is a new index model incorporating inferencing over RDFS and owl:sameAs. We implement all index models for the first time into a single, stream-based framework. We evaluate variations of the index models considering sub-graphs of size 0, 1, and 2 hops on two large, real-world datasets. We evaluate the quality of the indices regarding the compression ratio, summarization ratio, and F1-score denoting the approximation quality of the stream-based index computation. The experiments reveal huge variations in compression ratio, summarization ratio, and approximation quality for different index models, queries, and datasets. However, we observe meaningful correlations in the results that help to determine the right index model for a given task, type of query, and dataset. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 181,676 |
1809.03216 | Multimodal feedback for active robot-object interaction | In this work, we present a multimodal system for active robot-object interaction using laser-based SLAM, RGBD images, and contact sensors. In the object manipulation task, the robot adjusts its initial pose with respect to obstacles and target objects through RGBD data so it can perform object grasping in different configuration spaces while avoiding collisions, and updates the information related to the last steps of the manipulation process using the contact sensors in its hand. We perform a series of experiments to evaluate the performance of the proposed system following the RoboCup2018 international competition regulations. We compare our approach with a number of baselines, namely a no-feedback method and visual-only and tactile-only feedback methods, where our proposed visual-and-tactile feedback method performs best. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 107,271 |
2405.04760 | Large Language Models for Cyber Security: A Systematic Literature Review | The rapid advancement of Large Language Models (LLMs) has opened up new opportunities for leveraging artificial intelligence in various domains, including cybersecurity. As the volume and sophistication of cyber threats continue to grow, there is an increasing need for intelligent systems that can automatically detect vulnerabilities, analyze malware, and respond to attacks. In this survey, we conduct a comprehensive review of the literature on the application of LLMs in cybersecurity (LLM4Security). By comprehensively collecting over 30K relevant papers and systematically analyzing 127 papers from top security and software engineering venues, we aim to provide a holistic view of how LLMs are being used to solve diverse problems across the cybersecurity domain. Through our analysis, we identify several key findings. First, we observe that LLMs are being applied to a wide range of cybersecurity tasks, including vulnerability detection, malware analysis, network intrusion detection, and phishing detection. Second, we find that the datasets used for training and evaluating LLMs in these tasks are often limited in size and diversity, highlighting the need for more comprehensive and representative datasets. Third, we identify several promising techniques for adapting LLMs to specific cybersecurity domains, such as fine-tuning, transfer learning, and domain-specific pre-training. Finally, we discuss the main challenges and opportunities for future research in LLM4Security, including the need for more interpretable and explainable models, the importance of addressing data privacy and security concerns, and the potential for leveraging LLMs for proactive defense and threat hunting. Overall, our survey provides a comprehensive overview of the current state-of-the-art in LLM4Security and identifies several promising directions for future research. 
| false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 452,669 |
1511.03260 | A Hierarchical Spectral Method for Extreme Classification | Extreme classification problems are multiclass and multilabel classification problems where the number of outputs is so large that straightforward strategies are neither statistically nor computationally viable. One strategy for dealing with the computational burden is via a tree decomposition of the output space. While this typically leads to training and inference that scales sublinearly with the number of outputs, it also results in reduced statistical performance. In this work, we identify two shortcomings of tree decomposition methods, and describe two heuristic mitigations. We compose these with an eigenvalue technique for constructing the tree. The end result is a computationally efficient algorithm that provides good statistical performance on several extreme data sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 48,737 |
1410.3596 | Detection of cheating by decimation algorithm | We expand the item response theory to study the case of "cheating students" for a set of exams, trying to detect them by applying a greedy algorithm of inference. This extended model is closely related to the Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance with a small amount of training data. The key point is the sparseness of the interactions in our problem in the context of the Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach to infer the sparse interactions in the Boltzmann machine learning to our greedy algorithm and we find the latter to be superior in several aspects. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 36,719 |
2208.10536 | A Meta-Analysis of Solar Forecasting Based on Skill Score | We conduct the first comprehensive meta-analysis of deterministic solar forecasting based on skill score, screening 1,447 papers from Google Scholar and reviewing the full texts of 320 papers for data extraction. A database of 4,687 points was built and analyzed with multivariate adaptive regression spline modelling, partial dependence plots, and linear regression. The marginal impacts on skill score of ten factors were quantified. The analysis shows the non-linearity and complex interaction between variables in the database. Forecast horizon has a central impact and dominates other factors' impacts. Therefore, the analysis of solar forecasts should be done separately for each horizon. Climate zone variables have statistically significant correlation with skill score. Regarding inputs, historical data and spatial temporal information are highly helpful. For intra-day, sky and satellite images show the most importance. For day-ahead, numerical weather predictions and locally measured meteorological data are very efficient. All forecast models were compared. Ensemble-hybrid models achieve the most accurate forecasts for all horizons. Hybrid models show superiority for intra-hour while image-based methods are the most efficient for intra-day forecasts. More training data can enhance skill score. However, over-fitting is observed when there is too much training data (longer than 2000 days). There has been a substantial improvement in solar forecast accuracy, especially in recent years. More improvement is observed for intra-hour and intra-day than day-ahead forecasts. By controlling for the key differences between forecasts, including location variables, our findings can be applied globally. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 314,096 |
1912.11430 | TF3P: Three-dimensional Force Fields Fingerprint Learned by Deep
Capsular Network | Molecular fingerprints are the workhorse in ligand-based drug discovery. In recent years, an increasing number of research papers reported fascinating results on using deep neural networks to learn 2D molecular representations as fingerprints. It is anticipated that the integration of deep learning would also contribute to the prosperity of 3D fingerprints. Here, we unprecedentedly introduce deep learning into 3D small molecule fingerprints, presenting a new one termed the three-dimensional force fields fingerprint (TF3P). TF3P is learned by a deep capsular network whose training requires no labeled datasets for specific predictive tasks. TF3P can encode the 3D force fields information of molecules and demonstrates a stronger ability to capture 3D structural changes, to recognize molecules alike in 3D but not in 2D, and to identify similar targets inaccessible by other 2D or 3D fingerprints based only on ligand similarity. Furthermore, TF3P is compatible with both statistical models (e.g. similarity ensemble approach) and machine learning models. Altogether, we report TF3P as a new 3D small molecule fingerprint with a promising future in ligand-based drug discovery. All codes are written in Python and available at https://github.com/canisw/tf3p. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 158,566 |
2310.20363 | CAFE: Conflict-Aware Feature-wise Explanations | Feature attribution methods are widely used to explain neural models by determining the influence of individual input features on the models' outputs. We propose a novel feature attribution method, CAFE (Conflict-Aware Feature-wise Explanations), that addresses three limitations of the existing methods: their disregard for the impact of conflicting features, their lack of consideration for the influence of bias terms, and an overly high sensitivity to local variations in the underpinning activation functions. Unlike other methods, CAFE provides safeguards against overestimating the effects of neuron inputs and separately traces positive and negative influences of input features and biases, resulting in enhanced robustness and increased ability to surface feature conflicts. We show experimentally that CAFE is better able to identify conflicting features on synthetic tabular data and exhibits the best overall fidelity on several real-world tabular datasets, while being highly computationally efficient. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,364 |
0712.4099 | Digital Ecosystems: Optimisation by a Distributed Intelligence | Can intelligence optimise Digital Ecosystems? How could a distributed intelligence interact with the ecosystem dynamics? Can the software components that are part of genetic selection be intelligent in themselves, as in an adaptive technology? We consider the effect of a distributed intelligence mechanism on the evolutionary and ecological dynamics of our Digital Ecosystem, which is the digital counterpart of a biological ecosystem for evolving software services in a distributed network. We investigate Neural Networks and Support Vector Machines for the learning-based pattern recognition functionality of our distributed intelligence. Simulation results imply that the Digital Ecosystem performs better with the application of a distributed intelligence, marginally more effectively when powered by Support Vector Machines than Neural Networks, and suggest that it can contribute to optimising the operation of our Digital Ecosystem. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 1,082 |
2312.07624 | A dynamical clipping approach with task feedback for Proximal Policy
Optimization | Proximal Policy Optimization (PPO) has been broadly applied to robotics learning, showcasing stable training performance. However, the fixed clipping bound setting may limit the performance of PPO. Specifically, there is no theoretical proof that the optimal clipping bound remains consistent throughout the entire training process. Meanwhile, previous research suggests that a fixed clipping bound restricts the policy's ability to explore. Therefore, many past studies have aimed to dynamically adjust the PPO clipping bound to enhance PPO's performance. However, the objective of these approaches is not directly aligned with the objective of reinforcement learning (RL) tasks, which is to maximize the cumulative Return. Unlike previous clipping approaches, we propose a bi-level proximal policy optimization objective that can dynamically adjust the clipping bound to better reflect the preference (maximizing Return) of these RL tasks. Based on this bi-level proximal policy optimization paradigm, we introduce a new algorithm named Preference based Proximal Policy Optimization (Pb-PPO). Pb-PPO utilizes a multi-armed bandit approach to reflect the RL preference, recommending the clipping bound for PPO that maximizes the current Return. Therefore, Pb-PPO results in greater stability and improved performance compared to PPO with a fixed clipping bound. We test Pb-PPO on locomotion benchmarks across multiple environments, including Gym-Mujoco and legged-gym. Additionally, we validate Pb-PPO on customized navigation tasks. Meanwhile, we conducted comparisons with PPO using various fixed clipping bounds and various clipping approaches. The experimental results indicate that Pb-PPO demonstrates superior training performance compared to PPO and its variants. 
Our codebase has been released at: https://github.com/stevezhangzA/pb_ppo | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,000 |
2501.03475 | Reading with Intent -- Neutralizing Intent | Queries to large language models (LLMs) can be divided into two parts: the instruction/question and the accompanying context. The context for retrieval-augmented generation (RAG) systems in most benchmarks comes from Wikipedia or Wikipedia-like texts which are written in a neutral and factual tone. However, when RAG systems retrieve internet-based content, they encounter text with diverse tones and linguistic styles, introducing challenges for downstream tasks. The Reading with Intent task addresses this issue by evaluating how varying tones in context passages affect model performance. Building on prior work that focused on sarcasm, we extend this paradigm by constructing a dataset where context passages are transformed to $11$ distinct emotions using a better synthetic data generation approach. Using this dataset, we train an emotion translation model to systematically adapt passages to specified emotional tones. The human evaluation shows that the LLM fine-tuned to become the emotion-translator benefited from the synthetically generated data. Finally, the emotion-translator is used in the Reading with Intent task to transform the passages to a neutral tone. By neutralizing the passages, it mitigates the challenges posed by sarcastic passages and improves overall results on this task by about $3\%$. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 522,886 |
1903.10180 | git2net - Mining Time-Stamped Co-Editing Networks from Large git
Repositories | Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable Python tool that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | true | 125,223 |
2403.20222 | Shallow Cross-Encoders for Low-Latency Retrieval | Transformer-based Cross-Encoders achieve state-of-the-art effectiveness in text retrieval. However, Cross-Encoders based on large transformer models (such as BERT or T5) are computationally expensive and allow for scoring only a small number of documents within a reasonably small latency window. However, keeping search latencies low is important for user satisfaction and energy usage. In this paper, we show that weaker shallow transformer models (i.e., transformers with a limited number of layers) actually perform better than full-scale models when constrained to these practical low-latency settings since they can estimate the relevance of more documents in the same time budget. We further show that shallow transformers may benefit from the generalized Binary Cross-Entropy (gBCE) training scheme, which has recently demonstrated success for recommendation tasks. Our experiments with TREC Deep Learning passage ranking query sets demonstrate significant improvements in shallow and full-scale models in low-latency scenarios. For example, when the latency limit is 25ms per query, MonoBERT-Large (a cross-encoder based on a full-scale BERT model) is only able to achieve NDCG@10 of 0.431 on TREC DL 2019, while TinyBERT-gBCE (a cross-encoder based on TinyBERT trained with gBCE) reaches NDCG@10 of 0.652, a +51% gain over MonoBERT-Large. We also show that shallow Cross-Encoders are effective even when used without a GPU (e.g., with CPU inference, NDCG@10 decreases only by 3% compared to GPU inference with 50ms latency), which makes Cross-Encoders practical to run even without specialized hardware acceleration. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 442,667 |
1910.02653 | Checkmate: Breaking the Memory Wall with Optimal Tensor
Rematerialization | We formalize the problem of trading-off DNN training time and memory requirements as the tensor rematerialization optimization problem, a generalization of prior checkpointing strategies. We introduce Checkmate, a system that solves for optimal rematerialization schedules in reasonable times (under an hour) using off-the-shelf MILP solvers or near-optimal schedules with an approximation algorithm, then uses these schedules to accelerate millions of training iterations. Our method scales to complex, realistic architectures and is hardware-aware through the use of accelerator-specific, profile-based cost models. In addition to reducing training cost, Checkmate enables real-world networks to be trained with up to 5.1x larger input sizes. Checkmate is an open-source project, available at https://github.com/parasj/checkmate. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 148,306 |
2408.14101 | Estimating Causal Effects from Learned Causal Networks | The standard approach to answering an identifiable causal-effect query (e.g., $P(Y|do(X))$) when given a causal diagram and observational data is to first generate an estimand, or probabilistic expression over the observable variables, which is then evaluated using the observational data. In this paper, we propose an alternative paradigm for answering causal-effect queries over discrete observable variables. We propose to instead learn the causal Bayesian network and its confounding latent variables directly from the observational data. Then, efficient probabilistic graphical model (PGM) algorithms can be applied to the learned model to answer queries. Perhaps surprisingly, we show that this \emph{model completion} learning approach can be more effective than estimand approaches, particularly for larger models in which the estimand expressions become computationally difficult. We illustrate our method's potential using a benchmark collection of Bayesian networks and synthetically generated causal models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 483,419 |
2211.08168 | Type Information Utilized Event Detection via Multi-Channel GNNs in
Electrical Power Systems | Event detection in power systems aims to identify triggers and event types, which helps relevant personnel respond to emergencies promptly and facilitates the optimization of power supply strategies. However, the limited length of short electrical record texts causes severe information sparsity, and numerous domain-specific terminologies of power systems make it difficult to transfer knowledge from language models pre-trained on general-domain texts. Traditional event detection approaches primarily focus on the general domain and ignore these two problems in the power system domain. To address the above issues, we propose a Multi-Channel graph neural network utilizing Type information for Event Detection in power systems, named MC-TED, leveraging a semantic channel and a topological channel to enrich information interaction from short texts. Concretely, the semantic channel refines textual representations with semantic similarity, building the semantic information interaction among potential event-related words. The topological channel generates a relation-type-aware graph modeling word dependencies, and a word-type-aware graph integrating part-of-speech tags. To further reduce errors worsened by professional terminologies in type analysis, a type learning mechanism is designed for updating the representations of both the word type and relation type in the topological channel. In this way, the information sparsity and professional term occurrence problems can be alleviated by enabling interaction between topological and semantic information. Furthermore, to address the lack of labeled data in power systems, we built a Chinese event detection dataset based on electrical Power Event texts, named PoE. In experiments, our model achieves compelling results not only on the PoE dataset, but also on general-domain event detection datasets including ACE 2005 and MAVEN. 
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 330,502 |
1904.04154 | Bayesian Neural Networks at Finite Temperature | We recapitulate the Bayesian formulation of neural network based classifiers and show that, while sampling from the posterior does indeed lead to better generalisation than is obtained by standard optimisation of the cost function, even better performance can in general be achieved by sampling finite temperature ($T$) distributions derived from the posterior. Taking the example of two different deep (3 hidden layers) classifiers for MNIST data, we find quite different $T$ values to be appropriate in each case. In particular, for a typical neural network classifier a clear minimum of the test error is observed at $T>0$. This suggests an early stopping criterion for full batch simulated annealing: cool until the average validation error starts to increase, then revert to the parameters with the lowest validation error. As $T$ is increased classifiers transition from accurate classifiers to classifiers that have higher training error than assigning equal probability to each class. Efficient studies of these temperature-induced effects are enabled using a replica-exchange Hamiltonian Monte Carlo simulation technique. Finally, we show how thermodynamic integration can be used to perform model selection for deep neural networks. Similar to the Laplace approximation, this approach assumes that the posterior is dominated by a single mode. Crucially, however, no assumption is made about the shape of that mode and it is not required to precisely compute and invert the Hessian. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 126,951 |
2202.12230 | Sample Efficiency of Data Augmentation Consistency Regularization | Data augmentation is popular in the training of large neural networks; currently, however, there is no clear theoretical comparison between different algorithmic choices on how to use augmented data. In this paper, we take a step in this direction - we first present a simple and novel analysis for linear regression with label invariant augmentations, demonstrating that data augmentation consistency (DAC) is intrinsically more efficient than empirical risk minimization on augmented data (DA-ERM). The analysis is then extended to misspecified augmentations (i.e., augmentations that change the labels), which again demonstrates the merit of DAC over DA-ERM. Further, we extend our analysis to non-linear models (e.g., neural networks) and present generalization bounds. Finally, we perform experiments that make a clean and apples-to-apples comparison (i.e., with no extra modeling or data tweaks) between DAC and DA-ERM using CIFAR-100 and WideResNet; these together demonstrate the superior efficacy of DAC. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 282,154 |
2309.16702 | Prediction and Interpretation of Vehicle Trajectories in the Graph
Spectral Domain | This work provides a comprehensive analysis and interpretation of the graph spectral representation of traffic scenarios. Based on a spatio-temporal vehicle interaction graph, an observed traffic scenario can be transformed into the graph spectral domain by means of the multidimensional Graph Fourier Transformation. Since these spectral scenario representations have shown to successfully incorporate the complex and interactive nature of traffic scenarios, the beneficial feature representation is employed for the purpose of predicting vehicle trajectories. This work introduces GFTNNv2, a deep learning network predicting vehicle trajectories in the graph spectral domain. Evaluation of the GFTNNv2 on the publicly available datasets highD and NGSIM shows a performance gain of up to 25% in comparison to state-of-the-art prediction approaches. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 395,462 |
2408.02883 | "Sharing, Not Showing Off": How BeReal Approaches Authentic
Self-Presentation on Social Media Through Its Design | Adolescents are particularly vulnerable to the pressures created by social media, such as heightened self-consciousness and the need for extensive self-presentation. In this study, we investigate how BeReal, a social media platform designed to counter some of these pressures, influences adolescents' self-presentation behaviors. We interviewed 29 users aged 13-18 to understand their experiences with BeReal. We found that BeReal's design focuses on spontaneous sharing, including randomly timed daily notifications and reciprocal posting, discourages staged posts, encourages careful curation of the audience, and reduces pressure on self-presentation. The space created by BeReal offers benefits such as validating an unfiltered life and reframing social comparison, but its approach to self-presentation is sometimes perceived as limited or unappealing and, at times, even toxic. Drawing on this empirical data, we propose design guidelines for platforms that support authentic self-presentation while fostering reciprocity and expanding beyond spontaneous photo-sharing. These guidelines aim to enable users to portray themselves more comprehensively and accurately, ultimately supporting teens' developmental needs, particularly in building authentic relationships. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 478,803 |
2201.05890 | Robust uncertainty estimates with out-of-distribution pseudo-inputs
training | Probabilistic models often use neural networks to control their predictive uncertainty. However, when making out-of-distribution (OOD) predictions, the often-uncontrollable extrapolation properties of neural networks yield poor uncertainty predictions. Such models then don't know what they don't know, which directly limits their robustness w.r.t. unexpected inputs. To counter this, we propose to explicitly train the uncertainty predictor where we are not given data to make it reliable. As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space, and show how to leverage these in a practical Bayesian framework that casts a prior distribution over the model uncertainty. With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks such as regression and generative modelling. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,536 |
1205.6376 | Analysis and study on text representation to improve the accuracy of the
Normalized Compression Distance | The huge amount of information stored in text form makes methods that deal with texts really interesting. This thesis focuses on dealing with texts using compression distances. More specifically, the thesis takes a small step towards understanding both the nature of texts and the nature of compression distances. Broadly speaking, the way in which this is done is exploring the effects that several distortion techniques have on one of the most successful distances in the family of compression distances, the Normalized Compression Distance -NCD-. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 16,220 |
2109.07556 | Unit Selection with Causal Diagram | The unit selection problem aims to identify a set of individuals who are most likely to exhibit a desired mode of behavior, for example, selecting individuals who would respond one way if encouraged and a different way if not encouraged. Using a combination of experimental and observational data, Li and Pearl derived tight bounds on the "benefit function" - the payoff/cost associated with selecting an individual with given characteristics. This paper shows that these bounds can be narrowed significantly (enough to change decisions) when structural information is available in the form of a causal model. We address the problem of estimating the benefit function using observational and experimental data when specific graphical criteria are assumed to hold. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 255,560 |
2406.11316 | Improved Algorithms for Contextual Dynamic Pricing | In contextual dynamic pricing, a seller sequentially prices goods based on contextual information. Buyers will purchase products only if the prices are below their valuations. The goal of the seller is to design a pricing strategy that collects as much revenue as possible. We focus on two different valuation models. The first assumes that valuations linearly depend on the context and are further distorted by noise. Under minor regularity assumptions, our algorithm achieves an optimal regret bound of $\tilde{\mathcal{O}}(T^{2/3})$, improving the existing results. The second model removes the linearity assumption, requiring only that the expected buyer valuation is $\beta$-H\"older in the context. For this model, our algorithm obtains a regret $\tilde{\mathcal{O}}(T^{(d+2\beta)/(d+3\beta)})$, where $d$ is the dimension of the context space. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 464,838
2208.14775 | Modified Froelich's Equation for Modelling of a Three Phase Self-Excited
Synchronous Generator | With advancement in the design and analysis of electro-mechanical and electromagnetic devices, the modelling of magnetic saturation of a synchronous generator has emerged as a subject of interest in a number of publications. Most existing electrical machine modelling methods ignore the saturation effect for simplicity. On the other hand, those that incorporate the saturation effect deal with complex computation of coefficients, which involves tedious curve-fitting techniques like non-linear regression and least squares. This paper presents a novel method, inspired by Froelich's equation, for modelling the self-excited synchronous generator along with its magnetizing characteristics with ease and good accuracy. The proposed mathematical model is implemented in a simulation environment and the results are validated with a practical three phase self-excited synchronous generator in which saturation plays a vital role. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 315,414
1307.3419 | Pleasantly Consuming Linked Data with RDF Data Descriptions | Although the intention of RDF is to provide an open, minimally constraining way for representing information, there exists an increasing number of applications for which guarantees on the structure and values of an RDF data set become desirable if not essential. What is missing in this respect are mechanisms to tie RDF data to quality guarantees akin to schemata of relational databases, or DTDs in XML, in particular when translating legacy data coming with a rich set of integrity constraints - like keys or cardinality restrictions - into RDF. Addressing this shortcoming, we present the RDF Data Description language (RDD), which makes it possible to specify instance-level data constraints over RDF. Making such constraints explicit not only helps in asserting and maintaining data quality, but also opens up new optimization opportunities for query engines and, most importantly, makes query formulation a lot easier for users and system developers. We present design goals, syntax, and a formal, first-order logic based semantics of RDDs and discuss the impact on consuming Linked Data. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 25,801
2501.06122 | NDOB-Based Control of a UAV with Delta-Arm Considering Manipulator
Dynamics | Aerial Manipulators (AMs) provide a versatile platform for various applications, including 3D printing, architecture, and aerial grasping missions. However, their operational speed is often sacrificed to uphold precision. Existing control strategies for AMs often regard the manipulator as a disturbance and employ robust control methods to mitigate its influence. This research focuses on elevating the precision of the end-effector and enhancing the agility of aerial manipulator movements. We present a composite control scheme to address these challenges. Initially, a Nonlinear Disturbance Observer (NDOB) is utilized to compensate for internal coupling effects and external disturbances. Subsequently, manipulator dynamics are processed through a high pass filter to facilitate agile movements. By integrating the proposed control method into a fully autonomous delta-arm-based AM system, we substantiate the controller's efficacy through extensive real-world experiments. The outcomes illustrate that the end-effector can achieve accuracy at the millimeter level. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 523,846 |
1901.05112 | An Exponential Lower Bound on the Sub-Packetization of MSR Codes | An $(n,k,\ell)$-vector MDS code is a $\mathbb{F}$-linear subspace of $(\mathbb{F}^\ell)^n$ (for some field $\mathbb{F}$) of dimension $k\ell$, such that any $k$ (vector) symbols of the codeword suffice to determine the remaining $r=n-k$ (vector) symbols. The length $\ell$ of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading $\ell/r$ field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization $\ell \gtrsim r^{k/r}$. Our main result is an almost tight lower bound showing that for an MSR code, one must have $\ell \ge \exp(\Omega(k/r))$. This settles a central open question concerning MSR codes that has received much attention. Previously, a lower bound of $\approx \exp(\sqrt{k/r})$, and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 118,728 |
2301.09544 | Learning to View: Decision Transformers for Active Object Detection | Active perception describes a broad class of techniques that couple planning and perception systems to move the robot in a way that gives it more information about the environment. In most robotic systems, perception is typically independent of motion planning. For example, traditional object detection is passive: it operates only on the images it receives. However, we have a chance to improve the results if we allow planning to consume detection signals and move the robot to collect views that maximize the quality of the results. In this paper, we use reinforcement learning (RL) methods to control the robot in order to obtain images that maximize the detection quality. Specifically, we propose using a Decision Transformer with online fine-tuning, which first optimizes the policy with a pre-collected expert dataset and then improves the learned policy by exploring better solutions in the environment. We evaluate the performance of the proposed method on an interactive dataset collected from an indoor scenario simulator. Experimental results demonstrate that our method outperforms all baselines, including expert policy and pure offline RL methods. We also provide exhaustive analyses of the reward distribution and observation space. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 341,530
2211.01877 | Convex Clustering through MM: An Efficient Algorithm to Perform
Hierarchical Clustering | Convex clustering is a modern method with both hierarchical and $k$-means clustering characteristics. Although convex clustering can capture complex clustering structures hidden in data, the existing convex clustering algorithms are not scalable to large data sets with sample sizes greater than several thousands. Moreover, it is known that convex clustering sometimes fails to produce a complete hierarchical clustering structure. This issue arises if clusters split up or the minimum number of possible clusters is larger than the desired number of clusters. In this paper, we propose convex clustering through majorization-minimization (CCMM) -- an iterative algorithm that uses cluster fusions and a highly efficient updating scheme derived using diagonal majorization. Additionally, we explore different strategies to ensure that the hierarchical clustering structure terminates in a single cluster. With a current desktop computer, CCMM efficiently solves convex clustering problems featuring over one million objects in seven-dimensional space, achieving a solution time of 51 seconds on average. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 328,396 |
2108.00045 | Multi-Head Self-Attention via Vision Transformer for Zero-Shot Learning | Zero-Shot Learning (ZSL) aims to recognise unseen object classes, which are not observed during the training phase. The existing body of works on ZSL mostly relies on pretrained visual features and lacks the explicit attribute localisation mechanism on images. In this work, we propose an attention-based model in the problem settings of ZSL to learn attributes useful for unseen class recognition. Our method uses an attention mechanism adapted from Vision Transformer to capture and learn discriminative attributes by splitting images into small patches. We conduct experiments on three popular ZSL benchmarks (i.e., AWA2, CUB and SUN) and set new state-of-the-art harmonic mean results on all three datasets, which illustrate the effectiveness of our proposed method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 248,577
1806.08015 | Stability of Scattering Decoder For Nonlinear Diffractive Imaging | The problem of image reconstruction under multiple light scattering is usually formulated as a regularized non-convex optimization. A deep learning architecture, Scattering Decoder (ScaDec), was recently proposed to solve this problem in a purely data-driven fashion. The proposed method was shown to substantially outperform optimization-based baselines and achieve state-of-the-art results. In this paper, we thoroughly test the robustness of ScaDec to different permittivity contrasts, number of transmissions, and input signal-to-noise ratios. The results on high-fidelity simulated datasets show that the performance of ScaDec is stable in different settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 101,069 |
1711.01991 | Mitigating Adversarial Effects Through Randomization | Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 83,984 |
2410.10681 | A System Parameterization for Direct Data-Driven Estimator Synthesis | This paper introduces a novel parameterization to characterize unknown linear time-invariant systems using noisy data. The presented parameterization describes exactly the set of all systems consistent with the available data. We then derive verifiable conditions under which the consistency constraint reduces the set to the true system and under which it does not have any impact. Furthermore, we demonstrate how to use this parameterization to perform a direct data-driven estimator synthesis with guarantees on the $H_{\infty}$-norm. Lastly, we conduct numerical experiments to compare our approach to existing methods. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 498,185
2009.14261 | Abusive Language Detection and Characterization of Twitter Behavior | In this work, abusive language detection in online content is performed using a Bidirectional Recurrent Neural Network (BiRNN) method. Here the main objective is to focus on various forms of abusive behaviors on Twitter and to detect whether a speech is abusive or not. The results are compared for various abusive behaviors in social media with Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) methods, and prove that the proposed BiRNN is a better deep learning model for automatic abusive speech detection. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 197,977
1411.6757 | Echo State Condition at the Critical Point | Recurrent networks with transfer functions that fulfill the Lipschitz continuity with K=1 may be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient if the largest singular value of the recurrent connectivity is smaller than 1. The main achievement of this paper is a proof under which conditions the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the exact shape of the transfer function plays a decisive role in determining whether the network still fulfills the echo state condition. In addition, several examples with one neuron networks are outlined to illustrate effects of critical connectivity. Moreover, within the manuscript a mathematical definition for a critical echo state network is suggested. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 37,869 |
2412.17263 | VarAD: Lightweight High-Resolution Image Anomaly Detection via Visual
Autoregressive Modeling | This paper addresses a practical task: High-Resolution Image Anomaly Detection (HRIAD). In comparison to conventional image anomaly detection for low-resolution images, HRIAD imposes a heavier computational burden and necessitates superior global information capture capacity. To tackle HRIAD, this paper translates image anomaly detection into visual token prediction and proposes VarAD based on visual autoregressive modeling for token prediction. Specifically, VarAD first extracts multi-hierarchy and multi-directional visual token sequences, and then employs an advanced model, Mamba, for visual autoregressive modeling and token prediction. During the prediction process, VarAD effectively exploits information from all preceding tokens to predict the target token. Finally, the discrepancies between predicted tokens and original tokens are utilized to score anomalies. Comprehensive experiments on four publicly available datasets and a real-world button inspection dataset demonstrate that the proposed VarAD achieves superior high-resolution image anomaly detection performance while maintaining lightweight, rendering VarAD a viable solution for HRIAD. Code is available at \href{https://github.com/caoyunkang/VarAD}{\url{https://github.com/caoyunkang/VarAD}}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 519,897 |
2205.11121 | A normal approximation for joint frequency estimation under Local
Differential Privacy | In recent years, Local Differential Privacy (LDP) has been one of the cornerstones of privacy-preserving data analysis. However, many challenges still oppose its widespread application. One of these problems is the scalability of LDP to high-dimensional data, in particular for estimating joint distributions. In this paper, we develop an approximate estimator for frequency joint-distribution estimation under so-called pure LDP protocols. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | 298,009
0908.3544 | On the Second Order Statistics of the Multihop Rayleigh Fading Channel | Second order statistics provides a dynamic representation of a fading channel and plays an important role in the evaluation and design of wireless communication systems. In this paper, we present a novel analytical framework for the evaluation of important second order statistical parameters, such as the level crossing rate (LCR) and the average fade duration (AFD) of the amplify-and-forward multihop Rayleigh fading channel. More specifically, motivated by the fact that this channel is a cascaded one and can be modeled as the product of N fading amplitudes, we derive novel analytical expressions for the average LCR and the AFD of the product of N Rayleigh fading envelopes (or of the recently so-called N*Rayleigh channel). Furthermore, we derive simple and efficient closed-form approximations to the aforementioned parameters, using the multivariate Laplace approximation theorem. It is shown that our general results reduce to the corresponding ones of the specific dual-hop case, previously published. Numerical and computer simulation examples verify the accuracy of the presented mathematical analysis and show the tightness of the proposed approximations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,329
2004.09677 | Approximate exploitability: Learning a best response in large games | Researchers have demonstrated that neural networks are vulnerable to adversarial examples and subtle environment changes, both of which one can view as a form of distribution shift. To humans, the resulting errors can look like blunders, eroding trust in these agents. In prior games research, agent evaluation often focused on the in-practice game outcomes. While valuable, such evaluation typically fails to evaluate robustness to worst-case outcomes. Prior research in computer poker has examined how to assess such worst-case performance, both exactly and approximately. Unfortunately, exact computation is infeasible with larger domains, and existing approximations rely on poker-specific knowledge. We introduce ISMCTS-BR, a scalable search-based deep reinforcement learning algorithm for learning a best response to an agent, thereby approximating worst-case performance. We demonstrate the technique in several two-player zero-sum games against a variety of agents, including several AlphaZero-based agents. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,412 |
2012.15397 | FREA-Unet: Frequency-aware U-net for Modality Transfer | While Positron emission tomography (PET) imaging has been widely used in the diagnosis of a number of diseases, it has a costly acquisition process which involves radiation exposure to patients. However, magnetic resonance imaging (MRI) is a safer imaging modality that does not involve the patient's exposure to radiation. Therefore, a need exists for efficient and automated PET image generation from MRI data. In this paper, we propose a new frequency-aware attention U-net for generating synthetic PET images. Specifically, we incorporate an attention mechanism into the different U-net layers responsible for estimating low/high frequency scales of the image. Our frequency-aware attention U-net computes the attention scores for feature maps in low/high frequency layers and uses them to help the model focus more on the most important regions, leading to more realistic output images. Experimental results on 30 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed model achieves superior performance in PET image synthesis, both qualitative and quantitative, over the current state of the art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 213,766
0905.1386 | Selective-Fading Multiple-Access MIMO Channels: Diversity-Multiplexing
Tradeoff and Dominant Outage Event Regions | We establish the optimal diversity-multiplexing (DM) tradeoff for coherent selective-fading multiple-access MIMO channels and provide corresponding code design criteria. As a byproduct, on the conceptual level, we find an interesting relation between the DM tradeoff framework and the notion of dominant error event regions, first introduced in the AWGN case by Gallager, IEEE Trans. IT, 1985. This relation allows us to accurately characterize the error mechanisms in MIMO fading multiple-access channels. In particular, we find that, for a given rate tuple, the maximum achievable diversity order is determined by a single outage event that dominates the total error probability exponentially in SNR. Finally, we examine the distributed space-time code construction proposed by Badr and Belfiore, Int. Zurich Seminar on Commun., 2008, using the code design criteria derived in this paper. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,657 |
2008.07475 | Absorption in Time-Varying Markov Chains: Graph-Based Conditions | We investigate absorption, i.e., almost sure convergence to an absorbing state, in time-varying (non-homogeneous) discrete-time Markov chains with finite state space. We consider systems that can switch among a finite set of transition matrices, which we call the modes. Our analysis is focused on two properties: 1) almost sure convergence to an absorbing state under any switching, and 2) almost sure convergence to a desired set of absorbing states via a proper switching policy. We derive necessary and sufficient conditions based on the structures of the transition graphs of modes. More specifically, we show that a switching policy that ensures almost sure convergence to a desired set of absorbing states from any initial state exists if and only if those absorbing states are reachable from any state on the union of simplified transition graphs. We then show three sufficient conditions for absorption under arbitrary switching. While the first two conditions depend on the acyclicity (weak acyclicity) of the union (intersection) of simplified transition graphs, the third condition is based on the distances of each state to the absorbing states in all the modes. These graph theoretic conditions can verify the stability and stabilizability of absorbing states based only on the feasibility of transitions in each mode. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 192,122 |
2407.17869 | EllipBench: A Large-scale Benchmark for Machine-learning based
Ellipsometry Modeling | Ellipsometry is used to indirectly measure the optical properties and thickness of thin films. However, solving the inverse problem of ellipsometry is time-consuming since it involves human expertise to apply the data fitting techniques. Many studies use traditional machine learning-based methods to model the complex mathematical fitting process. In our work, we approach this problem from a deep learning perspective. First, we introduce a large-scale benchmark dataset to facilitate deep learning methods. The proposed dataset encompasses 98 types of thin film materials and 4 types of substrate materials, including metals, alloys, compounds, and polymers, among others. Additionally, we propose a deep learning framework that leverages residual connections and self-attention mechanisms to learn the massive data points. We also introduce a reconstruction loss to address the common challenge of multiple solutions in thin film thickness prediction. Compared to traditional machine learning methods, our framework achieves state-of-the-art (SOTA) performance on our proposed dataset. The dataset and code will be available upon acceptance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 476,154 |
2203.10472 | Federated Spatial Reuse Optimization in Next-Generation Decentralized
IEEE 802.11 WLANs | As wireless standards evolve, more complex functionalities are introduced to address the increasing requirements in terms of throughput, latency, security, and efficiency. To unleash the potential of such new features, artificial intelligence (AI) and machine learning (ML) are currently being exploited for deriving models and protocols from data, rather than by hand-programming. In this paper, we explore the feasibility of applying ML in next-generation wireless local area networks (WLANs). More specifically, we focus on the IEEE 802.11ax spatial reuse (SR) problem and predict its performance through federated learning (FL) models. The set of FL solutions overviewed in this work is part of the 2021 International Telecommunication Union (ITU) AI for 5G Challenge. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 286,552 |
1809.02850 | Rate-Adaptive Neural Networks for Spatial Multiplexers | In resource-constrained environments, one can employ spatial multiplexing cameras to acquire a small number of measurements of a scene, and perform effective reconstruction or high-level inference using purely data-driven neural networks. However, once trained, the measurement matrix and the network are valid only for a single measurement rate (MR) chosen at training time. To overcome this drawback, we answer the following question: How can we jointly design the measurement operator and the reconstruction/inference network so that the system can operate over a \textit{range} of MRs? To this end, we present a novel training algorithm for learning \textbf{\textit{rate-adaptive}} networks. Using standard datasets, we demonstrate that, when tested over a range of MRs, a rate-adaptive network can provide high quality reconstruction over the entire range, resulting in up to about 15 dB improvement over previous methods, where the network is valid for only one MR. We demonstrate the effectiveness of our approach for sample-efficient object tracking where video frames are acquired at dynamically varying MRs. We also extend this algorithm to learn the measurement operator in conjunction with image recognition networks. Experiments on MNIST and CIFAR-10 confirm the applicability of our algorithm to different tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 107,154
2209.02518 | Sequential Cross Attention Based Multi-task Learning | In multi-task learning (MTL) for visual scene understanding, it is crucial to transfer useful information between multiple tasks with minimal interferences. In this paper, we propose a novel architecture that effectively transfers informative features by applying the attention mechanism to the multi-scale features of the tasks. Since applying the attention module directly to all possible features in terms of scale and task requires a high complexity, we propose to apply the attention module sequentially for the task and scale. The cross-task attention module (CTAM) is first applied to facilitate the exchange of relevant information between the multiple task features of the same scale. The cross-scale attention module (CSAM) then aggregates useful information from feature maps at different resolutions in the same task. Also, we attempt to capture long range dependencies through the self-attention module in the feature extraction network. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the NYUD-v2 and PASCAL-Context dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 316,235 |
2312.05803 | Transformer-based Selective Super-Resolution for Efficient Image
Refinement | Conventional super-resolution methods suffer from two drawbacks: substantial computational cost in upscaling an entire large image, and the introduction of extraneous or potentially detrimental information for downstream computer vision tasks during the refinement of the background. To solve these issues, we propose a novel transformer-based algorithm, Selective Super-Resolution (SSR), which partitions images into non-overlapping tiles, selects tiles of interest at various scales with a pyramid architecture, and exclusively reconstructs these selected tiles with deep features. Experimental results on three datasets demonstrate the efficiency and robust performance of our approach for super-resolution. Compared to the state-of-the-art methods, the FID score is reduced from 26.78 to 10.41 with 40% reduction in computation cost for the BDD100K dataset. The source code is available at https://github.com/destiny301/SSR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 414,248 |
1102.0033 | Control of Multi-Agent Formations with Only Shape Constraints | This paper considers a novel problem of how to choose an appropriate geometry for a group of agents with only shape constraints but with a flexible scale. Instead of assigning the formation system with a specific geometry, here the only requirement on the desired geometry is a shape without any location, rotation and, most importantly, scale constraints. Optimal rigid transformation between two different geometries is discussed with special focus on the scaling operation, and the cooperative performance of the system is evaluated by what we call the geometry's degrees of similarity (DOS) with respect to the desired shape during the entire convergence process. The design of the scale when measuring the DOS is discussed from constant value and time-varying function perspectives respectively. Fixed structured nonlinear control laws that are functions of the scale are developed to guarantee the exponential convergence of the system to the assigned shape. Our research is originated from a three-agent formation system and is further extended to multiple (n > 3) agents by defining a triangular complement graph. Simulations demonstrate that the formation system with the time-varying scale function outperforms the one with an arbitrary constant scale, and the relationship between underlying topology and the system performance is further discussed based on the simulation observations. Moreover, the control scheme is applied to bearing-only sensor-target localization to show its application potentials. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 8,981
1801.02254 | Theory of Deep Learning IIb: Optimization Properties of SGD | In Theory IIb we characterize, with a mix of theory and experiments, the optimization of deep convolutional networks by Stochastic Gradient Descent. The main new result in this paper is theoretical and experimental evidence for the following conjecture about SGD: SGD concentrates in probability -- like the classical Langevin equation -- on large volume, "flat" minima, selecting flat minimizers which are, with very high probability, also global minimizers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 87,893
2403.17236 | Neural Image Compression with Quantization Rectifier | Neural image compression has been shown to outperform traditional image codecs in terms of rate-distortion performance. However, quantization introduces errors in the compression process, which can degrade the quality of the compressed image. While existing approaches address the train-test mismatch problem incurred during quantization, the random impact of quantization on the expressiveness of image features remains unsolved. This paper presents a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization. Our method designs a neural network architecture that predicts unquantized features from the quantized ones, preserving feature expressiveness for better image reconstruction quality. We develop a soft-to-predictive training technique to integrate QR into existing neural image codecs. In evaluation, we integrate QR into state-of-the-art neural image codecs and compare enhanced models and baselines on the widely-used Kodak benchmark. The results show consistent coding efficiency improvement by QR with a negligible increase in the running time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 441,371
2012.06346 | Distant Domain Transfer Learning for Medical Imaging | Medical image processing is one of the most important topics in the field of the Internet of Medical Things (IoMT). Recently, deep learning methods have carried out state-of-the-art performances on medical image tasks. However, conventional deep learning has two main drawbacks: 1) insufficient training data and 2) the domain mismatch between the training data and the testing data. In this paper, we propose a distant domain transfer learning (DDTL) method for medical image classification. Moreover, we apply our methods to a recent issue (Coronavirus diagnosis). Several current studies indicate that lung Computed Tomography (CT) images can be used for a fast and accurate COVID-19 diagnosis. However, the well-labeled training data cannot be easily accessed due to the novelty of the disease and a number of privacy policies. Moreover, the proposed method has two components: Reduced-size Unet Segmentation model and Distant Feature Fusion (DFF) classification model. It is related to a not well-investigated but important transfer learning problem, termed Distant Domain Transfer Learning (DDTL). DDTL aims to make efficient transfers even when the domains or the tasks are entirely different. In this study, we develop a DDTL model for COVID-19 diagnosis using unlabeled Office-31, Caltech-256, and chest X-ray image data sets as the source data, and a small set of COVID-19 lung CT as the target data. The main contributions of this study: 1) the proposed method benefits from unlabeled data collected from distant domains which can be easily accessed, 2) it can effectively handle the distribution shift between the training data and the testing data, 3) it has achieved 96\% classification accuracy, which is 13\% higher classification accuracy than "non-transfer" algorithms, and 8\% higher than existing transfer and distant transfer algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 211,087
2402.10010 | Enhancing signal detectability in learning-based CT reconstruction with a model observer inspired loss function | Deep neural networks used for reconstructing sparse-view CT data are typically trained by minimizing a pixel-wise mean-squared error or similar loss function over a set of training images. However, networks trained with such pixel-wise losses are prone to wipe out small, low-contrast features that are critical for screening and diagnosis. To remedy this issue, we introduce a novel training loss inspired by the model observer framework to enhance the detectability of weak signals in the reconstructions. We evaluate our approach on the reconstruction of synthetic sparse-view breast CT data, and demonstrate an improvement in signal detectability with the proposed loss. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 429,767