id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2302.14022 | Diacritic Recognition Performance in Arabic ASR | We present an analysis of diacritic recognition performance in Arabic Automatic Speech Recognition (ASR) systems. As most existing Arabic speech corpora do not contain all diacritical marks, which represent short vowels and other phonetic information in Arabic script, current state-of-the-art ASR models do not produce full diacritization in their output. Automatic text-based diacritization has previously been employed both as a pre-processing step to train diacritized ASR, and as a post-processing step to diacritize the resulting ASR hypotheses. It is generally believed that input diacritization degrades ASR performance, but no systematic evaluation of ASR diacritization performance, independent of ASR performance, has been conducted to date. In this paper, we attempt to experimentally clarify whether input diacritization indeed degrades ASR quality, and to compare the diacritic recognition performance against text-based diacritization as a post-processing step. We start with pre-trained Arabic ASR models and fine-tune them on transcribed speech data with different diacritization conditions: manual, automatic, and no diacritization. We isolate diacritic recognition performance from the overall ASR performance using coverage and precision metrics. We find that ASR diacritization significantly outperforms text-based diacritization in post-processing, particularly when the ASR model is fine-tuned with manually diacritized transcripts. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 348,128 |
2204.06085 | Finding Trolls Under Bridges: Preliminary Work on a Motif Detector | Motifs are distinctive recurring elements found in folklore that have significance as communicative devices in news, literature, press releases, and propaganda. Motifs concisely imply a large constellation of culturally-relevant information, and their broad usage suggests their cognitive importance as touchstones of cultural knowledge, making their detection a worthy step toward culturally-aware natural language processing tasks. Until now, folklorists and others interested in motifs have only extracted motifs from narratives manually. We present a preliminary report on the development of a system for automatically detecting motifs. We briefly describe an annotation effort to produce data for training motif detection, which is on-going. We describe our in-progress architecture in detail, which aims to capture, in part, how people determine whether or not a motif candidate is being used in a motific way. This description includes a test of an off-the-shelf metaphor detector as a feature for motif detection, which achieves a F1 of 0.35 on motifs and a macro-average F1 of 0.21 across four categories which we assign to motif candidates. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 291,220 |
2104.14074 | Statistical Inference with M-Estimators on Adaptively Collected Data | Bandit algorithms are increasingly used in real-world sequential decision-making problems. Associated with this is an increased desire to be able to use the resulting datasets to answer scientific questions like: Did one type of ad lead to more purchases? In which contexts is a mobile health intervention effective? However, classical statistical approaches fail to provide valid confidence intervals when used with data collected with bandit algorithms. Alternative methods have recently been developed for simple models (e.g., comparison of means). Yet there is a lack of general methods for conducting statistical inference using more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot be used for valid inference on parameters in a logistic regression model for a binary reward. In this work, we develop theory justifying the use of M-estimators -- which includes estimators based on empirical risk minimization as well as maximum likelihood -- on data collected with adaptive algorithms, including (contextual) bandit algorithms. Specifically, we show that M-estimators, modified with particular adaptive weights, can be used to construct asymptotically valid confidence regions for a variety of inferential targets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 232,700 |
2312.12705 | Optimizing Distributed Training on Frontier for Large Language Models | Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies on loss scaling have demonstrated the superior performance of larger LLMs compared to their smaller counterparts. Nevertheless, training LLMs with billions of parameters poses significant challenges and requires considerable computational resources. For example, training a one trillion parameter GPT-style model on 20 trillion tokens requires a staggering 120 million exaflops of computation. This research explores efficient distributed training strategies to extract this computation from Frontier, the world's first exascale supercomputer dedicated to open science. We enable and investigate various model and data parallel training techniques, such as tensor parallelism, pipeline parallelism, and sharded data parallelism, to facilitate training a trillion-parameter model on Frontier. We empirically assess these techniques and their associated parameters to determine their impact on memory footprint, communication latency, and GPU's computational efficiency. We analyze the complex interplay among these techniques and find a strategy to combine them to achieve high throughput through hyperparameter tuning. We have identified efficient strategies for training large LLMs of varying sizes through empirical analysis and hyperparameter tuning. For 22 Billion, 175 Billion, and 1 Trillion parameters, we achieved GPU throughputs of $38.38\%$, $36.14\%$, and $31.96\%$, respectively. For the training of the 175 Billion parameter model and the 1 Trillion parameter model, we achieved $100\%$ weak scaling efficiency on 1024 and 3072 MI250X GPUs, respectively. We also achieved strong scaling efficiencies of $89\%$ and $87\%$ for these two models. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 417,065 |
2303.07166 | Improved Tree Search for Automatic Program Synthesis | In the task of automatic program synthesis, one obtains pairs of matching inputs and outputs and generates a computer program, in a particular domain-specific language (DSL), which given each sample input returns the matching output. A key element is being able to perform an efficient search in the space of valid programs. Here, we suggest a variant of MCTS that leads to state of the art results on two vastly different DSLs. The exploration method we propose includes multiple contributions: a modified visit count, a preprocessing procedure for the training dataset, and encoding the part of the program that was already executed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 351,150 |
2212.14612 | Conformal Prediction Intervals for Remaining Useful Lifetime Estimation | The main objective of Prognostics and Health Management is to estimate the Remaining Useful Lifetime (RUL), namely, the time that a system or a piece of equipment is still in working order before starting to function incorrectly. In recent years, numerous machine learning algorithms have been proposed for RUL estimation, mainly focusing on providing more accurate RUL predictions. However, there are many sources of uncertainty in the problem, such as inherent randomness of systems failure, lack of knowledge regarding their future states, and inaccuracy of the underlying predictive models, making it infeasible to predict the RULs precisely. Hence, it is of utmost importance to quantify the uncertainty alongside the RUL predictions. In this work, we investigate the conformal prediction (CP) framework that represents uncertainty by predicting sets of possible values for the target variable (intervals in the case of RUL) instead of making point predictions. Under very mild technical assumptions, CP formally guarantees that the actual value (true RUL) is covered by the predicted set with a degree of certainty that can be prespecified. We study three CP algorithms to conformalize any single-point RUL predictor and turn it into a valid interval predictor. Finally, we conformalize two single-point RUL predictors, deep convolutional neural networks and gradient boosting, and illustrate their performance on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,668 |
2306.14047 | Towards Optimal Pricing of Demand Response -- A Nonparametric Constrained Policy Optimization Approach | Demand response (DR) has been demonstrated to be an effective method for reducing peak load and mitigating uncertainties on both the supply and demand sides of the electricity market. One critical question for DR research is how to appropriately adjust electricity prices in order to shift electrical load from peak to off-peak hours. In recent years, reinforcement learning (RL) has been used to address the price-based DR problem because it is a model-free technique that does not necessitate the identification of models for end-use customers. However, the majority of RL methods cannot guarantee the stability and optimality of the learned pricing policy, which is undesirable in safety-critical power systems and may result in high customer bills. In this paper, we propose an innovative nonparametric constrained policy optimization approach that improves optimality while ensuring stability of the policy update, by removing the restrictive assumption on policy representation that the majority of the RL literature adopts: the policy must be parameterized or fall into a certain distribution class. We derive a closed-form expression of optimal policy update for each iteration and develop an efficient on-policy actor-critic algorithm to address the proposed constrained policy optimization problem. The experiments on two DR cases show the superior performance of our proposed nonparametric constrained policy optimization method compared with state-of-the-art RL algorithms. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 375,513 |
2101.09957 | Activation Functions in Artificial Neural Networks: A Systematic Overview | Activation functions shape the outputs of artificial neurons and, therefore, are integral parts of neural networks in general and deep learning in particular. Some activation functions, such as logistic and relu, have been used for many decades. But with deep learning becoming a mainstream research topic, new activation functions have mushroomed, leading to confusion in both theory and practice. This paper provides an analytic yet up-to-date overview of popular activation functions and their properties, which makes it a timely resource for anyone who studies or applies neural networks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 216,769 |
0906.2603 | Hybrid Coding for Gaussian Broadcast Channels with Gaussian Sources | This paper considers a degraded Gaussian broadcast channel over which Gaussian sources are to be communicated. When the sources are independent, this paper shows that hybrid coding achieves the optimal distortion region, the same as that of separate source and channel coding. It also shows that uncoded transmission is not optimal for this setting. For correlated sources, the paper shows that a hybrid coding strategy has a better distortion region than separate source-channel coding below a certain signal to noise ratio threshold. Thus, hybrid coding is a good choice for Gaussian broadcast channels with correlated Gaussian sources. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,879 |
2409.02395 | Deep Brain Ultrasound Ablation Thermal Dose Modeling with in Vivo Experimental Validation | Intracorporeal needle-based therapeutic ultrasound (NBTU) is a minimally invasive option for intervening in malignant brain tumors, commonly used in thermal ablation procedures. This technique is suitable for both primary and metastatic cancers, utilizing a high-frequency alternating electric field (up to 10 MHz) to excite a piezoelectric transducer. The resulting rapid deformation of the transducer produces an acoustic wave that propagates through tissue, leading to localized high-temperature heating at the target tumor site and inducing rapid cell death. To optimize the design of NBTU transducers for thermal dose delivery during treatment, numerical modeling of the acoustic pressure field generated by the deforming piezoelectric transducer is frequently employed. The bioheat transfer process generated by the input pressure field is used to track the thermal propagation of the applicator over time. Magnetic resonance thermal imaging (MRTI) can be used to experimentally validate these models. Validation results using MRTI demonstrated the feasibility of this model, showing a consistent thermal propagation pattern. However, a thermal damage isodose map is more advantageous for evaluating therapeutic efficacy. To achieve a more accurate simulation based on the actual brain tissue environment, a new finite element method (FEM) simulation with enhanced damage evaluation capabilities was conducted. The results showed that the highest temperature and ablated volume differed between experimental and simulation results by 2.1884{\deg}C (3.71%) and 0.0631 cm$^3$ (5.74%), respectively. The lowest Pearson correlation coefficient (PCC) for peak temperature was 0.7117, and the lowest Dice coefficient for the ablated area was 0.7021, indicating a good agreement in accuracy between simulation and experiment. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 485,685 |
0805.1473 | A Fast Algorithm and Datalog Inexpressibility for Temporal Reasoning | We introduce a new tractable temporal constraint language, which strictly contains the Ord-Horn language of Buerkert and Nebel and the class of AND/OR precedence constraints. The algorithm we present for this language decides whether a given set of constraints is consistent in time that is quadratic in the input size. We also prove that (unlike Ord-Horn) this language cannot be solved by Datalog or by establishing local consistency. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 1,745 |
2103.13630 | A Survey of Quantization Methods for Efficient Neural Network Inference | As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 226,562 |
2109.12058 | Optimized Power Normalized Cepstral Coefficients towards Robust Deep Speaker Verification | After their introduction to robust speech recognition, power normalized cepstral coefficient (PNCC) features were successfully adopted to other tasks, including speaker verification. However, as a feature extractor with long-term operations on the power spectrogram, its temporal processing and amplitude scaling steps dedicated on environmental compensation may be redundant. Further, they might suppress intrinsic speaker variations that are useful for speaker verification based on deep neural networks (DNN). Therefore, in this study, we revisit and optimize PNCCs by ablating its medium-time processor and by introducing channel energy normalization. Experimental results with a DNN-based speaker verification system indicate substantial improvement over baseline PNCCs on both in-domain and cross-domain scenarios, reflected by relatively 5.8% and 61.2% maximum lower equal error rate on VoxCeleb1 and VoxMovies, respectively. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 257,144 |
2312.02967 | AmbiGen: Generating Ambigrams from Pre-trained Diffusion Model | Ambigrams are calligraphic designs that have different meanings depending on the viewing orientation. Creating ambigrams is a challenging task even for skilled artists, as it requires maintaining the meaning under two different viewpoints at the same time. In this work, we propose to generate ambigrams by distilling a large-scale vision and language diffusion model, namely DeepFloyd IF, to optimize the letters' outline for legibility in the two viewing orientations. Empirically, we demonstrate that our approach outperforms existing ambigram generation methods. On the 500 most common words in English, our method achieves more than an 11.6% increase in word accuracy and at least a 41.9% reduction in edit distance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 413,067 |
2303.12277 | Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises: High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation | Recently, several studies consider the stochastic optimization problem but in a heavy-tailed noise regime, i.e., the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment (say being upper bounded by $\sigma^{p}$ for some $\sigma\geq0$) where $p\in(1,2]$, which not only generalizes the traditional finite variance assumption ($p=2$) but also has been observed in practice for several different tasks. Under this challenging assumption, lots of new progress has been made for either convex or nonconvex problems, however, most of which only consider smooth objectives. In contrast, people have not fully explored and well understood this problem when functions are nonsmooth. This paper aims to fill this crucial gap by providing a comprehensive analysis of stochastic nonsmooth convex optimization with heavy-tailed noises. We revisit a simple clipping-based algorithm, which previously was only proved to converge in expectation and only under an additional strong convexity assumption. Under appropriate choices of parameters, for both convex and strongly convex functions, we not only establish the first high-probability rates but also give refined in-expectation bounds compared with existing works. Remarkably, all of our results are optimal (or nearly optimal up to logarithmic factors) with respect to the time horizon $T$ even when $T$ is unknown in advance. Additionally, we show how to make the algorithm parameter-free with respect to $\sigma$, in other words, the algorithm can still guarantee convergence without any prior knowledge of $\sigma$. Furthermore, an initial distance adaptive convergence rate is provided if $\sigma$ is assumed to be known. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 353,198 |
1104.4385 | Convex Approaches to Model Wavelet Sparsity Patterns | Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as lasso. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 10,086 |
1703.08043 | A Flexible Wideband Millimeter-Wave Channel Sounder with Local Area and NLOS to LOS Transition Measurements | This paper presents a millimeter-wave (mmWave) wideband sliding correlator channel sounder with flexibility to operate at various transmission rates. The channel sounder can transmit and receive up to 1 GHz of RF null-to-null bandwidth while measuring a 2 nanosecond multipath time resolution. The system architecture takes advantage of field-programmable gate arrays (FPGAs), high-speed digital-to-analog converters (DACs), and low phase noise Rubidium (Rb) references for synchronization. Using steerable narrowbeam antennas, the system can measure up to 185 dB of path loss. The channel sounder is used to measure the directional and omnidirectional received power as a receiver transitions from line-of-sight to non-line-of-sight conditions down an urban canyon. A 25 dB drop in omnidirectional received power was observed as the receiver transitioned from line-of-sight (LOS) conditions to deeply shadowed non-LOS (NLOS) conditions. The channel sounder was also used to study signal variation and spatial consistency for a local set of receiver locations arranged in a cluster spanning a 5 m x 10 m local area, where the omnidirectional received power in LOS and NLOS environments is found to be relatively stable with standard deviations of received power of 2.2 dB and 4.3 dB, respectively. This work shows that when implementing beamforming at the transmitter at mmWave, the omnidirectional received power over a local area has little fluctuation among receiver locations separated by a few to several meters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 70,509 |
1911.13071 | Increasing Generality in Machine Learning through Procedural Content Generation | Procedural Content Generation (PCG) refers to the practice, in videogames and other games, of generating content such as levels, quests, or characters algorithmically. Motivated by the need to make games replayable, as well as to reduce authoring burden, limit storage space requirements, and enable particular aesthetics, a large number of PCG methods have been devised by game developers. Additionally, researchers have explored adapting methods from machine learning, optimization, and constraint solving to PCG problems. Games have been widely used in AI research since the inception of the field, and in recent years have been used to develop and benchmark new machine learning algorithms. Through this practice, it has become more apparent that these algorithms are susceptible to overfitting. Often, an algorithm will not learn a general policy, but instead a policy that will only work for a particular version of a particular task with particular initial parameters. In response, researchers have begun exploring randomization of problem parameters to counteract such overfitting and to allow trained policies to more easily transfer from one environment to another, such as from a simulated robot to a robot in the real world. Here we review the large amount of existing work on PCG, which we believe has an important role to play in increasing the generality of machine learning methods. The main goal here is to present RL/AI with new tools from the PCG toolbox, and its secondary goal is to explain to game developers and researchers a way in which their work is relevant to AI research. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 155,579 |
2303.16938 | Are Neural Architecture Search Benchmarks Well Designed? A Deeper Look Into Operation Importance | Neural Architecture Search (NAS) benchmarks significantly improved the capability of developing and comparing NAS methods while at the same time drastically reduced the computational overhead by providing meta-information about thousands of trained neural networks. However, tabular benchmarks have several drawbacks that can hinder fair comparisons and provide unreliable results. These usually focus on providing a small pool of operations in heavily constrained search spaces -- usually cell-based neural networks with pre-defined outer-skeletons. In this work, we conducted an empirical analysis of the widely used NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks in terms of their generability and how different operations influence the performance of the generated architectures. We found that only a subset of the operation pool is required to generate architectures close to the upper-bound of the performance range. Also, the performance distribution is negatively skewed, having a higher density of architectures in the upper-bound range. We consistently found convolution layers to have the highest impact on the architecture's performance, and that specific combination of operations favors top-scoring architectures. These findings shed insights on the correct evaluation and comparison of NAS methods using NAS benchmarks, showing that directly searching on NAS-Bench-201, ImageNet16-120 and TransNAS-Bench-101 produces more reliable results than searching only on CIFAR-10. Furthermore, with this work we provide suggestions for future benchmark evaluations and design. The code used to conduct the evaluations is available at https://github.com/VascoLopes/NAS-Benchmark-Evaluation. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 355,045 |
2010.08353 | On the Guaranteed Almost Equivalence between Imitation Learning from Observation and Demonstration | Imitation learning from observation (LfO) is more preferable than imitation learning from demonstration (LfD) due to the nonnecessity of expert actions when reconstructing the expert policy from the expert data. However, previous studies imply that the performance of LfO is inferior to LfD by a tremendous gap, which makes it challenging to employ LfO in practice. By contrast, this paper proves that LfO is almost equivalent to LfD in the deterministic robot environment, and more generally even in the robot environment with bounded randomness. In the deterministic robot environment, from the perspective of the control theory, we show that the inverse dynamics disagreement between LfO and LfD approaches zero, meaning that LfO is almost equivalent to LfD. To further relax the deterministic constraint and better adapt to the practical environment, we consider bounded randomness in the robot environment and prove that the optimizing targets for both LfD and LfO remain almost same in the more generalized setting. Extensive experiments for multiple robot tasks are conducted to empirically demonstrate that LfO achieves comparable performance to LfD. In fact, most common robot systems in reality are the robot environment with bounded randomness (i.e., the environment this paper considered). Hence, our findings greatly extend the potential of LfO and suggest that we can safely apply LfO without sacrificing the performance compared to LfD in practice. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 201,157 |
2405.04707 | A wearable anti-gravity supplement to therapy does not improve arm function in chronic stroke: a randomized trial | Background: Gravity confounds arm movement ability in post-stroke hemiparesis. Reducing its influence allows effective practice leading to recovery. Yet, there is a scarcity of wearable devices suitable for personalized use across diverse therapeutic activities in the clinic. Objective: In this study, we investigated the safety, feasibility, and efficacy of anti-gravity therapy using the ExoNET device in post-stroke participants. Methods: Twenty chronic stroke survivors underwent six, 45-minute occupational therapy sessions while wearing the ExoNET, randomized into either the treatment (ExoNET tuned to gravity-support) or control group (ExoNET tuned to slack condition). Clinical outcomes were evaluated by a blinded-rater at baseline, post, and six-week follow-up sessions. Kinetic, kinematic, and patient experience outcomes were also assessed. Results: Mixed-effect models showed a significant improvement in Box and Blocks scores in the post-intervention session for the treatment group (effect size: 2.1, p = .04). No significant effects were found between the treatment and control groups for ARAT scores and other clinical metrics. Direct kinetic effects revealed a significant reduction in muscle activity during free exploration with an effect size of (-7.12%, p< 005). There were no significant longitudinal kinetic or kinematic trends. Subject feedback suggested a generally positive perception of the anti-gravity therapy. Conclusions: Anti-gravity therapy with the ExoNET is a safe and feasible treatment for post-stroke rehabilitation. The device provided anti-gravity forces, did not encumber range of motion, and clinical metrics of anti-gravity therapy demonstrated improvements in gross manual dexterity. Further research is required to explore potential benefits in broader clinical metrics. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 452,645 |
2308.11551 | Multi-event Video-Text Retrieval | Video-Text Retrieval (VTR) is a crucial multi-modal task in an era of massive video-text data on the Internet. A plethora of work characterized by using a two-stream Vision-Language model architecture that learns a joint representation of video-text pairs has become a prominent approach for the VTR task. However, these models operate under the assumption of bijective video-text correspondences and neglect a more practical scenario where video content usually encompasses multiple events, while texts like user queries or webpage metadata tend to be specific and correspond to single events. This establishes a gap between the previous training objective and real-world applications, leading to the potential performance degradation of earlier models during inference. In this study, we introduce the Multi-event Video-Text Retrieval (MeVTR) task, addressing scenarios in which each video contains multiple different events, as a niche scenario of the conventional Video-Text Retrieval Task. We present a simple model, Me-Retriever, which incorporates key event video representation and a new MeVTR loss for the MeVTR task. Comprehensive experiments show that this straightforward framework outperforms other models in the Video-to-Text and Text-to-Video tasks, effectively establishing a robust baseline for the MeVTR task. We believe this work serves as a strong foundation for future studies. Code is available at https://github.com/gengyuanmax/MeVTR. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 387,185 |
2502.09941 | A Lightweight and Effective Image Tampering Localization Network with Vision Mamba | Current image tampering localization methods primarily rely on Convolutional Neural Networks (CNNs) and Transformers. While CNNs suffer from limited local receptive fields, Transformers offer global context modeling at the expense of quadratic computational complexity. Recently, the state space model Mamba has emerged as a competitive alternative, enabling linear-complexity global dependency modeling. Inspired by it, we propose a lightweight and effective FORensic network based on vision MAmba (ForMa) for blind image tampering localization. Firstly, ForMa captures multi-scale global features that achieves efficient global dependency modeling through linear complexity. Then the pixel-wise localization map is generated by a lightweight decoder, which employs a parameter-free pixel shuffle layer for upsampling. Additionally, a noise-assisted decoding strategy is proposed to integrate complementary manipulation traces from tampered images, boosting decoder sensitivity to forgery cues. Experimental results on 10 standard datasets demonstrate that ForMa achieves state-of-the-art generalization ability and robustness, while maintaining the lowest computational complexity. Code is available at https://github.com/multimediaFor/ForMa. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 533,673 |
1705.05552 | IAN: The Individual Aggregation Network for Person Search | Person search in real-world scenarios is a new challenging computer vision task with many meaningful applications. The challenge of this task mainly comes from: (1) unavailable bounding boxes for pedestrians and the model needs to search for the person over the whole gallery images; (2) huge variance of visual appearance of a particular person owing to varying poses, lighting conditions, and occlusions. To address these two critical issues in modern person search applications, we propose a novel Individual Aggregation Network (IAN) that can accurately localize persons by learning to minimize intra-person feature variations. IAN is built upon the state-of-the-art object detection framework, i.e., faster R-CNN, so that high-quality region proposals for pedestrians can be produced in an online manner. In addition, to relieve the negative effect caused by varying visual appearances of the same individual, IAN introduces a novel center loss that can increase the intra-class compactness of feature representations. The engaged center loss encourages persons with the same identity to have similar feature characteristics. Extensive experimental results on two benchmarks, i.e., CUHK-SYSU and PRW, well demonstrate the superiority of the proposed model. In particular, IAN achieves 77.23% mAP and 80.45% top-1 accuracy on CUHK-SYSU, which outperform the state-of-the-art by 1.7% and 1.85%, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 73,513 |
cs/9905010 | Statistical Inference and Probabilistic Modelling for Constraint-Based NLP | We present a probabilistic model for constraint-based grammars and a method for estimating the parameters of such models from incomplete, i.e., unparsed data. Whereas methods exist to estimate the parameters of probabilistic context-free grammars from incomplete data (Baum 1970), so far for probabilistic grammars involving context-dependencies only parameter estimation techniques from complete, i.e., fully parsed data have been presented (Abney 1997). However, complete-data estimation requires labor-intensive, error-prone, and grammar-specific hand-annotating of large language corpora. We present a log-linear probability model for constraint logic programming, and a general algorithm to estimate the parameters of such models from incomplete data by extending the estimation algorithm of Della-Pietra, Della-Pietra, and Lafferty (1997) to incomplete data settings. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 540,507 |
2410.13850 | Influence Functions for Scalable Data Attribution in Diffusion Models | Diffusion models have led to significant advancements in generative modelling. Yet their widespread adoption poses challenges regarding data attribution and interpretability. In this paper, we aim to help address such challenges in diffusion models by developing an influence functions framework. Influence function-based data attribution methods approximate how a model's output would have changed if some training data were removed. In supervised learning, this is usually used for predicting how the loss on a particular example would change. For diffusion models, we focus on predicting the change in the probability of generating a particular example via several proxy measurements. We show how to formulate influence functions for such quantities and how previously proposed methods can be interpreted as particular design choices in our framework. To ensure scalability of the Hessian computations in influence functions, we systematically develop K-FAC approximations based on generalised Gauss-Newton matrices specifically tailored to diffusion models. We recast previously proposed methods as specific design choices in our framework and show that our recommended method outperforms previous data attribution approaches on common evaluations, such as the Linear Data-modelling Score (LDS) or retraining without top influences, without the need for method-specific hyperparameter tuning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 499,727 |
2103.12725 | SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression | Logistic regression remains one of the most widely used tools in applied statistics, machine learning and data science. However, in moderately high-dimensional problems, where the number of features $d$ is a non-negligible fraction of the sample size $n$, the logistic regression maximum likelihood estimator (MLE), and statistical procedures based on the large-sample approximation of its distribution, behave poorly. Recently, Sur and Cand\`es (2019) showed that these issues can be corrected by applying a new approximation of the MLE's sampling distribution in this high-dimensional regime. Unfortunately, these corrections are difficult to implement in practice, because they require an estimate of the \emph{signal strength}, which is a function of the underlying parameters $\beta$ of the logistic regression. To address this issue, we propose SLOE, a fast and straightforward approach to estimate the signal strength in logistic regression. The key insight of SLOE is that the Sur and Cand\`es (2019) correction can be reparameterized in terms of the \emph{corrupted signal strength}, which is only a function of the estimated parameters $\widehat \beta$. We propose an estimator for this quantity, prove that it is consistent in the relevant high-dimensional regime, and show that dimensionality correction using SLOE is accurate in finite samples. Compared to the existing ProbeFrontier heuristic, SLOE is conceptually simpler and orders of magnitude faster, making it suitable for routine use. We demonstrate the importance of routine dimensionality correction in the Heart Disease dataset from the UCI repository, and a genomics application using data from the UK Biobank. We provide an open source package for this method, available at \url{https://github.com/google-research/sloe-logistic}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 226,274 |
1608.08823 | Approximation of Continuous-Time Infinite-Horizon Optimal Control Problems Arising in Model Predictive Control - Supplementary Notes | These notes present preliminary results regarding two different approximations of linear infinite-horizon optimal control problems arising in model predictive control. Input and state trajectories are parametrized with basis functions and a finite dimensional representation of the dynamics is obtained via a Galerkin approach. It is shown that the two approximations provide lower, respectively upper bounds on the optimal cost of the underlying infinite dimensional optimal control problem. These bounds get tighter as the number of basis functions is increased. In addition, conditions guaranteeing convergence to the cost of the underlying problem are provided. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 60,399 |
1909.13655 | A hybrid material-point spheropolygon-element method for solid and granular material interaction | Capturing the interaction between objects that have an extreme difference in Young's modulus or geometrical scale is a highly challenging topic for numerical simulation. One of the fundamental questions is how to build an accurate multi-scale method with optimal computational efficiency. In this work, we develop a material-point-spheropolygon discrete element method (MPM-SDEM). Our approach fully couples the material point method (MPM) and the spheropolygon discrete element method (SDEM) through the exchange of contact force information. It combines the advantage of MPM for accurately simulating elastoplastic continuum materials and the high efficiency of DEM for calculating the Newtonian dynamics of discrete near-rigid objects. The MPM-SDEM framework is demonstrated with an explicit time integration scheme. Its accuracy and efficiency are further analysed against the analytical and experimental data. Results demonstrate this method could accurately capture the contact force and momentum exchange between materials while maintaining favourable computational stability and efficiency. Our framework exhibits great potential in the analysis of multi-scale, multi-physics phenomena. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 147,483 |
1802.06701 | An Adaptive Version of Brandes' Algorithm for Betweenness Centrality | Betweenness centrality---measuring how many shortest paths pass through a vertex---is one of the most important network analysis concepts for assessing the relative importance of a vertex. The well-known algorithm of Brandes [J. Math. Sociol.~'01] computes, on an $n$-vertex and $m$-edge graph, the betweenness centrality of all vertices in $O(nm)$ worst-case time. In later work, significant empirical speedups were achieved by preprocessing degree-one vertices and by graph partitioning based on cut vertices. We contribute an algorithmic treatment of degree-two vertices, which turns out to be much richer in mathematical structure than the case of degree-one vertices. Based on these three algorithmic ingredients, we provide a strengthened worst-case running time analysis for betweenness centrality algorithms. More specifically, we prove an adaptive running time bound $O(kn)$, where $k < m$ is the size of a minimum feedback edge set of the input graph. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 90,728 |
2211.07218 | SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing | Differential privacy (DP) provides a formal privacy guarantee that prevents adversaries with access to machine learning models from extracting information about individual training points. Differentially private stochastic gradient descent (DPSGD) is the most popular training method with differential privacy in image recognition. However, existing DPSGD schemes lead to significant performance degradation, which prevents the application of differential privacy. In this paper, we propose a simulated annealing-based differentially private stochastic gradient descent scheme (SA-DPSGD) which accepts a candidate update with a probability that depends both on the update quality and on the number of iterations. Through this random update screening, we make the differentially private gradient descent proceed in the right direction in each iteration, and result in a more accurate model finally. In our experiments, under the same hyperparameters, our scheme achieves test accuracies 98.35%, 87.41% and 60.92% on datasets MNIST, FashionMNIST and CIFAR10, respectively, compared to the state-of-the-art result of 98.12%, 86.33% and 59.34%. Under the freely adjusted hyperparameters, our scheme achieves even higher accuracies, 98.89%, 88.50% and 64.17%. We believe that our method has a great contribution for closing the accuracy gap between private and non-private image classification. | false | false | false | false | true | false | false | false | false | false | false | true | true | false | false | false | false | false | 330,168 |
1908.08873 | Predicting knee osteoarthritis severity: comparative modeling based on patient's data and plain X-ray images | Knee osteoarthritis (KOA) is a disease that impairs knee function and causes pain. A radiologist reviews knee X-ray images and grades the severity level of the impairments according to the Kellgren and Lawrence grading scheme; a five-point ordinal scale (0--4). In this study, we used Elastic Net (EN) and Random Forests (RF) to build predictive models using patient assessment data (i.e. signs and symptoms of both knees and medication use) and a convolution neural network (CNN) trained using X-ray images only. Linear mixed effect models (LMM) were used to model the within subject correlation between the two knees. The root mean squared error for the CNN, EN, and RF models was 0.77, 0.97, and 0.94 respectively. The LMM shows similar overall prediction accuracy as the EN regression but correctly accounted for the hierarchical structure of the data resulting in more reliable inference. Useful explanatory variables were identified that could be used for patient monitoring before X-ray imaging. Our analyses suggest that the models trained for predicting the KOA severity levels achieve comparable results when modeling X-ray images and patient data. The subjectivity in the KL grade is still a primary concern. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 142,675 |
2209.05235 | Style Variable and Irrelevant Learning for Generalizable Person Re-identification | Recently, due to the poor performance of supervised person re-identification (ReID) to an unseen domain, Domain Generalization (DG) person ReID has attracted a lot of attention which aims to learn a domain-insensitive model and can resist the influence of domain bias. In this paper, we first verify through an experiment that style factors are a vital part of domain bias. Based on this conclusion, we propose a Style Variable and Irrelevant Learning (SVIL) method to eliminate the effect of style factors on the model. Specifically, we design a Style Jitter Module (SJM) in SVIL. The SJM module can enrich the style diversity of the specific source domain and reduce the style differences of various source domains. This leads to the model focusing on identity-relevant information and being insensitive to the style changes. Besides, we organically combine the SJM module with a meta-learning algorithm, maximizing the benefits and further improving the generalization ability of the model. Note that our SJM module is plug-and-play and inference cost-free. Extensive experiments confirm the effectiveness of our SVIL and our method outperforms the state-of-the-art methods on DG-ReID benchmarks by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 317,032 |
2411.11315 | A Review on Machine Unlearning | Recently, an increasing number of laws have governed the useability of users' privacy. For example, Article 17 of the General Data Protection Regulation (GDPR), the right to be forgotten, requires machine learning applications to remove a portion of data from a dataset and retrain it if the user makes such a request. Furthermore, from the security perspective, training data for machine learning models, i.e., data that may contain user privacy, should be effectively protected, including appropriate erasure. Therefore, researchers propose various privacy-preserving methods to deal with such issues as machine unlearning. This paper provides an in-depth review of the security and privacy concerns in machine learning models. First, we present how machine learning can use users' private data in daily life and the role that the GDPR plays in this problem. Then, we introduce the concept of machine unlearning by describing the security threats in machine learning models and how to protect users' privacy from being violated using machine learning platforms. As the core content of the paper, we introduce and analyze current machine unlearning approaches and several representative research results and discuss them in the context of the data lineage. Furthermore, we also discuss the future research challenges in this field. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 509,009 |
2211.14177 | Overcoming Catastrophic Forgetting by XAI | Explaining the behaviors of deep neural networks, usually considered as black boxes, is critical especially when they are now being adopted over diverse aspects of human life. Taking the advantages of interpretable machine learning (interpretable ML), this work proposes a novel tool called Catastrophic Forgetting Dissector (or CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations of our tool. Experiments on ResNet articulate how catastrophic forgetting happens, particularly showing which components of this famous network are forgetting. Our new continual learning algorithm defeats various recent techniques by a significant margin, proving the capability of the investigation. Critical freezing not only attacks catastrophic forgetting but also exposes explainability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 332,744 |
2302.10343 | Non-rigid Medical Image Registration using Physics-informed Neural Networks | Biomechanical modelling of soft tissue provides a non-data-driven method for constraining medical image registration, such that the estimated spatial transformation is considered biophysically plausible. This has not only been adopted in real-world clinical applications, such as the MR-to-ultrasound registration for prostate intervention of interest in this work, but also provides an explainable means of understanding the organ motion and spatial correspondence establishment. This work instantiates the recently-proposed physics-informed neural networks (PINNs) to a 3D linear elastic model for modelling prostate motion commonly encountered during transrectal ultrasound guided procedures. To overcome a widely-recognised challenge in generalising PINNs to different subjects, we propose to use PointNet as the nodal-permutation-invariant feature extractor, together with a registration algorithm that aligns point sets and simultaneously takes into account the PINN-imposed biomechanics. The proposed method has been both developed and validated in both patient-specific and multi-patient manner. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 346,768 |
2002.10363 | Joint Learning of Assignment and Representation for Biometric Group Membership | This paper proposes a framework for group membership protocols preventing the curious but honest server from reconstructing the enrolled biometric signatures and inferring the identity of querying clients. This framework learns the embedding parameters, group representations and assignments simultaneously. Experiments show the trade-off between security/privacy and verification/identification performances. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 165,379 |
1511.03488 | Constraint-Tightening and Stability in Stochastic Model Predictive Control | Constraint tightening to non-conservatively guarantee recursive feasibility and stability in Stochastic Model Predictive Control is addressed. Stability and feasibility requirements are considered separately, highlighting the difference between existence of a solution and feasibility of a suitable, a priori known candidate solution. Subsequently, a Stochastic Model Predictive Control algorithm which unifies previous results is derived, leaving the designer the option to balance an increased feasible region against guaranteed bounds on the asymptotic average performance and convergence time. Besides typical performance bounds, under mild assumptions, we prove asymptotic stability in probability of the minimal robust positively invariant set obtained by the unconstrained LQ-optimal controller. A numerical example, demonstrating the efficacy of the proposed approach in comparison with classical, recursively feasible Stochastic MPC and Robust MPC, is provided. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 48,762 |
2205.15135 | Group Probability-Weighted Tree Sums for Interpretable Modeling of Heterogeneous Data | Machine learning in high-stakes domains, such as healthcare, faces two critical challenges: (1) generalizing to diverse data distributions given limited training data while (2) maintaining interpretability. To address these challenges, we propose an instance-weighted tree-sum method that effectively pools data across diverse groups to output a concise, rule-based model. Given distinct groups of instances in a dataset (e.g., medical patients grouped by age or treatment site), our method first estimates group membership probabilities for each instance. Then, it uses these estimates as instance weights in FIGS (Tan et al. 2022), to grow a set of decision trees whose values sum to the final prediction. We call this new method Group Probability-Weighted Tree Sums (G-FIGS). G-FIGS achieves state-of-the-art prediction performance on important clinical datasets; e.g., holding the level of sensitivity fixed at 92%, G-FIGS increases specificity for identifying cervical spine injury by up to 10% over CART and up to 3% over FIGS alone, with larger gains at higher sensitivity levels. By keeping the total number of rules below 16 in FIGS, the final models remain interpretable, and we find that their rules match medical domain expertise. All code, data, and models are released on Github. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 299,615 |
1906.10050 | On DICE-free Smart Cities, Particulate Matter, and Feedback-Enabled Access Control | The link between transport related emissions and human health is a major issue for city municipalities worldwide. PM emissions from exhaust and non-exhaust sources are one of the main worrying contributors to air-pollution. In this paper, we challenge the notion that a ban on internal combustion engine vehicles will result in clean and safe air in our cities, since emissions from tyres and other non-exhaust sources are expected to increase in the near future. To this end, we present data from the city of Dublin that document that the current amount of tyre-related PM emissions in the city might already be above or close to the levels deemed safe by the World Health Organization. As a solution to this problem, we present a feedback-enabled distributed access control mechanism and ride-sharing scheme to limit the number of vehicles in a city and therefore maintain the amount of transport-related PM to safe levels. | false | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | 136,344
2005.12494 | Towards Fine-grained Human Pose Transfer with Detail Replenishing Network | Human pose transfer (HPT) is an emerging research topic with huge potential in fashion design, media production, online advertising and virtual reality. For these applications, the visual realism of fine-grained appearance details is crucial for production quality and user engagement. However, existing HPT methods often suffer from three fundamental issues: detail deficiency, content ambiguity and style inconsistency, which severely degrade the visual quality and realism of generated images. Aiming towards real-world applications, we develop a more challenging yet practical HPT setting, termed as Fine-grained Human Pose Transfer (FHPT), with a higher focus on semantic fidelity and detail replenishment. Concretely, we analyze the potential design flaws of existing methods via an illustrative example, and establish the core FHPT methodology by combining the ideas of content synthesis and feature transfer in a mutually-guided fashion. Thereafter, we substantiate the proposed methodology with a Detail Replenishing Network (DRN) and a corresponding coarse-to-fine model training scheme. Moreover, we build up a complete suite of fine-grained evaluation protocols to address the challenges of FHPT in a comprehensive manner, including semantic analysis, structural detection and perceptual quality assessment. Extensive experiments on the DeepFashion benchmark dataset have verified the power of the proposed benchmark against state-of-the-art works, with 12%-14% gain on top-10 retrieval recall, 5% higher joint localization accuracy, and near 40% gain on face identity preservation. Moreover, the evaluation results offer further insights to the subject matter, which could inspire many promising future works along this direction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 178,742
2409.19750 | AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy | Continual pretraining of large language models on domain-specific data has been proposed to enhance performance on downstream tasks. In astronomy, the previous absence of astronomy-focused benchmarks has hindered objective evaluation of these specialized LLM models. Leveraging a recent initiative to curate high-quality astronomical MCQs, this study aims to quantitatively assess specialized LLMs in astronomy. We find that the previously released AstroLLaMA series, based on LLaMA-2-7B, underperforms compared to the base model. We demonstrate that this performance degradation can be partially mitigated by utilizing high-quality data for continual pretraining, such as summarized text from arXiv. Despite the observed catastrophic forgetting in smaller models, our results indicate that continual pretraining on the 70B model can yield significant improvements. However, the current supervised fine-tuning dataset still constrains the performance of instruct models. In conjunction with this study, we introduce a new set of models, AstroLLaMA-3-8B and AstroLLaMA-2-70B, building upon the previous AstroLLaMA series. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 492,834
2405.00566 | NumLLM: Numeric-Sensitive Large Language Model for Chinese Finance | Recently, many works have proposed various financial large language models (FinLLMs) by pre-training from scratch or fine-tuning open-sourced LLMs on financial corpora. However, existing FinLLMs exhibit unsatisfactory performance in understanding financial text when numeric variables are involved in questions. In this paper, we propose a novel LLM, called numeric-sensitive large language model (NumLLM), for Chinese finance. We first construct a financial corpus from financial textbooks which is essential for improving the numeric capability of LLMs during fine-tuning. After that, we train two individual low-rank adaptation (LoRA) modules by fine-tuning on our constructed financial corpus. One module is for adapting general-purpose LLMs to the financial domain, and the other module is for enhancing the ability of NumLLM to understand financial text with numeric variables. Lastly, we merge the two LoRA modules into the foundation model to obtain NumLLM for inference. Experiments on a financial question-answering benchmark show that NumLLM can boost the performance of the foundation model and can achieve the best overall performance compared to all baselines, on both numeric and non-numeric questions. | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 450,968
1906.01507 | A numerical measure of the instability of Mapper-type algorithms | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We also provide experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,727
1904.07619 | Compressed Indexes for Fast Search of Semantic Data | The sheer increase in volume of RDF data demands efficient solutions for the triple indexing problem, that is devising a compressed data structure to compactly represent RDF triples by guaranteeing, at the same time, fast pattern matching operations. This problem lies at the heart of delivering good practical performance for the resolution of complex SPARQL queries on large RDF datasets. In this work, we propose a trie-based index layout to solve the problem and introduce two novel techniques to reduce its space of representation for improved effectiveness. The extensive experimental analysis conducted over a wide range of publicly available real-world datasets, reveals that our best space/time trade-off configuration substantially outperforms existing solutions at the state-of-the-art, by taking 30-60% less space and speeding up query execution by a factor of 2-81x. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 127,845 |
2408.08990 | Adaptive Uncertainty Quantification for Generative AI | This work is concerned with conformal prediction in contemporary applications (including generative AI) where a black-box model has been trained on data that are not accessible to the user. Mirroring split-conformal inference, we design a wrapper around a black-box algorithm which calibrates conformity scores. This calibration is local and proceeds in two stages by first adaptively partitioning the predictor space into groups and then calibrating sectionally group by group. Adaptive partitioning (self-grouping) is achieved by fitting a robust regression tree to the conformity scores on the calibration set. This new tree variant is designed in such a way that adding a single new observation does not change the tree fit with overwhelmingly large probability. This add-one-in robustness property allows us to conclude a finite sample group-conditional coverage guarantee, a refinement of the marginal guarantee. In addition, unlike traditional split-conformal inference, adaptive splitting and within-group calibration yields adaptive bands which can stretch and shrink locally. We demonstrate benefits of local tightening on several simulated as well as real examples using non-parametric regression. Finally, we consider two contemporary classification applications for obtaining uncertainty quantification around GPT-4o predictions. We conformalize skin disease diagnoses based on self-reported symptoms as well as predicted states of U.S. legislators based on summaries of their ideology. We demonstrate substantial local tightening of the uncertainty sets while attaining similar marginal coverage. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 481,239 |
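The split-conformal calibration step summarized in the row above can be illustrated with a minimal Python sketch. This is only an illustrative outline, not the paper's method: it omits the adaptive tree-based grouping, and the regressor, score choice, and miscoverage level alpha are placeholder assumptions.

import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    # Conformity scores on a held-out calibration set (absolute residuals).
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = model.predict(X_test)
    # Marginal (1 - alpha) prediction band around the point predictions.
    return preds - q, preds + q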
2305.10072 | Infinite-dimensional observers for high order boundary-controlled port-Hamiltonian systems | This letter investigates the design of a class of infinite-dimensional observers for one dimensional (1D) boundary controlled port-Hamiltonian systems (BC-PHS) defined by differential operators of order $N \geq 1$. The convergence of the proposed observer depends on the number and location of available boundary measurements. Asymptotic convergence is assured for $N\geq 1$, and provided that enough boundary measurements are available, exponential convergence can be assured for the cases $N=1$ and $N=2$. Furthermore, in the case of partitioned BC-PHS with $N=2$, such as the Euler-Bernoulli beam, it is shown that exponential convergence can be assured considering fewer available measurements. The Euler-Bernoulli beam model is used to illustrate the design of the proposed observers and to perform numerical simulations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 364,899
2404.19288 | Training-free Graph Neural Networks and the Power of Labels as Features | We propose training-free graph neural networks (TFGNNs), which can be used without training and can also be improved with optional training, for transductive node classification. We first advocate labels as features (LaF), which is an admissible but not explored technique. We show that LaF provably enhances the expressive power of graph neural networks. We design TFGNNs based on this analysis. In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with much fewer training iterations than traditional GNNs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 450,583 |
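As a rough illustration of the "labels as features" (LaF) idea described in the row above, the sketch below appends one-hot training labels to a node feature matrix; the function and variable names are hypothetical and the TFGNN architecture itself is not reproduced.

import numpy as np

def labels_as_features(X, y, train_mask, num_classes):
    # Training nodes expose their one-hot label as extra input features;
    # validation/test nodes get an all-zero block so no label information leaks.
    label_block = np.zeros((X.shape[0], num_classes))
    label_block[train_mask] = np.eye(num_classes)[y[train_mask]]
    return np.concatenate([X, label_block], axis=1)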
2306.17207 | A Fast Fourier Convolutional Deep Neural Network For Accurate and Explainable Discrimination Of Wheat Yellow Rust And Nitrogen Deficiency From Sentinel-2 Time-Series Data | Accurate and timely detection of plant stress is essential for yield protection, allowing better-targeted intervention strategies. Recent advances in remote sensing and deep learning have shown great potential for rapid non-invasive detection of plant stress in a fully automated and reproducible manner. However, existing models still face several challenges: 1) computational inefficiency and misclassification between different stresses with similar symptoms; and 2) the poor interpretability of the host-stress interaction. In this work, we propose a novel fast Fourier convolutional deep neural network (FFDNN) for accurate and explainable detection of two plant stresses with similar symptoms (i.e. Wheat Yellow Rust and Nitrogen Deficiency). Specifically, unlike existing CNN models, the main components of the proposed model include: 1) a fast Fourier convolutional block, with a new fast Fourier transform kernel as the basic perception unit, substituting the traditional convolutional kernel to capture both local and global responses to plant stress at various time scales and to improve computing efficiency with reduced learning parameters in the Fourier domain; and 2) a Capsule Feature Encoder to encapsulate the extracted features into a series of vector features representing the part-to-whole relationship within the hierarchical structure of the host-stress interactions of the specific stress. In addition, in order to alleviate over-fitting, a photochemical vegetation indices-based filter is placed as a pre-processing operator to remove non-photochemical noise from the input Sentinel-2 time series. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 376,640
1710.10006 | Deep Learning for Accelerated Ultrasound Imaging | In portable, 3-D, or ultra-fast ultrasound (US) imaging systems, there is an increasing demand to reconstruct high quality images from a limited amount of data. However, the existing solutions require either hardware changes or computationally expensive algorithms. To overcome these limitations, here we propose a novel deep learning approach that interpolates the missing RF data by utilizing the sparsity of the RF data in the Fourier domain. Extensive experimental results from sub-sampled RF data from a real US system confirmed that the proposed method can effectively reduce the data rate without sacrificing the image quality. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 83,299
2203.07825 | SPA-VAE: Similar-Parts-Assignment for Unsupervised 3D Point Cloud Generation | This paper addresses the problem of unsupervised parts-aware point cloud generation with learned parts-based self-similarity. Our SPA-VAE infers a set of latent canonical candidate shapes for any given object, along with a set of rigid body transformations for each such candidate shape to one or more locations within the assembled object. In this way, noisy samples on the surface of, say, each leg of a table, are effectively combined to estimate a single leg prototype. When parts-based self-similarity exists in the raw data, sharing data among parts in this way confers numerous advantages: modeling accuracy, appropriately self-similar generative outputs, precise in-filling of occlusions, and model parsimony. SPA-VAE is trained end-to-end using a variational Bayesian approach which uses the Gumbel-softmax trick for the shared part assignments, along with various novel losses to provide appropriate inductive biases. Quantitative and qualitative analyses on ShapeNet demonstrate the advantage of SPA-VAE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 285,574
1709.06518 | Identifying Retweetable Tweets with a Personalized Global Classifier | In this paper we present a method to identify tweets that a user may find interesting enough to retweet. The method is based on a global, but personalized classifier, which is trained on data from several users, represented in terms of user-specific features. Thus, the method is trained on a sufficient volume of data, while also being able to make personalized decisions, i.e., the same post received by two different users may lead to different classification decisions. Experimenting with a collection of approx. 130K tweets received by 122 journalists, we train a logistic regression classifier, using a wide variety of features: the content of each tweet, its novelty, its text similarity to tweets previously posted or retweeted by the recipient or sender of the tweet, the network influence of the author and sender, and their past interactions. Our system obtains F1 approx. 0.9 using only 10 features and 5K training instances. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 81,118
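A minimal sketch of the kind of global-but-personalized classifier described in the row above, assuming the user-specific features (novelty, similarity to past posts and retweets, author influence, past interactions) have already been computed into a numeric matrix; the names below are illustrative, not the authors' code.

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_retweet_classifier(X_train, y_train):
    # One global model trained on (tweet, recipient) pairs pooled across users;
    # personalization comes from the user-specific feature values, not separate models.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return clf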
1804.01256 | NegPSpan: efficient extraction of negative sequential patterns with embedding constraints | Mining frequent sequential patterns consists in extracting recurrent behaviors, modeled as patterns, in a big sequence dataset. Such patterns inform about which events are frequently observed in sequences, i.e. what really happens. Sometimes, knowing that some specific event does not happen is more informative than extracting a lot of observed events. Negative sequential patterns (NSP) formulate recurrent behaviors by patterns containing both observed events and absent events. Few approaches have been proposed to mine such NSPs. In addition, the syntax and semantics of NSPs differ in the different methods which makes it difficult to compare them. This article provides a unified framework for the formulation of the syntax and the semantics of NSPs. Then, we introduce a new algorithm, NegPSpan, that extracts NSPs using a PrefixSpan depth-first scheme and enabling maxgap constraints that other approaches do not take into account. The formal framework allows for highlighting the differences between the proposed approach and the methods from the literature, especially the state-of-the-art approach eNSP. Intensive experiments on synthetic and real datasets show that NegPSpan can extract meaningful NSPs and that it can process bigger datasets than eNSP thanks to significantly lower memory requirements and better computation times. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | true | 94,200
2401.16348 | Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis | Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used; however, their validity has been questioned for neural topic models (NTMs), and they can overlook a model's benefits in real-world applications. To this end, we conduct the first evaluation of neural, supervised and classical topic models in an interactive, task-based setting. We combine topic models with a classifier and test their ability to help humans conduct content analysis and document annotation. From simulated, real user and expert pilot studies, the Contextual Neural Topic Model does the best on cluster evaluation metrics and human evaluations; however, LDA is competitive with two other NTMs under our simulated experiment and user study results, contrary to what coherence scores suggest. We show that current automated metrics do not provide a complete picture of topic modeling capabilities, but the right choice of NTMs can be better than classical models on practical tasks. | true | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 424,787
2404.02845 | Cross-Modal Conditioned Reconstruction for Language-guided Medical Image Segmentation | Recent developments underscore the potential of textual information in enhancing learning models for a deeper understanding of medical visual semantics. However, language-guided medical image segmentation still faces a challenging issue. Previous works employ implicit and ambiguous architectures to embed textual information. This leads to segmentation results that are inconsistent with the semantics represented by the language, sometimes even diverging significantly. To this end, we propose a novel cross-modal conditioned Reconstruction for Language-guided Medical Image Segmentation (RecLMIS) to explicitly capture cross-modal interactions, which assumes that well-aligned medical visual features and medical notes can effectively reconstruct each other. We introduce conditioned interaction to adaptively predict patches and words of interest. Subsequently, they are utilized as conditioning factors for mutual reconstruction to align with regions described in the medical notes. Extensive experiments demonstrate the superiority of our RecLMIS, surpassing LViT by 3.74% mIoU on the publicly available MosMedData+ dataset and achieving an average increase of 1.89% mIoU for cross-domain tests on our QATA-CoV19 dataset. Simultaneously, we achieve a relative reduction of 20.2% in parameter count and a 55.5% decrease in computational load. The code will be available at https://github.com/ShashankHuang/RecLMIS. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 444,022
1007.4767 | Formalization of Psychological Knowledge in Answer Set Programming and its Application | In this paper we explore the use of Answer Set Programming (ASP) to formalize, and reason about, psychological knowledge. In the field of psychology, a considerable amount of knowledge is still expressed using only natural language. This lack of a formalization complicates accurate studies, comparisons, and verification of theories. We believe that ASP, a knowledge representation formalism allowing for concise and simple representation of defaults, uncertainty, and evolving domains, can be used successfully for the formalization of psychological knowledge. To demonstrate the viability of ASP for this task, in this paper we develop an ASP-based formalization of the mechanics of Short-Term Memory. We also show that our approach can have rather immediate practical uses by demonstrating an application of our formalization to the task of predicting a user's interaction with a graphical interface. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 7,131
2108.02667 | Adaptive Normalized Representation Learning for Generalizable Face Anti-Spoofing | With various face presentation attacks arising under unseen scenarios, face anti-spoofing (FAS) based on domain generalization (DG) has drawn growing attention due to its robustness. Most existing methods utilize DG frameworks to align the features to seek a compact and generalized feature space. However, little attention has been paid to the feature extraction process for the FAS task, especially the influence of normalization, which also has a great impact on the generalization of the learned representation. To address this issue, we propose a novel perspective of face anti-spoofing that focuses on the normalization selection in the feature extraction process. Concretely, an Adaptive Normalized Representation Learning (ANRL) framework is devised, which adaptively selects feature normalization methods according to the inputs, aiming to learn domain-agnostic and discriminative representation. Moreover, to facilitate the representation learning, Dual Calibration Constraints are designed, including Inter-Domain Compatible loss and Inter-Class Separable loss, which provide a better optimization direction for generalizable representation. Extensive experiments and visualizations are presented to demonstrate the effectiveness of our method against the SOTA competitors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,408
1912.12636 | Training of Quantized Deep Neural Networks using a Magnetic Tunnel Junction-Based Synapse | Quantized neural networks (QNNs) are being actively researched as a solution for the computational complexity and memory intensity of deep neural networks. This has sparked efforts to develop algorithms that support both inference and training with quantized weight and activation values, without sacrificing accuracy. A recent example is the GXNOR framework for stochastic training of ternary (TNN) and binary (BNN) neural networks. In this paper, we show how magnetic tunnel junction (MTJ) devices can be used to support QNN training. We introduce a novel hardware synapse circuit that uses the MTJ stochastic behavior to support the quantized update. The proposed circuit enables processing near memory (PNM) of QNN training, which subsequently reduces data movement. We simulated MTJ-based stochastic training of a TNN over the MNIST, SVHN, and CIFAR10 datasets and achieved an accuracy of 98.61%, 93.99% and 82.71%, respectively (less than 1% degradation compared to the GXNOR algorithm). We evaluated the synapse array performance potential and showed that the proposed synapse circuit can train ternary networks in situ, with 18.3TOPs/W for feedforward and 3TOPs/W for weight update. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 158,895
2410.12861 | Scaled and Inter-token Relation Enhanced Transformer for Sample-restricted Residential NILM | Transformers have demonstrated exceptional performance across various domains due to their self-attention mechanism, which captures complex relationships in data. However, training on smaller datasets poses challenges, as standard attention mechanisms can over-smooth attention scores and overly prioritize intra-token relationships, reducing the capture of meaningful inter-token dependencies critical for tasks like Non-Intrusive Load Monitoring (NILM). To address this, we propose a novel transformer architecture with two key innovations: inter-token relation enhancement and dynamic temperature tuning. The inter-token relation enhancement mechanism removes diagonal entries in the similarity matrix to improve attention focus on inter-token relations. The dynamic temperature tuning mechanism, a learnable parameter, adapts attention sharpness during training, preventing over-smoothing and enhancing sensitivity to token relationships. We validate our method on the REDD dataset and show that it outperforms the original transformer and state-of-the-art models by 10-15% in F1 score across various appliance types, demonstrating its efficacy for training on smaller datasets. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 499,248
1411.3787 | Asymmetric Minwise Hashing | Minwise hashing (Minhash) is a widely popular indexing scheme in practice. Minhash is designed for estimating set resemblance and is known to be suboptimal in many applications where the desired measure is set overlap (i.e., inner product between binary vectors) or set containment. Minhash has inherent bias towards smaller sets, which adversely affects its performance in applications where such a penalization is not desirable. In this paper, we propose asymmetric minwise hashing (MH-ALSH), to provide a solution to this problem. The new scheme utilizes asymmetric transformations to cancel the bias of traditional minhash towards smaller sets, making the final "collision probability" monotonic in the inner product. Our theoretical comparisons show that for the task of retrieving with binary inner products asymmetric minhash is provably better than traditional minhash and other recently proposed hashing algorithms for general inner products. Thus, we obtain an algorithmic improvement over existing approaches in the literature. Experimental evaluations on four publicly available high-dimensional datasets validate our claims and the proposed scheme outperforms, often significantly, other hashing algorithms on the task of near neighbor retrieval with set containment. Our proposal is simple and easy to implement in practice. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | true | true | 37,535 |
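For context, a plain MinHash signature can be computed as in the sketch below. This shows only the classical, symmetric scheme that the abstract builds on; the asymmetric transformation (MH-ALSH) that cancels the small-set bias is not reproduced, and the function name and parameters are assumptions.

import random

def minhash_signature(tokens, num_perm=64, seed=0):
    # tokens is assumed to be a non-empty set of hashable items.
    rng = random.Random(seed)
    p = (1 << 61) - 1  # large Mersenne prime for the universal hash a*x + b mod p
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_perm)]
    # The signature keeps, for each hash function, the minimum hashed value over the set.
    return [min((a * hash(t) + b) % p for t in tokens) for a, b in params]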
1307.4038 | An alternative Gospel of structure: order, composition, processes | We survey some basic mathematical structures, which arguably are more primitive than the structures taught at school. These structures are orders, with or without composition, and (symmetric) monoidal categories. We list several `real life' incarnations of each of these. This paper also serves as an introduction to these structures and their current and potentially future uses in linguistics, physics and knowledge representation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 25,852 |
1807.11766 | Remote sensing image regression for heterogeneous change detection | Change detection in heterogeneous multitemporal satellite images is an emerging topic in remote sensing. In this paper we propose a framework, based on image regression, to perform change detection in heterogeneous multitemporal satellite images. Our method learns a transformation to map the first image to the domain of the other image, and vice versa. Four regression methods are selected to carry out the transformation: Gaussian processes, support vector machines, random forests, and a recently proposed kernel regression method called homogeneous pixel transformation. To evaluate not only potentials and limitations of our framework, but also the pros and cons of each regression method, we perform experiments on two data sets. The results indicate that random forests achieve good performance, are fast and robust to hyperparameters, whereas the homogeneous pixel transformation method can achieve better accuracy at the cost of a higher complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,249
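One direction of the regression framework summarized above could be outlined as follows, assuming two co-registered images given as H x W x B arrays and an index set of presumed-unchanged training pixels. This is a simplified sketch using only the random-forest option mentioned in the abstract; names and defaults are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def regression_change_map(img_t1, img_t2, train_idx):
    h, w, _ = img_t1.shape
    X = img_t1.reshape(h * w, -1)
    Y = img_t2.reshape(h * w, -1)
    # Learn the mapping from image 1 bands to image 2 bands on unchanged pixels only.
    rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    rf.fit(X[train_idx], Y[train_idx])
    # Large prediction residuals indicate likely change.
    pred = rf.predict(X).reshape(h * w, -1)
    residual = np.linalg.norm(pred - Y, axis=1)
    return residual.reshape(h, w)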
2212.10678 | Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias | Generated texts from large language models (LLMs) have been shown to exhibit a variety of harmful, human-like biases against various demographics. These findings motivate research efforts aiming to understand and measure such effects. This paper introduces a causal formulation for bias measurement in generative language models. Based on this theoretical foundation, we outline a list of desiderata for designing robust bias benchmarks. We then propose a benchmark called OccuGender, with a bias-measuring procedure to investigate occupational gender bias. We test several state-of-the-art open-source LLMs on OccuGender, including Llama, Mistral, and their instruction-tuned versions. The results show that these models exhibit substantial occupational gender bias. Lastly, we discuss prompting strategies for bias mitigation and an extension of our causal formulation to illustrate the generalizability of our framework. Our code and data are available at https://github.com/chenyuen0103/gender-bias. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 337,553
2202.08712 | Mining On Alzheimer's Diseases Related Knowledge Graph to Identity Potential AD-related Semantic Triples for Drug Repurposing | To date, there are no effective treatments for most neurodegenerative diseases. Knowledge graphs can provide comprehensive and semantic representation for heterogeneous data, and have been successfully leveraged in many biomedical applications including drug repurposing. Our objective is to construct a knowledge graph from literature to study relations between Alzheimer's disease (AD) and chemicals, drugs and dietary supplements in order to identify opportunities to prevent or delay neurodegenerative progression. We collected biomedical annotations and extracted their relations using SemRep via SemMedDB. We used both a BERT-based classifier and rule-based methods during data preprocessing to exclude noise while preserving most AD-related semantic triples. The 1,672,110 filtered triples were used to train with knowledge graph completion algorithms (i.e., TransE, DistMult, and ComplEx) to predict candidates that might be helpful for AD treatment or prevention. Among three knowledge graph completion models, TransE outperformed the other two (MR = 13.45, Hits@1 = 0.306). We leveraged the time-slicing technique to further evaluate the prediction results. We found supporting evidence for most highly ranked candidates predicted by our model which indicates that our approach can inform reliable new knowledge. This paper shows that our graph mining model can predict reliable new relationships between AD and other entities (i.e., dietary supplements, chemicals, and drugs). The knowledge graph constructed can facilitate data-driven knowledge discoveries and the generation of novel hypotheses. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 280,967
2002.07600 | Three-dimensional convolutional neural network (3D-CNN) for heterogeneous material homogenization | Homogenization is a technique commonly used in multiscale computational science and engineering for predicting collective response of heterogeneous materials and extracting effective mechanical properties. In this paper, a three-dimensional deep convolutional neural network (3D-CNN) is proposed to predict the effective material properties for representative volume elements (RVEs) with random spherical inclusions. The high-fidelity dataset generated by a computational homogenization approach is used for training the 3D-CNN models. The inference results of the trained networks on unseen data indicate that the network is capable of capturing the microstructural features of RVEs and produces an accurate prediction of effective stiffness and Poisson's ratio. The benefits of the 3D-CNN over conventional finite-element-based homogenization with regard to computational efficiency, uncertainty quantification and model's transferability are discussed in sequence. We find the salient features of the 3D-CNN approach make it a potentially suitable alternative for facilitating material design with fast product design iteration and efficient uncertainty quantification. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 164,515
2409.09207 | FB-HyDON: Parameter-Efficient Physics-Informed Operator Learning of Complex PDEs via Hypernetwork and Finite Basis Domain Decomposition | Deep operator networks (DeepONet) and neural operators have gained significant attention for their ability to map infinite-dimensional function spaces and perform zero-shot super-resolution. However, these models often require large datasets for effective training. While physics-informed operators offer a data-agnostic learning approach, they introduce additional training complexities and convergence issues, especially in highly nonlinear systems. To overcome these challenges, we introduce Finite Basis Physics-Informed HyperDeepONet (FB-HyDON), an advanced operator architecture featuring intrinsic domain decomposition. By leveraging hypernetworks and finite basis functions, FB-HyDON effectively mitigates the training limitations associated with existing physics-informed operator learning methods. We validated our approach on the high-frequency harmonic oscillator, Burgers' equation at different viscosity levels, and Allen-Cahn equation demonstrating substantial improvements over other operator learning models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 488,217
2402.12806 | SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning | To improve the performance and explainability of LLM-based natural language reasoning, structured reasoning can be applied to generate explicitly structured proofs. Among different methods for structured reasoning, we specifically focus on backward chaining, where the proof goal is recursively decomposed to subgoals by searching and applying rules. We argue that current LLM-based backward chaining systems (e.g. Least-to-most prompting and LAMBADA) are incomplete, as they omit crucial algorithmic components identified from the classic backward chaining algorithm in computational logic (SLD Resolution). To this end, we propose a novel backward chaining system, SymBa (Symbolic Backward Chaining), which integrates a symbolic solver and an LLM. In SymBa, the solver controls the proof process, and the LLM is only called when the solver requires new information to complete the proof. Empowered by completeness, SymBa achieves a significant improvement in seven deductive, relational, and arithmetic reasoning benchmarks compared to the baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 431,003
1304.5974 | Dynamic stochastic blockmodels: Statistical models for time-evolving networks | Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we propose a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We then propose a procedure to fit the model using a modification of the extended Kalman filter augmented with a local search. We apply the procedure to analyze a dynamic social network of email communication. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 24,136
2412.21161 | Open RAN-Enabled Deep Learning-Assisted Mobility Management for Connected Vehicles | Connected Vehicles (CVs) can leverage the unique features of 5G and future 6G/NextG networks to enhance Intelligent Transportation System (ITS) services. However, even with advancements in cellular network generations, CV applications may experience communication interruptions in high-mobility scenarios due to frequent changes of serving base station, also known as handovers (HOs). This paper proposes the adoption of Open Radio Access Network (Open RAN/O-RAN) and deep learning models for decision-making to prevent Quality of Service (QoS) degradation due to HOs and to ensure the timely connectivity needed for CV services. The solution utilizes the O-RAN Software Community (OSC), an open-source O-RAN platform developed by the collaboration between the O-RAN Alliance and Linux Foundation, to develop xApps that are executed in the near-Real-Time RIC of OSC. To demonstrate the proposal's effectiveness, an integrated framework combining the OMNeT++ simulator and OSC was created. Evaluations used real-world datasets in urban application scenarios, such as video streaming transmission and over-the-air (OTA) updates. Results indicate that the proposal achieved superior performance and reduced latency compared to the standard 3GPP HO procedure. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 521,471
2102.01004 | Hybrid Information-driven Multi-agent Reinforcement Learning | Information theoretic sensor management approaches are an ideal solution to state estimation problems when considering the optimal control of multi-agent systems; however, they are too computationally intensive for large state spaces, especially when considering the limited computational resources typical of large-scale distributed multi-agent systems. Reinforcement learning (RL) is a promising alternative which can find approximate solutions to distributed optimal control problems that take into account the resource constraints inherent in many systems of distributed agents. However, the RL training can be prohibitively inefficient, especially in low-information environments where agents receive little to no feedback in large portions of the state space. We propose a hybrid information-driven multi-agent reinforcement learning (MARL) approach that utilizes information theoretic models as heuristics to help the agents navigate large sparse state spaces, coupled with information based rewards in an RL framework to learn higher-level policies. This paper presents our ongoing work towards this objective. Our preliminary findings show that such an approach can result in a system of agents that are approximately three orders of magnitude more efficient at exploring a sparse state space than naive baseline metrics. While the work is still in its early stages, it provides a promising direction for future research. | false | false | false | false | true | false | false | false | false | true | false | false | false | false | true | false | false | false | 217,975
2308.08448 | Implementing Quantum Generative Adversarial Network (qGAN) and QCBM in Finance | Quantum machine learning (QML) is a cross-disciplinary subject made up of two of the most exciting research areas: quantum computing and classical machine learning (ML), with ML and artificial intelligence (AI) being projected as the first fields that will be impacted by the rise of quantum machines. Quantum computers are being used today in drug discovery, material & molecular modelling and finance. In this work, we discuss some upcoming active new research areas in the application of quantum machine learning (QML) in finance. We discuss certain QML models that have become areas of active interest in the financial world for various applications. We use a real-world financial dataset and compare models such as qGAN (quantum generative adversarial networks) and QCBM (quantum circuit Born machine) among others, using simulated environments. For the qGAN, we define quantum circuits for discriminators and generators and show promises of future quantum advantage via QML in finance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 385,905
1611.09203 | Computational Mapping of the Ground Reflectivity with Laser Scanners | In this investigation we focus on the problem of mapping the ground reflectivity with multiple laser scanners mounted on mobile robots/vehicles. The problem originates because regions of the ground become populated with a varying number of reflectivity measurements whose value depends on the observer and its corresponding perspective. Here, we propose a novel automatic, data-driven computational mapping framework specifically aimed at preserving edge sharpness in the map reconstruction process and that considers the sources of measurement variation. Our new formulation generates map-perspective gradients and applies sub-set selection fusion and de-noising operators to these through iterative algorithms that minimize an $\ell_1$ sparse regularized least squares formulation. Reconstruction of the ground reflectivity is then carried out based on Poisson's formulation posed as an $\ell_2$ term promoting consistency with the fused gradient of map-perspectives and a term that ensures equality constraints with reference measurement data. We demonstrate our new framework outperforms the capabilities of existing ones with experiments realized on Ford's fleet of autonomous vehicles. For example, we show we can achieve map enhancement (i.e., contrast enhancement), artifact removal, de-noising and map-stitching without requiring an additional reflectivity adjustment to calibrate sensors to the specific mounting and robot/vehicle motion. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 64,636 |
1309.6727 | Genie Tree and Degrees of Freedom of the Symmetric MIMO Interfering Broadcast Channel | In this paper, we study the information theoretic maximal degrees of freedom (DoF) for the symmetric multi-input-multi-output (MIMO) interfering broadcast channel (IBC) with arbitrary antenna configurations. For the G-cell K-user M×N MIMO-IBC network, we find that the information theoretic maximal DoF per user are related to three DoF bounds: 1) the decomposition DoF bound d^{Decom}=MN/(M+KN), a lower-bound of linear interference alignment (IA) with infinite time/frequency extensions (called asymptotic IA); 2) the proper DoF bound d^{Proper}=(M+N)/(GK+1), an upper-bound of linear IA without time/frequency extensions (called linear IA); and 3) the quantity DoF bound d^{Quan}, a zigzag piecewise linear function of M and N. The whole region of M/N can be divided into two regions, Region I and Region II. Specifically, for most configurations in Region I, the information theoretic maximal DoF are the decomposition DoF bound and can be achieved by the asymptotic IA. For all configurations in Region II, the information theoretic maximal DoF are the quantity DoF bound and can be achieved by the linear IA. To obtain the tight upper-bound, we propose a unified way to construct genies, where the genies help each base station or user resolve the maximal number of interference. According to the feature that the designed genies with the same dimension can derive identical DoF upper-bound, we convert the information theoretic DoF upper-bound problem into a linear algebra problem and obtain the closed-form DoF upper-bound expression. Moreover, we develop a non-iterative linear IA transceiver to achieve the DoF upper-bound for the networks with antenna configurations in Region II, which means that the derived DoF upper-bound is tightest. The basic principles to derive the DoF upper-bound and design the linear IA transceiver to achieve the DoF upper-bound can be extended into general asymmetric networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 27,268
2304.09702 | Losing Focus: Can It Be Useful in Robotic Laser Surgery? | This paper proposes a method to regulate the tissue temperature during laser surgery by robotically controlling the laser focus. Laser-tissue interactions are generally considered hard to control due to the inherent inhomogeneity of biological tissue, which can create significant variability in its thermal response to laser irradiation. In this study, we use methods from nonlinear control theory to synthesize a temperature controller capable of working on virtually any tissue type without any prior knowledge of its physical properties. The performance of the controller is evaluated in ex-vivo experiments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 359,140 |
1712.10207 | Dense Pooling layers in Fully Convolutional Network for Skin Lesion Segmentation | One of the essential tasks in medical image analysis is segmentation and accurate detection of borders. Lesion segmentation in skin images is an essential step in the computerized detection of skin cancer. However, many of the state-of-the-art segmentation methods have deficiencies in their border detection phase. In this paper, a new class of fully convolutional network is proposed, with new dense pooling layers for segmentation of lesion regions in skin images. This network leads to highly accurate segmentation of lesions on skin lesion datasets which outperforms state-of-the-art algorithms in the skin lesion segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 87,469
2103.08759 | A Transition-based Parser for Unscoped Episodic Logical Forms | "Episodic Logic:Unscoped Logical Form" (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ulf-transition-parser. We also present the first official annotated ULF dataset at https://www.cs.rochester.edu/u/gkim21/ulf/resources/. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 224,978
1903.10145 | Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing | Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter \beta. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for \beta, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing \beta multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm re-starts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 125,212
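The cyclical schedule summarized in the row above is simple enough to sketch directly. The snippet below is an illustrative variant (a linear ramp over the first `ratio` fraction of each cycle, then constant at beta_max), not the authors' exact code; the function name and argument names are assumptions.

def cyclical_beta_schedule(num_steps, num_cycles=4, ratio=0.5, beta_max=1.0):
    # Start every step at beta_max, then overwrite the ramp-up portion of
    # each cycle with a linear increase from 0 to beta_max.
    betas = [beta_max] * num_steps
    period = num_steps // num_cycles
    ramp = max(1, int(period * ratio))
    for c in range(num_cycles):
        start = c * period
        for t in range(ramp):
            if start + t < num_steps:
                betas[start + t] = beta_max * t / ramp
    return betas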
2406.18859 | Two-Pronged Human Evaluation of ChatGPT Self-Correction in Radiology Report Simplification | Radiology reports are highly technical documents aimed primarily at doctor-doctor communication. There has been an increasing interest in sharing those reports with patients, necessitating providing them patient-friendly simplifications of the original reports. This study explores the suitability of large language models in automatically generating those simplifications. We examine the usefulness of chain-of-thought and self-correction prompting mechanisms in this domain. We also propose a new evaluation protocol that employs radiologists and laypeople, where radiologists verify the factual correctness of simplifications, and laypeople assess simplicity and comprehension. Our experimental results demonstrate the effectiveness of self-correction prompting in producing high-quality simplifications. Our findings illuminate the preferences of radiologists and laypeople regarding text simplification, informing future research on this topic. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 468,205
2310.14512 | CorefPrompt: Prompt-based Event Coreference Resolution by Measuring Event Type and Argument Compatibilities | Event coreference resolution (ECR) aims to group event mentions referring to the same real-world event into clusters. Most previous studies adopt the "encoding first, then scoring" framework, making the coreference judgment rely on event encoding. Furthermore, current methods struggle to leverage human-summarized ECR rules, e.g., coreferential events should have the same event type, to guide the model. To address these two issues, we propose a prompt-based approach, CorefPrompt, to transform ECR into a cloze-style MLM (masked language model) task. This allows for simultaneous event modeling and coreference discrimination within a single template, with a fully shared context. In addition, we introduce two auxiliary prompt tasks, event-type compatibility and argument compatibility, to explicitly demonstrate the reasoning process of ECR, which helps the model make final predictions. Experimental results show that our method CorefPrompt performs well in a state-of-the-art (SOTA) benchmark. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 401,889
2405.01588 | Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL | Incorporating unanswerable questions into EHR QA systems is crucial for testing the trustworthiness of a system, as providing non-existent responses can mislead doctors in their diagnoses. The EHRSQL dataset stands out as a promising benchmark because it is the only dataset that incorporates unanswerable questions in the EHR QA system alongside practical questions. However, in this work, we identify a data bias in these unanswerable questions; they can often be discerned simply by filtering with specific N-gram patterns. Such biases jeopardize the authenticity and reliability of QA system evaluations. To tackle this problem, we propose a simple debiasing method of adjusting the split between the validation and test sets to neutralize the undue influence of N-gram filtering. By experimenting on the MIMIC-III dataset, we demonstrate both the existing data bias in EHRSQL and the effectiveness of our data split strategy in mitigating this bias. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 451,420
2406.16655 | Large Language Models Are Cross-Lingual Knowledge-Free Reasoners | Large Language Models have demonstrated impressive reasoning capabilities across multiple languages. However, the relationship between capabilities in different languages is less explored. In this work, we decompose the process of reasoning tasks into two separate components: knowledge retrieval and knowledge-free reasoning, and analyze the relationship between cross-lingual transferability and these two components. With adapted commonsense reasoning datasets and constructed knowledge-free reasoning datasets, we show that the knowledge-free reasoning capability can be nearly perfectly transferred across various source-target language directions despite the secondary impact of resources in some specific target languages, while cross-lingual knowledge retrieval significantly hinders the transfer. Moreover, by analyzing the hidden states and feed-forward network neuron activation during the reasoning, we show that higher similarity of hidden representations and larger overlap of activated neurons could explain the better cross-lingual transferability of knowledge-free reasoning than knowledge retrieval. Thus, we hypothesize that knowledge-free reasoning shares similar neurons in different languages for reasoning, while knowledge is stored separately in different languages. Our code and data are available at: https://github.com/NJUNLP/Knowledge-Free-Reasoning. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 467,212
2405.07282 | Branching Narratives: Character Decision Points Detection | This paper presents the Character Decision Points Detection (CHADPOD) task, the task of identifying points within narratives where characters make decisions that may significantly influence the story's direction. We propose a novel dataset based on CYOA-like game graphs to be used as a benchmark for this task. We provide a comparative analysis of different models' performance on the task, including a couple of LLMs and several MLMs as baselines, achieving up to 89% accuracy. This underscores the complexity of narrative analysis, showing the challenges associated with understanding character-driven story dynamics. Additionally, we show how such a model can be applied to existing text to produce linear segments divided by potential branching points, demonstrating the practical application of our findings in narrative analysis. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 453,648
2304.14844 | Using Large Language Models for Interpreting Autonomous Robots Behaviors | The deployment of autonomous robots in various domains has raised significant concerns about their trustworthiness and accountability. This study explores the potential of Large Language Models (LLMs) in analyzing ROS 2 logs generated by autonomous robots and proposes a framework for log analysis that categorizes log files into different aspects. The study evaluates the performance of three different language models in answering questions related to StartUp, Warning, and PDDL logs. The results suggest that GPT 4, a transformer-based model, outperforms the other models; however, their verbosity is not sufficient to answer why or how questions for all kinds of actors involved in the interaction. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 361,116
2407.09717 | Deep-TEMPEST: Using Deep Learning to Eavesdrop on HDMI from its
Unintended Electromagnetic Emanations | In this work, we address the problem of eavesdropping on digital video displays by analyzing the electromagnetic waves that unintentionally emanate from the cables and connectors, particularly HDMI. This problem is known as TEMPEST. Compared to the analog case (VGA), the digital case is harder due to a 10-bit encoding that results in a much larger bandwidth and non-linear mapping between the observed signal and the pixel's intensity. As a result, eavesdropping systems designed for the analog case obtain unclear and difficult-to-read images when applied to digital video. The proposed solution is to recast the problem as an inverse problem and train a deep learning module to map the observed electromagnetic signal back to the displayed image. However, this approach still requires a detailed mathematical analysis of the signal, firstly to determine the frequency at which to tune but also to produce training samples without actually needing a real TEMPEST setup. This saves time and avoids the need to obtain these samples, especially if several configurations are being considered. Our focus is on improving the average Character Error Rate in text, and our system improves this rate by over 60 percentage points compared to previous available implementations. The proposed system is based on widely available Software Defined Radio and is fully open-source, seamlessly integrated into the popular GNU Radio framework. We also share the dataset we generated for training, which comprises both simulated and over 1000 real captures. Finally, we discuss some countermeasures to minimize the potential risk of being eavesdropped by systems designed based on similar principles. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 472,691 |
2112.05839 | ANA: Ant Nesting Algorithm for Optimizing Real-World Problems | In this paper, a novel swarm intelligence algorithm called the ant nesting algorithm (ANA) is proposed. The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although the algorithm is inspired by the swarming behavior of ants, it does not have any algorithmic similarity with the ant colony optimization (ACO) algorithm. It is worth mentioning that ANA is considered a continuous algorithm that updates the search agent position by adding the rate of change (e.g., step or velocity). ANA computes the rate of change differently, as it uses previous and current solutions and fitness values during the optimization process to generate weights by utilizing the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), five modified versions of PSO, whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results. Finally, the algorithm is employed for optimizing two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm's capability in optimizing real-world problems. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 270,970
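The update rule sketched below is only a schematic, greedy reading of the description above (position plus a weighted rate of change, with the weight built from previous and current solutions and fitness values via the Pythagorean theorem); the exact ANA equations, population handling, and exploration/exploitation scheduling are defined in the paper, and the objective here is just a toy benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy benchmark objective (global minimum at the origin)."""
    return float(np.sum(x ** 2))

x_prev = rng.uniform(-5, 5, size=2)   # previous solution
x_curr = rng.uniform(-5, 5, size=2)   # current solution
for _ in range(500):
    # Pythagorean-style weight from the change in position and in fitness
    # (an illustrative guess, not the published ANA weight formula).
    w = np.hypot(np.linalg.norm(x_curr - x_prev), sphere(x_curr) - sphere(x_prev))
    step = w * rng.uniform(-0.1, 0.1, size=2)   # weighted random rate of change
    x_new = x_curr + step
    if sphere(x_new) < sphere(x_curr):          # greedy acceptance
        x_prev, x_curr = x_curr, x_new
print(x_curr, sphere(x_curr))
```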
2311.06534 | Translating Legalese: Enhancing Public Understanding of Court Opinions
with Legal Summarizers | Judicial opinions are written to be persuasive and could build public trust in court decisions, yet they can be difficult for non-experts to understand. We present a pipeline for using an AI assistant to generate simplified summaries of judicial opinions. Compared to existing expert-written summaries, these AI-generated simple summaries are more accessible to the public and more easily understood by non-experts. We show in a survey experiment that the AI summaries help respondents understand the key features of a ruling, and have higher perceived quality, especially for respondents with less formal education. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 406,976 |
1710.06525 | Near-Optimal Adversarial Policy Switching for Decentralized Asynchronous
Multi-Agent Systems | A key challenge in multi-robot and multi-agent systems is generating solutions that are robust to other self-interested or even adversarial parties who actively try to prevent the agents from achieving their goals. The practicality of existing works addressing this challenge is limited to only small-scale synchronous decision-making scenarios or a single agent planning its best response against a single adversary with fixed, procedurally characterized strategies. In contrast this paper considers a more realistic class of problems where a team of asynchronous agents with limited observation and communication capabilities need to compete against multiple strategic adversaries with changing strategies. This problem necessitates agents that can coordinate to detect changes in adversary strategies and plan the best response accordingly. Our approach first optimizes a set of stratagems that represent these best responses. These optimized stratagems are then integrated into a unified policy that can detect and respond when the adversaries change their strategies. The near-optimality of the proposed framework is established theoretically as well as demonstrated empirically in simulation and hardware. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 82,790 |
1505.01927 | A simpler sublinear algorithm for approximating the triangle count | A recent result of Eden, Levi, and Ron (ECCC 2015) provides a sublinear time algorithm to estimate the number of triangles in a graph. Given an undirected graph $G$, one can query the degree of a vertex, the existence of an edge between vertices, and the $i$th neighbor of a vertex. Suppose the graph has $n$ vertices, $m$ edges, and $t$ triangles. In this model, Eden et al. provided a $O(\mathrm{poly}(\varepsilon^{-1}\log n)(n/t^{1/3} + m^{3/2}/t))$ time algorithm to get a $(1+\varepsilon)$-multiplicative approximation for $t$, the triangle count. This paper provides a simpler algorithm with the same running time (up to differences in the $\mathrm{poly}(\varepsilon^{-1}\log n)$ factor) that has a substantially simpler analysis. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 42,898
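For reference, the quantity $t$ being estimated can be computed exactly on small graphs via $\operatorname{tr}(A^3)/6$; the snippet below is such an exact baseline (useful for sanity-checking an estimator), not the sublinear algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Small Erdos-Renyi-style random graph as a 0/1 adjacency matrix.
A = np.triu((rng.random((n, n)) < 0.1).astype(int), k=1)
A = A + A.T
# Each triangle is counted 6 times on the diagonal of A^3 (3 vertices x 2 directions).
t = int(np.trace(np.linalg.matrix_power(A, 3)) // 6)
print("exact triangle count t:", t)
```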
2303.14244 | Implicit Balancing and Regularization: Generalization and Convergence
Guarantees for Overparameterized Asymmetric Matrix Sensing | Recently, there has been significant progress in understanding the convergence and generalization properties of gradient-based methods for training overparameterized learning models. However, many aspects including the role of small random initialization and how the various parameters of the model are coupled during gradient-based updates to facilitate good generalization remain largely mysterious. A series of recent papers have begun to study this role for non-convex formulations of symmetric Positive Semi-Definite (PSD) matrix sensing problems which involve reconstructing a low-rank PSD matrix from a few linear measurements. The underlying symmetry/PSDness is crucial to existing convergence and generalization guarantees for this problem. In this paper, we study a general overparameterized low-rank matrix sensing problem where one wishes to reconstruct an asymmetric rectangular low-rank matrix from a few linear measurements. We prove that an overparameterized model trained via factorized gradient descent converges to the low-rank matrix generating the measurements. We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) coupling of the trajectory of gradient descent where the factors are coupled in various ways throughout the gradient update trajectory and (2) an algorithmic regularization property where the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model. These two implicit properties in turn allow us to show that the gradient descent trajectory from small random initialization moves towards solutions that are both globally optimal and generalize well. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 354,000 |
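A minimal numpy sketch of the setting described above: factorized gradient descent from small random initialization on an overparameterized, asymmetric matrix sensing problem. The problem sizes, measurement model, and step size are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, k, m = 30, 20, 2, 5, 600   # true rank r, overparameterized rank k, m measurements

# Rank-r ground truth, rescaled to spectral norm 1 so a fixed step size behaves well.
M_star = rng.normal(size=(n1, r)) @ rng.normal(size=(r, n2))
M_star /= np.linalg.norm(M_star, 2)

# Random Gaussian linear measurements y_i = <A_i, M_star>.
A = rng.normal(size=(m, n1, n2)) / np.sqrt(m)
y = np.einsum('mij,ij->m', A, M_star)

# Overparameterized factors (k > r) with small random initialization.
alpha = 1e-3
U = alpha * rng.normal(size=(n1, k))
V = alpha * rng.normal(size=(n2, k))

eta = 0.25
for _ in range(3000):
    residual = np.einsum('mij,ij->m', A, U @ V.T) - y   # A(U V^T) - y
    G = np.einsum('m,mij->ij', residual, A)             # gradient w.r.t. the product U V^T
    U, V = U - eta * G @ V, V - eta * G.T @ U           # factored gradient step
print("relative error:", np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star))
```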
1903.03425 | The Ethics of AI Ethics -- An Evaluation of Guidelines | Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the "disruptive" potentials of new AI technologies. Designed as a comprehensive evaluation, this paper analyzes and compares these guidelines highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems - and how the effectiveness in the demands of AI ethics can be improved. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 123,731 |
1406.0909 | Improvement Tracking Dynamic Programming using Replication Function for
Continuous Sign Language Recognition | In this paper, we use a Replication Function (R.F.) to improve tracking with dynamic programming. The R.F. transforms gray-level values from [0, 255] to [0, 1]. The resulting images of the R.F. are more striking, with more visible skin regions. The R.F. improves Dynamic Programming (D.P.) when the hand and face overlap. Results show that the Tracking Error Rate is reduced by 11% and the Average Tracked Distance by 7%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 33,580
2209.10732 | In Differential Privacy, There is Truth: On Vote Leakage in Ensemble
Private Learning | When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 318,952 |
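The following toy simulation (not the authors' attack code) illustrates the mechanism: replaying one query against a noisy-argmax aggregator and tallying the returned labels already exposes the ordering of the teachers' vote histogram, and larger noise spreads wins across more classes, which is what makes the non-top vote counts recoverable. The vote counts and noise scale are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical histogram of votes from 250 teachers over 4 classes for one query.
votes = np.array([120, 90, 30, 10])
sigma = 40.0   # noise scale of a Gaussian noisy-max aggregator

def noisy_argmax(votes, sigma):
    return int(np.argmax(votes + rng.normal(scale=sigma, size=votes.shape)))

# The adversary replays the same query many times and counts which label wins.
n_queries = 20000
wins = np.bincount([noisy_argmax(votes, sigma) for _ in range(n_queries)],
                   minlength=len(votes))
print("empirical win frequencies:", wins / n_queries)
```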
2404.02372 | Obfuscated Malware Detection: Investigating Real-world Scenarios through
Memory Analysis | In the era of the internet and smart devices, the detection of malware has become crucial for system security. Malware authors increasingly employ obfuscation techniques to evade advanced security solutions, making it challenging to detect and eliminate threats. Obfuscated malware, adept at hiding itself, poses a significant risk to various platforms, including computers, mobile devices, and IoT devices. Conventional methods like heuristic-based or signature-based systems struggle against this type of malware, as it leaves no discernible traces on the system. In this research, we propose a simple and cost-effective obfuscated malware detection system through memory dump analysis, utilizing diverse machine-learning algorithms. The study focuses on the CIC-MalMem-2022 dataset, designed to simulate real-world scenarios and assess memory-based obfuscated malware detection. We evaluate the effectiveness of machine learning algorithms, such as decision trees, ensemble methods, and neural networks, in detecting obfuscated malware within memory dumps. Our analysis spans multiple malware categories, providing insights into algorithmic strengths and limitations. By offering a comprehensive assessment of machine learning algorithms for obfuscated malware detection through memory analysis, this paper contributes to ongoing efforts to enhance cybersecurity and fortify digital ecosystems against evolving and sophisticated malware threats. The source code is made open-access for reproducibility and future research endeavours. It can be accessed at https://bit.ly/MalMemCode. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 443,827 |
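A generic sketch of the kind of memory-dump classification pipeline the paper evaluates, using scikit-learn; the file name and the "Class" label column are assumptions about the CIC-MalMem-2022 CSV layout, not a verified schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical file and column names for the memory-dump feature table.
df = pd.read_csv("malmem2022_features.csv")
X = df.drop(columns=["Class"]).select_dtypes(include="number")  # numeric features only
y = df["Class"]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```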
2409.02438 | Non-target Divergence Hypothesis: Toward Understanding Domain Gaps in
Cross-Modal Knowledge Distillation | Compared to single-modal knowledge distillation, cross-modal knowledge distillation faces more severe challenges due to domain gaps between modalities. Although various methods have proposed various solutions to overcome these challenges, there is still limited research on how domain gaps affect cross-modal knowledge distillation. This paper provides an in-depth analysis and evaluation of this issue. We first introduce the Non-Target Divergence Hypothesis (NTDH) to reveal the impact of domain gaps on cross-modal knowledge distillation. Our key finding is that domain gaps between modalities lead to distribution differences in non-target classes, and the smaller these differences, the better the performance of cross-modal knowledge distillation. Subsequently, based on Vapnik-Chervonenkis (VC) theory, we derive the upper and lower bounds of the approximation error for cross-modal knowledge distillation, thereby theoretically validating the NTDH. Finally, experiments on five cross-modal datasets further confirm the validity, generalisability, and applicability of the NTDH. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 485,705 |
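One concrete way to read the "distribution differences in non-target classes" is to compare the renormalized non-target probabilities of the two modalities, e.g. with a KL divergence, as sketched below; this is an illustrative interpretation, not necessarily the exact statistic defined in the paper.

```python
import numpy as np

def non_target_kl(p_teacher, p_student, target):
    """KL divergence between the renormalized non-target class distributions."""
    mask = np.ones_like(p_teacher, dtype=bool)
    mask[target] = False
    p = p_teacher[mask] / p_teacher[mask].sum()
    q = p_student[mask] / p_student[mask].sum()
    return float(np.sum(p * np.log(p / q)))

p_rgb   = np.array([0.70, 0.15, 0.10, 0.05])   # teacher modality (e.g., RGB)
p_depth = np.array([0.60, 0.25, 0.05, 0.10])   # student modality (e.g., depth)
print(non_target_kl(p_rgb, p_depth, target=0))
```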
2411.04274 | Effective Capacity of a Battery Energy Storage System Captive to a Wind
Farm | Wind energy's role in the global electric grid is set to expand significantly. New York State alone anticipates offshore wind farms (WFs) contributing 9GW by 2035. Integration of energy storage emerges as crucial for this advancement. In this study, we focus on a WF paired with a captive battery energy storage system (BESS). We aim to ascertain the capacity credit for a BESS with specified energy and power ratings. Unlike prior methods rooted in reliability theory, we define a power alignment function, which leads to a straightforward definition of capacity and incremental capacity for the BESS. We develop a solution method based on a linear programming formulation. Our analysis utilizes wind data, collected by NYSERDA off Long Island's coast and load demand data from NYISO. Additionally, we present theoretical insights into BESS sizing and a key time-series property influencing BESS capacity, aiding in simulating wind and demand for estimating BESS energy requirements. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 506,201 |
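A toy linear program in the spirit of the capacity question above: the largest firm output a wind-plus-battery plant can guarantee over a horizon, subject to power and state-of-charge limits. This is a generic firm-capacity formulation with made-up numbers and no conversion losses, not the paper's power alignment function or its NYSERDA/NYISO data.

```python
import numpy as np
from scipy.optimize import linprog

wind = np.array([8.0, 6.0, 2.0, 0.5, 1.0, 4.0, 9.0, 10.0])   # toy wind output (MW) per hour
T = len(wind)
P, E, s0 = 3.0, 6.0, 3.0   # BESS power (MW), energy (MWh), initial state of charge (MWh)

# Decision variables x = [C, b_1, ..., b_T]; b_t > 0 means discharge.
# Maximize firm output C subject to wind_t + b_t >= C and battery limits.
c = np.zeros(T + 1)
c[0] = -1.0                                   # linprog minimizes, so minimize -C

A_ub, b_ub = [], []
for t in range(T):                            # C - b_t <= wind_t
    row = np.zeros(T + 1)
    row[0], row[1 + t] = 1.0, -1.0
    A_ub.append(row); b_ub.append(wind[t])
for t in range(T):                            # 0 <= s0 - cumsum(b)_t <= E
    row = np.zeros(T + 1)
    row[1:2 + t] = 1.0
    A_ub.append(row);  b_ub.append(s0)        # cumulative discharge <= s0
    A_ub.append(-row); b_ub.append(E - s0)    # cumulative charge <= E - s0

bounds = [(0, None)] + [(-P, P)] * T
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("firm capacity C (MW):", round(res.x[0], 3))
```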
1908.08162 | Globally optimal registration of noisy point clouds | Registration of 3D point clouds is a fundamental task in several applications of robotics and computer vision. While registration methods such as iterative closest point and its variants are very popular, they are only locally optimal. There has been some recent work on globally optimal registration, but these methods perform poorly in the presence of noise in the measurements. In this work, we develop a mixed integer programming-based approach for globally optimal registration that explicitly considers uncertainty in its optimization, and hence produces more accurate estimates. Furthermore, from a practical implementation perspective, we develop a multi-step optimization that combines fast local methods with our accurate global formulation. Through extensive simulation and real-world experiments, we demonstrate improved performance over state-of-the-art methods for various levels of noise and outliers in the data, as well as for partial geometric overlap. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 142,475
2209.08498 | LATITUDE: Robotic Global Localization with Truncated Dynamic Low-pass
Filter in City-scale NeRF | Neural Radiance Fields (NeRFs) have achieved great success in representing complex 3D scenes with high-resolution details and efficient memory. Nevertheless, current NeRF-based pose estimators have no initial pose prediction and are prone to local optima during optimization. In this paper, we present LATITUDE: Global Localization with Truncated Dynamic Low-pass Filter, which introduces a two-stage localization mechanism in city-scale NeRF. In the place recognition stage, we train a regressor on images generated from trained NeRFs, which provides an initial value for global localization. In the pose optimization stage, we minimize the residual between the observed image and the rendered image by directly optimizing the pose on the tangent plane. To avoid convergence to a local optimum, we introduce a Truncated Dynamic Low-pass Filter (TDLF) for coarse-to-fine pose registration. We evaluate our method on both synthetic and real-world data and show its potential applications for high-precision navigation in large-scale city scenes. Codes and data will be publicly available at https://github.com/jike5/LATITUDE. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 318,149
2305.05136 | Localisation of Mammographic masses by Greedy Backtracking of
Activations in the Stacked Auto-Encoders | Mammographic image analysis requires accurate localisation of salient mammographic masses. In mammographic computer-aided diagnosis, the mass or Region of Interest (ROI) is often marked by physicians, and features are extracted from the marked ROI. In this paper, we present a novel mammographic mass localisation framework based on the maximal class activations of stacked auto-encoders. We hypothesize that the image regions activating abnormal classes in mammographic images will be the breast masses which cause the anomaly. The experiment is conducted using 200 randomly selected mammographic images (100 normal and 100 abnormal) from the IRMA mammographic dataset. Abnormal mass regions marked by an expert radiologist are used as the ground truth. The proposed method outperforms existing Deep Convolutional Neural Network (DCNN) based techniques in terms of salient region detection accuracy. The proposed greedy backtracking method is more efficient and does not require a vast number of labelled training images, unlike DCNN-based methods. Such an automatic localisation method will assist physicians in making accurate decisions on biopsy recommendations and treatment evaluations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 363,021
1609.01987 | CHSalign: A Web Server That Builds upon Junction-Explorer and RNAJAG for
Pairwise Alignment of RNA Secondary Structures with Coaxial Helical Stacking | RNA junctions are important structural elements of RNA molecules. They are formed when three or more helices come together in three-dimensional space. Recent studies have focused on the annotation and prediction of coaxial helical stacking (CHS) motifs within junctions. Here we exploit such predictions to develop an efficient alignment tool to handle RNA secondary structures with CHS motifs. Specifically, we build upon our Junction-Explorer software for predicting coaxial stacking and RNAJAG for modelling junction topologies as tree graphs to incorporate constrained tree matching and dynamic programming algorithms into a new method, called CHSalign, for aligning the secondary structures of RNA molecules containing CHS motifs. Thus, CHSalign is intended to be an efficient alignment tool for RNAs containing similar junctions. Experimental results based on thousands of alignments demonstrate that CHSalign can align two RNA secondary structures containing CHS motifs more accurately than other RNA secondary structure alignment tools. CHSalign yields a high score when aligning two RNA secondary structures with similar CHS motifs or helical arrangement patterns, and a low score otherwise. This new method has been implemented in a web server, and the program is also made freely available, at http://bioinformatics.njit.edu/CHSalign/. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 60,666 |
1706.02427 | Content-Based Table Retrieval for Web Queries | Understanding the connections between unstructured text and semi-structured tables is an important yet neglected problem in natural language processing. In this work, we focus on content-based table retrieval. Given a query, the task is to find the most relevant table from a collection of tables. Further progress towards improving this area requires powerful models of semantic matching and richer training and evaluation resources. To remedy this, we present a ranking-based approach, and implement both carefully designed features and neural network architectures to measure the relevance between a query and the content of a table. Furthermore, we release an open-domain dataset that includes 21,113 web queries for 273,816 tables. We conduct comprehensive experiments on both real-world and synthetic datasets. Results verify the effectiveness of our approach and present the challenges for this task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 74,975
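As a trivial lexical baseline for the query-table relevance problem (far simpler than the feature-based and neural rankers studied in the paper), one can flatten each table to text and rank tables by TF-IDF cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy tables flattened to text (caption + headers + cells); purely illustrative.
tables = [
    "list of tallest buildings name height city year completed",
    "summer olympics medal table rank nation gold silver bronze total",
]
query = "which country won the most gold medals"

vec = TfidfVectorizer().fit(tables + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(tables))[0]
best = max(range(len(tables)), key=lambda i: scores[i])
print(tables[best])   # the medal table should rank first
```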