Dataset schema (one record per row):
- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, range 0 to 541k
id: 2005.12979
title: Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users
Static recommendation methods like collaborative filtering suffer from the inherent limitation of performing real-time personalization for cold-start users. Online recommendation, e.g., multi-armed bandit approach, addresses this limitation by interactively exploring user preference online and pursuing the exploration-exploitation (EE) trade-off. However, existing bandit-based methods model recommendation actions homogeneously. Specifically, they only consider the items as the arms, being incapable of handling the item attributes, which naturally provide interpretable information of user's current demands and can effectively filter out undesired items. In this work, we consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user interactively. This important scenario was studied in a recent work. However, it employs a hand-crafted function to decide when to ask attributes or make recommendations. Such separate modeling of attributes and items makes the effectiveness of the system highly rely on the choice of the hand-crafted function, thus introducing fragility to the system. To address this limitation, we seamlessly unify attributes and items in the same arm space and achieve their EE trade-offs automatically using the framework of Thompson Sampling. Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play. Extensive experiments on three benchmark datasets show that ConTS outperforms the state-of-the-art methods Conversational UCB (ConUCB) and Estimation-Action-Reflection model in both metrics of success rate and average number of conversation turns.
labels: cs.SI, cs.IR, cs.LG
__index_level_0__: 178,869
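ConTS rests on Thompson Sampling, so a minimal Bernoulli-bandit sketch may help make the "sample from each arm's posterior, play the argmax, update" loop concrete. Everything below (arm count, reward probabilities, Beta(1,1) priors) is illustrative and not from the paper:

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Bernoulli Thompson Sampling: Beta(1,1) priors per arm,
    sample a plausible reward for each arm, play the argmax, update."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    wins = [1] * n_arms    # Beta alpha parameters
    losses = [1] * n_arms  # Beta beta parameters
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior reward distribution...
        samples = [rng.betavariate(wins[a], losses[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])  # ...and play the best.
        reward = 1 if rng.random() < true_probs[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.5, 0.8])
```

Sampling from the posterior (rather than playing the empirical best) is what gives the automatic exploration-exploitation trade-off the abstract refers to; ConTS applies the same principle with attributes and items sharing one arm space.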
id: 2209.07484
title: Hydra Attention: Efficient Attention with Many Heads
While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult. A large reason for this is that self-attention scales quadratically with the number of tokens, which in turn, scales quadratically with the image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We take a step toward solving this issue by introducing Hydra Attention, an extremely efficient attention operation for Vision Transformers (ViTs). Paradoxically, this efficiency comes from taking multi-head attention to its extreme: by using as many attention heads as there are features, Hydra Attention is computationally linear in both tokens and features with no hidden constants, making it significantly faster than standard self-attention in an off-the-shelf ViT-B/16 by a factor of the token count. Moreover, Hydra Attention retains high accuracy on ImageNet and, in some cases, actually improves it.
labels: cs.CV
__index_level_0__: 317,762
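The linear cost claimed in the abstract comes from a decomposable form of attention: with one head per feature, the sum over tokens can be taken once, before each query is applied. A rough NumPy sketch assuming the cosine-similarity kernel (L2-normalized queries and keys) the paper mentions; the implementation details beyond that are assumptions, not the paper's code:

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-6):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def hydra_attention(q, k, v):
    """O(N*D) attention: aggregate k*v over all tokens once,
    then gate each normalized query elementwise."""
    qn, kn = l2norm(q), l2norm(k)             # cosine-similarity kernel
    kv = (kn * v).sum(axis=0, keepdims=True)  # (1, D) global feature summary
    return qn * kv                            # (N, D) output

rng = np.random.default_rng(0)
n_tokens, n_features = 8, 16
out = hydra_attention(rng.standard_normal((n_tokens, n_features)),
                      rng.standard_normal((n_tokens, n_features)),
                      rng.standard_normal((n_tokens, n_features)))
```

Note there is no N x N attention matrix anywhere: the token dimension is reduced once in `kv`, which is why the cost is linear in both tokens and features.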
id: 2403.02286
title: Stage: Query Execution Time Prediction in Amazon Redshift
Query performance (e.g., execution time) prediction is a critical component of modern DBMSes. As a pioneering cloud data warehouse, Amazon Redshift relies on an accurate execution time prediction for many downstream tasks, ranging from high-level optimizations, such as automatically creating materialized views, to low-level tasks on the critical path of query execution, such as admission, scheduling, and execution resource control. Unfortunately, many existing execution time prediction techniques, including those used in Redshift, suffer from cold start issues, inaccurate estimation, and are not robust against workload/data changes. In this paper, we propose a novel hierarchical execution time predictor: the Stage predictor. The Stage predictor is designed to leverage the unique characteristics and challenges faced by Redshift. The Stage predictor consists of three model states: an execution time cache, a lightweight local model optimized for a specific DB instance with uncertainty measurement, and a complex global model that is transferable across all instances in Redshift. We design a systematic approach to use these models that best leverages optimality (cache), instance-optimization (local model), and transferable knowledge about Redshift (global model). Experimentally, we show that the Stage predictor makes more accurate and robust predictions while maintaining a practical inference latency and memory overhead. Overall, the Stage predictor can improve the average query execution latency by $20\%$ on these instances compared to the prior query performance predictor in Redshift.
labels: cs.DB
__index_level_0__: 434,754
id: 2402.17204
title: Advancing Generative Model Evaluation: A Novel Algorithm for Realistic Image Synthesis and Comparison in OCR System
This research addresses a critical challenge in the field of generative models, particularly in the generation and evaluation of synthetic images. Given the inherent complexity of generative models and the absence of a standardized procedure for their comparison, our study introduces a pioneering algorithm to objectively assess the realism of synthetic images. This approach significantly enhances the evaluation methodology by refining the Fr\'echet Inception Distance (FID) score, allowing for a more precise and objective assessment of image quality. Our algorithm is particularly tailored to address the challenges in generating and evaluating realistic images of Arabic handwritten digits, a task that has traditionally been near-impossible due to the subjective nature of realism in image generation. By providing a systematic and objective framework, our method not only enables the comparison of different generative models but also paves the way for improvements in their design and output. This breakthrough in evaluation and comparison is crucial for advancing the field of OCR, especially for scripts that present unique complexities, and sets a new standard in the generation and assessment of high-quality synthetic images.
labels: cs.CV
__index_level_0__: 432,887
id: 1911.08678
title: Robust Triple-Matrix-Recovery-Based Auto-Weighted Label Propagation for Classification
The graph-based semi-supervised label propagation algorithm has delivered impressive classification results. However, the estimated soft labels typically contain mixed signs and noise, which cause inaccurate predictions due to the lack of suitable constraints. Moreover, available methods typically calculate the weights and estimate the labels in the original input space, which typically contains noise and corruption. Thus, the encoded similarities and manifold smoothness may be inaccurate for label estimation. In this paper, we present effective schemes for resolving these issues and propose a novel and robust semi-supervised classification algorithm, namely, the triple-matrix-recovery-based robust auto-weighted label propagation framework (ALP-TMR). Our ALP-TMR introduces a triple matrix recovery mechanism to remove noise or mixed signs from the estimated soft labels and improve the robustness to noise and outliers in the steps of assigning weights and predicting the labels simultaneously. Our method can jointly recover the underlying clean data, clean labels and clean weighting spaces by decomposing the original data, predicted soft labels or weights into a clean part plus an error part by fitting noise. In addition, ALP-TMR integrates the auto-weighting process by minimizing reconstruction errors over the recovered clean data and clean soft labels, which can encode the weights more accurately to improve both data representation and classification. By classifying samples in the recovered clean label and weight spaces, one can potentially improve the label prediction results. The results of extensive experiments demonstrated the satisfactory performance of our ALP-TMR.
labels: cs.LG
__index_level_0__: 154,262
id: 2308.12554
title: Deep Reinforcement Learning-driven Cross-Community Energy Interaction Optimal Scheduling
In order to coordinate energy interactions among various communities and energy conversions among multi-energy subsystems within the multi-community integrated energy system under uncertain conditions, and achieve overall optimization and scheduling of the comprehensive energy system, this paper proposes a comprehensive scheduling model that utilizes a multi-agent deep reinforcement learning algorithm to learn load characteristics of different communities and make decisions based on this knowledge. In this model, the scheduling problem of the integrated energy system is transformed into a Markov decision process and solved using a data-driven deep reinforcement learning algorithm, which avoids the need for modeling complex energy coupling relationships between multi-communities and multi-energy subsystems. The simulation results show that the proposed method effectively captures the load characteristics of different communities and utilizes their complementary features to coordinate reasonable energy interactions among them. This leads to a reduction in wind curtailment rate from 16.3% to 0% and lowers the overall operating cost by 5445.6 Yuan, demonstrating significant economic and environmental benefits.
labels: cs.LG, cs.SY
__index_level_0__: 387,583
id: 2006.06331
title: Mental Workload and Language Production in Non-Native Speaker IPA Interaction
Through proliferation on smartphones and smart speakers, intelligent personal assistants (IPAs) have made speech a common interaction modality. Yet, due to linguistic coverage and varying levels of functionality, many speakers engage with IPAs using a non-native language. This may impact the mental workload and pattern of language production displayed by non-native speakers. We present a mixed-design experiment, wherein native (L1) and non-native (L2) English speakers completed tasks with IPAs through smartphones and smart speakers. We found significantly higher mental workload for L2 speakers during IPA interactions. Contrary to our hypotheses, we found no significant differences between L1 and L2 speakers in terms of number of turns, lexical complexity, diversity, or lexical adaptation when encountering errors. These findings are discussed in relation to language production and processing load increases for L2 speakers in IPA interaction.
labels: cs.HC, cs.CL
__index_level_0__: 181,390
id: 1506.08422
title: Topic2Vec: Learning Distributed Representations of Topics
Latent Dirichlet Allocation (LDA), which mines the thematic structure of documents, plays an important role in natural language processing and machine learning. However, the probability distribution from LDA only describes the statistical relationship of occurrences in the corpus, and in practice probability is often not the best choice for feature representations. Recently, embedding methods such as Word2Vec and Doc2Vec have been proposed to represent words and documents by learning essential concepts and representations. These embedded representations have shown more effectiveness than LDA-style representations in many tasks. In this paper, we propose the Topic2Vec approach, which can learn topic representations in the same semantic vector space as words, as an alternative to probability. The experimental results show that Topic2Vec achieves interesting and meaningful results.
labels: cs.LG, cs.CL
__index_level_0__: 44,612
id: 2106.05969
title: Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation
We propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics or dynamics-based approaches where the two components are used disjointly, we synergize the two approaches via dynamics-regulated training. At each timestep, a kinematic model is used to provide a target pose using video evidence and simulation state. Then, a prelearned dynamics model attempts to mimic the kinematic pose in a physics simulator. By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate for the first time, the ability to estimate physically-plausible 3D human-object interactions using a single wearable camera. We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios.
labels: cs.AI, cs.LG, cs.RO, cs.CV
__index_level_0__: 240,300
id: 2312.16627
title: MIM4DD: Mutual Information Maximization for Dataset Distillation
Dataset distillation (DD) aims to synthesize a small dataset whose test performance is comparable to a full dataset using the same model. State-of-the-art (SoTA) methods optimize synthetic datasets primarily by matching heuristic indicators extracted from two networks: one from real data and one from synthetic data (see Fig.1, Left), such as gradients and training trajectories. DD is essentially a compression problem that emphasizes maximizing the preservation of information contained in the data. We argue that well-defined metrics which measure the amount of shared information between variables in information theory are necessary for success measurement but are never considered by previous works. Thus, we introduce mutual information (MI) as the metric to quantify the shared information between the synthetic and the real datasets, and devise MIM4DD numerically maximizing the MI via a newly designed optimizable objective within a contrastive learning framework to update the synthetic dataset. Specifically, we designate the samples in different datasets that share the same labels as positive pairs and vice versa negative pairs. Then we respectively pull and push those samples in positive and negative pairs into contrastive space via minimizing NCE loss. As a result, the targeted MI can be transformed into a lower bound represented by feature maps of samples, which is numerically feasible. Experiment results show that MIM4DD can be implemented as an add-on module to existing SoTA DD methods.
labels: cs.LG
__index_level_0__: 418,461
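The "lower bound represented by feature maps" in the abstract is the usual InfoNCE construction: maximizing mutual information is replaced by minimizing a contrastive softmax loss in which matched pairs sit on the diagonal of a similarity matrix. A toy NumPy sketch of that loss (batch size, temperature, and random features are illustrative, not MIM4DD's actual pipeline):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive (diagonal entry) must outscore
    every other row's positive, which acts as a negative. Lower loss
    corresponds to a higher lower bound on mutual information."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))        # diagonal = positive pairs

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
# Aligned pairs (small perturbation) vs. unrelated pairs.
aligned = info_nce(feats, feats + 0.01 * rng.standard_normal((4, 8)))
shuffled = info_nce(feats, rng.standard_normal((4, 8)))
```

When the two feature sets share information the loss drops, which is the mechanism MIM4DD exploits to pull synthetic-data features toward real-data features of the same class.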
id: 1907.03922
title: Are deep ResNets provably better than linear predictors?
Recent results in the literature indicate that a residual network (ResNet) composed of a single residual block outperforms linear predictors, in the sense that all local minima in its optimization landscape are at least as good as the best linear predictor. However, these results are limited to a single residual block (i.e., shallow ResNets), instead of the deep ResNets composed of multiple residual blocks. We take a step towards extending this result to deep ResNets. We start by two motivating examples. First, we show that there exist datasets for which all local minima of a fully-connected ReLU network are no better than the best linear predictor, whereas a ResNet has strictly better local minima. Second, we show that even at the global minimum, the representation obtained from the residual block outputs of a 2-block ResNet do not necessarily improve monotonically over subsequent blocks, which highlights a fundamental difficulty in analyzing deep ResNets. Our main theorem on deep ResNets shows under simple geometric conditions that, any critical point in the optimization landscape is either (i) at least as good as the best linear predictor; or (ii) the Hessian at this critical point has a strictly negative eigenvalue. Notably, our theorem shows that a chain of multiple skip-connections can improve the optimization landscape, whereas existing results study direct skip-connections to the last hidden layer or output layer. Finally, we complement our results by showing benign properties of the "near-identity regions" of deep ResNets, showing depth-independent upper bounds for the risk attained at critical points as well as the Rademacher complexity.
labels: cs.LG
__index_level_0__: 137,967
id: 2203.08788
title: Are Shortest Rationales the Best Explanations for Human Understanding?
Existing self-explaining models typically favor extracting the shortest possible rationales - snippets of an input text "responsible for" corresponding output - to explain the model prediction, with the assumption that shorter rationales are more intuitive to humans. However, this assumption has yet to be validated. Is the shortest rationale indeed the most human-understandable? To answer this question, we design a self-explaining model, LimitedInk, which allows users to extract rationales at any target length. Compared to existing baselines, LimitedInk achieves compatible end-task performance and human-annotated rationale agreement, making it a suitable representation of the recent class of self-explaining models. We use LimitedInk to conduct a user study on the impact of rationale length, where we ask human judges to predict the sentiment label of documents based only on LimitedInk-generated rationales with different lengths. We show rationales that are too short do not help humans predict labels better than randomly masked text, suggesting the need for more careful design of the best human rationales.
labels: cs.HC, cs.AI, cs.LG, cs.CL
__index_level_0__: 285,910
id: 1908.05849
title: AGDC: Automatic Garbage Detection and Collection
Waste management is one of the most significant problems throughout the world. Contemporary methods find it difficult to manage the volume of solid waste generated by the growing urban population. In this paper, we propose a hygienic and inexpensive system that uses Artificial Intelligence algorithms to detect garbage. Once garbage is detected, the system calculates its position using the camera alone. The proposed system is capable of distinguishing between valuables and garbage with more than 95% confidence in real time. Finally, a robotic arm controlled by a microcontroller picks up the garbage and places it in the bin. In conclusion, the paper describes a system capable of working like a human in terms of inspecting and collecting garbage. The system achieves 3-4 frames per second on a Raspberry Pi, detecting garbage in real time with over 90% confidence.
labels: cs.RO
__index_level_0__: 141,832
id: 2401.17838
title: A Cross-View Hierarchical Graph Learning Hypernetwork for Skill Demand-Supply Joint Prediction
The rapidly changing landscape of technology and industries leads to dynamic skill requirements, making it crucial for employees and employers to anticipate such shifts to maintain a competitive edge in the labor market. Existing efforts in this area either rely on domain-expert knowledge or regarding skill evolution as a simplified time series forecasting problem. However, both approaches overlook the sophisticated relationships among different skills and the inner-connection between skill demand and supply variations. In this paper, we propose a Cross-view Hierarchical Graph learning Hypernetwork (CHGH) framework for joint skill demand-supply prediction. Specifically, CHGH is an encoder-decoder network consisting of i) a cross-view graph encoder to capture the interconnection between skill demand and supply, ii) a hierarchical graph encoder to model the co-evolution of skills from a cluster-wise perspective, and iii) a conditional hyper-decoder to jointly predict demand and supply variations by incorporating historical demand-supply gaps. Extensive experiments on three real-world datasets demonstrate the superiority of the proposed framework compared to seven baselines and the effectiveness of the three modules.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
425,345
id: 2305.15689
title: Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts
Recent studies have demonstrated that natural-language prompts can help to leverage the knowledge learned by pre-trained language models for the binary sentence-level sentiment classification task. Specifically, these methods utilize few-shot learning settings to fine-tune the sentiment classification model using manual or automatically generated prompts. However, the performance of these methods is sensitive to the perturbations of the utilized prompts. Furthermore, these methods depend on a few labeled instances for automatic prompt generation and prompt ranking. This study aims to find high-quality prompts for the given task in a zero-shot setting. Given a base prompt, our proposed approach automatically generates multiple prompts similar to the base prompt employing positional, reasoning, and paraphrasing techniques and then ranks the prompts using a novel metric. We empirically demonstrate that the top-ranked prompts are high-quality and significantly outperform the base prompt and the prompts generated using few-shot learning for the binary sentence-level sentiment classification task.
labels: cs.AI, cs.CL
__index_level_0__: 367,723
id: 1105.4408
title: A Simple Proof of the Mutual Incoherence Condition for Orthogonal Matching Pursuit
This paper provides a simple proof of the mutual incoherence condition $\mu < \frac{1}{2K-1}$ under which K-sparse signal can be accurately reconstructed from a small number of linear measurements using the orthogonal matching pursuit (OMP) algorithm. Our proof, based on mathematical induction, is built on an observation that the general step of the OMP process is in essence same as the initial step since the residual is considered as a new measurement preserving the sparsity level of an input vector.
labels: cs.IT
__index_level_0__: 10,463
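For reference, the OMP process the proof reasons about is the standard greedy loop: pick the column most correlated with the current residual, re-fit by least squares on the chosen support, and repeat. A NumPy sketch (the dimensions and toy 2-sparse signal are illustrative, not tied to the paper's analysis):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the residual, then re-fit y by least squares on the
    accumulated support so the residual stays orthogonal to chosen columns."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy setup: measure a 2-sparse vector with a random unit-norm-column matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns, as in coherence analyses
x_true = np.zeros(100)
x_true[[7, 53]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

The re-fit step is what makes the residual a valid "new measurement" of the remaining components, which is the observation the paper's induction argument builds on.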
id: 2307.01845
title: Deep Features for Contactless Fingerprint Presentation Attack Detection: Can They Be Generalized?
The rapid evolution of high-end smartphones with advanced high-resolution cameras has resulted in contactless capture of fingerprint biometrics that are more reliable and suitable for verification. Similar to other biometric systems, contactless fingerprint-verification systems are vulnerable to presentation attacks. In this paper, we present a comparative study on the generalizability of seven different pre-trained Convolutional Neural Networks (CNN) and a Vision Transformer (ViT) to reliably detect presentation attacks. Extensive experiments were carried out on publicly available smartphone-based presentation attack datasets using four different Presentation Attack Instruments (PAI). The detection performance of the eighth deep feature technique was evaluated using the leave-one-out protocol to benchmark the generalization performance for unseen PAI. The obtained results indicated the best generalization performance with the ResNet50 CNN.
labels: cs.CV
__index_level_0__: 377,487
id: 1705.00253
title: Multi-dueling Bandits with Dependent Arms
The dueling bandits problem is an online learning framework for learning from pairwise preference feedback, and is particularly well-suited for modeling settings that elicit subjective or implicit human feedback. In this paper, we study the problem of multi-dueling bandits with dependent arms, which extends the original dueling bandits setting by simultaneously dueling multiple arms as well as modeling dependencies between arms. These extensions capture key characteristics found in many real-world applications, and allow for the opportunity to develop significantly more efficient algorithms than were possible in the original setting. We propose the \selfsparring algorithm, which reduces the multi-dueling bandits problem to a conventional bandit setting that can be solved using a stochastic bandit algorithm such as Thompson Sampling, and can naturally model dependencies using a Gaussian process prior. We present a no-regret analysis for multi-dueling setting, and demonstrate the effectiveness of our algorithm empirically on a wide range of simulation settings.
labels: cs.LG
__index_level_0__: 72,645
id: 2002.12435
title: Learning in Markov Decision Processes under Constraints
We consider reinforcement learning (RL) in Markov Decision Processes in which an agent repeatedly interacts with an environment that is modeled by a controlled Markov process. At each time step $t$, it earns a reward, and also incurs a cost-vector consisting of $M$ costs. We design model-based RL algorithms that maximize the cumulative reward earned over a time horizon of $T$ time-steps, while simultaneously ensuring that the average values of the $M$ cost expenditures are bounded by agent-specified thresholds $c^{ub}_i,i=1,2,\ldots,M$. In order to measure the performance of a reinforcement learning algorithm that satisfies the average cost constraints, we define an $M+1$ dimensional regret vector that is composed of its reward regret, and $M$ cost regrets. The reward regret measures the sub-optimality in the cumulative reward, while the $i$-th component of the cost regret vector is the difference between its $i$-th cumulative cost expense and the expected cost expenditures $Tc^{ub}_i$. We prove that the expected value of the regret vector of UCRL-CMDP, is upper-bounded as $\tilde{O}\left(T^{2\slash 3}\right)$, where $T$ is the time horizon. We further show how to reduce the regret of a desired subset of the $M$ costs, at the expense of increasing the regrets of rewards and the remaining costs. To the best of our knowledge, ours is the only work that considers non-episodic RL under average cost constraints, and derive algorithms that can~\emph{tune the regret vector} according to the agent's requirements on its cost regrets.
labels: cs.LG
__index_level_0__: 166,027
id: 2301.00199
title: Action Codes
We provide a new perspective on the problem how high-level state machine models with abstract actions can be related to low-level models in which these actions are refined by sequences of concrete actions. We describe the connection between high-level and low-level actions using \emph{action codes}, a variation of the prefix codes known from coding theory. For each action code ${\mathcal{R}}$, we introduce a \emph{contraction} operator $\alpha_{\mathcal{R}}$ that turns a low-level model $\mathcal{M}$ into a high-level model, and a \emph{refinement} operator $\rho_{\mathcal{R}}$ that transforms a high-level model $\mathcal{N}$ into a low-level model. We establish a Galois connection $\rho_{\mathcal{R}}(\mathcal{N}) \sqsubseteq \mathcal{M} \Leftrightarrow \mathcal{N} \sqsubseteq \alpha_{\mathcal{R}}(\mathcal{M})$, where $\sqsubseteq$ is the well-known simulation preorder. For conformance, we typically want to obtain an overapproximation of model $\mathcal{M}$. To this end, we also introduce a \emph{concretization} operator $\gamma_{\mathcal{R}}$, which behaves like the refinement operator but adds arbitrary behavior at intermediate points, giving us a second Galois connection $\alpha_{\mathcal{R}}(\mathcal{M}) \sqsubseteq \mathcal{N} \Leftrightarrow \mathcal{M} \sqsubseteq \gamma_{\mathcal{R}}(\mathcal{N})$. Action codes may be used to construct adaptors that translate between concrete and abstract actions during learning and testing of Mealy machines. If Mealy machine $\mathcal{M}$ models a black-box system then $\alpha_{\mathcal{R}}(\mathcal{M})$ describes the behavior that can be observed by a learner/tester that interacts with this system via an adaptor derived from code ${\mathcal{R}}$. Whenever $\alpha_{\mathcal{R}}(\mathcal{M})$ implements (or conforms to) $\mathcal{N}$, we may conclude that $\mathcal{M}$ implements (or conforms to) $\gamma_{{\mathcal{R}}} (\mathcal{N})$.
labels: cs.IT, Other
__index_level_0__: 338,835
id: 1906.02121
title: Classifying Norm Conflicts using Learned Semantic Representations
While most social norms are informal, they are often formalized by companies in contracts to regulate trades of goods and services. When poorly written, contracts may contain normative conflicts resulting from opposing deontic meanings or contradictory specifications. As contracts tend to be long and contain many norms, manually identifying such conflicts requires human effort that is time-consuming and error-prone. Automating this task benefits contract makers by increasing productivity and making conflict identification more reliable. To address this problem, we introduce an approach to detect and classify norm conflicts in contracts by converting them into latent representations that preserve both syntactic and semantic information and training a model to classify norm conflicts into four conflict types. Our results set a new state of the art compared to a previous approach.
labels: cs.AI, cs.CL
__index_level_0__: 133,946
id: 2012.04221
title: Ditto: Fair and Robust Federated Learning Through Personalization
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
labels: cs.LG
__index_level_0__: 210,385
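Ditto's personalization can be summarized as each device minimizing its local loss plus a proximal term pulling the personalized model toward the shared global one. A toy gradient-descent sketch under a quadratic local loss (the objective form follows the abstract's description of personalization; the solver, step sizes, and numbers are illustrative assumptions):

```python
import numpy as np

def ditto_personalize(w_global, grad_fk, lam=0.1, lr=0.05, steps=200):
    """Personalization step: locally minimize
    F_k(v) + (lam / 2) * ||v - w_global||^2 by gradient descent.
    lam trades off local fit against staying near the global model."""
    v = w_global.copy()
    for _ in range(steps):
        v -= lr * (grad_fk(v) + lam * (v - w_global))
    return v

# Toy device-k loss F_k(v) = 0.5 * ||v - v_star||^2, so grad F_k(v) = v - v_star.
v_star = np.array([1.0, -2.0])
w_global = np.zeros(2)
v = ditto_personalize(w_global, lambda v: v - v_star, lam=1.0)
```

With this quadratic loss and lam=1.0 the minimizer is the midpoint between the local optimum `v_star` and the global model, illustrating how the proximal term interpolates between pure local training (lam near 0) and the global model (large lam).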
id: 2010.13309
title: Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition
We propose a novel decentralized feature extraction approach in federated learning to address privacy-preservation issues for speech recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction, and a recurrent neural network (RNN) based end-to-end acoustic model (AM). To enhance model parameter protection in a decentralized architecture, an input speech is first up-streamed to a quantum computing server to extract Mel-spectrogram, and the corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters. The encoded features are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of the quantum learning progress to secure models and to avoid privacy leakage attacks. Testing on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional features. We also conduct an in-depth study of different quantum circuit encoder architectures to provide insights into designing QCNN-based feature extractors. Neural saliency analyses demonstrate a correlation between the proposed QCNN features, class activation maps, and input spectrograms. We provide an implementation for future studies.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
203,078
2108.02482
MixMicrobleed: Multi-stage detection and segmentation of cerebral microbleeds
Cerebral microbleeds are small, dark, round lesions that can be visualised on T2*-weighted MRI or other sequences sensitive to susceptibility effects. In this work, we propose a multi-stage approach to both microbleed detection and segmentation. First, possible microbleed locations are detected with a Mask R-CNN technique. Second, at each possible microbleed location, a simple U-Net performs the final segmentation. This work used the 72 subjects as training data provided by the "Where is VALDO?" challenge of MICCAI 2021.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
249,339
2211.12701
Continual Learning of Natural Language Processing Tasks: A Survey
Continual learning (CL) is a learning paradigm that emulates the human capability of learning and accumulating knowledge continually without forgetting the previously learned knowledge and also transferring the learned knowledge to help learn new tasks better. This survey presents a comprehensive review and analysis of the recent progress of CL in NLP, which has significant differences from CL in computer vision and machine learning. It covers (1) all CL settings with a taxonomy of existing techniques; (2) catastrophic forgetting (CF) prevention, (3) knowledge transfer (KT), which is particularly important for NLP tasks; and (4) some theory and the hidden challenge of inter-task class separation (ICS). (1), (3) and (4) have not been included in the existing survey. Finally, a list of future directions is discussed.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
true
false
false
332,198
2411.01447
Privacy-Preserving Customer Churn Prediction Model in the Context of Telecommunication Industry
Data is the main fuel of a successful machine learning model. A dataset may contain sensitive individual records, e.g., personal health records, financial data, or industrial information. Training a model using such sensitive data has become a new privacy concern when third-party cloud computing is used. Trained models also suffer privacy attacks that leak sensitive information from the training data. This study is conducted to preserve the privacy of training data in the context of customer churn prediction modeling for the telecommunications industry (TCI). In this work, we propose a framework for a privacy-preserving customer churn prediction (PPCCP) model in the cloud environment. We propose a novel approach that combines Generative Adversarial Networks (GANs) and adaptive Weight-of-Evidence (aWOE). Synthetic data is generated from GANs, and aWOE is applied on the synthetic training dataset before feeding the data to the classification algorithms. Our experiments were carried out using eight different machine learning (ML) classifiers on three openly accessible datasets from the telecommunication sector. We then evaluated the performance using six commonly employed evaluation metrics. In addition to presenting a data privacy analysis, we also performed a statistical significance test. The training and prediction processes achieve data privacy and the prediction classifiers achieve high prediction performance (87.1% in terms of F-measure for the GANs-aWOE based Naïve Bayes model). In contrast to earlier studies, our suggested approach demonstrates a prediction enhancement of up to 28.9% and 27.9% in terms of accuracy and F-measure, respectively.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
505,078
2011.12390
Anomaly Detection Model for Imbalanced Datasets
This paper proposes a method to detect bank frauds using a mixed approach combining a stochastic intensity model with the probability of fraud observed on transactions. It is a dynamic unsupervised approach which is able to predict financial frauds. The fraud prediction probability on the financial transaction is derived as a function of the dynamic intensities. In this context, the Kalman filter method is proposed to estimate the dynamic intensities. The application of our methodology to financial datasets shows a better predictive power in higher imbalanced data compared to other intensity-based models.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
208,138
2104.05018
TedNet: A Pytorch Toolkit for Tensor Decomposition Networks
Tensor Decomposition Networks (TDNs) prevail for their inherent compact architectures. To give more researchers a flexible way to exploit TDNs, we present a Pytorch toolkit named TedNet. TedNet implements five kinds of tensor decomposition (i.e., CANDECOMP/PARAFAC (CP), Block-Term Tucker (BTT), Tucker-2, Tensor Train (TT), and Tensor Ring (TR)) on traditional deep neural layers, the convolutional layer and the fully-connected layer. By utilizing these basic layers, it is simple to construct a variety of TDNs. TedNet is available at https://github.com/tnbar/tednet.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
229,574
2110.09376
Planning of EM Skins for Improved Quality-of-Service in Urban Areas
The optimal planning of electromagnetic skins (EMSs) installed on the building facades to enhance the received signal strength, thus the wireless coverage and/or the quality-of-service (QoS) in large-scale urban areas, is addressed. More specifically, a novel instance of the System-by-Design (SbD) paradigm is proposed towards the implementation of a smart electromagnetic environment (SEME) where low-cost passive static reflective skins are deployed to enhance the level of the power received within selected regions-of-interest (RoIs). Thanks to the ad-hoc customization of the SbD functional blocks, which includes the exploitation of a digital twin (DT) for the accurate yet fast assessment of the wireless coverage condition, effective solutions are yielded. Numerical results, dealing with real-world test-beds, are shown to assess the capabilities, the potentialities, and the current limitations of the proposed EMSs planning strategy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
261,789
1812.01840
Attention Boosted Sequential Inference Model
Attention mechanisms have been proven effective in natural language processing. This paper proposes an attention-boosted natural language inference model named aESIM, created by adding word attention and adaptive direction-oriented attention mechanisms to the traditional Bi-LSTM layer of natural language inference models such as ESIM. This gives the inference model aESIM the ability to effectively learn the representation of words and to model the local subsentential inference between pairs of premise and hypothesis. Empirical studies on the SNLI, MultiNLI and Quora benchmarks show that aESIM is superior to the original ESIM model.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
115,630
2310.08080
RT-SRTS: Angle-Agnostic Real-Time Simultaneous 3D Reconstruction and Tumor Segmentation from Single X-Ray Projection
Radiotherapy is one of the primary treatment methods for tumors, but the organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor and are only validated for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed which integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, the attention enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules have been proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the required time threshold for real-time tumor tracking. The efficacies of both AEC and URE have also been validated in ablation studies. The code of this work is available at https://github.com/ZywooSimple/RT-SRTS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
399,252
2502.04635
Exercise Specialists' Evaluation of Robot-led Physical Therapy for People with Parkinson's Disease
Robot-led physical therapy (PT) offers a promising avenue to enhance the care provided by clinical exercise specialists (ES) and physical and occupational therapists to improve patients' adherence to prescribed exercises outside of a clinic, such as at home. Collaborative efforts among roboticists, ES, physical and occupational therapists, and patients are essential for developing interactive, personalized exercise systems that meet each stakeholder's needs. We conducted a user study in which 11 ES evaluated a novel robot-led PT system for people with Parkinson's disease (PD), introduced in [1], focusing on the system's perceived efficacy and acceptance. Utilizing a mixed-methods approach, including technology acceptance questionnaires, task load questionnaires, and semi-structured interviews, we gathered comprehensive insights into ES perspectives and experiences after interacting with the system. Findings reveal a broadly positive reception, which highlights the system's capacity to augment traditional PT for PD, enhance patient engagement, and ensure consistent exercise support. We also identified two key areas for improvement: incorporating more human-like feedback systems and increasing the robot's ease of use. This research emphasizes the value of incorporating robotic aids into PT for PD, offering insights that can guide the development of more effective and user-friendly rehabilitation technologies.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
531,248
2304.03561
A Low-Complexity Diversity-Preserving Universal Bit-Flipping Enhanced Hard Decision Decoder for Arbitrary Linear Codes
V2X (Vehicle-to-everything) communication relies on short messages for short-range transmissions over a fading wireless channel, yet requires high reliability and low latency. Hard-decision decoding sacrifices the preservation of diversity order, leading to pronounced performance degradation in fading channels. By contrast, soft-decision decoding retains diversity order, albeit at the cost of increased computational complexity. We introduce a novel enhanced hard-decision decoder termed as the Diversity Flip decoder (DFD) designed for preserving the diversity order. Moreover, it exhibits 'universal' applicability to all linear block codes. For a $\mathscr{C}(n,k)$ code having a minimum distance ${d_{\min}}$, the proposed decoder incurs a worst-case complexity order of $2^{({d_{\min}}-1)}-1$. Notably, for codes having low ${d_{\min}}$, this complexity represents a significant reduction compared to the popular soft and hard decision decoding algorithms. Due to its capability of maintaining diversity at a low complexity, it is eminently suitable for applications such as V2X (Vehicle-to-everything), IoT (Internet of Things), mMTC (Massive Machine type Communications), URLLC (Ultra-Reliable Low Latency Communications) and WBAN (Wireless Body Area Networks) for efficient decoding with favorable performance characteristics. The simulation results provided for various known codes and decoding algorithms validate the performance versus complexity benefits of the proposed decoder.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
356,864
2406.12910
Human-level molecular optimization driven by mol-gene evolution
De novo molecule generation allows the search for more drug-like hits across a vast chemical space. However, lead optimization is still required, and the process of optimizing molecular structures faces the challenge of balancing structural novelty with pharmacological properties. This study introduces the Deep Genetic Molecular Modification Algorithm (DGMM), which brings structure modification to the level of medicinal chemists. A discrete variational autoencoder (D-VAE) is used in DGMM to encode molecules as quantization code, mol-gene, which incorporates deep learning into genetic algorithms for flexible structural optimization. The mol-gene allows for the discovery of pharmacologically similar but structurally distinct compounds, and reveals the trade-offs of structural optimization in drug discovery. We demonstrate the effectiveness of the DGMM in several applications.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
465,619
2205.05477
Marsupial Walking-and-Flying Robotic Deployment for Collaborative Exploration of Unknown Environments
This work contributes a marsupial robotic system-of-systems involving a legged and an aerial robot capable of collaborative mapping and exploration path planning that exploits the heterogeneous properties of the two systems and the ability to selectively deploy the aerial system from the ground robot. Exploiting the dexterous locomotion capabilities and long endurance of quadruped robots, the marsupial combination can explore within large-scale and confined environments involving rough terrain. However, as certain types of terrain or vertical geometries can render any ground system unable to continue its exploration, the marsupial system can - when needed - deploy the flying robot which, by exploiting its 3D navigation capabilities, can undertake a focused exploration task within its endurance limitations. Focusing on autonomy, the two systems can co-localize and map together by sharing LiDAR-based maps and plan exploration paths individually, while a tailored graph search onboard the legged robot allows it to identify where and when the ferried aerial platform should be deployed. The system is verified within multiple experimental studies demonstrating the expanded exploration capabilities of the marsupial system-of-systems and facilitating the exploration of otherwise individually unreachable areas.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
295,944
1108.2829
Energy Minimization for the Half-Duplex Relay Channel with Decode-Forward Relaying
We analyze coding for energy efficiency in relay channels at a fixed source rate. We first propose a half-duplex decode-forward coding scheme for the Gaussian relay channel. We then derive three optimal sets of power allocation, which respectively minimize the network, the relay and the source energy consumption. These optimal power allocations are given in closed-form, which have so far remained implicit for maximum-rate schemes. Moreover, analysis shows that minimizing the network energy consumption at a given rate is not equivalent to maximizing the rate given energy, since it only covers part of all rates achievable by decode-forward. We thus combine the optimized schemes for network and relay energy consumptions into a generalized one, which then covers all achievable rates. This generalized scheme is not only energy-optimal for the desired source rate but also rate-optimal for the consumed energy. The results also give a detailed understanding of the power consumption regimes and allow a comprehensive description of the optimal message coding and resource allocation for each desired source rate and channel realization. Finally, we simulate the proposed schemes in a realistic environment, considering path-loss and shadowing as modelled in the 3GPP standard. Significant energy gain can be obtained over both direct and two-hop transmissions, particularly when the source is far from relay and destination.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
11,651
2305.14688
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts
The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at https://github.com/OFA-Sys/ExpertLLaMA.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
367,184
2412.01926
Beyond Pairwise Correlations: Higher-Order Redundancies in Self-Supervised Representation Learning
Several self-supervised learning (SSL) approaches have shown that redundancy reduction in the feature embedding space is an effective tool for representation learning. However, these methods consider a narrow notion of redundancy, focusing on pairwise correlations between features. To address this limitation, we formalize the notion of embedding space redundancy and introduce redundancy measures that capture more complex, higher-order dependencies. We mathematically analyze the relationships between these metrics, and empirically measure these redundancies in the embedding spaces of common SSL methods. Based on our findings, we propose Self Supervised Learning with Predictability Minimization (SSLPM) as a method for reducing redundancy in the embedding space. SSLPM combines an encoder network with a predictor engaging in a competitive game of reducing and exploiting dependencies respectively. We demonstrate that SSLPM is competitive with state-of-the-art methods and find that the best performing SSL methods exhibit low embedding space redundancy, suggesting that even methods without explicit redundancy reduction mechanisms perform redundancy reduction implicitly.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
513,305
2312.06514
Where exactly does contextualization in a PLM happen?
Pre-trained Language Models (PLMs) have shown to be consistently successful in a plethora of NLP tasks due to their ability to learn contextualized representations of words (Ethayarajh, 2019). BERT (Devlin et al., 2018), ELMo (Peters et al., 2018) and other PLMs encode word meaning via textual context, as opposed to static word embeddings, which encode all meanings of a word in a single vector representation. In this work, we present a study that aims to localize where exactly in a PLM word contextualization happens. In order to find the location of this word meaning transformation, we investigate representations of polysemous words in the basic BERT uncased 12 layer architecture (Devlin et al., 2018), a masked language model trained on an additional sentence adjacency objective, using qualitative and quantitative measures.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
414,545
2301.13289
On the Statistical Benefits of Temporal Difference Learning
Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD's errors are bounded in terms of a novel measure - the problem's trajectory crossing time - which can be much smaller than the problem's time horizon.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
342,840
2004.04438
Improving Readability for Automatic Speech Recognition Transcription
Modern Automatic Speech Recognition (ASR) systems can achieve high performance in terms of recognition accuracy. However, a perfectly accurate transcript still can be challenging to read due to grammatical errors, disfluency, and other errata common in spoken communication. Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline. In this work, we propose a novel NLP task called ASR post-processing for readability (APR) that aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker. In addition, we describe a method to address the lack of task-specific data by synthesizing examples for the APR task using the datasets collected for Grammatical Error Correction (GEC) followed by text-to-speech (TTS) and ASR. Furthermore, we propose metrics borrowed from similar tasks to evaluate performance on the APR task. We compare fine-tuned models based on several open-sourced and adapted pre-trained models with the traditional pipeline method. Our results suggest that finetuned models improve the performance on the APR task significantly, hinting at the potential benefits of using APR systems. We hope that the read, understand, and rewrite approach of our work can serve as a basis that many NLP tasks and human readers can benefit from.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
171,881
2306.09718
Label-noise-tolerant medical image classification via self-attention and self-supervised learning
Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale accurately annotated training data. However, label noise is inevitably introduced in medical image annotation, as the labeling process heavily relies on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the performance of models. Therefore, in this work, we devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into vanilla supervised learning. The contrastive learning for the feature extractor helps to enhance the visual representation of DNNs. The intra-group attention mixup module constructs groups and assigns self-attention weights for group-wise samples, and subsequently interpolates massive noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise, and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has certain potential for real-world clinical applications.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
373,941
2305.01870
Task-Aware Risk Estimation of Perception Failures for Autonomous Vehicles
Safety and performance are key enablers for autonomous driving: on the one hand we want our autonomous vehicles (AVs) to be safe, while at the same time their performance (e.g., comfort or progression) is key to adoption. To effectively walk the tight-rope between safety and performance, AVs need to be risk-averse, but not entirely risk-avoidant. To facilitate safe-yet-performant driving, in this paper, we develop a task-aware risk estimator that assesses the risk a perception failure poses to the AV's motion plan. If the failure has no bearing on the safety of the AV's motion plan, then regardless of how egregious the perception failure is, our task-aware risk estimator considers the failure to have a low risk; on the other hand, if a seemingly benign perception failure severely impacts the motion plan, then our estimator considers it to have a high risk. In this paper, we propose a task-aware risk estimator to decide whether a safety maneuver needs to be triggered. To estimate the task-aware risk, first, we leverage the perception failure - detected by a perception monitor - to synthesize an alternative plausible model for the vehicle's surroundings. The risk due to the perception failure is then formalized as the "relative" risk to the AV's motion plan between the perceived and the alternative plausible scenario. We employ a statistical tool called copula, which models tail dependencies between distributions, to estimate this risk. The theoretical properties of the copula allow us to compute probably approximately correct (PAC) estimates of the risk. We evaluate our task-aware risk estimator using NuPlan and compare it with established baselines, showing that the proposed risk estimator achieves the best F1-score (doubling the score of the best baseline) and exhibits a good balance between recall and precision, i.e., a good balance of safety and performance.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
361,827
2211.07052
Treatment-RSPN: Recurrent Sum-Product Networks for Sequential Treatment Regimes
Sum-product networks (SPNs) have recently emerged as a novel deep learning architecture enabling highly efficient probabilistic inference. Since their introduction, SPNs have been applied to a wide range of data modalities and extended to time-sequence data. In this paper, we propose a general framework for modelling sequential treatment decision-making behaviour and treatment response using recurrent sum-product networks (RSPNs). Models developed using our framework benefit from the full range of RSPN capabilities, including the abilities to model the full distribution of the data, to seamlessly handle latent variables, missing values and categorical data, and to efficiently perform marginal and conditional inference. Our methodology is complemented by a novel variant of the expectation-maximization algorithm for RSPNs, enabling efficient training of our models. We evaluate our approach on a synthetic dataset as well as real-world data from the MIMIC-IV intensive care unit medical database. Our evaluation demonstrates that our approach can closely match the ground-truth data generation process on synthetic data and achieve results close to neural and probabilistic baselines while using a tractable and interpretable model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
330,106
1612.08813
Optimization of Test Case Generation using Genetic Algorithm (GA)
Testing provides a means of assuring software performance. The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. However, software testing has several underlying issues, such as effectively generating and prioritizing test cases, which are important and need attention. One of the greatest problems in the software testing area is how to obtain a suitable set of test cases to verify software. Various strategies and methodologies have been proposed to take care of these issues. The Genetic Algorithm (GA) belongs to the family of evolutionary algorithms. Evolutionary algorithms play a significant role in automatic test generation, and many researchers are focusing on them. In this study we explore software testing related issues using the GA approach. After applying some analysis, a better solution is produced that is feasible and reliable. The research presents an implementation of GAs for the generation of optimized test cases. In this way, this paper gives a proficient system for the optimization of test case generation using a genetic algorithm.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
66,116
2407.13517
Mask2Map: Vectorized HD Map Construction Using Bird's Eye View Segmentation Masks
In this paper, we introduce Mask2Map, a novel end-to-end online HD map construction method designed for autonomous driving applications. Our approach focuses on predicting the class and ordered point set of map instances within a scene, represented in the bird's eye view (BEV). Mask2Map consists of two primary components: the Instance-Level Mask Prediction Network (IMPNet) and the Mask-Driven Map Prediction Network (MMPNet). IMPNet generates Mask-Aware Queries and BEV Segmentation Masks to capture comprehensive semantic information globally. Subsequently, MMPNet enhances these query features using local contextual information through two submodules: the Positional Query Generator (PQG) and the Geometric Feature Extractor (GFE). PQG extracts instance-level positional queries by embedding BEV positional information into Mask-Aware Queries, while GFE utilizes BEV Segmentation Masks to generate point-level geometric features. However, we observed limited performance in Mask2Map due to inter-network inconsistency stemming from different predictions to Ground Truth (GT) matching between IMPNet and MMPNet. To tackle this challenge, we propose the Inter-network Denoising Training method, which guides the model to denoise the output affected by both noisy GT queries and perturbed GT Segmentation Masks. Our evaluation conducted on nuScenes and Argoverse2 benchmarks demonstrates that Mask2Map achieves remarkable performance improvements over previous state-of-the-art methods, with gains of 10.1% mAP and 4.1 mAP, respectively. Our code can be found at https://github.com/SehwanChoi0307/Mask2Map.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,403
1712.00164
Generative Adversarial Networks for Electronic Health Records: A Framework for Exploring and Evaluating Methods for Predicting Drug-Induced Laboratory Test Trajectories
Generative Adversarial Networks (GANs) represent a promising class of generative networks that combine neural networks with game theory. From generating realistic images and videos to assisting musical creation, GANs are transforming many fields of arts and sciences. However, their application to healthcare has not been fully realized, more specifically in generating electronic health records (EHR) data. In this paper, we propose a framework for exploring the value of GANs in the context of continuous laboratory time series data. We devise an unsupervised evaluation method that measures the predictive power of synthetic laboratory test time series. Further, we show that when it comes to predicting the impact of drug exposure on laboratory test data, incorporating representation learning of the training cohorts prior to training GAN models is beneficial.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
85,842
2208.13141
Federated Learning of Large Models at the Edge via Principal Sub-Model Training
Federated Learning (FL) is emerging as a popular, promising decentralized learning framework that enables collaborative training among clients, with no need to share private data between them or with a centralized server. However, considering that many edge clients do not have sufficient computing, memory, or communication capabilities, federated learning of large models still faces significant bottlenecks. To keep such weak but crucial clients in the loop, prior works either consider a heterogeneous-client setting where clients train models with different sizes, or offload training to the server. However, the heterogeneous-client setting requires some clients to train the full model, which is not aligned with the resource-constrained setting, while the latter approaches break privacy promises in FL when sharing intermediate representations or labels with the server. To overcome these limitations, in this work, we formulate a realistic, but much less explored, cross-device FL setting in which no client can train a full large model nor is willing to share any intermediate information with the remote server. Under such a formulation, we develop a principal sub-model (PriSM) training methodology to collaboratively train a full large model, while assigning each client a small sub-model that is a probabilistic low-rank approximation to the full server model. When creating sub-models, PriSM first performs a principal kernel analysis in the orthogonal kernel space to obtain the importance of each kernel. Then, PriSM adopts a novel importance-aware sampling process to select a subset of kernels (i.e., a kernel with high importance is assigned a higher sampling probability). This sampling process ensures each sub-model is still a low-rank approximation to the full model, while all sub-models together achieve nearly full coverage of the principal kernels.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
314,967
2411.02366
Accelerating Multi-UAV Collaborative Sensing Data Collection: A Hybrid TDMA-NOMA-Cooperative Transmission in Cell-Free MIMO Networks
This work investigates a collaborative sensing and data collection system in which multiple unmanned aerial vehicles (UAVs) sense an area of interest and transmit images to a cloud server (CS) for processing. To accelerate the completion of sensing missions, including data transmission, the sensing task is divided into individual private sensing tasks for each UAV and a common sensing task that is executed by all UAVs to enable cooperative transmission. Unlike existing studies, we explore the use of an advanced cell-free multiple-input multiple-output (MIMO) network, which effectively manages inter-UAV interference. To further optimize wireless channel utilization, we propose a hybrid transmission strategy that combines time-division multiple access (TDMA), non-orthogonal multiple access (NOMA), and cooperative transmission. The problem of jointly optimizing task splitting ratios and the hybrid TDMA-NOMA-cooperative transmission strategy is formulated with the objective of minimizing mission completion time. Extensive numerical results demonstrate the effectiveness of the proposed task allocation and hybrid transmission scheme in accelerating the completion of sensing missions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
505,460
2205.01435
RLFlow: Optimising Neural Network Subgraph Transformation with World Models
Training deep learning models takes an extremely long time and consumes large amounts of computing resources. At the same time, recent research has proposed systems and compilers that are expected to decrease the runtime of deep learning models. An effective optimisation methodology for data processing is desirable, and the reduction of the compute requirements of deep learning models is the focus of extensive research. In this paper, we address neural network sub-graph transformation by exploring reinforcement learning (RL) agents to achieve performance improvement. Our proposed approach, RLFlow, can learn to perform neural network subgraph transformations without the need for expertly designed heuristics to achieve a high level of performance. Recent work has aimed at applying RL to computer systems with some success, especially using model-free RL techniques. Model-based reinforcement learning methods have seen an increased focus in research as they can be used to learn the transition dynamics of the environment; this can be leveraged to train an agent using a hallucinogenic environment such as a World Model (WM), thereby increasing sample efficiency compared to model-free approaches. WM uses variational auto-encoders to build a model of the system that can be explored inexpensively. In RLFlow, we propose a design for a model-based agent with a WM which learns to optimise the architecture of neural networks by performing a sequence of sub-graph transformations to reduce model runtime. We show that our approach can match state-of-the-art performance on common convolutional networks and outperforms approaches based on transformer-style architectures by up to 5%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
294,584
2309.07496
Comparison of Middlewares in Edge-to-Edge and Edge-to-Cloud Communication for Distributed ROS2 Systems
The increased data transmission and number of devices involved in communications among distributed systems make it challenging yet essential to have an efficient and reliable networking middleware. In robotics and autonomous systems, the wide application of ROS 2 brings the possibility of utilizing various networking middlewares together with DDS in ROS 2 for better communication among edge devices or between edge devices and the cloud. However, there is a lack of comprehensive communication performance comparisons of integrating these networking middlewares with ROS 2. In this study, we provide a quantitative analysis of the communication performance of networking middlewares including MQTT and Zenoh alongside DDS in ROS 2 across a multi-host system. For a complete and reliable comparison, we measure the latency and throughput of these middlewares by sending distinct amounts and types of data through different network setups including Ethernet, Wi-Fi, and 4G. To further extend the evaluation to real-world application scenarios, we assess the drift error (the position changes) over time caused by these networking middlewares with the robot moving along an identical square-shaped path. Our results show that CycloneDDS performs better under Ethernet while Zenoh performs better under Wi-Fi and 4G. In the actual robot test, the robot's moving-trajectory drift error over time (96 s) via Zenoh is the smallest. It is worth noting that we also discuss the CPU utilization of these networking middlewares and the performance impact caused by enabling the security feature in ROS 2 at the end of the paper.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
391,807
1608.04767
Proceedings of the LexSem+Logics Workshop 2016
Lexical semantics continues to play an important role in driving research directions in NLP, with the recognition and understanding of context becoming increasingly important in delivering successful outcomes in NLP tasks. Besides traditional processing areas such as word sense and named entity disambiguation, the creation and maintenance of dictionaries, annotated corpora and resources have become cornerstones of lexical semantics research and produced a wealth of contextual information that NLP processes can exploit. New efforts both to link and construct from scratch such information - as Linked Open Data or by way of formal tools coming from logic, ontologies and automated reasoning - have increased the interoperability and accessibility of resources for lexical and computational semantics, even in those languages for which they have previously been limited. LexSem+Logics 2016 combines the 1st Workshop on Lexical Semantics for Lesser-Resourced Languages and the 3rd Workshop on Logics and Ontologies. The accepted papers in our program covered topics across these two areas, including: the encoding of plurals in Wordnets, the creation of a thesaurus from multiple sources based on semantic similarity metrics, and the use of cross-lingual treebanks and annotations for universal part-of-speech tagging. We also welcomed talks from two distinguished speakers: on Portuguese lexical knowledge bases (different approaches, results and their application in NLP tasks) and on new strategies for open information extraction (the capture of verb-based propositions from massive text corpora).
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
59,881
1805.09781
Efficient Inference in Multi-task Cox Process Models
We generalize the log Gaussian Cox process (LGCP) framework to model multiple correlated point data jointly. The observations are treated as realizations of multiple LGCPs, whose log intensities are given by linear combinations of latent functions drawn from Gaussian process priors. The combination coefficients are also drawn from Gaussian processes and can incorporate additional dependencies. We derive closed-form expressions for the moments of the intensity functions and develop an efficient variational inference algorithm that is orders of magnitude faster than competing deterministic and stochastic approximations of multivariate LGCP, coregionalization models, and multi-task permanental processes. Our approach outperforms these benchmarks in multiple problems, offering the current state of the art in modeling multivariate point processes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,497
2404.02162
LoS Sensing-based Channel Estimation in UAV-Assisted OFDM Systems
In unmanned aerial vehicle (UAV)-assisted orthogonal frequency division multiplexing (OFDM) systems, the potential advantage of the line-of-sight (LoS) path, characterized by its high probability of existence, has not been fully harnessed, thereby impeding the improvement of channel estimation (CE) accuracy. Inspired by the ideas of integrated sensing and communication (ISAC), this letter develops a LoS sensing method aimed at detecting the presence of LoS path. Leveraging the prior information obtained from LoS path detection, the detection thresholds for resolvable paths are proposed for LoS and Non-LoS (NLoS) scenarios, respectively. By employing these specifically designed detection thresholds, denoising processing is applied to classical least square (LS) CE, thereby improving the CE accuracy. Simulation results validate the effectiveness of the proposed method in enhancing CE accuracy and demonstrate its robustness against parameter variations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
443,747
cmp-lg/9404010
Intensional Verbs Without Type-Raising or Lexical Ambiguity
We present an analysis of the semantic interpretation of intensional verbs such as seek that allows them to take direct objects of either individual or quantifier type, producing both de dicto and de re readings in the quantifier case, all without needing to stipulate type-raising or quantifying-in rules. This simple account follows directly from our use of logical deduction in linear logic to express the relationship between syntactic structures and meanings. While our analysis resembles current categorial approaches in important ways, it differs from them in allowing the greater type flexibility of categorial semantics while maintaining a precise connection to syntax. As a result, we are able to provide derivations for certain readings of sentences with intensional verbs and complex direct objects that are not derivable in current purely categorial accounts of the syntax-semantics interface. The analysis forms a part of our ongoing work on semantic interpretation within the framework of Lexical-Functional Grammar.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,043
2203.14825
HDR Reconstruction from Bracketed Exposures and Events
Reconstruction of high-quality HDR images is at the core of modern computational photography. Significant progress has been made with multi-frame HDR reconstruction methods, producing high-resolution, rich and accurate color reconstructions with high-frequency details. However, they are still prone to fail in dynamic or largely over-exposed scenes, where frame misalignment often results in visible ghosting artifacts. Recent approaches attempt to alleviate this by utilizing an event-based camera (EBC), which measures only binary changes of illuminations. Despite their desirable high temporal resolution and dynamic range characteristics, such approaches have not outperformed traditional multi-frame reconstruction methods, mainly due to the lack of color information and low-resolution sensors. In this paper, we propose to leverage both bracketed LDR images and simultaneously captured events to obtain the best of both worlds: high-quality RGB information from bracketed LDRs and complementary high frequency and dynamic range information from events. We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and event modalities in the feature domain using attention and multi-scale spatial alignment modules. We propose a novel event-to-image feature distillation module that learns to translate event features into the image-feature space with self-supervision. Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window, enriching our combined feature representation. Our proposed approach surpasses SoTA multi-frame HDR reconstruction methods using synthetic and real events, with a 2 dB and 1 dB improvement in PSNR-L and PSNR-mu on the HdM HDR dataset, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,146
2306.09295
Neural Fine-Tuning Search for Few-Shot Learning
In few-shot recognition, a classifier that has been trained on one set of classes is required to rapidly adapt and generalize to a disjoint, novel set of classes. To that end, recent studies have shown the efficacy of fine-tuning with carefully crafted adaptation architectures. However, this raises the question: how can one design the optimal adaptation strategy? In this paper, we study this question through the lens of neural architecture search (NAS). Given a pre-trained neural network, our algorithm discovers the optimal arrangement of adapters, which layers to keep frozen and which to fine-tune. We demonstrate the generality of our NAS method by applying it to both residual networks and vision transformers and report state-of-the-art performance on Meta-Dataset and Meta-Album.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
373,752
2002.04742
Fast Geometric Projections for Local Robustness Certification
Local robustness ensures that a model classifies all inputs within an $\ell_2$-ball consistently, which precludes various forms of adversarial inputs. In this paper, we present a fast procedure for checking local robustness in feed-forward neural networks with piecewise-linear activation functions. Such networks partition the input space into a set of convex polyhedral regions in which the network's behavior is linear; hence, a systematic search for decision boundaries within the regions around a given input is sufficient for assessing robustness. Crucially, we show how the regions around a point can be analyzed using simple geometric projections, thus admitting an efficient, highly-parallel GPU implementation that excels particularly for the $\ell_2$ norm, where previous work has been less effective. Empirically we find this approach to be far more precise than many approximate verification approaches, while at the same time performing multiple orders of magnitude faster than complete verifiers, and scaling to much deeper networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,684
2005.05085
Comparison and Benchmarking of AI Models and Frameworks on Mobile Devices
Due to increasing amounts of data and compute resources, deep learning has achieved many successes in various domains. As the application of deep learning on mobile and embedded devices attracts more and more attention, benchmarking and ranking the AI abilities of these devices has become an urgent problem to be solved. Considering model diversity and framework diversity, we propose a benchmark suite, AIoTBench, which focuses on evaluating the inference abilities of mobile and embedded devices. AIoTBench covers three typical heavy-weight networks: ResNet50, InceptionV3, and DenseNet121, as well as three light-weight networks: SqueezeNet, MobileNetV2, and MnasNet. Each network is implemented in three frameworks designed for mobile and embedded devices: TensorFlow Lite, Caffe2, and PyTorch Mobile. To compare and rank the AI capabilities of the devices, we propose two unified metrics as AI scores: Valid Images Per Second (VIPS) and Valid FLOPs Per Second (VOPS). Currently, we have compared and ranked 5 mobile devices using our benchmark. This list will be extended and updated soon.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
176,641
1702.02808
Memetic search for overlapping topics based on a local evaluation of link communities
In spite of recent advances in field delineation methods, bibliometricians still don't know the extent to which their topic detection algorithms reconstruct `ground truths', i.e. thematic structures in the scientific literature. In this paper, we demonstrate a new approach to the delineation of thematic structures that attempts to match the algorithm to theoretically derived and empirically observed properties all thematic structures have in common. We cluster citation links rather than publication nodes, use predominantly local information and search for communities of links starting from seed subgraphs in order to allow for pervasive overlaps of topics. We evaluate sets of links with a new cost function and assume that local minima in the cost landscape correspond to link communities. Because this cost landscape has many local minima we define a valid community as the community with the lowest minimum within a certain range. Since finding all valid communities is impossible for large networks, we designed a memetic algorithm that combines probabilistic evolutionary strategies with deterministic local searches. We apply our approach to a network of about 15,000 Astronomy & Astrophysics papers published 2010 and their cited sources, and to a network of about 100,000 Astronomy & Astrophysics papers (published 2003--2010) which are linked through direct citations.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
68,040
2105.09152
Preconditioning for a pressure-robust HDG discretization of the Stokes equations
We introduce a new preconditioner for a recently developed pressure-robust hybridized discontinuous Galerkin (HDG) finite element discretization of the Stokes equations. A feature of HDG methods is the straightforward elimination of degrees-of-freedom defined on the interior of an element. In our previous work (J. Sci. Comput., 77(3):1936--1952, 2018) we introduced a preconditioner for the case in which only the degrees-of-freedom associated with the element velocity were eliminated via static condensation. In this work we introduce a preconditioner for the statically condensed system in which the element pressure degrees-of-freedom are also eliminated. In doing so the number of globally coupled degrees-of-freedom are reduced, but at the expense of a more difficult problem to analyse. We will show, however, that the Schur complement of the statically condensed system is spectrally equivalent to a simple trace pressure mass matrix. This result is used to formulate a new, provably optimal preconditioner. Through numerical examples in two- and three-dimensions we show that the new preconditioned iterative method converges in fewer iterations, has superior conservation properties for inexact solves, and is faster in CPU time when compared to our previous preconditioner.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
235,988
1208.1613
A Dynamic Phase Selection Strategy for Satisfiability Solvers
Phase selection is an important component of a SAT solver based on conflict-driven DPLL. This paper presents a new phase selection strategy, in which the weight of each literal is defined as the sum of the static weights of its implied literals. The implied literals of each literal are computed dynamically during the search; therefore, we call this a dynamic phase selection strategy. In general, computing a weight dynamically is time-consuming; hence, so far no SAT solver has successfully applied a dynamic phase selection. Since the implied literals of our strategy conform to those of the search process, the usual two-watched-literals scheme can be applied here. Thus, the cost of our dynamic phase selection is very low. To improve Glucose 2.0, which won a Gold Medal in the application category at the SAT 2011 competition, we build five phase selection schemes using the dynamic phase selection policy. On application instances of SAT 2011, Glucose improved by the dynamic phase selection is significantly better than the original Glucose. We also conduct experiments on Lingeling using the dynamic phase selection policy and build two phase selection schemes. Experimental results show that the improved Lingeling is better than the original Lingeling.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
17,977
2407.05168
Deception in Nash Equilibrium Seeking
In socio-technical multi-agent systems, deception exploits privileged information to induce false beliefs in "victims," keeping them oblivious and leading to outcomes detrimental to them or advantageous to the deceiver. We consider model-free Nash-equilibrium-seeking for non-cooperative games with asymmetric information and introduce model-free deceptive algorithms with stability guarantees. In the simplest algorithm, the deceiver includes in his action policy the victim's exploration signal, with an amplitude tuned by an integrator of the regulation error between the deceiver's actual and desired payoff. The integral feedback drives the deceiver's payoff to the payoff's reference value, while the victim is led to adopt a suboptimal action, at which the pseudogradient of the deceiver's payoff is zero. The deceiver's and victim's actions turn out to constitute a "deceptive" Nash equilibrium of a different game, whose structure is managed - in real time - by the deceiver. We examine quadratic, aggregative, and more general games and provide conditions for a successful deception, mutual and benevolent deception, and immunity to deception. Stability results are established using techniques based on averaging and singular perturbations. Among the examples in the paper is a microeconomic duopoly in which the deceiver induces in the victim a belief that the buyers disfavor the deceiver more than they actually do, leading the victim to increase the price above the Nash price, and resulting in an increased profit for the deceiver and a decreased profit for the victim. A study of the deceiver's integral feedback for the desired profit reveals that, in duopolies with equal marginal costs, a deceiver that is greedy for very high profit can attain any such profit, and pursue this with arbitrarily high integral gain (impatiently), irrespective of the market preference for the victim.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
470,850
2205.14798
Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism
Proportionality is an attractive fairness concept that has been applied to a range of problems including the facility location problem, a classic problem in social choice. In our work, we propose a concept called Strong Proportionality, which ensures that when there are two groups of agents at different locations, both groups incur the same total cost. We show that although Strong Proportionality is a well-motivated and basic axiom, there is no deterministic strategyproof mechanism satisfying the property. We then identify a randomized mechanism called Random Rank (which uniformly selects a number $k$ between $1$ to $n$ and locates the facility at the $k$'th highest agent location) which satisfies Strong Proportionality in expectation. Our main theorem characterizes Random Rank as the unique mechanism that achieves universal truthfulness, universal anonymity, and Strong Proportionality in expectation among all randomized mechanisms. Finally, we show via the AverageOrRandomRank mechanism that even stronger ex-post fairness guarantees can be achieved by weakening universal truthfulness to strategyproofness in expectation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
true
299,491
1808.07371
Everybody Dance Now
This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
105,727
2205.11590
Forecasting Argumentation Frameworks
We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
298,197
2401.16310
An Insight into Security Code Review with LLMs: Capabilities, Obstacles and Influential Factors
Security code review is a time-consuming and labor-intensive process typically requiring integration with automated security defect detection tools. However, existing security analysis tools struggle with poor generalization, high false positive rates, and coarse detection granularity. Large Language Models (LLMs) have been considered promising candidates for addressing those challenges. In this study, we conducted an empirical study to explore the potential of LLMs in detecting security defects during code review. Specifically, we evaluated the performance of six LLMs under five different prompts and compared them with state-of-the-art static analysis tools. We also performed linguistic and regression analyses for the best-performing LLM to identify quality problems in its responses and factors influencing its performance. Our findings show that: (1) existing pre-trained LLMs have limited capability in security code review but significantly outperform the state-of-the-art static analysis tools. (2) GPT-4 performs best among all LLMs when provided with a CWE list for reference. (3) GPT-4 frequently generates responses that are verbose or not compliant with the task requirements given in the prompts. (4) GPT-4 is more adept at identifying security defects in code files with fewer tokens, containing functional logic, or written by developers with less involvement in the project.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
424,774
2105.11344
OverlapNet: Loop Closing for LiDAR-based SLAM
Simultaneous localization and mapping (SLAM) is a fundamental capability required by most autonomous systems. In this paper, we address the problem of loop closing for SLAM based on 3D laser scans recorded by autonomous cars. Our approach utilizes a deep neural network exploiting different cues generated from LiDAR data for finding loop closures. It estimates an image overlap generalized to range images and provides a relative yaw angle estimate between pairs of scans. Based on such predictions, we tackle loop closure detection and integrate our approach into an existing SLAM system to improve its mapping results. We evaluate our approach on sequences of the KITTI odometry benchmark and the Ford campus dataset. We show that our method can effectively detect loop closures surpassing the detection performance of state-of-the-art methods. To highlight the generalization capabilities of our approach, we evaluate our model on the Ford campus dataset while using only KITTI for training. The experiments show that the learned representation is able to provide reliable loop closure candidates, also in unseen environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
236,670
1810.12368
A Pragmatic Guide to Geoparsing Evaluation
Empirical methods in geoparsing have thus far lacked a standard evaluation framework describing the task, metrics and data used to compare state-of-the-art systems. Evaluation is further made inconsistent, even unrepresentative of real-world usage by the lack of distinction between the different types of toponyms, which necessitates new guidelines, a consolidation of metrics and a detailed toponym taxonomy with implications for Named Entity Recognition (NER) and beyond. To address these deficiencies, our manuscript introduces a new framework in three parts. Part 1) Task Definition: clarified via corpus linguistic analysis proposing a fine-grained Pragmatic Taxonomy of Toponyms. Part 2) Metrics: discussed and reviewed for a rigorous evaluation including recommendations for NER/Geoparsing practitioners. Part 3) Evaluation Data: shared via a new dataset called GeoWebNews to provide test/train examples and enable immediate use of our contributions. In addition to fine-grained Geotagging and Toponym Resolution (Geocoding), this dataset is also suitable for prototyping and evaluating machine learning NLP models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
111,743
2309.08897
Asynchronous Task Plan Refinement for Multi-Robot Task and Motion Planning
This paper explores general multi-robot task and motion planning, where multiple robots in close proximity manipulate objects while satisfying constraints and a given goal. In particular, we formulate the plan refinement problem--which, given a task plan, finds valid assignments of variables corresponding to solution trajectories--as a hybrid constraint satisfaction problem. The proposed algorithm follows several design principles that yield the following features: (1) efficient solution finding due to sequential heuristics and implicit time and roadmap representations, and (2) maximized feasible solution space obtained by introducing minimally necessary coordination-induced constraints and not relying on prevalent simplifications that exist in the literature. The evaluation results demonstrate the planning efficiency of the proposed algorithm, outperforming the synchronous approach in terms of makespan.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
392,373
2501.15263
Explainable YOLO-Based Dyslexia Detection in Synthetic Handwriting Data
Dyslexia affects reading and writing skills across many languages. This work describes a new application of YOLO-based object detection to isolate and label handwriting patterns (Normal, Reversal, Corrected) within synthetic images that resemble real words. Individual letters are first collected, preprocessed into 32x32 samples, then assembled into larger synthetic 'words' to simulate realistic handwriting. Our YOLOv11 framework simultaneously localizes each letter and classifies it into one of three categories, reflecting key dyslexia traits. Empirically, we achieve near-perfect performance, with precision, recall, and F1 metrics typically exceeding 0.999. This surpasses earlier single-letter approaches that rely on conventional CNNs or transfer-learning classifiers (for example, MobileNet-based methods in Robaa et al. arXiv:2410.19821). Unlike simpler pipelines that consider each letter in isolation, our solution processes complete word images, resulting in more authentic representations of handwriting. Although relying on synthetic data raises concerns about domain gaps, these experiments highlight the promise of YOLO-based detection for faster and more interpretable dyslexia screening. Future work will expand to real-world handwriting, other languages, and deeper explainability methods to build confidence among educators, clinicians, and families.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
527,472
1808.09363
An Issue in the Martingale Analysis of the Influence Maximization Algorithm IMM
This paper explains a subtle issue in the martingale analysis of the IMM algorithm, a state-of-the-art influence maximization algorithm. Two workarounds are proposed to fix the issue, both requiring minor changes on the algorithm and incurring a slight penalty on the running time of the algorithm.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
106,168
2403.16846
GreeDy and CoDy: Counterfactual Explainers for Dynamic Graphs
Temporal Graph Neural Networks (TGNNs), crucial for modeling dynamic graphs with time-varying interactions, face a significant challenge in explainability due to their complex model structure. Counterfactual explanations, crucial for understanding model decisions, examine how input graph changes affect outcomes. This paper introduces two novel counterfactual explanation methods for TGNNs: GreeDy (Greedy Explainer for Dynamic Graphs) and CoDy (Counterfactual Explainer for Dynamic Graphs). They treat explanations as a search problem, seeking input graph alterations that alter model predictions. GreeDy uses a simple, greedy approach, while CoDy employs a sophisticated Monte Carlo Tree Search algorithm. Experiments show both methods effectively generate clear explanations. Notably, CoDy outperforms GreeDy and existing factual methods, with up to 59% higher success rate in finding significant counterfactual inputs. This highlights CoDy's potential in clarifying TGNN decision-making, increasing their transparency and trustworthiness in practice.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
441,213
2301.06555
Error-related Potential Variability: Exploring the Effects on Classification and Transferability
Brain-Computer Interfaces (BCI) have allowed for direct communication from the brain to external applications for the automatic detection of cognitive processes such as error recognition. Error-related potentials (ErrPs) are a particular brain signal elicited when one commits or observes an erroneous event. However, due to the noisy properties of the brain and recording devices, ErrPs vary from instance to instance as they are combined with an assortment of other brain signals, biological noise, and external noise, making the classification of ErrPs a non-trivial problem. Recent works have revealed particular cognitive processes such as awareness, embodiment, and predictability that contribute to ErrP variations. In this paper, we explore the performance of classifier transferability when trained on different ErrP variation datasets generated by varying the levels of awareness and embodiment for a given task. In particular, we look at transference between observational and interactive ErrP categories when elicited by similar and differing tasks. Our empirical results provide an exploratory analysis into the ErrP transferability problem from a data perspective.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
340,672
2404.02062
Digital Forgetting in Large Language Models: A Survey of Unlearning Methods
The objective of digital forgetting is, given a model with undesirable knowledge or behavior, obtain a new model where the detected issues are no longer present. The motivations for forgetting include privacy protection, copyright protection, elimination of biases and discrimination, and prevention of harmful content generation. Digital forgetting has to be effective (meaning how well the new model has forgotten the undesired knowledge/behavior), retain the performance of the original model on the desirable tasks, and be scalable (in particular forgetting has to be more efficient than retraining from scratch on just the tasks/data to be retained). This survey focuses on forgetting in large language models (LLMs). We first provide background on LLMs, including their components, the types of LLMs, and their usual training pipeline. Second, we describe the motivations, types, and desired properties of digital forgetting. Third, we introduce the approaches to digital forgetting in LLMs, among which unlearning methodologies stand out as the state of the art. Fourth, we provide a detailed taxonomy of machine unlearning methods for LLMs, and we survey and compare current approaches. Fifth, we detail datasets, models and metrics used for the evaluation of forgetting, retaining and runtime. Sixth, we discuss challenges in the area. Finally, we provide some concluding remarks.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
443,706
2402.00049
Hybrid Dynamical Model for Reluctance Actuators Including Saturation, Hysteresis and Eddy Currents
A novel hybrid dynamical model for single-coil, short-stroke reluctance actuators is presented in this paper. The model, which is partially based on the principles of magnetic equivalent circuits, includes the magnetic phenomena of hysteresis and saturation by means of the generalized Preisach model. In addition, the eddy currents induced in the iron core are also considered, and the flux fringing effect in the air is incorporated by using results from finite element simulations. An explicit solution of the dynamics without need of inverting the Preisach model is derived, and the hybrid automaton that results from combining the electromagnetic and motion equations is presented and discussed. Finally, an identification method to determine the model parameters is proposed and experimentally illustrated on a real actuator. The results are presented and the advantages of our modeling method are emphasized.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
425,456
1811.04370
Improved Visual Relocalization by Discovering Anchor Points
We address the visual relocalization problem of predicting the location and camera orientation or pose (6DOF) of the given input scene. We propose a method based on how humans determine their location using the visible landmarks. We define anchor points uniformly across the route map and propose a deep learning architecture which predicts the most relevant anchor point present in the scene as well as the relative offsets with respect to it. The relevant anchor point need not be the nearest anchor point to the ground truth location, as it might not be visible due to the pose. Hence we propose a multi-task loss function, which discovers the relevant anchor point, without needing the ground truth for it. We validate the effectiveness of our approach by experimenting on Cambridge Landmarks (large scale outdoor scenes) as well as 7 Scenes (indoor scenes) using various CNN feature extractors. Our method improves the median error in indoor as well as outdoor localization datasets compared to the previous best deep learning model known as PoseNet (with geometric re-projection loss) using the same feature extractor. We improve the median error in localization in the specific case of Street scene, by over 8m.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
113,073
2405.04177
A 49.8mm2 Fully Integrated, 1.5m Transmission-Range, High-Data-Rate IR-UWB Transmitter for Brain Implants
To address the challenge of extending the transmission range of implantable TXs while also minimizing their size and power consumption, this paper introduces a transcutaneous, high data-rate, fully integrated IR-UWB transmitter that employs a novel co-designed power amplifier (PA) and antenna interface for enhanced performance. With the co-designed interface, we achieved the smallest footprint of 49.8mm2 and the longest transmission range of 1.5m compared to the state-of-the-art IR-UWB TXs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
452,479
2308.14350
Simple Modification of the Upper Confidence Bound Algorithm by Generalized Weighted Averages
The multi-armed bandit (MAB) problem is a classical problem that models sequential decision-making under uncertainty in reinforcement learning. In this study, we propose a new generalized upper confidence bound (UCB) algorithm (GWA-UCB1) by extending UCB1, which is a representative algorithm for MAB problems, using generalized weighted averages, and present an effective algorithm for various problem settings. GWA-UCB1 is a two-parameter generalization of the balance between exploration and exploitation in UCB1 and can be implemented with a simple modification of the UCB1 formula. Therefore, this algorithm can be easily applied to UCB-based reinforcement learning models. In preliminary experiments, we investigated the optimal parameters of a simple generalized UCB1 (G-UCB1), prepared for comparison and GWA-UCB1, in a stochastic MAB problem with two arms. Subsequently, we confirmed the performance of the algorithms with the investigated parameters on stochastic MAB problems when arm reward probabilities were sampled from uniform or normal distributions and on survival MAB problems assuming more realistic situations. GWA-UCB1 outperformed G-UCB1, UCB1-Tuned, and Thompson sampling in most problem settings and can be useful in many situations. The code is available at https://github.com/manome/python-mab.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
388,293
1908.08223
NL-LinkNet: Toward Lighter but More Accurate Road Extraction with Non-Local Operations
Road extraction from very high resolution satellite (VHR) images is one of the most important topics in the field of remote sensing. In this paper, we propose an efficient Non-Local LinkNet with non-local blocks that can grasp relations between global features. This enables each spatial feature point to refer to all other contextual information and results in more accurate road segmentation. In detail, our single model without any post-processing like CRF refinement, performed better than any other published state-of-the-art ensemble model in the official DeepGlobe Challenge. Moreover, our NL-LinkNet beat the D-LinkNet, the winner of the DeepGlobe challenge, with 43% fewer parameters, fewer giga floating-point operations per second (GFLOPs) and shorter training convergence time. We also present empirical analyses on the proper usages of non-local blocks for the baseline model.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
142,496
1103.2503
Coded Single-Tone Signaling for Resource Coordination and Interference Management in Femtocell Networks
Resource coordination and interference management is the key to achieving the benefits of femtocell networks. Over-the-air signaling is one of the most effective means for distributed dynamic resource coordination and interference management. However, the design of this type of signal is challenging. In this letter, we address the challenges and propose an effective solution, referred to as coded single-tone signaling (STS). The proposed coded STS scheme possesses certain highly desirable properties, such as no dedicated resource requirement (no overhead), no near-and-far effect, no inter-signal interference (no multi-user interference), low peak-to-average power ratio (deep coverage). In addition, the proposed coded STS can fully exploit frequency diversity and provides a means for high quality wideband channel estimation. The coded STS design is demonstrated through a concrete numerical example. Performance of the proposed coded STS is evaluated through simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,590
2410.14640
HR-Bandit: Human-AI Collaborated Linear Recourse Bandit
Human doctors frequently recommend actionable recourses that allow patients to modify their conditions to access more effective treatments. Inspired by such healthcare scenarios, we propose the Recourse Linear UCB ($\textsf{RLinUCB}$) algorithm, which optimizes both action selection and feature modifications by balancing exploration and exploitation. We further extend this to the Human-AI Linear Recourse Bandit ($\textsf{HR-Bandit}$), which integrates human expertise to enhance performance. $\textsf{HR-Bandit}$ offers three key guarantees: (i) a warm-start guarantee for improved initial performance, (ii) a human-effort guarantee to minimize required human interactions, and (iii) a robustness guarantee that ensures sublinear regret even when human decisions are suboptimal. Empirical results, including a healthcare case study, validate its superior performance against existing benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,120
1702.03049
An Overview of Multi-Processor Approximate Message Passing
Approximate message passing (AMP) is an algorithmic framework for solving linear inverse problems from noisy measurements, with exciting applications such as reconstructing images, audio, hyperspectral images, and various other signals, including those acquired in compressive signal acquisition systems. The growing prevalence of big data systems has increased interest in large-scale problems, which may involve huge measurement matrices that are unsuitable for conventional computing systems. To address the challenge of large-scale processing, multiprocessor (MP) versions of AMP have been developed. We provide an overview of two such MP-AMP variants. In row-MP-AMP, each computing node stores a subset of the rows of the matrix and processes corresponding measurements. In column-MP-AMP, each node stores a subset of columns, and is solely responsible for reconstructing a portion of the signal. We will discuss pros and cons of both approaches, summarize recent research results for each, and explain when each one may be a viable approach. Aspects that are highlighted include some recent results on state evolution for both MP-AMP algorithms, and the use of data compression to reduce communication in the MP network.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,072
2412.07616
PVP: Polar Representation Boost for 3D Semantic Occupancy Prediction
Recently, polar coordinate-based representations have shown promise for 3D perceptual tasks. Compared to Cartesian methods, polar grids provide a viable alternative, offering better detail preservation in nearby spaces while covering larger areas. However, they face feature distortion due to non-uniform division. To address these issues, we introduce the Polar Voxel Occupancy Predictor (PVP), a novel 3D multi-modal predictor that operates in polar coordinates. PVP features two key design elements to overcome distortion: a Global Represent Propagation (GRP) module that integrates global spatial data into 3D volumes, and a Plane Decomposed Convolution (PD-Conv) that simplifies 3D distortions into 2D convolutions. These innovations enable PVP to outperform existing methods, achieving significant improvements in mIoU and IoU metrics on the OpenOccupancy dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
515,733
2502.01386
Topic-FlipRAG: Topic-Orientated Adversarial Opinion Manipulation Attacks to Retrieval-Augmented Generation Models
Retrieval-Augmented Generation (RAG) systems based on Large Language Models (LLMs) have become essential for tasks such as question answering and content generation. However, their increasing impact on public opinion and information dissemination has made them a critical focus for security research due to inherent vulnerabilities. Previous studies have predominantly addressed attacks targeting factual or single-query manipulations. In this paper, we address a more practical scenario: topic-oriented adversarial opinion manipulation attacks on RAG models, where LLMs are required to reason and synthesize multiple perspectives, rendering them particularly susceptible to systematic knowledge poisoning. Specifically, we propose Topic-FlipRAG, a two-stage manipulation attack pipeline that strategically crafts adversarial perturbations to influence opinions across related queries. This approach combines traditional adversarial ranking attack techniques and leverages the extensive internal relevant knowledge and reasoning capabilities of LLMs to execute semantic-level perturbations. Experiments show that the proposed attacks effectively shift the opinion of the model's outputs on specific topics, significantly impacting user information perception. Current mitigation methods cannot effectively defend against such attacks, highlighting the necessity for enhanced safeguards for RAG systems, and offering crucial insights for LLM security research.
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
529,835
1904.03548
Precision Matrix Estimation with Noisy and Missing Data
Estimating conditional dependence graphs and precision matrices are some of the most common problems in modern statistics and machine learning. When data are fully observed, penalized maximum likelihood-type estimators have become standard tools for estimating graphical models under sparsity conditions. Extensions of these methods to more complex settings where data are contaminated with additive or multiplicative noise have been developed in recent years. In these settings, however, the relative performance of different methods is not well understood and algorithmic gaps still exist. In particular, in high-dimensional settings these methods require using non-positive semidefinite matrices as inputs, presenting novel optimization challenges. We develop an alternating direction method of multipliers (ADMM) algorithm for these problems, providing a feasible algorithm to estimate precision matrices with indefinite input and potentially nonconvex penalties. We compare this method with existing alternative solutions and empirically characterize the tradeoffs between them. Finally, we use this method to explore the networks among US senators estimated from voting records data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
126,760
2404.03186
RAnGE: Reachability Analysis for Guaranteed Ergodicity
This paper investigates performance guarantees on coverage-based ergodic exploration methods in environments containing disturbances. Ergodic exploration methods generate trajectories for autonomous robots such that time spent in each area of the exploration space is proportional to the utility of exploring in the area. We find that it is possible to use techniques from reachability analysis to solve for optimal controllers that guarantee ergodic coverage and are robust against disturbances. We formulate ergodic search as a differential game between the controller optimizing for ergodicity and an external disturbance, and we derive the reachability equations for ergodic search using an extended-state Bolza-form transform of the ergodic problem. Contributions include the computation of a continuous value function for the ergodic exploration problem and the derivation of a controller that provides guarantees for coverage under disturbances. Our approach leverages neural-network-based methods to solve the reachability equations; we also construct a robust model-predictive controller for comparison. Simulated and experimental results demonstrate the efficacy of our approach for generating robust ergodic trajectories for search and exploration on a 1D system with an external disturbance force.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
444,152
2003.12687
Serialized Output Training for End-to-End Overlapped Speech Recognition
This paper proposes serialized output training (SOT), a novel framework for multi-speaker overlapped speech recognition based on an attention-based encoder-decoder approach. Instead of having multiple output layers as with the permutation invariant training (PIT), SOT uses a model with only one output layer that generates the transcriptions of multiple speakers one after another. The attention and decoder modules take care of producing multiple transcriptions from overlapped speech. SOT has two advantages over PIT: (1) no limitation in the maximum number of speakers, and (2) an ability to model the dependencies among outputs for different speakers. We also propose a simple trick that allows SOT to be executed in $O(S)$, where $S$ is the number of the speakers in the training sample, by using the start times of the constituent source utterances. Experimental results on LibriSpeech corpus show that the SOT models can transcribe overlapped speech with variable numbers of speakers significantly better than PIT-based models. We also show that the SOT models can accurately count the number of speakers in the input audio.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
169,980
2208.04726
Deep Patch Visual Odometry
We propose Deep Patch Visual Odometry (DPVO), a new deep learning system for monocular Visual Odometry (VO). DPVO uses a novel recurrent network architecture designed for tracking image patches across time. Recent approaches to VO have significantly improved the state-of-the-art accuracy by using deep networks to predict dense flow between video frames. However, using dense flow incurs a large computational cost, making these previous methods impractical for many use cases. Despite this, it has been assumed that dense flow is important as it provides additional redundancy against incorrect matches. DPVO disproves this assumption, showing that it is possible to get the best accuracy and efficiency by exploiting the advantages of sparse patch-based matching over dense flow. DPVO introduces a novel recurrent update operator for patch based correspondence coupled with differentiable bundle adjustment. On standard benchmarks, DPVO outperforms all prior work, including the learning-based state-of-the-art VO system (DROID) using a third of the memory while running 3x faster on average. Code is available at https://github.com/princeton-vl/DPVO
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
312,213
cmp-lg/9607003
Domain and Language Independent Feature Extraction for Statistical Text Categorization
A generic system for text categorization is presented which uses a representative text corpus to adapt the processing steps: feature extraction, dimension reduction, and classification. Feature extraction automatically learns features from the corpus by reducing actual word forms using statistical information of the corpus and general linguistic knowledge. The dimension of the feature vector is then reduced by linear transformation keeping the essential information. The classification principle is a minimum least square approach based on polynomials. The described system can be readily adapted to new domains or new languages. In application, the system is reliable, fast, and runs completely automatically. It is shown that the text categorizer works successfully both on text generated by document image analysis - DIA and on ground truth data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,601
2001.11100
A Scalable Framework for Quality Assessment of RDF Datasets
Over the last years, Linked Data has grown continuously. Today, we count more than 10,000 datasets being available online following Linked Data standards. These standards allow data to be machine readable and inter-operable. Nevertheless, many applications, such as data integration, search, and interlinking, cannot take full advantage of Linked Data if it is of low quality. There exist a few approaches for the quality assessment of Linked Data, but their performance degrades with the increase in data size and quickly grows beyond the capabilities of a single machine. In this paper, we present DistQualityAssessment -- an open source implementation of quality assessment of large RDF datasets that can scale out to a cluster of machines. This is the first distributed, in-memory approach for computing different quality metrics for large RDF datasets using Apache Spark. We also provide a quality assessment pattern that can be used to generate new scalable metrics that can be applied to big data. The work presented here is integrated with the SANSA framework and has been applied to at least three use cases beyond the SANSA community. The results show that our approach is more generic, efficient, and scalable as compared to previously proposed approaches.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
161,969
2011.11925
A Statistical Characterization of Localization Performance in Millimeter-Wave Cellular Networks
Millimeter-wave (mmWave) communication is a promising solution for achieving high data rate and low latency in 5G wireless cellular networks. Since directional beamforming and antenna arrays are exploited in the mmWave networks, accurate angle-of-arrival (AOA) information can be obtained and utilized for localization purposes. The performance of a localization system is typically assessed by the Cramer-Rao lower bound (CRLB) evaluated based on fixed node locations. However, this strategy only produces a fixed value for the CRLB specific to the scenario of interest. To allow randomly distributed nodes, stochastic geometry has been proposed to study the CRLB for time-of-arrival-based localization. To the best of our knowledge, this methodology has not yet been investigated for AOA-based localization. In this work, we are motivated to consider the mmWave cellular network and derive the CRLB for AOA-based localization and its distribution using stochastic geometry. We analyze how the CRLB is affected by the node locations' spatial distribution, including the target and participating base stations. To apply the CRLB on a network setting with random node locations, we propose an accurate approximation of the CRLB using the L/4-th value of ordered distances where L is the number of participating base stations. Furthermore, we derive the localizability of the mmWave network, which is the probability that a target is localizable, and examine how the network parameters influence the localization performance. These findings provide us deep insight into optimum network design that meets specified localization requirements.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
207,994
1608.06132
Reconstructing Neural Parameters and Synapses of arbitrary interconnected Neurons from their Simulated Spiking Activity
To understand the behavior of a neural circuit it is a presupposition that we have a model of the dynamical system describing this circuit. This model is determined by several parameters, including not only the synaptic weights, but also the parameters of each neuron. Existing works mainly concentrate on either the synaptic weights or the neural parameters. In this paper we present an algorithm to reconstruct all parameters including the synaptic weights of a spiking neuron model. The model based on works of Eugene M. Izhikevich (Izhikevich 2007) consists of two differential equations and covers different types of cortical neurons. It combines the dynamical properties of Hodgkin-Huxley-type dynamics with a high computational efficiency. The presented algorithm uses the recordings of the corresponding membrane potentials of the model for the reconstruction and consists of two main components. The first component is a rank based Genetic Algorithm (GA) which is used to find the neural parameters of the model. The second one is a Least Mean Squares approach which computes the synaptic weights of all interconnected neurons by minimizing the squared error between the calculated and the measured membrane potentials for each time step. In preparation for the reconstruction of the neural parameters and of the synaptic weights from real measured membrane potentials, promising results based on simulated data generated with a randomly parametrized Izhikevich model are presented. The reconstruction does not only converge to a global minimum of neural parameters, but also approximates the synaptic weights with high precision.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
60,076
2501.05435
Neuro-Symbolic AI in 2024: A Systematic Review
Background: The field of Artificial Intelligence has undergone cyclical periods of growth and decline, known as AI summers and winters. Currently, we are in the third AI summer, characterized by significant advancements and commercialization, particularly in the integration of Symbolic AI and Sub-Symbolic AI, leading to the emergence of Neuro-Symbolic AI. Methods: The review followed the PRISMA methodology, utilizing databases such as IEEE Xplore, Google Scholar, arXiv, ACM, and SpringerLink. The inclusion criteria targeted peer-reviewed papers published between 2020 and 2024. Papers were screened for relevance to Neuro-Symbolic AI, with further inclusion based on the availability of associated codebases to ensure reproducibility. Results: From an initial pool of 1,428 papers, 167 met the inclusion criteria and were analyzed in detail. The majority of research efforts are concentrated in the areas of learning and inference (63%), logic and reasoning (35%), and knowledge representation (44%). Explainability and trustworthiness are less represented (28%), with Meta-Cognition being the least explored area (5%). The review identifies significant interdisciplinary opportunities, particularly in integrating explainability and trustworthiness with other research areas. Conclusion: Neuro-Symbolic AI research has seen rapid growth since 2020, with concentrated efforts in learning and inference. Significant gaps remain in explainability, trustworthiness, and Meta-Cognition. Addressing these gaps through interdisciplinary research will be crucial for advancing the field towards more intelligent, reliable, and context-aware AI systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
523,578
2407.05789
CANDID DAC: Leveraging Coupled Action Dimensions with Importance Differences in DAC
High-dimensional action spaces remain a challenge for dynamic algorithm configuration (DAC). Interdependencies and varying importance between action dimensions are further known key characteristics of DAC problems. We argue that these Coupled Action Dimensions with Importance Differences (CANDID) represent aspects of the DAC problem that are not yet fully explored. To address this gap, we introduce a new white-box benchmark within the DACBench suite that simulates the properties of CANDID. Further, we propose sequential policies as an effective strategy for managing these properties. Such policies factorize the action space and mitigate exponential growth by learning a policy per action dimension. At the same time, these policies accommodate the interdependence of action dimensions by fostering implicit coordination. We show this in an experimental study of value-based policies on our new benchmark. This study demonstrates that sequential policies significantly outperform independent learning of factorized policies in CANDID action spaces. In addition, they overcome the scalability limitations associated with learning a single policy across all action dimensions. The code used for our experiments is available under https://github.com/PhilippBordne/candidDAC.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
471,135
2210.10984
RAIS: Robust and Accurate Interactive Segmentation via Continual Learning
Interactive image segmentation aims at segmenting a target region through human-computer interaction. Recent works based on deep learning have achieved excellent performance, but most of them focus on improving accuracy on the training set and ignore potential improvement on the test set. In the inference phase, they tend to perform well on domains similar to the training set but lack adaptability to domain shift, so they require more user effort to obtain satisfactory results. In this work, we propose RAIS, a robust and accurate architecture for interactive segmentation with continual learning, where the model can learn from both the training and test sets. For efficient learning on the test set, we propose a novel optimization strategy that updates global and local parameters with a basic segmentation module and an adaptation module, respectively. Moreover, we perform extensive experiments on several benchmarks, showing that our method can handle data distribution shifts and achieves SOTA performance compared with recent interactive segmentation methods. Besides, our method also shows its robustness on remote sensing and medical imaging datasets, where the data domains differ completely between training and testing.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,133
2112.03205
Simultaneously Predicting Multiple Plant Traits from Multiple Sensors via Deformable CNN Regression
Trait measurement is critical for the plant breeding and agricultural production pipeline. Typically, a suite of plant traits is measured using laborious manual measurements and then used to train and/or validate higher throughput trait estimation techniques. Here, we introduce a relatively simple convolutional neural network (CNN) model that accepts multiple sensor inputs and predicts multiple continuous trait outputs - i.e. a multi-input, multi-output CNN (MIMO-CNN). Further, we introduce deformable convolutional layers into this network architecture (MIMO-DCNN) to enable the model to adaptively adjust its receptive field, model complex variable geometric transformations in the data, and fine-tune the continuous trait outputs. We examine how the MIMO-CNN and MIMO-DCNN models perform on a multi-input (i.e. RGB and depth images), multi-trait output lettuce dataset from the 2021 Autonomous Greenhouse Challenge. Ablation studies were conducted to examine the effect of using single versus multiple inputs, and single versus multiple outputs. The MIMO-DCNN model resulted in a normalized mean squared error (NMSE) of 0.068 - a substantial improvement over the top 2021 leaderboard score of 0.081. Open-source code is provided.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,116
2307.00360
BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer
BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University. It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio. At the modeling level, we employ a bidirectional autoregressive architecture that allows the model to efficiently capture the complex dependencies of natural language, making it highly effective in tasks such as language generation, dialog systems, and question answering. Moreover, the bidirectional autoregressive modeling operates not only from left to right but also from right to left, effectively reducing fixed memory effects and alleviating model hallucinations. On the training side, we propose a novel parameter expansion method for leveraging the pre-training of smaller models and employ reinforcement learning from both AI and human feedback, aimed at improving the model's alignment performance. Overall, these approaches significantly improve the effectiveness of BatGPT, and the model can be utilized for a wide range of natural language applications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
376,976
0704.3878
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints
A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. The effect of trellis-coded modulation on energy efficiency is also discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
119
2112.15319
A Critical Review of Inductive Logic Programming Techniques for Explainable AI
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle to adoption. To instill confidence and trust in artificial intelligence systems, Explainable Artificial Intelligence has emerged as a response for improving the explainability of modern machine learning algorithms. Inductive Logic Programming (ILP), a subfield of symbolic artificial intelligence, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing methods inspired by ILP need to be addressed for their successful application in practice. For example, existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances. This survey paper summarizes recent advances in ILP and discusses statistical relational learning and neural-symbolic algorithms, which offer synergistic views to ILP. Following a critical review of the recent advances, we delineate observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory artificial intelligence systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
273,743