Dataset schema:
id: string (length 9-16)
title: string (length 4-278)
abstract: string (length 3-4.08k)
cs.HC: bool (2 classes)
cs.CE: bool (2 classes)
cs.SD: bool (2 classes)
cs.SI: bool (2 classes)
cs.AI: bool (2 classes)
cs.IR: bool (2 classes)
cs.LG: bool (2 classes)
cs.RO: bool (2 classes)
cs.CL: bool (2 classes)
cs.IT: bool (2 classes)
cs.SY: bool (2 classes)
cs.CV: bool (2 classes)
cs.CR: bool (2 classes)
cs.CY: bool (2 classes)
cs.MA: bool (2 classes)
cs.NE: bool (2 classes)
cs.DB: bool (2 classes)
Other: bool (2 classes)
__index_level_0__: int64 (0-541k)
2404.13925
MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit
Large language models (LLMs) have been explored for a variety of reasoning tasks, including solving mathematical problems. Each math dataset typically includes its own specially designed evaluation script, which, while suitable for its intended use, lacks generalizability across different datasets. Consequently, updates and adaptations to these evaluation tools tend to occur without being systematically reported, leading to inconsistencies and obstacles to fair comparison across studies. To bridge this gap, we introduce a comprehensive mathematical evaluation toolkit that not only utilizes a Python computer algebra system (CAS) for numerical accuracy, but also integrates an optional LLM, known for its considerable natural language processing capabilities. To validate the effectiveness of our toolkit, we manually annotated two distinct datasets. Our experiments demonstrate that the toolkit yields more robust evaluation results than prior works, even without an LLM; when an LLM is incorporated, there is a further notable enhancement. The code for our method will be made available at \url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
448,503
2101.01418
Support Vector Machine and YOLO for a Mobile Food Grading System
Food quality and safety are of great concern to society, since they are an essential guarantee not only for human health but also for social development and stability. Ensuring food quality and safety is a complex process, and all food processing stages should be considered, from cultivation, harvesting, and storage to preparation and consumption. Grading is one of the essential processes for controlling food quality. This paper proposes a mobile visual-based system for evaluating food grading. Specifically, the proposed system acquires images of bananas while they are on moving conveyors. A two-layer image processing system based on machine learning is used to grade the bananas, with the two layers allocated to edge devices and cloud servers, respectively. A Support Vector Machine (SVM) forms the first layer and classifies bananas based on an extracted feature vector composed of color and texture features. Then, a You Only Look Once (YOLO) v3 model further locates the peel's defective areas and determines whether the inputs belong to the mid-ripened or well-ripened class. According to the experimental results, the first layer achieved an accuracy of 98.5%, the second layer an accuracy of 85.7%, and the overall system an accuracy of 96.4%.
false
false
false
false
true
false
true
false
false
false
true
true
false
false
false
false
false
false
214,365
2211.02947
Prototypical quadruplet for few-shot class incremental learning
Scarcity of data and incremental learning of new tasks pose two major bottlenecks for many modern computer vision algorithms. The phenomenon of catastrophic forgetting, i.e., the model's inability to classify previously learned data after training with new batches of data, is a major challenge. Conventional methods address catastrophic forgetting while compromising the current session's training. Generative replay-based approaches, such as generative adversarial networks (GANs), have been proposed to mitigate catastrophic forgetting, but training GANs with few samples may lead to instability. To address these challenges, we propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrasting loss. Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes, by updating previous session class prototypes to represent the true class mean, which is crucial for our nearest class mean classification strategy. We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training the model with new classes and outperforms existing state-of-the-art algorithms in terms of accuracy across different sessions.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
328,765
2406.00333
A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation
The training paradigm integrating large language models (LLMs) is gradually reshaping sequential recommender systems (SRS) and has shown promising results. However, most existing LLM-enhanced methods rely on rich textual information on the item side and instance-level supervised fine-tuning (SFT) to inject collaborative information into the LLM, which is inefficient and limited in many applications. To alleviate these problems, this paper proposes a practice-friendly LLM-enhanced paradigm with preference parsing (P2Rec) for SRS. Specifically, in the information reconstruction stage, we design a new user-level SFT task for collaborative information injection with the assistance of a pre-trained SRS model, which is more efficient and compatible with limited text information. Our goal is to let the LLM learn to reconstruct a corresponding prior preference distribution from each user's interaction sequence, for which the LLM needs to effectively parse the latent category of each item and the relationships between different items. In the information augmentation stage, we feed each item into the LLM to obtain a set of enhanced embeddings that combine collaborative information and LLM inference capabilities. These embeddings can then be used to help train various future SRS models. Finally, we verify the effectiveness and efficiency of P2Rec on three SRS benchmark datasets.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
459,803
2403.08270
Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing. Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features. In addition, due to the absence of explicit supervision to keep the model constantly focused on cloth-irrelevant areas, existing methods are still hampered by the disruption of clothing variations. To solve the above issues, we propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task. Specifically, to help the model extract cloth-irrelevant clues, we propose a Clothes Diversity Augmentation (CDA), which generates more realistic cloth-changing samples by enriching the clothing color while preserving the texture. In addition, a Multi-scale Constraint Block (MCB) is designed, which extracts fine-grained identity-related features and effectively transfers cloth-irrelevant knowledge. Moreover, a Counterfactual-guided Attention Module (CAM) is presented, which learns cloth-irrelevant features from channel and space dimensions and utilizes the counterfactual intervention for supervising the attention map to highlight identity-related regions. Finally, a Semantic Alignment Constraint (SAC) is designed to facilitate high-level semantic feature interaction. Comprehensive experiments on four CC-ReID datasets indicate that our method outperforms prior state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
437,267
2110.14216
What Do We Mean by Generalization in Federated Learning?
Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from local data distributions. Thus, generalization studies in federated learning should separate performance gaps from unseen client data (out-of-sample gap) from performance gaps from unseen client distributions (participation gap). In this work, we propose a framework for disentangling these performance gaps. Using this framework, we observe and explain differences in behavior across natural and synthetic federated datasets, indicating that dataset synthesis strategy can be important for realistic simulations of generalization in federated learning. We propose a semantic synthesis strategy that enables realistic simulation without naturally-partitioned data. Informed by our findings, we offer suggestions to the community for future federated learning work.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
263,467
2408.02811
Development of REGAI: Rubric Enabled Generative Artificial Intelligence
This paper presents and evaluates a new retrieval augmented generation (RAG) and large language model (LLM)-based artificial intelligence (AI) technique: rubric enabled generative artificial intelligence (REGAI). REGAI uses rubrics, which can be created manually or automatically by the system, to enhance the performance of LLMs for evaluation purposes. REGAI improves on the performance of both classical LLMs and RAG-based LLM techniques. This paper describes REGAI, presents data regarding its performance and discusses several possible application areas for the technology.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
478,774
2403.06223
IDEAS: Information-Driven EV Admission in Charging Station Considering User Impatience to Improve QoS and Station Utilization
Our work delves into user behaviour at Electric Vehicle (EV) charging stations during peak times, particularly focusing on how impatience drives balking (not joining queues) and reneging (leaving queues prematurely). We introduce an agent-based simulation framework that incorporates user optimism levels (pessimistic, standard, and optimistic) into the queue dynamics. Unlike previous work, this framework highlights the crucial role of human behaviour in shaping station efficiency under peak demand. The simulation reveals a key issue: balking often occurs due to a lack of queue insights, creating user dilemmas. To address this, we propose real-time sharing of wait-time metrics with EV users arriving at the station. This ensures better Quality of Service (QoS) through user-informed queue joining and yields significant reductions in reneging (up to 94%), improving the charging operation. Further analysis shows that charging speed decreases significantly beyond an 80% charge, but most users prioritize full charges due to range anxiety, leading to longer queues. To address this, we propose a two-mode, two-port charger design with power-sharing options. This allows users to fast-charge to 80% and automatically switch to slow charging, enabling fast charging on the second port and increasing fast-charger availability and throughput by up to 5%. As the mobility sector transitions towards intelligent traffic, our modelling framework, which integrates human decision-making within automated planning, provides valuable insights for optimizing charging station efficiency and improving the user experience. This approach is particularly relevant during the introduction phase of new stations, when historical data might be limited.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
436,355
1911.05494
Concept Drift Adaptive Physical Event Detection for Social Media Streams
Event detection has long been the domain of physical sensors operating under a static dataset assumption. The prevalence of social media and web access has led to the emergence of social, or human, sensors who report on events globally. This warrants the development of event detectors that can take advantage of the truly dense, high spatial and temporal resolution data provided by more than 3 billion social users. The phenomenon of concept drift, which causes the terms and signals associated with a topic to change over time, renders static machine learning ineffective. Towards this end, we present an application for physical event detection on social sensors that improves traditional physical event detection with concept drift adaptation. Our approach continuously updates its machine learning classifiers automatically, without the need for human intervention. It integrates data from heterogeneous sources and is designed to handle weak-signal events (landslides, wildfires) with around ten posts per event, in addition to large-signal events (hurricanes, earthquakes) with hundreds of thousands of posts per event. We demonstrate a landslide detector on our application that detects almost 350% more landslides than static approaches. Our application achieves high performance: using classifiers trained in 2014, it attains an event detection accuracy of 0.988, compared to 0.762 for static approaches.
false
false
false
true
false
false
true
false
false
false
false
false
false
true
false
false
false
false
153,283
2307.10928
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. We publicly release the evaluation data and code implementation at https://github.com/kaistAI/FLASK.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
380,741
2311.16657
SCALAR-NeRF: SCAlable LARge-scale Neural Radiance Fields for Scene Reconstruction
In this work, we introduce SCALAR-NeRF, a novel framework tailored for scalable large-scale neural scene reconstruction. We structure the neural representation as an encoder-decoder architecture, where the encoder processes 3D point coordinates to produce encoded features, and the decoder generates geometric values that include volume densities, signed distances, and colors. Our approach first trains a coarse global model on the entire image dataset. Subsequently, we partition the images into smaller blocks using KMeans, with each block modeled by a dedicated local model. We enhance the overlapping regions across different blocks by scaling up the bounding box of each local block. Notably, the decoder from the global model is shared across distinct blocks, thereby promoting alignment in the feature space of the local encoders. We propose an effective and efficient methodology to fuse the outputs of these local models to attain the final reconstruction. Employing this refined coarse-to-fine strategy, our method outperforms state-of-the-art NeRF methods and demonstrates scalability for large-scale scene reconstruction. The code will be available on our project page at https://aibluefisher.github.io/SCALAR-NeRF/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
411,005
2412.16250
Training-free Heterogeneous Graph Condensation via Data Selection
Efficient training of large-scale heterogeneous graphs is of paramount importance in real-world applications. However, existing approaches typically explore simplified models to mitigate resource and time overhead, neglecting the crucial aspect of simplifying large-scale heterogeneous graphs from a data-centric perspective. Addressing this gap, HGCond introduces graph condensation (GC) to heterogeneous graphs and generates a small condensed graph for efficient model training. Despite its efficacy in graph generation, HGCond has two significant limitations. The first is low effectiveness: HGCond relies excessively on the simplest relay model for the condensation procedure, which restricts its ability to exploit powerful Heterogeneous Graph Neural Networks (HGNNs) with flexible condensation ratios and limits its generalization ability. The second is low efficiency: HGCond follows existing GC methods designed for homogeneous graphs and leverages a sophisticated optimization paradigm, resulting in a time-consuming condensation procedure. In light of these challenges, we present the first training-free heterogeneous graph condensation method, termed FreeHGC, facilitating both efficient and high-quality generation of heterogeneous condensed graphs. Specifically, we reformulate heterogeneous graph condensation as a data selection problem, offering a new perspective for assessing and condensing representative nodes and edges in heterogeneous graphs. By leveraging rich meta-paths, we introduce a new, high-quality heterogeneous data selection criterion to select target-type nodes. Furthermore, two training-free condensation strategies for heterogeneous graphs are designed to condense and synthesize other-type nodes effectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
519,440
2011.06680
Cognitive RF-FSO Fronthaul Assignment in Cell-Free and User-Centric mMIMO Networks
Cell-free massive MIMO (CF-mMIMO) networks and their user-centric (UC) version are considered promising techniques for the next generations of wireless networks. However, fronthaul and backhaul assignments are challenging issues in these networks. In this paper, the energy efficiency of uplink transmission for CF- and UC-mMIMO networks is studied, wherein access points (APs) are connected to aggregation nodes (ANs) through radio frequency (RF) and/or free-space optic (FSO) fronthauls, and the ANs are connected to a central processing unit via fiber backhauls. The achievable data rates are derived by taking into account the effects of hardware non-ideality at the APs and ANs, FSO alignment, and weather conditions. To have a robust and energy-efficient network, especially in the presence of FSO misalignment and adverse weather conditions, firstly, a cognitive RF-FSO fronthaul assignment algorithm is proposed at the cost of sharing the available RF bandwidth between the access and fronthaul links. Then, optimal power allocations at the users and APs are investigated, and two analytical approaches are proposed to solve the non-convex optimization problem. Through numerical results, we discuss how utilizing the cognitive RF-FSO fronthaul assignment achieves higher energy efficiency than FSO-only, RF-only, or simultaneous use of RF and FSO fronthaul links, e.g., achieving up to $198\%$ higher energy efficiency under unfavorable conditions. Moreover, the effects of FSO misalignment, weather conditions, and power allocations on the performance of the CF- and UC-mMIMO networks are discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
206,303
2006.03638
Robust Face Verification via Disentangled Representations
We introduce a robust algorithm for face verification, i.e., deciding whether two images are of the same person or not. Our approach is a novel take on the idea of using deep generative networks for adversarial robustness. We use the generative model during training as an online augmentation method instead of a test-time purifier that removes adversarial noise. Our architecture uses a contrastive loss term and a disentangled generative model to sample negative pairs. Instead of randomly pairing two real images, we pair an image with its class-modified counterpart while keeping its content (pose, head tilt, hair, etc.) intact. This enables us to efficiently sample hard negative pairs for the contrastive loss. We experimentally show that, when coupled with adversarial training, the proposed scheme converges with a weak inner solver and has higher clean and robust accuracy than state-of-the-art methods when evaluated against white-box physical attacks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
180,375
1810.07889
Robust Transmissions in Wireless Powered Multi-Relay Networks with Chance Interference Constraints
In this paper, we consider a wireless powered multi-relay network in which a multi-antenna hybrid access point underlaying a cellular system transmits information to distant receivers. Multiple relays capable of energy harvesting are deployed in the network to assist the information transmission. The hybrid access point can wirelessly supply energy to the relays, achieving multi-user gains from signal and energy cooperation. We propose a joint optimization for signal beamforming of the hybrid access point as well as wireless energy harvesting and collaborative beamforming strategies of the relays. The objective is to maximize network throughput subject to probabilistic interference constraints at the cellular user equipment. We formulate the throughput maximization with both the time-switching and power-splitting schemes, which impose very different couplings between the operating parameters for wireless power and information transfer. Although the optimization problems are inherently non-convex, they share similar structural properties that can be leveraged for efficient algorithm design. In particular, by exploiting monotonicity in the throughput, we maximize it iteratively via customized polyblock approximation with reduced complexity. The numerical results show that the proposed algorithms can achieve close to optimal performance in terms of the energy efficiency and throughput.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
110,714
1910.07969
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Human explanations of high-level decisions are often expressed in terms of key concepts the decisions are based on. In this paper, we study such concept-based explainability for Deep Neural Networks (DNNs). First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior based on the assumption that complete concept scores are sufficient statistics of the model prediction. Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations. To define an importance score for each discovered concept, we adapt game-theoretic notions to aggregate over sets and propose ConceptSHAP. Via proposed metrics and user studies, on a synthetic dataset with apriori-known concept explanations, as well as on real-world image and language datasets, we validate the effectiveness of our method in finding concepts that are both complete in explaining the decisions and interpretable. (The code is released at https://github.com/chihkuanyeh/concept_exp)
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
149,747
1710.04748
HyperENTM: Evolving Scalable Neural Turing Machines through HyperNEAT
Recent developments within memory-augmented neural networks have solved sequential problems requiring long-term memory, which are intractable for traditional neural networks. However, current approaches still struggle to scale to large memory sizes and sequence lengths. In this paper, we show how access to memory can be encoded geometrically through a HyperNEAT-based Neural Turing Machine (HyperENTM). We demonstrate that using the indirect HyperNEAT encoding allows for training on small memory vectors in a bit-vector copy task and then applying the knowledge gained from such training to speed up training on larger memory vectors. Additionally, we demonstrate that in some instances, networks trained to copy bit-vectors of size 9 can be scaled to size 1,000 without further training. While the task in this paper is simple, these results could extend the class of problems amenable to networks with external memories to problems with larger memory vectors and theoretically unbounded memory sizes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
82,520
2404.14964
Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks
Training spiking neural networks to approximate universal functions is essential for studying information processing in the brain and for neuromorphic computing. Yet the binary nature of spikes poses a challenge for direct gradient-based training. Surrogate gradients have been empirically successful in circumventing this problem, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to the lack of support for automatic differentiation, are impractical for training multi-layer spiking neural networks but provide derivatives equivalent to surrogate gradients for single neurons. On the other hand, we investigate stochastic automatic differentiation, which is compatible with discrete randomness but has not yet been used to train spiking neural networks. We find that the latter gives surrogate gradients a theoretical basis in stochastic spiking neural networks, where the surrogate derivative matches the derivative of the neuronal escape noise function. This finding supports the effectiveness of surrogate gradients in practice and suggests their suitability for stochastic spiking neural networks. However, surrogate gradients are generally not gradients of a surrogate loss despite their relation to stochastic automatic differentiation. Nevertheless, we empirically confirm the effectiveness of surrogate gradients in stochastic multi-layer spiking neural networks and discuss their relation to deterministic networks as a special case. Our work gives theoretical support to surrogate gradients and the choice of a suitable surrogate derivative in stochastic spiking neural networks.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
448,889
2208.10248
Composing RNNs and FSTs for Small Data: Recovering Missing Characters in Old Hawaiian Text
In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops. These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation. However, transliterating between older and newer texts is a laborious task when performed manually. We introduce two related methods to help solve this transliteration problem automatically, given that there were not enough data to train an end-to-end deep learning model. One method is implemented, end-to-end, using finite state transducers (FSTs). The other is a hybrid deep learning approach which approximately composes an FST with a recurrent neural network (RNN). We find that the hybrid approach outperforms the end-to-end FST by partitioning the original problem into one part that can be modelled by hand, using an FST, and into another part, which is easily solved by an RNN trained on the available data.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
313,985
2108.02323
Active Reinforcement Learning over MDPs
The past decade has seen the rapid development of Reinforcement Learning (RL), which achieves impressive performance given abundant training resources. However, one of the greatest challenges in RL is generalization efficiency (i.e., generalization performance per unit time). This paper proposes a framework of Active Reinforcement Learning (ARL) over MDPs that improves generalization efficiency under limited resources through instance selection. Given a number of instances, the algorithm actively selects valuable instances as the training set while training the policy, rather than training on all the given data, thereby costing fewer resources. Furthermore, we introduce general instance evaluation metrics and a selection mechanism into the framework. Experimental results reveal that the proposed framework, with Proximal Policy Optimization as the policy optimizer, effectively improves generalization efficiency compared to unselected and unbiased-selection baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
249,281
1707.08232
Quality-Driven Resource Allocation for Full-Duplex Delay-Constrained Wireless Video Transmissions
In this paper, wireless video transmission over full-duplex channels under total bandwidth and minimum required quality constraints is studied. In order to provide the desired performance levels to the end-users in real-time video transmissions, quality of service (QoS) requirements such as statistical delay constraints are also considered. Effective capacity (EC) is used as the throughput metric in the presence of such statistical delay constraints since deterministic delay bounds are difficult to guarantee due to the time-varying nature of wireless fading channels. A communication scenario with multiple pairs of users in which different users have different delay requirements is addressed. Following characterizations from the rate-distortion (R-D) theory, a logarithmic model of the quality-rate relation is used for predicting the quality of the reconstructed video in terms of the peak signal-to-noise ratio (PSNR) at the receiver side. Since the optimization problem is not concave or convex, the optimal bandwidth and power allocation policies that maximize the weighted sum video quality subject to total bandwidth, maximum transmission power level and minimum required quality constraints are derived by using monotonic optimization (MO) theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
77,780
2406.17254
Scalp Diagnostic System With Label-Free Segmentation and Training-Free Image Translation
Scalp diseases and alopecia affect millions of people around the world, underscoring the urgent need for early diagnosis and management of the disease. However, the development of a comprehensive AI-based diagnosis system encompassing these conditions remains an underexplored domain due to the challenges associated with data imbalance and the costly nature of labeling. To address these issues, we propose ScalpVision, an AI-driven system for the holistic diagnosis of scalp diseases and alopecia. In ScalpVision, effective hair segmentation is achieved using pseudo image-label pairs and an innovative prompting method in the absence of traditional hair masking labels. This approach is crucial for extracting key features such as hair thickness and count, which are then used to assess alopecia severity. Additionally, ScalpVision introduces DiffuseIT-M, a generative model adept at dataset augmentation while maintaining hair information, facilitating improved predictions of scalp disease severity. Our experimental results affirm ScalpVision's efficiency in diagnosing a variety of scalp conditions and alopecia, showcasing its potential as a valuable tool in dermatological care.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
467,487
2106.02714
Point Cloud Failure Criterion for Composites using k-Nearest Neighbor Classification
Numerous theories of failure have been postulated and implemented in various commercial programs for composite materials. Even the best theories have had limited success in predicting damage and failure in validation exercises. In view of this background, many researchers have started exploring the use of multiscale modeling to improve the fidelity of the modeling and simulation of various structural and materials systems. In this paper, a multi-scale modeling scheme is used to illustrate how a combination of virtual and laboratory testing programs can be used to generate a point cloud of failure surface data that can then be queried during finite element analysis at the continuum scale to ascertain if the onset of failure has occurred. The k-nearest neighbor (k-NN) classification concept is used to obtain the answer to the query. A linear, elastic, static finite element example using a unidirectional composite shows that the framework can be generated and used effectively and efficiently with the possibility to extend the approach for all types of composite architectures and behaviors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
238,988
2407.01929
What We Talk About When We Talk About LMs: Implicit Paradigm Shifts and the Ship of Language Models
The term Language Models (LMs) as a time-specific collection of models of interest is constantly reinvented, with its referents updated much like the $\textit{Ship of Theseus}$ replaces its parts but remains the same ship in essence. In this paper, we investigate this $\textit{Ship of Language Models}$ problem, wherein scientific evolution takes the form of continuous, implicit retrofits of key existing terms. We seek to initiate a novel perspective of scientific progress, in addition to the more well-studied emergence of new terms. To this end, we construct the data infrastructure based on recent NLP publications. Then, we perform a series of text-based analyses toward a detailed, quantitative understanding of the use of Language Models as a term of art. Our work highlights how systems and theories influence each other in scientific discourse, and we call for attention to the transformation of this Ship that we all are contributing to.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
469,516
2206.03217
A Perspective on K-12 AI Education
Artificial intelligence (AI), which enables machines to learn to perform a task by training on diverse datasets, is one of the most revolutionary developments in scientific history. Although AI and especially deep learning is relatively new, it has already had transformative impact on medicine, biology, transportation, entertainment, and beyond. As AI changes our daily lives at an increasingly fast pace, we are challenged with preparing our society for an AI-driven future. To this end, a critical step is to ensure an AI-ready workforce through education. Advocates of beginning instruction of AI basics at the K-12 level typically note benefits to the workforce, economy, and national security. In this complementary perspective, we discuss why learning AI is beneficial for motivating students and promoting creative thinking, and how to develop a module-based approach that optimizes learning outcomes. We hope to excite and engage more members of the education community to join the effort to advance K-12 AI education in the USA and worldwide.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
301,187
2205.09799
Digital Reconfigurable Intelligent Surfaces: On the Impact of Realistic Reradiation Models
Reconfigurable intelligent surface (RIS) is an emerging technology that is under investigation for different applications in wireless communications. RISs are often analyzed and optimized by considering simplified electromagnetic reradiation models. In this chapter, we aim to study the impact of realistic reradiation models for RISs as a function of the sub-wavelength inter-distance between nearby elements of the RIS, the quantization levels of the reflection coefficients, the interplay between the amplitude and phase of the reflection coefficients, and the presence of electromagnetic interference. We consider both case studies in which the users may be located in the far-field and near-field regions of an RIS. Our study shows that, due to design constraints, such as the need of using quantized reflection coefficients or the inherent interplay between the phase and the amplitude of the reflection coefficients, an RIS may reradiate power towards unwanted directions that depend on the intended and interfering electromagnetic waves. Therefore, it is in general important to optimize an RIS by considering the entire reradiation pattern by design to maximize the reradiated power towards the desired directions of reradiation while keeping the power reradiated towards other unwanted directions at a low level. Our study shows that a 2-bit digitally controllable RIS with an almost constant reflection amplitude as a function of the applied phase shift, and whose scattering elements have a size and an inter-distance between (1/8)th and (1/4)th of the signal wavelength may be a good tradeoff between performance, implementation complexity and cost. However, the presented results are preliminary and pave the way for further research into the performance of RISs based on accurate and realistic electromagnetic reradiation models.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
297,416
2007.15480
Capacity of Remote Classification Over Wireless Channels
Wireless connectivity creates a computing paradigm that merges communication and inference. A basic operation in this paradigm is the one where a device offloads classification tasks to the edge servers. We term this remote classification, with a potential to enable intelligent applications. Remote classification is challenged by the finite and variable data rate of the wireless channel, which affects the capability to transfer high-dimensional features and thus limits the classification resolution. We introduce a set of metrics under the name of classification capacity that are defined as the maximum number of classes that can be discerned over a given communication channel while meeting a target classification error probability. The objective is to choose a subset of classes from a library that offers satisfactory performance over a given channel. We treat two cases of subset selection. First, a device can select the subset by pruning the class library until arriving at a subset that meets the targeted error probability while maximizing the classification capacity. Adopting a subspace data model, we prove the equivalence of classification capacity maximization to Grassmannian packing. The results show that the classification capacity grows exponentially with the instantaneous communication rate, and super-exponentially with the dimensions of each data cluster. This also holds for ergodic and outage capacities with fading if the instantaneous rate is replaced with an average rate and a fixed rate, respectively. In the second case, a device has a preference of class subset for every communication rate, which is modeled as an instance of uniformly sampling the library. Without class selection, the classification capacity and its ergodic and outage counterparts are proved to scale linearly with their corresponding communication rates instead of the exponential growth in the last case.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
189,675
1812.00993
Nonlinear Stochastic Position and Attitude Filter on the Special Euclidean Group 3
This paper formulates the pose estimation problem as nonlinear stochastic filter kinematics evolved directly on the Special Euclidean Group SE(3). Proposed filter guarantees that the errors present in position and Rodriguez vector estimates are semi-globally uniformly ultimately bounded (SGUUB) in mean square, and that they converge to small neighborhood of the origin in probability. Simulation results show the robustness and effectiveness of the proposed filter in presence of high levels of noise and bias associated with the velocity vector as well as body-frame measurements. Keywords: Pose estimator, pose observer, attitude estimate, control, estimator, observer, Nonlinear stochastic pose filter, stochastic differential equations, Brownian motion process, Ito, Stratonovich, Wong Zakai, unit-quaternion, special orthogonal group, homogeneous transformation matrix, complimentary filter, Euler angles, Angle-axis, mapping, Parameterization, Representation, Robust, Multiplicative Extended Kalman Filter, Unscented Kalman Filter, Particle filter, KF, EKF, IEKF, UKF, MEKF, partial derivative, small, dynamics, equilibrium, asymptotic, covariance, expected value, zero, unknown, time-varying, global, semi-global, stable, stability, uncertain, Gaussian, colored, white, noise, vectorial measurement, vector measurement, translational velocity, angular velocity, singular value decomposition, rotational matrix, identity, deterministic, comparison, inertial frame, rigid body, three dimensional, 3D, space, adjoint, Lie group, projection, landmark, feature, Gyroscope, micro electromechanical systems, Inertial measurement units, sensor, IMUs, Fixed, moving, orientation, Roll, Pitch, Yaw, SVD, UAVs, QUAV, unmanned, underwater vehicle, robot, Robotic System, Spacecraft, quadrotor, quadcopter, integral, advantage, disadvantage, Comparative study, Review, Overview, Survey, autonomous, xyz, axis, SO(3), SE(3).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
115,402
2108.07595
spectrai: A deep learning framework for spectral data
Deep learning computer vision techniques have achieved many successes in recent years across numerous imaging domains. However, the application of deep learning to spectral data remains a complex task due to the need for augmentation routines, specific architectures for spectral data, and significant memory requirements. Here we present spectrai, an open-source deep learning framework designed to facilitate the training of neural networks on spectral data and enable comparison between different methods. Spectrai provides numerous built-in spectral data pre-processing and augmentation methods, neural networks for spectral data including spectral (image) denoising, spectral (image) classification, spectral image segmentation, and spectral image super-resolution. Spectrai includes both command line and graphical user interfaces (GUI) designed to guide users through model and hyperparameter decisions for a wide range of applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
250,963
2501.06268
Cluster Catch Digraphs with the Nearest Neighbor Distance
We introduce a new method for clustering based on Cluster Catch Digraphs (CCDs). The new method addresses the limitations of RK-CCDs by employing a new variant of the spatial randomness test that uses the nearest neighbor distance (NND) instead of Ripley's K function used by RK-CCDs. We conduct a comprehensive Monte Carlo analysis to assess the performance of our method, considering factors such as dimensionality, data set size, number of clusters, cluster volumes, and inter-cluster distance. Our method is particularly effective for high-dimensional data sets, comparable to or outperforming KS-CCDs and RK-CCDs that rely on a KS-type statistic or Ripley's K function. We also evaluate our methods using real and complex data sets, comparing them to well-known clustering methods. Again, our methods exhibit competitive performance, producing high-quality clusters with desirable properties. Keywords: Graph-based clustering, Cluster catch digraphs, High-dimensional data, The nearest neighbor distance, Spatial randomness test
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
523,921
2404.18870
More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness
The trustworthiness of Large Language Models (LLMs) refers to the extent to which their outputs are reliable, safe, and ethically aligned, and it has become a crucial consideration alongside their cognitive performance. In practice, Reinforcement Learning From Human Feedback (RLHF) has been widely used to align LLMs with labeled human preferences, but its assumed effect on model trustworthiness hasn't been rigorously evaluated. To bridge this knowledge gap, this study investigates how models aligned with general-purpose preference data perform across five trustworthiness verticals: toxicity, stereotypical bias, machine ethics, truthfulness, and privacy. Our results demonstrate that RLHF on human preferences doesn't automatically guarantee trustworthiness, and reverse effects are often observed. Furthermore, we propose to adapt efficient influence function based data attribution methods to the RLHF setting to better understand the influence of fine-tuning data on individual trustworthiness benchmarks, and show its feasibility by providing our estimated attribution scores. Together, our results underscore the need for more nuanced approaches for model alignment from both the data and framework perspectives, and we hope this research will guide the community towards developing language models that are increasingly capable without sacrificing trustworthiness.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
450,419
2112.05749
Label, Verify, Correct: A Simple Few Shot Object Detection Method
The objective of this paper is few-shot object detection (FSOD) -- the task of expanding an object detector for a new category given only a few instances for training. We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from the training set, for each new category, vastly increasing the number of training instances and reducing class imbalance; our method finds previously unlabelled instances. Na\"ively training with model predictions yields sub-optimal performance; we present two novel methods to improve the precision of the pseudo-labelling process: first, we introduce a verification technique to remove candidate detections with incorrect class labels; second, we train a specialised model to correct poor quality bounding boxes. After these two novel steps, we obtain a large set of high-quality pseudo-annotations that allow our final detector to be trained end-to-end. Additionally, we demonstrate our method maintains base class performance, and the utility of simple augmentations in FSOD. While benchmarking on PASCAL VOC and MS-COCO, our method achieves state-of-the-art or second-best performance compared to existing approaches across all number of shots.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,941
2112.12985
DeepGANTT: A Scalable Deep Learning Scheduler for Backscatter Networks
Novel backscatter communication techniques enable battery-free sensor tags to interoperate with unmodified standard IoT devices, extending a sensor network's capabilities in a scalable manner. Without requiring additional dedicated infrastructure, the battery-free tags harvest energy from the environment, while the IoT devices provide them with the unmodulated carrier they need to communicate. A schedule coordinates the provision of carriers for the communications of battery-free devices with IoT nodes. Optimal carrier scheduling is an NP-hard problem that limits the scalability of network deployments. Thus, existing solutions waste energy and other valuable resources by scheduling the carriers suboptimally. We present DeepGANTT, a deep learning scheduler that leverages graph neural networks to efficiently provide near-optimal carrier scheduling. We train our scheduler with relatively small optimal schedules obtained from a constraint optimization solver, achieving a performance within 3% of the optimal scheduler. Without the need to retrain, DeepGANTT generalizes to networks 6x larger in the number of nodes and 10x larger in the number of tags than those used for training, breaking the scalability limitations of the optimal scheduler and reducing carrier utilization by up to 50% compared to the state-of-the-art heuristic. Our scheduler efficiently reduces energy and spectrum utilization in backscatter networks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
273,105
2408.03722
Improving the Intelligent Driver Model by Incorporating Vehicle Dynamics: Microscopic Calibration and Macroscopic Validation
Microscopic traffic simulations are used to evaluate the impact of infrastructure modifications and evolving vehicle technologies, such as connected and automated driving. Simulated vehicles are controlled via car-following, lane-changing and junction models, which are designed to imitate human driving behavior. However, physics-based car-following models (CFMs) cannot fully replicate measured vehicle trajectories. Therefore, we present model extensions for the Intelligent Driver Model (IDM), of which some are already included in the Extended Intelligent Driver Model (EIDM), to improve calibration and validation results. They consist of equations based on vehicle dynamics and drive off procedures. In addition, parameter selection plays a decisive role. Thus, we introduce a framework to calibrate CFMs using drone data captured at a signalized intersection in Stuttgart, Germany. We compare the calibration error of the Krauss Model with the IDM and EIDM. In this setup, the EIDM achieves a 17.78 % lower mean error than the IDM, based on the distance difference between real world and simulated vehicles. Adding vehicle dynamics equations to the EIDM further improves the results by an additional 18.97 %. The calibrated vehicle-driver combinations are then investigated by simulating the traffic in three different scenarios: at the original intersection, in a closed loop and in a stop-and-go wave. The data shows that the improved calibration process of individual vehicles, openly available at https://www.github.com/stepeos/pycarmodel_calibration, also provides more accurate macroscopic results.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
479,127
2308.11836
Characterizing normal perinatal development of the human brain structural connectivity
Early brain development is characterized by the formation of a highly organized structural connectome. The interconnected nature of this connectome underlies the brain's cognitive abilities and influences its response to diseases and environmental factors. Hence, quantitative assessment of structural connectivity in the perinatal stage is useful for studying normal and abnormal neurodevelopment. However, estimation of the connectome from diffusion MRI data involves complex computations. For the perinatal period, these computations are further challenged by the rapid brain development and imaging difficulties. Combined with high inter-subject variability, these factors make it difficult to chart the normal development of the structural connectome. As a result, there is a lack of reliable normative baselines of structural connectivity metrics at this critical stage in brain development. In this study, we developed a computational framework, based on spatio-temporal averaging, for determining such baselines. We used this framework to analyze the structural connectivity between 33 and 44 postmenstrual weeks using data from 166 subjects. Our results unveiled clear and strong trends in the development of structural connectivity in perinatal stage. Connection weighting based on fractional anisotropy and neurite density produced the most consistent results. We observed increases in global and local efficiency, a decrease in characteristic path length, and widespread strengthening of the connections within and across brain lobes and hemispheres. We also observed asymmetry patterns that were consistent between different connection weighting approaches. The new computational method and results are useful for assessing normal and abnormal development of the structural connectome early in life.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
387,294
2402.04665
Gaussian Process-Based Nonlinear Moving Horizon Estimation
In this paper, we propose a novel Gaussian process-based moving horizon estimation (MHE) framework for unknown nonlinear systems. In the proposed scheme, we take advantage of the properties of Gaussian processes. On the one hand, we approximate the system dynamics by the posterior means of the learned Gaussian processes (GPs). On the other hand, we exploit the posterior variances of the Gaussian processes to design the weighting matrices in the MHE cost function and account for the uncertainty in the learned system dynamics. The data collection and the tuning of the hyperparameters are done offline. We prove robust stability of the GP-based MHE scheme using a Lyapunov-based proof technique. Furthermore, as additional contribution, we analyze under which conditions incremental input/output-to-state stability (a nonlinear detectability notion) is preserved when approximating the system dynamics using, e.g., machine learning techniques. Finally, we illustrate the performance of the GP-based MHE scheme in a simulation case study and show how the chosen weighting matrices can lead to an improved performance compared to standard cost functions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
427,549
2111.08867
TYolov5: A Temporal Yolov5 Detector Based on Quasi-Recurrent Neural Networks for Real-Time Handgun Detection in Video
Timely handgun detection is a crucial problem to improve public safety; nevertheless, the effectiveness of many surveillance systems still depends on finite human attention. Much of the previous research on handgun detection is based on static image detectors, leaving aside valuable temporal information that could be used to improve object detection in videos. To improve the performance of surveillance systems, a real-time temporal handgun detection system should be built. Using Temporal Yolov5, an architecture based on Quasi-Recurrent Neural Networks, temporal information is extracted from video to improve the results of handgun detection. Moreover, two publicly available datasets are proposed, labeled with hands, guns, and phones. One containing 2199 static images to train static detectors, and another with 5960 frames of videos to train temporal modules. Additionally, we explore two temporal data augmentation techniques based on Mosaic and Mixup. The resulting systems are three temporal architectures: one focused on reducing inference with a mAP$_{50:95}$ of 55.9, another on having a good balance between inference and accuracy with a mAP$_{50:95}$ of 59, and a last one specialized in accuracy with a mAP$_{50:95}$ of 60.2. Temporal Yolov5 achieves real-time detection in the small and medium architectures. Moreover, it takes advantage of temporal features contained in videos to perform better than Yolov5 in our temporal dataset, making TYolov5 suitable for real-world applications. The source code is publicly available at https://github.com/MarioDuran/TYolov5.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
266,834
2109.02955
Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention
Automatically describing video, or video captioning, has been widely studied in the multimedia field. This paper proposes a new task of sensor-augmented egocentric-video captioning, a newly constructed dataset for it called MMAC Captions, and a method for the newly proposed task that effectively utilizes multi-modal data of video and motion sensors, or inertial measurement units (IMUs). While conventional video captioning tasks have difficulty in dealing with detailed descriptions of human activities due to the limited view of a fixed camera, egocentric vision has greater potential to be used for generating the finer-grained descriptions of human activities on the basis of a much closer view. In addition, we utilize wearable-sensor data as auxiliary information to mitigate the inherent problems in egocentric vision: motion blur, self-occlusion, and out-of-camera-range activities. We propose a method for effectively utilizing the sensor data in combination with the video data on the basis of an attention mechanism that dynamically determines the modality that requires more attention, taking the contextual information into account. We compared the proposed sensor-fusion method with strong baselines on the MMAC Captions dataset and found that using sensor data as supplementary information to the egocentric-video data was beneficial, and that our proposed method outperformed the strong baselines, demonstrating the effectiveness of the proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,911
1701.04241
Modularity-like objective function in annotated networks
We ascertain the modularity-like objective function whose optimization is equivalent to the maximum likelihood in annotated networks. We demonstrate that the modularity-like objective function is a linear combination of modularity and conditional entropy. In contrast with statistical inference methods, in our method, the influence of the metadata is adjustable; when its influence is strong enough, the metadata can be recovered. Conversely, when it is weak, the detection may correspond to another partition. Between the two, there is a transition. This paper provides a concept for expanding the scope of modularity methods.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
66,831
2411.17296
GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers
Graph Transformers (GTs) have demonstrated remarkable performance in graph representation learning over popular graph neural networks (GNNs). However, self-attention, the core module of GTs, preserves only low-frequency signals in graph features, leading to ineffectiveness in capturing other important signals like high-frequency ones. Some recent GT models help alleviate this issue, but their flexibility and expressiveness are still limited since the filters they learn are fixed on predefined graph spectrum or order. To tackle this challenge, we propose a Graph Fourier Kolmogorov-Arnold Transformer (GrokFormer), a novel GT model that learns highly expressive spectral filters with adaptive graph spectrum and order through a Fourier series modeling over learnable activation functions. We demonstrate theoretically and empirically that the proposed GrokFormer filter offers better expressiveness than other spectral methods. Comprehensive experiments on 10 real-world node classification datasets across various domains, scales, and graph properties, as well as 5 graph classification datasets, show that GrokFormer outperforms state-of-the-art GTs and GNNs. Our code is available at https://github.com/GGA23/GrokFormer
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
511,389
2209.14252
Physics-aware Differentiable Discrete Codesign for Diffractive Optical Neural Networks
Diffractive optical neural networks (DONNs) have attracted lots of attention as they bring significant advantages in terms of power efficiency, parallelism, and computational speed compared with conventional deep neural networks (DNNs), which have intrinsic limitations when implemented on digital platforms. However, inversely mapping algorithm-trained physical model parameters onto real-world optical devices with discrete values is a non-trivial task as existing optical devices have non-unified discrete levels and non-monotonic properties. This work proposes a novel device-to-system hardware-software codesign framework, which enables efficient physics-aware training of DONNs w.r.t arbitrary experimental measured optical devices across layers. Specifically, Gumbel-Softmax is employed to enable differentiable discrete mapping from real-world device parameters into the forward function of DONNs, where the physical parameters in DONNs can be trained by simply minimizing the loss function of the ML task. The results have demonstrated that our proposed framework offers significant advantages over conventional quantization-based methods, especially with low-precision optical devices. Finally, the proposed algorithm is fully verified with physical experimental optical systems in low-precision settings.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
320,188
2306.13914
G-TRACER: Expected Sharpness Optimization
We propose a new regularization scheme for the optimization of deep learning architectures, G-TRACER ("Geometric TRACE Ratio"), which promotes generalization by seeking flat minima, and has a sound theoretical basis as an approximation to a natural-gradient descent based optimization of a generalized Bayes objective. By augmenting the loss function with a TRACER, curvature-regularized optimizers (eg SGD-TRACER and Adam-TRACER) are simple to implement as modifications to existing optimizers and don't require extensive tuning. We show that the method converges to a neighborhood (depending on the regularization strength) of a local minimum of the unregularized objective, and demonstrate competitive performance on a number of benchmark computer vision and NLP datasets, with a particular focus on challenging low signal-to-noise ratio problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
375,455
1812.00281
HUMBI: A Large Multiview Dataset of Human Body Expressions
This paper presents a new large multiview dataset called HUMBI for human body expressions with natural clothing. The goal of HUMBI is to facilitate modeling view-specific appearance and geometry of gaze, face, hand, body, and garment from assorted people. 107 synchronized HD cameras are used to capture 772 distinctive subjects across gender, ethnicity, age, and physical condition. With the multiview image streams, we reconstruct high fidelity body expressions using 3D mesh models, which allows representing view-specific appearance using their canonical atlas. We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to the existing datasets of human body expressions with limited views and subjects such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
115,210
1906.03466
Strategies to architect AI Safety: Defense to guard AI from Adversaries
The impact of designing for security of AI is critical for humanity in the AI era. With humans increasingly becoming dependent upon AI, there is a need for neural networks that work reliably, in spite of adversarial attacks. The vision for safe and secure AI for popular use is achievable. To achieve safety of AI, this paper explores strategies and a novel deep learning architecture. To guard AI from adversaries, the paper explores a combination of 3 strategies: 1. Introduce randomness at inference time to hide the representation learning from adversaries. 2. Detect presence of adversaries by analyzing the sequence of inferences. 3. Exploit visual similarity. To realize these strategies, this paper designs a novel architecture, Dynamic Neural Defense, DND. This defense has 3 deep learning architectural features: 1. By hiding the way a neural network learns from exploratory attacks using a random computation graph, DND evades attack. 2. By analyzing the input sequence to the cloud AI inference engine with LSTM, DND detects the attack sequence. 3. By inferring with visually similar inputs generated by VAE, any AI defended by the DND approach does not succumb to hackers. Thus, a roadmap to develop reliable, safe and secure AI is presented.
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
false
134,383
1902.09879
Robust Resource Allocation for PD-NOMA-Based MISO Heterogeneous Networks with CoMP Technology
In this paper, we consider a hybrid scheme of coordinated multi-point (CoMP) technology in MISO heterogeneous communication networks based on power domain non-orthogonal multiple access (PD-NOMA). We propose a novel method based on matching game with externalities to realize the hybrid scheme where the number of the cooperative nodes is variable. Moreover, we propose a new matching utility function to manage the interference caused by CoMP and NOMA techniques. We also devise robust beamforming to cope with the channel uncertainty. In this regard, we focus on both no CSI and partial CSI cases to increase the achievable data rate. We provide the complexity analysis of both schemes which shows that the complexity of the partial CSI approach is more than that of the no CSI method. Results evaluate the performance of the proposed CoMP scheme and the sensibility of our methods.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
122,536
2104.13803
Does Face Recognition Error Echo Gender Classification Error?
This paper is the first to explore the question of whether images that are classified incorrectly by a face analytics algorithm (e.g., gender classification) are any more or less likely to participate in an image pair that results in a face recognition error. We analyze results from three different gender classification algorithms (one open-source and two commercial), and two face recognition algorithms (one open-source and one commercial), on image sets representing four demographic groups (African-American female and male, Caucasian female and male). For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution than pairs in which both images have correct gender classification, and so are less likely to generate a false match error. For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution (increased false non-match rate) compared to individuals whose images all have correct gender classification. Thus, compared to images that generate correct gender classification, images that generate gender classification errors do generate a different pattern of recognition errors, both better (false match) and worse (false non-match).
false
false
false
false
true
false
false
false
false
false
false
true
false
true
false
false
false
false
232,619
2409.16972
Efficient Submap-based Autonomous MAV Exploration using Visual-Inertial SLAM Configurable for LiDARs or Depth Cameras
Autonomous exploration of unknown space is an essential component for the deployment of mobile robots in the real world. Safe navigation is crucial for all robotics applications and requires accurate and consistent maps of the robot's surroundings. To achieve full autonomy and allow deployment in a wide variety of environments, the robot must rely on on-board state estimation which is prone to drift over time. We propose a Micro Aerial Vehicle (MAV) exploration framework based on local submaps to allow retaining global consistency by applying loop-closure corrections to the relative submap poses. To enable large-scale exploration we efficiently compute global, environment-wide frontiers from the local submap frontiers and use a sampling-based next-best-view exploration planner. Our method seamlessly supports using either a LiDAR sensor or a depth camera, making it suitable for different kinds of MAV platforms. We perform comparative evaluations in simulation against a state-of-the-art submap-based exploration framework to showcase the efficiency and reconstruction quality of our approach. Finally, we demonstrate the applicability of our method to real-world MAVs, one equipped with a LiDAR and the other with a depth camera. Video available at https://youtu.be/Uf5fwmYcuq4 .
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
491,600
2106.12605
Deep Fake Detection: Survey of Facial Manipulation Detection Solutions
Deep Learning as a field has been successfully used to solve a plethora of complex problems, the likes of which we could not have imagined a few decades back. But as many benefits as it brings, there are still ways in which it can be used to bring harm to our society. Deep fakes have been proven to be one such problem, and now more than ever, when any individual can create a fake image or video simply using an application on a smartphone, there need to be some countermeasures with which we can detect if the image or video is fake or real and dispose of the problem threatening the trustworthiness of online information. Although the deep fakes created by neural networks may seem to be as real as a real image or video, they still leave behind spatial and temporal traces or signatures after moderation; these signatures, while invisible to the human eye, can be detected with the help of a neural network trained to specialize in deep fake detection. In this paper, we analyze several such state-of-the-art neural networks (MesoNet, ResNet-50, VGG-19, and Xception Net) and compare them against each other, to find an optimal solution for various scenarios like real-time deep fake detection to be deployed in online social media platforms, where the classification should be made as fast as possible, or for a small news agency, where the classification need not be in real time but requires utmost accuracy.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
242,774
2302.06353
Contour-based Interactive Segmentation
Recent advances in interactive segmentation (IS) allow speeding up and simplifying image editing and labeling greatly. The majority of modern IS approaches accept user input in the form of clicks. However, using clicks may require too many user interactions, especially when selecting small objects, minor parts of an object, or a group of objects of the same type. In this paper, we consider such a natural form of user interaction as a loose contour, and introduce a contour-based IS method. We evaluate the proposed method on the standard segmentation benchmarks, our novel UserContours dataset, and its subset UserContours-G containing difficult segmentation cases. Through experiments, we demonstrate that a single contour provides the same accuracy as multiple clicks, thus reducing the required amount of user interactions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
345,369
0910.1761
Decomposition of forging dies for machining planning
This paper will provide a method to decompose forging dies for machining planning in the case of high speed machining finishing operations. This method relies on a machining feature approach model presented in the following paper. The two main decomposition phases, called Basic Machining Features Extraction and Process Planning Generation, are presented. These two decomposition phases integrate machining resource models and expert machining knowledge to provide an outstanding process planning.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
4,689
2307.01305
Efficient Communication for Pursuit-Evasion Games with Asymmetric Information
We consider a class of pursuit-evasion differential games in which the evader has continuous access to the pursuer's location, but not vice-versa. There is an immobile sensor (e.g., a ground radar station) that can sense the evader's location and communicate that information intermittently to the pursuer. Transmitting the information from the sensor to the pursuer is costly and only a finite number of transmissions can happen throughout the entire game. The outcome of the game is determined by the control strategies of the players and the communication strategy between the sensor and the pursuer. We obtain the (Nash) equilibrium control strategies for both the players as well as the optimal communication strategy between the static sensor and the pursuer. We discuss a dilemma for the evader that emerges in this game. We also discuss the emergence of implicit communication where the absence of communication from the sensor can also convey some actionable information to the pursuer.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
377,304
2203.09064
Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning
This paper presents new hierarchically cascaded transformers that can improve data efficiency through attribute surrogates learning and spectral tokens pooling. Vision transformers have recently been thought of as a promising alternative to convolutional neural networks for visual recognition. But when there is no sufficient data, it gets stuck in overfitting and shows inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. The intrinsic image structure is utilized to reduce the ambiguity between foreground content and background noise by spectral tokens pooling. And the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs instead of simple visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, is built upon a self-supervised learning framework DINO and is tested on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by a large margin of 9.7% 5-way 1-shot accuracy and 9.17% 5-way 5-shot accuracy on miniImageNet, which demonstrates HCTransformers are efficient to extract discriminative features. Also, HCTransformers show clear advantages over SOTA few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets, including miniImageNet, tieredImageNet, FC100, and CIFAR-FS. The trained weights and codes are available at https://github.com/StomachCold/HCTransformers.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
286,012
2210.15957
Brain Modeling for Control: A Review
Neurostimulation technologies have seen a recent surge in interest from the neuroscience and controls communities alike due to their proven potential to treat conditions such as Parkinson's disease and depression. The provided stimulation can be of different types, such as electric and optogenetic, and is generally applied to a specific region of the brain in order to drive the local and/or global dynamics to a desired state of (in)activity. However, an underlying theoretical understanding of the efficacy of neurostimulation is still lacking. From a control-theoretic perspective, it is important to understand how each stimulus modality interacts with the complex brain network in order to assess the controllability of the system and develop neurophysiologically relevant computational models that can be used to design the stimulation profile in a closed-loop manner. In this paper, we review the computational modeling studies of (i) deep brain stimulation, (ii) transcranial magnetic stimulation, (iii) direct current stimulation, (iv) transcranial electrical stimulation, and (v) optogenetics as five of the most popular neurostimulation technologies in research and clinical settings. For each technology, we split the reviewed studies into (a) theory-driven biophysical models capturing the low-level physics of the interactions between the stimulation source and neuronal tissue, (b) data-driven stimulus-response models which capture the end-to-end effects of stimulation on various biomarkers of interest, and (c) data-driven dynamical system models that extract the precise dynamics of the brain's response to neurostimulation from neural data. While our focus is particularly on the latter category due to their greater utility in control design, we review key works in the former two categories as the basis and context in which dynamical system models have been and will be developed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
327,159
1411.5107
Towards a Theory of Societal Co-Evolution: Individualism versus Collectivism
Substantial empirical research has shown that the level of individualism vs. collectivism is one of the most critical and important determinants of societal traits, such as economic growth, economic institutions and health conditions. But the exact nature of this impact has thus far not been well understood in an analytical setting. In this work, we develop one of the first theoretical models that analytically studies the impact of individualism-collectivism on the society. We model the growth of an individual's welfare (wealth, resources and health) as depending not only on himself, but also on the level of collectivism, i.e. the level of dependence on the rest of the individuals in the society, which leads to a co-evolutionary setting. Based on our model, we are able to predict the impact of individualism-collectivism on various societal metrics, such as average welfare, average life-time, total population, cumulative welfare and average inequality. We analytically show that individualism has a positive impact on average welfare and cumulative welfare, but comes with the drawbacks of lower average life-time, lower total population and higher average inequality.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
37,703
2003.06699
Tiny Eats: Eating Detection on a Microcontroller
There is a growing interest in low power highly efficient wearable devices for automatic dietary monitoring (ADM) [1]. The success of deep neural networks in audio event classification problems makes them ideal for this task. Deep neural networks are, however, not only computationally intensive and energy inefficient but also require a large amount of memory. To address these challenges, we propose a shallow gated recurrent unit (GRU) architecture suitable for resource-constrained applications. This paper describes the implementation of the Tiny Eats GRU, a shallow GRU neural network, on a low power micro-controller, Arm Cortex M0+, to classify eating episodes. Tiny Eats GRU is a hybrid of the traditional GRU [2] and eGRU [3] to make it small and fast enough to fit on the Arm Cortex M0+ with comparable accuracy to the traditional GRU. The Tiny Eats GRU utilizes only 4% of the Arm Cortex M0+ memory and identifies eating or non-eating episodes with 6 ms latency and accuracy of 95.15%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
168,197
2403.17695
PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition
We present PlainMamba: a simple non-hierarchical state space model (SSM) designed for general visual recognition. The recent Mamba model has shown how SSMs can be highly competitive with other architectures on sequential data and initial attempts have been made to apply it to images. In this paper, we further adapt the selective scanning process of Mamba to the visual domain, enhancing its ability to learn features from two-dimensional images by (i) a continuous 2D scanning process that improves spatial continuity by ensuring adjacency of tokens in the scanning sequence, and (ii) direction-aware updating which enables the model to discern the spatial relations of tokens by encoding directional information. Our architecture is designed to be easy to use and easy to scale, formed by stacking identical PlainMamba blocks, resulting in a model with constant width throughout all layers. The architecture is further simplified by removing the need for special tokens. We evaluate PlainMamba on a variety of visual recognition tasks, achieving performance gains over previous non-hierarchical models while remaining competitive with hierarchical alternatives. For tasks requiring high-resolution inputs, in particular, PlainMamba requires much less computing while maintaining high performance. Code and models are available at: https://github.com/ChenhongyiYang/PlainMamba .
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
441,579
2411.01410
PageRank Bandits for Link Prediction
Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion. Numerous research efforts have been directed at solving this problem, including approaches based on similarity metrics and Graph Neural Networks (GNN). However, most existing solutions are still rooted in conventional supervised learning, which makes it challenging to adapt over time to changing customer interests and to address the inherent dilemma of exploitation versus exploration in link prediction. To tackle these challenges, this paper reformulates link prediction as a sequential decision-making process, where each link prediction interaction occurs sequentially. We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration. We also introduce a new reward formulation and provide a theoretical performance guarantee for PRB. Finally, we extensively evaluate PRB in both online and offline settings, comparing it with bandit-based and graph-based methods. The empirical success of PRB demonstrates the value of the proposed fusion approach. Our code is released at https://github.com/jiaruzouu/PRB.
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
false
false
505,057
astro-ph/0502164
Particle Swarm Optimization: An efficient method for tracing periodic orbits in 3D galactic potentials
We propose the Particle Swarm Optimization (PSO) as an alternative method for locating periodic orbits in a three--dimensional (3D) model of barred galaxies. We develop an appropriate scheme that transforms the problem of finding periodic orbits into the problem of detecting global minimizers of a function, which is defined on the Poincar\'{e} Surface of Section (PSS) of the Hamiltonian system. By combining the PSO method with deflection techniques, we succeeded in tracing systematically several periodic orbits of the system. The method succeeded in tracing the initial conditions of periodic orbits in cases where Newton iterative techniques had difficulties. In particular, we found families of 2D and 3D periodic orbits associated with the inner 8:1 to 12:1 resonances, between the radial 4:1 and corotation resonances of our 3D Ferrers bar model. The main advantages of the proposed algorithm are its simplicity, its ability to work using function values solely, as well as its ability to locate many periodic orbits per run at a given Jacobian constant.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
536,027
2206.14683
Computer-aided diagnosis and prediction in brain disorders
Computer-aided methods have shown added value for diagnosing and predicting brain disorders and can thus support decision making in clinical care and treatment planning. This chapter will provide insight into the type of methods, their working, their input data - such as cognitive tests, imaging and genetic data - and the types of output they provide. We will focus on specific use cases for diagnosis, i.e. estimating the current 'condition' of the patient, such as early detection and diagnosis of dementia, differential diagnosis of brain tumours, and decision making in stroke. Regarding prediction, i.e. estimation of the future 'condition' of the patient, we will zoom in on use cases such as predicting the disease course in multiple sclerosis and predicting patient outcomes after treatment in brain cancer. Furthermore, based on these use cases, we will assess the current state-of-the-art methodology and highlight current efforts on benchmarking of these methods and the importance of open science therein. Finally, we assess the current clinical impact of computer-aided methods and discuss the required next steps to increase clinical impact.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
305,367
2305.06021
Orders between channels and implications for partial information decomposition
The partial information decomposition (PID) framework is concerned with decomposing the information that a set of random variables has with respect to a target variable into three types of components: redundant, synergistic, and unique. Classical information theory alone does not provide a unique way to decompose information in this manner and additional assumptions have to be made. Recently, Kolchinsky proposed a new general axiomatic approach to obtain measures of redundant information, based on choosing an order relation between information sources (equivalently, order between communication channels). In this paper, we exploit this approach to introduce three new measures of redundant information (and the resulting decompositions) based on well-known preorders between channels, thus contributing to the enrichment of the PID landscape. We relate the new decompositions to existing ones, study some of their properties, and provide examples illustrating their novelty. As a side result, we prove that any preorder that satisfies Kolchinsky's axioms yields a decomposition that meets the axioms originally introduced by Williams and Beer when they first proposed the PID.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
363,376
2009.02046
Delay Compensation for Regular Linear Systems
This is the third part of four series papers, aiming at the delay compensation for the abstract linear system (A,B,C). Both the input delay and output delay are investigated. We first propose a full state feedback control to stabilize the system (A,B) with input delay and then design a Luenberger-like observer for the system (A,C) in terms of the delayed output. We formulate the delay compensation in the framework of regular linear systems. The developed approach builds upon an upper-block-triangle transform that is associated with a Sylvester operator equation. It is found that the controllability/observability map of system (-A,B)/(-A,-C) happens to be the solution of the corresponding Sylvester equation. As an immediate consequence, both the feedback law and the state observer can be expressed explicitly in the operator form. The exponential stability of the resulting closed-loop system and the exponential convergence of the observation error are established without using the Lyapunov functional approach. The theoretical results are validated through the delay compensation for a benchmark one-dimensional wave equation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
194,455
2101.01366
A Symmetric Loss Perspective of Reliable Machine Learning
When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize. Examples of well-known surrogate losses for binary classification include the logistic loss, hinge loss, and sigmoid loss. It is known that the choice of a surrogate loss can highly influence the performance of the trained classifier and therefore it should be carefully chosen. Recently, surrogate losses that satisfy a certain symmetric condition (aka., symmetric losses) have demonstrated their usefulness in learning from corrupted labels. In this article, we provide an overview of symmetric losses and their applications. First, we review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization and area under the receiver operating characteristic curve (AUC) maximization. Then, we demonstrate how the robust AUC maximization method can benefit natural language processing in the problem where we want to learn only from relevant keywords and unlabeled documents. Finally, we conclude this article by discussing future directions, including potential applications of symmetric losses for reliable machine learning and the design of non-symmetric losses that can benefit from the symmetric condition.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
214,353
1905.10358
On the Global Minimizers of Real Robust Phase Retrieval with Sparse Noise
We study a class of real robust phase retrieval problems under a Gaussian assumption on the coding matrix when the received signal is sparsely corrupted by noise. The goal is to establish conditions on the sparsity under which the input vector can be exactly recovered. The recovery problem is formulated as the minimization of the $\ell_1$ norm of the residual. The main contribution is a robust phase retrieval counterpart to the seminal paper by Candes and Tao on compressed sensing ($\ell_1$ regression) [Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005]. Our analysis depends on a key new property on the coding matrix which we call the {Absolute Range Property} (ARP). This property is an analogue to the Null Space Property (NSP) in compressed sensing. When the residuals are computed using squared magnitudes, we show that ARP follows from a standard Restricted Isometry Property (RIP). However, when the residuals are computed using absolute magnitudes, a new and very different kind of RIP or growth property is required. We conclude by showing that the robust phase retrieval objectives are sharp with respect to their minimizers with high probability.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
132,034
2002.06746
Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint
Machine learning is used to make decisions for individuals in various fields, which require us to achieve good prediction accuracy while ensuring fairness with respect to sensitive features (e.g., race and gender). This problem, however, remains difficult in complex real-world scenarios. To quantify unfairness under such situations, existing methods utilize {\it path-specific causal effects}. However, none of them can ensure fairness for each individual without making impractical functional assumptions on the data. In this paper, we propose a far more practical framework for learning an individually fair classifier. To avoid restrictive functional assumptions, we define the {\it probability of individual unfairness} (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero. We elucidate why our method can guarantee fairness for each individual. Experimental results show that our method can learn an individually fair classifier at a slight cost of accuracy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
164,291
2411.15982
Anda: Unlocking Efficient LLM Inference with a Variable-Length Grouped Activation Data Format
The widely-used, weight-only quantized large language models (LLMs), which leverage low-bit integer (INT) weights and retain floating-point (FP) activations, reduce storage requirements while maintaining accuracy. However, this shifts the energy and latency bottlenecks towards the FP activations that are associated with costly memory accesses and computations. Existing LLM accelerators focus primarily on computation optimizations, overlooking the potential of jointly optimizing FP computations and data movement, particularly for the dominant FP-INT GeMM operations in LLM inference. To address these challenges, we investigate the sensitivity of activation precision across various LLM modules and its impact on overall model accuracy. Based on our findings, we first propose the Anda data type: an adaptive data format with group-shared exponent bits and dynamic mantissa bit allocation. Secondly, we develop an iterative post-training adaptive precision search algorithm that optimizes the bit-width for different LLM modules to balance model accuracy, energy efficiency, and inference speed. Lastly, a suite of hardware optimization techniques is proposed to maximally exploit the benefits of the Anda format. These include a bit-plane-based data organization scheme, Anda-enhanced processing units with bit-serial computation, and a runtime bit-plane Anda compressor to simultaneously optimize storage, computation, and memory footprints. Our evaluations on FP-INT GeMM operations show that Anda achieves a 2.4x speedup, 4.0x area efficiency, and 3.1x energy efficiency improvement on average for popular LLMs including OPT, LLaMA, and LLaMA-2 series over the GPU-like FP-FP baseline. Anda demonstrates strong adaptability across various application scenarios, accuracy requirements, and system performance, enabling efficient LLM inference across a wide range of deployment scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
510,844
1402.0051
Distributed Algorithms for Stochastic Source Seeking with Mobile Robot Networks: Technical Report
Autonomous robot networks are an effective tool for monitoring large-scale environmental fields. This paper proposes distributed control strategies for localizing the source of a noisy signal, which could represent a physical quantity of interest such as magnetic force, heat, radio signal, or chemical concentration. We develop algorithms specific to two scenarios: one in which the sensors have a precise model of the signal formation process and one in which a signal model is not available. In the model-free scenario, a team of sensors is used to follow a stochastic gradient of the signal field. Our approach is distributed, robust to deformations in the group geometry, does not necessitate global localization, and is guaranteed to lead the sensors to a neighborhood of a local maximum of the field. In the model-based scenario, the sensors follow the stochastic gradient of the mutual information between their expected measurements and the location of the source in a distributed manner. The performance is demonstrated in simulation using a robot sensor network to localize the source of a wireless radio signal.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
30,527
2208.03754
Exploring Long- and Short-Range Temporal Information for Learned Video Compression
Learned video compression methods have gained a variety of interest in the video coding community since they have matched or even exceeded the rate-distortion (RD) performance of traditional video codecs. However, many current learning-based methods are dedicated to utilizing short-range temporal information, thus limiting their performance. In this paper, we focus on exploiting the unique characteristics of video content and further exploring temporal information to enhance compression performance. Specifically, for long-range temporal information exploitation, we propose temporal prior that can update continuously within the group of pictures (GOP) during inference. In that case temporal prior contains valuable temporal information of all decoded images within the current GOP. As for short-range temporal information, we propose a progressive guided motion compensation to achieve robust and effective compensation. In detail, we design a hierarchical structure to achieve multi-scale compensation. More importantly, we use optical flow guidance to generate pixel offsets between feature maps at each scale, and the compensation results at each scale will be used to guide the following scale's compensation. Sufficient experimental results demonstrate that our method can obtain better RD performance than state-of-the-art video compression approaches. The code is publicly available on: https://github.com/Huairui/LSTVC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
311,887
0809.2968
Bounds on Covering Codes with the Rank Metric
In this paper, we investigate geometrical properties of the rank metric space and covering properties of rank metric codes. We first establish an analytical expression for the intersection of two balls with rank radii, and then derive an upper bound on the volume of the union of multiple balls with rank radii. Using these geometrical properties, we derive both upper and lower bounds on the minimum cardinality of a code with a given rank covering radius. The geometrical properties and bounds proposed in this paper are significant to the design, decoding, and performance analysis of rank metric codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,362
1310.4389
ImageSpirit: Verbal Guided Image Parsing
Humans describe images in terms of nouns and adjectives while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images versus their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that can possibly be used to interact with new generation devices (e.g. smart phones, Google Glass, living room devices). We demonstrate our system on a large number of real-world images with varying complexity. To help understand the tradeoffs compared to traditional mouse based interactions, results are reported for both a large scale quantitative evaluation and a user study.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
27,814
2210.09809
Analysis of Convolutions, Non-linearity and Depth in Graph Neural Networks using Neural Tangent Kernel
The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating the neighboring nodes using a `graph convolution' in conjunction with a suitable choice for the network architecture, such as depth and activation functions. Therefore, understanding the influence of each of the design choice on the network performance is crucial. Convolutions based on graph Laplacian have emerged as the dominant choice with the symmetric normalization of the adjacency matrix as the most widely adopted one. However, some empirical studies show that row normalization of the adjacency matrix outperforms it in node classification. Despite the widespread use of GNNs, there is no rigorous theoretical study on the representation power of these convolutions, that could explain this behavior. Similarly, the empirical observation of the linear GNNs performance being on par with non-linear ReLU GNNs lacks rigorous theory. In this work, we theoretically analyze the influence of different aspects of the GNN architecture using the Graph Neural Tangent Kernel in a semi-supervised node classification setting. Under the population Degree Corrected Stochastic Block Model, we prove that: (i) linear networks capture the class information as good as ReLU networks; (ii) row normalization preserves the underlying class structure better than other convolutions; (iii) performance degrades with network depth due to over-smoothing, but the loss in class information is the slowest in row normalization; (iv) skip connections retain the class information even at infinite depth, thereby eliminating over-smoothing. We finally validate our theoretical findings numerically and on real datasets such as Cora and Citeseer.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
324,676
2008.00072
Dynamic Object Tracking and Masking for Visual SLAM
In dynamic environments, performance of visual SLAM techniques can be impaired by visual features taken from moving objects. One solution is to identify those objects so that their visual features can be removed for localization and mapping. This paper presents a simple and fast pipeline that uses deep neural networks, extended Kalman filters and visual SLAM to improve both localization and mapping in dynamic environments (around 14 fps on a GTX 1080). Results on the dynamic sequences from the TUM dataset using RTAB-Map as visual SLAM suggest that the approach achieves similar localization performance compared to other state-of-the-art methods, while also providing the position of the tracked dynamic objects, a 3D map free of those dynamic objects, better loop closure detection with the whole pipeline able to run on a robot moving at moderate speed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
189,887
2008.02004
Beyond Controlled Environments: 3D Camera Re-Localization in Changing Indoor Scenes
Long-term camera re-localization is an important task with numerous computer vision and robotics applications. Whilst various outdoor benchmarks exist that target lighting, weather and seasonal changes, far less attention has been paid to appearance changes that occur indoors. This has led to a mismatch between popular indoor benchmarks, which focus on static scenes, and indoor environments that are of interest for many real-world applications. In this paper, we adapt 3RScan - a recently introduced indoor RGB-D dataset designed for object instance re-localization - to create RIO10, a new long-term camera re-localization benchmark focused on indoor scenes. We propose new metrics for evaluating camera re-localization and explore how state-of-the-art camera re-localizers perform according to these metrics. We also examine in detail how different types of scene change affect the performance of different methods, based on novel ways of detecting such changes in a given RGB-D frame. Our results clearly show that long-term indoor re-localization is an unsolved problem. Our benchmark and tools are publicly available at waldjohannau.github.io/RIO10
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
190,499
2212.03612
Von Data Warehouse bis Data Mesh: Ein Wegweiser durch den Dschungel analytischer Datenarchitekturen
Data warehouse, data lake, data lakehouse, data mesh ... many new names for analytical data architectures are currently circulating in the scene. But are the various approaches really so different? This article attempts a structured comparison of the different architecture paradigms, methodically based on DAMA-DMBOK and ArchiMate. Differences, similarities and dependencies as well as overlapping architectural building blocks are worked out and illustrated. This results in a first orientation guide for the choice of the right analytical data architecture for the respective use case. -- Data Warehouse, Data Lake, Data Lakehouse, Data Mesh ... in der Szene kursieren derzeit viele neue Namen für analytische Datenarchitekturen. Doch sind die diversen Ansätze wirklich so unterschiedlich? Dieser Beitrag versucht einen strukturierten Vergleich der verschiedenen Architekturparadigmen, methodisch basierend auf DAMA-DMBOK und ArchiMate. Es werden Unterschiede, Gemeinsamkeiten und Abhängigkeiten sowie überlappende Architekturbausteine herausgearbeitet und illustriert. Daraus entsteht eine erste Orientierungshilfe für die Wahl der richtigen analytischen Datenarchitektur für den jeweiligen Anwendungsfall.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
335,182
2109.07298
FFAVOD: Feature Fusion Architecture for Video Object Detection
A significant amount of redundancy exists between consecutive frames of a video. Object detectors typically produce detections for one image at a time, without any capabilities for taking advantage of this redundancy. Meanwhile, many applications for object detection work with videos, including intelligent transportation systems, advanced driver assistance systems and video surveillance. Our work aims at taking advantage of the similarity between video frames to produce better detections. We propose FFAVOD, standing for feature fusion architecture for video object detection. We first introduce a novel video object detection architecture that allows a network to share feature maps between nearby frames. Second, we propose a feature fusion module that learns to merge feature maps to enhance them. We show that using the proposed architecture and the fusion module can improve the performance of three base object detectors on two object detection benchmarks containing sequences of moving road users. Additionally, to further increase performance, we propose an improvement to the SpotNet attention module. Using our architecture on the improved SpotNet detector, we obtain the state-of-the-art performance on the UA-DETRAC public benchmark as well as on the UAVDT dataset. Code is available at https://github.com/hu64/FFAVOD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
255,471
2306.08740
Privacy-Preserving Password Cracking: How a Third Party Can Crack Our Password Hash Without Learning the Hash Value or the Cleartext
Using the computational resources of an untrusted third party to crack a password hash can pose a high number of privacy and security risks. The act of revealing the hash digest could in itself negatively impact both the data subject who created the password, and the data controller who stores the hash digest. This paper solves this currently open problem by presenting a Privacy-Preserving Password Cracking protocol (3PC), that prevents the third party cracking server from learning any useful information about the hash digest, or the recovered cleartext. This is achieved by a tailored anonymity set of decoy hashes, based on the concept of predicate encryption, where we extend the definition of a predicate function, to evaluate the output of a one way hash function. The protocol allows the client to maintain plausible deniability where the real choice of hash digest cannot be proved, even by the client itself. The probabilistic information the server obtains during the cracking process can be calculated and minimized to a desired level. While in theory cracking a larger set of hashes would decrease computational speed, the 3PC protocol provides constant-time lookup on an arbitrary list size, bounded by the input/output operation per second (IOPS) capabilities of the third party server, thereby allowing the protocol to scale efficiently. We demonstrate these claims both theoretically and in practice, with a real-life use case implemented on an FPGA architecture.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
373,521
2403.12024
Enhancing Taiwanese Hokkien Dual Translation by Exploring and Standardizing of Four Writing Systems
Machine translation focuses mainly on high-resource languages (HRLs), while low-resource languages (LRLs) like Taiwanese Hokkien are relatively under-explored. The study aims to address this gap by developing a dual translation model between Taiwanese Hokkien and both Traditional Mandarin Chinese and English. We employ a pre-trained LLaMA 2-7B model specialized in Traditional Mandarin Chinese to leverage the orthographic similarities between Taiwanese Hokkien Han and Traditional Mandarin Chinese. Our comprehensive experiments involve translation tasks across various writing systems of Taiwanese Hokkien as well as between Taiwanese Hokkien and other HRLs. We find that the use of a limited monolingual corpus still further improves the model's Taiwanese Hokkien capabilities. We then utilize our translation model to standardize all Taiwanese Hokkien writing systems into Hokkien Han, resulting in further performance improvements. Additionally, we introduce an evaluation method incorporating back-translation and GPT-4 to ensure reliable translation quality assessment even for LRLs. The study contributes to narrowing the resource gap for Taiwanese Hokkien and empirically investigates the advantages and limitations of pre-training and fine-tuning based on LLaMA 2.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
438,983
2406.13808
Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?
In this work, we present empirical results regarding the feasibility of using offline large language models (LLMs) in the context of electronic design automation (EDA). The goal is to investigate and evaluate a contemporary language model's (Llama-2-7B) ability to function as a microelectronic Q & A expert as well as its reasoning, and generation capabilities in solving microelectronic-related problems. Llama-2-7B was tested across a variety of adaptation methods, including introducing a novel low-rank knowledge distillation (LoRA-KD) scheme. Our experiments produce both qualitative and quantitative results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
466,004
1703.03149
Detecting Sockpuppets in Deceptive Opinion Spam
This paper explores the problem of sockpuppet detection in deceptive opinion spam using authorship attribution and verification approaches. Two methods are explored. The first is a feature subsampling scheme that uses the KL-Divergence on stylistic language models of an author to find discriminative features. The second is a transduction scheme, spy induction that leverages the diversity of authors in the unlabeled test set by sending a set of spies (positive samples) from the training set to retrieve hidden samples in the unlabeled test set using nearest and farthest neighbors. Experiments using ground truth sockpuppet data show the effectiveness of the proposed schemes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
69,689
1901.01944
A Compact Representation of Raster Time Series
The raster model is widely used in Geographic Information Systems to represent data that vary continuously in space, such as temperatures, precipitations, elevation, among other spatial attributes. In applications like weather forecast systems, not just a single raster, but a sequence of rasters covering the same region at different timestamps, known as a raster time series, needs to be stored and queried. Compact data structures have proven successful to provide space-efficient representations of rasters with query capabilities. Hence, a naive approach to save space is to use such a representation for each raster in a time series. However, in this paper we show that it is possible to take advantage of the temporal locality that exists in a raster time series to reduce the space necessary to store it while keeping competitive query times for several types of queries.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
118,081
2402.16980
Saliency-Aware Automatic Buddhas Statue Recognition
Buddha statues, as a symbol of many religions, have significant cultural implications that are crucial for understanding the culture and history of different regions, and the recognition of Buddha statues is therefore the pivotal link in the field of Buddha study. However, the Buddha statue recognition requires extensive time and effort from knowledgeable professionals, making it a costly task to perform. Convolution neural networks (CNNs) are inherently efficient at processing visual information, but CNNs alone are likely to make inaccurate classification decisions when subjected to the class imbalance problem. Therefore, this paper proposes an end-to-end automatic Buddha statue recognition model based on saliency map sampling. The proposed Grid-Wise Local Self-Attention Module (GLSA) provides extra salient features which can serve to enrich the dataset and allow CNNs to observe in a much more comprehensive way. Eventually, our model is evaluated on a Buddha dataset collected with the aid of Buddha experts and outperforms state-of-the-art networks in terms of Top-1 accuracy by 4.63\% on average, while only marginally increasing MUL-ADD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
432,790
2206.08004
When a RF Beats a CNN and GRU, Together -- A Comparison of Deep Learning and Classical Machine Learning Approaches for Encrypted Malware Traffic Classification
Internet traffic classification is widely used to facilitate network management. It plays a crucial role in Quality of Services (QoS), Quality of Experience (QoE), network visibility, intrusion detection, and traffic trend analyses. While there is no theoretical guarantee that deep learning (DL)-based solutions perform better than classic machine learning (ML)-based ones, DL-based models have become the common default. This paper compares well-known DL-based and ML-based models and shows that in the case of malicious traffic classification, state-of-the-art DL-based solutions do not necessarily outperform the classical ML-based ones. We exemplify this finding using two well-known datasets for a varied set of tasks, such as: malware detection, malware family classification, detection of zero-day attacks, and classification of an iteratively growing dataset. Note that, it is not feasible to evaluate all possible models to make a concrete statement, thus, the above finding is not a recommendation to avoid DL-based models, but rather empirical proof that in some cases, there are more simplistic solutions, that may perform even better.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
302,966
1603.04779
Revisiting Batch Normalization For Practical Domain Adaptation
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. A recent study (Tommasi et al. 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
53,284
2404.15154
Do not think about pink elephant!
Large Models (LMs) have heightened expectations for the potential of general AI as they are akin to human intelligence. This paper shows that recent large models such as Stable Diffusion and DALL-E3 also share the vulnerability of human intelligence, namely the "white bear phenomenon". We investigate the causes of the white bear phenomenon by analyzing their representation space. Based on this analysis, we propose a simple prompt-based attack method, which generates figures prohibited by the LM provider's policy. To counter these attacks, we introduce prompt-based defense strategies inspired by cognitive therapy techniques, successfully mitigating attacks by up to 48.22\%.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
448,960
1306.1073
Web Synchronization Simulations using the ResourceSync Framework
Maintenance of multiple, distributed up-to-date copies of collections of changing Web resources is important in many application contexts and is often achieved using ad hoc or proprietary synchronization solutions. ResourceSync is a resource synchronization framework that integrates with the Web architecture and leverages XML sitemaps. We define a model for the ResourceSync framework as a basis for understanding its properties. We then describe experiments in which simulations of a variety of synchronization scenarios illustrate the effects of model configuration on consistency, latency, and data transfer efficiency. These results provide insight into which configurations are appropriate for various application scenarios.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
25,015
2103.06689
Unsupervised Transfer Learning in Multilingual Neural Machine Translation with Cross-Lingual Word Embeddings
In this work we look into adding a new language to a multilingual NMT system in an unsupervised fashion. Under the utilization of pre-trained cross-lingual word embeddings we seek to exploit a language independent multilingual sentence representation to easily generalize to a new language. While using cross-lingual embeddings for word lookup we decode from a yet entirely unseen source language in a process we call blind decoding. Blindly decoding from Portuguese using a base system containing several Romance languages we achieve scores of 36.4 BLEU for Portuguese-English and 12.8 BLEU for Russian-English. In an attempt to train the mapping from the encoder sentence representation to a new target language we use our model as an autoencoder. Merely training to translate from Portuguese to Portuguese while freezing the encoder we achieve 26 BLEU on English-Portuguese, and up to 28 BLEU when adding artificial noise to the input. Lastly we explore a more practical adaptation approach through non-iterative backtranslation, exploiting our model's ability to produce high quality translations through blind decoding. This yields us up to 34.6 BLEU on English-Portuguese, attaining near parity with a model adapted on real bilingual data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
224,378
2403.13434
Advancing 6D Pose Estimation in Augmented Reality -- Overcoming Projection Ambiguity with Uncontrolled Imagery
This study addresses the challenge of accurate 6D pose estimation in Augmented Reality (AR), a critical component for seamlessly integrating virtual objects into real-world environments. Our research primarily addresses the difficulty of estimating 6D poses from uncontrolled RGB images, a common scenario in AR applications, which lacks metadata such as focal length. We propose a novel approach that strategically decomposes the estimation of z-axis translation and focal length, leveraging the neural-render and compare strategy inherent in the FocalPose architecture. This methodology not only streamlines the 6D pose estimation process but also significantly enhances the accuracy of 3D object overlaying in AR settings. Our experimental results demonstrate a marked improvement in 6D pose estimation accuracy, with promising applications in manufacturing and robotics. Here, the precise overlay of AR visualizations and the advancement of robotic vision systems stand to benefit substantially from our findings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
439,635
2203.08399
Privacy-preserving Online AutoML for Domain-Specific Face Detection
Despite the impressive progress of general face detection, the tuning of hyper-parameters and architectures is still critical for the performance of a domain-specific face detector. Though existing AutoML works can speedup such process, they either require tuning from scratch for a new scenario or do not consider data privacy. To scale up, we derive a new AutoML setting from a platform perspective. In such setting, new datasets sequentially arrive at the platform, where an architecture and hyper-parameter configuration is recommended to train the optimal face detector for each dataset. This, however, brings two major challenges: (1) how to predict the best configuration for any given dataset without touching their raw images due to the privacy concern? and (2) how to continuously improve the AutoML algorithm from previous tasks and offer a better warm-up for future ones? We introduce "HyperFD", a new privacy-preserving online AutoML framework for face detection. At its core part, a novel meta-feature representation of a dataset as well as its learning paradigm is proposed. Thanks to HyperFD, each local task (client) is able to effectively leverage the learning "experience" of previous tasks without uploading raw images to the platform; meanwhile, the meta-feature extractor is continuously learned to better trade off the bias and variance. Extensive experiments demonstrate the effectiveness and efficiency of our design.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
285,772
2201.02310
Generalized quantum similarity learning
The similarity between objects is significant in a broad range of areas. While similarity can be measured using off-the-shelf distance functions, they may fail to capture the inherent meaning of similarity, which tends to depend on the underlying data and task. Moreover, conventional distance functions limit the space of similarity measures to be symmetric and do not directly allow comparing objects from different spaces. We propose using quantum networks (GQSim) for learning task-dependent (a)symmetric similarity between data that need not have the same dimensionality. We analyze the properties of such similarity function analytically (for a simple case) and numerically (for a complex case) and show that these similarity measures can extract salient features of the data. We also demonstrate that the similarity measure derived using this technique is $(\epsilon,\gamma,\tau)$-good, resulting in theoretically guaranteed performance. Finally, we conclude by applying this technique for three relevant applications - Classification, Graph Completion, Generative modeling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
274,506
2003.06041
On Robustness Metrics for Learning STL Tasks
Signal temporal logic (STL) is a powerful tool for describing complex behaviors for dynamical systems. Among many approaches, the control problem for systems under STL task constraints is well suited for learning-based solutions, because STL is equipped with robustness metrics that quantify the satisfaction of task specifications and thus serve as useful rewards. In this work, we examine existing and potential robustness metrics specifically from the perspective of how they can aid such learning algorithms. We show that various desirable properties restrict the form of potential metrics, and introduce a new one based on the results. The effectiveness of this new robustness metric for accelerating the learning procedure is demonstrated through an insightful case study.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
168,006
1609.03426
Multi-Label Learning with Provable Guarantee
Here we study the problem of learning labels for large text corpora where each text can be assigned a variable number of labels. The problem might seem trivial when the label dimensionality is small and can be easily solved using a series of one-vs-all classifiers. However, as the label dimensionality increases to several thousand, the parameter space becomes extremely large, and it is no longer possible to use the one-vs-all technique. Here we propose a model based on the factorization of higher order moments of the words in the corpora, as well as the cross moment between the labels and the words for multi-label prediction. Our model provides guaranteed convergence bounds on the estimated parameters. Further, our model takes only three passes through the training dataset to extract the parameters, resulting in a highly scalable algorithm that can train on GBs of data consisting of millions of documents with hundreds of thousands of labels using a nominal resource of a single processor with 16GB RAM. Our model achieves 10x-15x speed-up on large-scale datasets while producing competitive performance in comparison with existing benchmark algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
60,876
1405.0647
Feature Selection On Boolean Symbolic Objects
With the boom in IT technology, the data sets used in applications are larger and larger and are described by a huge number of attributes; therefore, feature selection has become an important discipline in knowledge discovery and data mining, allowing experts to select the most relevant features to improve the quality of their studies and to reduce the processing time of their algorithms. In addition to that, the data used by applications have become richer. They are now represented by sets of complex and structured objects, instead of simple numerical matrices. The purpose of our algorithm is to do feature selection on rich data, called Boolean Symbolic Objects (BSOs). These objects are described by multivalued features. The BSOs are considered as higher level units which can model complex data, such as clusters of individuals, aggregated data or taxonomies. In this paper we will introduce a new feature selection criterion for BSOs, and we will explain how we improved its complexity.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
32,787
1812.10097
Trip Prediction by Leveraging Trip Histories from Neighboring Users
We propose a novel approach for trip prediction by analyzing user's trip histories. We augment users' (self-) trip histories by adding 'similar' trips from other users, which could be informative and useful for predicting future trips for a given user. This also helps to cope with noisy or sparse trip histories, where the self-history by itself does not provide a reliable prediction of future trips. We show empirical evidence that by enriching the users' trip histories with additional trips, one can improve the prediction error by 15%-40%, evaluated on multiple subsets of the Nancy2012 dataset. This real-world dataset is collected from public transportation ticket validations in the city of Nancy, France. Our prediction tool is a central component of a trip simulator system designed to analyze the functionality of public transportation in the city of Nancy.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
117,291
2311.13648
Evaluating Pretrained models for Deployable Lifelong Learning
We create a novel benchmark for evaluating a Deployable Lifelong Learning system for Visual Reinforcement Learning (RL) that is pretrained on a curated dataset, and propose a novel Scalable Lifelong Learning system capable of retaining knowledge from the previously learnt RL tasks. Our benchmark measures the efficacy of a deployable Lifelong Learning system that is evaluated on scalability, performance and resource utilization. Our proposed system, once pretrained on the dataset, can be deployed to perform continual learning on unseen tasks. Our proposed method consists of a Few Shot Class Incremental Learning (FSCIL) based task-mapper and an encoder/backbone trained entirely using the pretrain dataset. The policy parameters corresponding to the recognized task are then loaded to perform the task. We show that this system can be scaled to incorporate a large number of tasks due to the small memory footprint and fewer computational resources. We perform experiments on our DeLL (Deployment for Lifelong Learning) benchmark on the Atari games to determine the efficacy of the system.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
409,817
cmp-lg/9803003
Nymble: a High-Performance Learning Name-finder
This paper presents a statistical, learned approach to finding names and other non-recursive entities in text (as per the MUC-6 definition of the NE task), using a variant of the standard hidden Markov model. We present our justification for the problem and our approach, a detailed discussion of the model itself and finally the successful results of this new approach.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,857
2404.04308
Visual Knowledge in the Big Model Era: Retrospect and Prospect
Visual knowledge is a new form of knowledge representation that can encapsulate visual concepts and their relations in a succinct, comprehensive, and interpretable manner, with a deep root in cognitive psychology. As the knowledge about the visual world has been identified as an indispensable component of human cognition and intelligence, visual knowledge is poised to have a pivotal role in establishing machine intelligence. With the recent advance of Artificial Intelligence (AI) techniques, large AI models (or foundation models) have emerged as a potent tool capable of extracting versatile patterns from broad data as implicit knowledge, and abstracting them into an outrageous amount of numeric parameters. To pave the way for creating visual knowledge empowered AI machines in this coming wave, we present a timely review that investigates the origins and development of visual knowledge in the pre-big model era, and accentuates the opportunities and unique role of visual knowledge in the big model era.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
444,596
2009.09474
Persian Ezafe Recognition Using Transformers and Its Role in Part-Of-Speech Tagging
Ezafe is a grammatical particle in some Iranian languages that links two words together. Despite the important information it conveys, it is almost always not indicated in Persian script, resulting in mistakes in reading complex sentences and errors in natural language processing tasks. In this paper, we experiment with different machine learning methods to achieve state-of-the-art results in the task of ezafe recognition. Transformer-based methods, BERT and XLMRoBERTa, achieve the best results, the latter achieving 2.68% F1-score more than the previous state-of-the-art. We, moreover, use ezafe information to improve Persian part-of-speech tagging results and show that such information will not be useful to transformer-based methods and explain why that might be the case.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
196,595
2202.13772
Path-Aware Graph Attention for HD Maps in Motion Prediction
The success of motion prediction for autonomous driving relies on integration of information from the HD maps. As maps are naturally graph-structured, investigation on graph neural networks (GNNs) for encoding HD maps is burgeoning in recent years. However, unlike many other applications where GNNs have been straightforwardly deployed, HD maps are heterogeneous graphs where vertices (lanes) are connected by edges (lane-lane interaction relationships) of various nature, and most graph-based models are not designed to understand the variety of edge types which provide crucial cues for predicting how the agents would travel the lanes. To overcome this challenge, we propose Path-Aware Graph Attention, a novel attention architecture that infers the attention between two vertices by parsing the sequence of edges forming the paths that connect them. Our analysis illustrates how the proposed attention mechanism can facilitate learning in a didactic problem where existing graph networks like GCN struggle. By improving map encoding, the proposed model surpasses previous state of the art on the Argoverse Motion Forecasting dataset, and won the first place in the 2021 Argoverse Motion Forecasting Competition.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
282,741
2406.04203
Explicit Steady-State Approximations for Parallel Server Systems with Heterogeneous Servers
The weighted-workload-task-allocation (WWTA) load-balancing policy is known to be throughput optimal for parallel server systems with heterogeneous servers. This work concerns the heavy traffic approximation of steady-state performance for parallel server systems operating under WWTA policy. Under a relaxed complete-resource-pooling condition, we prove that WWTA achieves a "strong form" of state-space collapse in heavy traffic and that the scaled workload for each server converges in distribution to an exponential random variable, whose parameter is explicitly given by system primitives. Various steady-state performance measures are shown to be approximated from this exponential random variable. Instead of proving a stochastic process limit followed by an interchange of limits - a method that dominates the literature, our method works directly with a pre-limit basic adjoint relationship (BAR) that characterizes the stationary distribution of each pre-limit system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
461,561
2010.03802
TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling
We present a novel approach to the problem of text style transfer. Unlike previous approaches requiring style-labeled training data, our method makes use of readily-available unlabeled text by relying on the implicit connection in style between adjacent sentences, and uses labeled data only at inference time. We adapt T5 (Raffel et al., 2020), a strong pretrained text-to-text model, to extract a style vector from text and use it to condition the decoder to perform style transfer. As our label-free training results in a style vector space encoding many facets of style, we recast transfers as "targeted restyling" vector operations that adjust specific attributes of the input while preserving others. We demonstrate that training on unlabeled Amazon reviews data results in a model that is competitive on sentiment transfer, even compared to models trained fully on labeled data. Furthermore, applying our novel method to a diverse corpus of unlabeled web text results in a single model capable of transferring along multiple dimensions of style (dialect, emotiveness, formality, politeness, sentiment) despite no additional training and using only a handful of exemplars at inference time.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
199,536
1507.07833
Pseudo-Cores: The Terminus of an Intelligent Viral Meme's Trajectory
Comprehending the virality of a meme can help us in addressing the problems pertaining to disciplines like epidemiology and digital marketing. Therefore, it is not surprising that memetics remains a highly analyzed research topic ever since the mid-1990s. Some scientists choose to investigate the intrinsic contagiousness of a meme while others study the problem from a network theory perspective. In this paper, we revisit the idea of a core-periphery structure and apply it to understand the trajectory of a viral meme in a social network. We have proposed shell-based hill climbing algorithms to determine the path from a periphery shell (where the meme originates) to the core of the network. Further simulations and analysis of the network's behavioral characteristics helped us unearth specialized shells which we term Pseudo-Cores. These shells emulate the behavior of the core in terms of the size of the cascade triggered. In our experiments, we have considered two sets for the target nodes, one being the core and the other being any of the pseudo-cores. We compare our algorithms against already existing path finding algorithms and validate the better performance experimentally.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
45,508