Dataset schema:
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0–541k)
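Each row carries one boolean column per arXiv category, so a row's label set is simply the category columns that are True, in schema order. As a minimal sketch of how the flattened rows can be turned back into compact records (plain Python, no particular dataset library assumed; the helper name `active_labels` is our own):

```python
# Column order as given by the dataset schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the category columns that are True for this row, in schema order."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Example row shaped like the records below (values taken from this dump).
row = {
    "id": "2205.10841",
    "title": "Robust Modeling and Controls for Racing on the Edge",
    "cs.RO": True,
    "__index_level_0__": 297894,
}
print(active_labels(row))  # ['cs.RO']
```

The same helper works for multi-label rows (e.g. a row with both cs.AI and cs.LG set yields both labels).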

id: 1701.08288
title: Select Your Questions Wisely: For Entity Resolution With Crowd Errors
Crowdsourcing is becoming increasingly important in entity resolution tasks, such as clustering of images and natural language processing, due to their inherent complexity. Humans can provide more insightful information for these difficult problems than machine-based automatic techniques. Nevertheless, human workers can make mistakes due to lack of domain expertise or seriousness, ambiguity, or even malicious intent. The state-of-the-art literature usually deals with human errors via majority voting or by assigning a universal error rate over crowd workers. However, such approaches are incomplete, and often inconsistent, because the expertise of crowd workers is diverse and possibly biased, making it largely inappropriate to assume a universal error rate for all workers over all crowdsourcing tasks. We mitigate these challenges by considering an uncertain graph model, where the edge probability between two records A and B denotes the ratio of crowd workers who voted Yes on the question of whether A and B are the same entity. To reflect independence across different crowdsourcing tasks, we apply the well-established notion of possible worlds, and develop parameter-free algorithms both for next crowdsourcing and for entity resolution problems. In particular, under our framework the entity resolution problem becomes equivalent to finding the maximum-likelihood clustering, whereas for next crowdsourcing we identify the record pair that maximally increases the reliability of the maximum-likelihood clustering. Based on detailed empirical analysis over real-world datasets, we find that our proposed solution, PERC (probabilistic entity resolution with imperfect crowd), improves quality by 15% and reduces overall cost by 50% for the crowdsourcing-based entity resolution problem.
labels: cs.DB
__index_level_0__: 67,437

id: 2205.10841
title: Robust Modeling and Controls for Racing on the Edge
Race cars are routinely driven to the edge of their handling limits in dynamic scenarios well above 200 mph. Similar challenges are posed in autonomous racing, where a software stack, instead of a human driver, interacts within a multi-agent environment. For an Autonomous Racing Vehicle (ARV), operating at the edge of handling limits and acting safely in these dynamic environments is still an unsolved problem. In this paper, we present a baseline controls stack for an ARV capable of operating safely up to 140 mph. Additionally, limitations in the current approach are discussed to highlight the need for improved dynamics modeling and learning.
labels: cs.RO
__index_level_0__: 297,894

id: 2409.15383
title: Generalization in birdsong classification: impact of transfer learning methods and dataset characteristics
Animal sounds can be recognised automatically by machine learning, and this has an important role to play in biodiversity monitoring. Yet despite increasingly impressive capabilities, bioacoustic species classifiers still exhibit imbalanced performance across species and habitats, especially in complex soundscapes. In this study, we explore the effectiveness of transfer learning in large-scale bird sound classification across various conditions, including single- and multi-label scenarios, and across different model architectures such as CNNs and Transformers. Our experiments demonstrate that both fine-tuning and knowledge distillation yield strong performance, with cross-distillation proving particularly effective in improving in-domain performance on Xeno-canto data. However, when generalizing to soundscapes, shallow fine-tuning exhibits superior performance compared to knowledge distillation, highlighting its robustness and constrained nature. Our study further investigates how to use multi-species labels, in cases where these are present but incomplete. We advocate for more comprehensive labeling practices within the animal sound community, including annotating background species and providing temporal details, to enhance the training of robust bird sound classifiers. These findings provide insights into the optimal reuse of pretrained models for advancing automatic bioacoustic recognition.
labels: cs.SD, cs.LG
__index_level_0__: 490,905

id: 1008.4232
title: Online Learning in Case of Unbounded Losses Using the Follow Perturbed Leader Algorithm
In this paper, the sequential prediction problem with expert advice is considered for the case where the losses suffered by experts at each step cannot be bounded in advance. We present a modification of the Kalai and Vempala algorithm of following the perturbed leader, in which weights depend on the past losses of the experts. New notions of the volume and the scaled fluctuation of a game are introduced. We present a probabilistic algorithm protected from unrestrictedly large one-step losses. This algorithm has optimal performance when the scaled fluctuations of the one-step losses of the experts in the pool tend to zero.
labels: cs.LG
__index_level_0__: 7,366

id: 2303.05076
title: Pedestrian Attribute Editing for Gait Recognition and Anonymization
As a kind of biometrics, the gait information of pedestrians has attracted widespread attention from both industry and academia, since it can be acquired from long distances without the cooperation of targets. In recent literature, this line of research has brought exciting opportunities along with alarming challenges: on the positive side, gait recognition used for security applications such as suspect retrieval and safety checks is becoming more and more promising; on the negative side, the misuse of gait information may lead to privacy concerns, as lawbreakers can track subjects of interest using gait characteristics even under face-masked and clothes-changed scenarios. To handle this double-edged sword, we propose a gait attribute editing framework termed GaitEditor. It can perform various degrees of attribute edits on real gait sequences while maintaining visual authenticity, used respectively for gait data augmentation and de-identification, thereby adaptively enhancing or degrading gait recognition performance according to users' intentions. Experimentally, we conduct a comprehensive evaluation under both gait recognition and anonymization protocols on three widely used gait benchmarks. Numerous results illustrate that the adaptable utilization of GaitEditor efficiently improves gait recognition performance and generates vivid visualizations with de-identification to protect human privacy. To the best of our knowledge, GaitEditor is the first framework capable of editing multiple gait attributes while simultaneously benefiting gait recognition and gait anonymization. The source code of GaitEditor will be available at https://github.com/ShiqiYu/OpenGait.
labels: cs.CV
__index_level_0__: 350,327

id: 2402.00698
title: Time-Series Analysis Approach for Improving Energy Efficiency of a Fixed-Route Vessel in Short-Sea Shipping
Several approaches have been developed for improving ship energy efficiency, thereby reducing operating costs and ensuring compliance with climate change mitigation regulations. Many of these approaches depend heavily on measured data from onboard IoT devices, including operational and environmental information, as well as on external data sources for additional navigational data. In this paper, we develop a framework that implements time-series analysis techniques to optimize the vessel's speed profile and improve its energy efficiency. We present a case study involving real-world data from a passenger vessel, collected over a span of 15 months in the south of Sweden. The results indicate that the implemented models exhibit a range of outcomes and adaptability across different scenarios. The findings highlight the effectiveness of the time-series analysis approach for optimizing vessel voyages within the context of constrained landscapes, as often seen in short-sea shipping.
labels: cs.CE
__index_level_0__: 425,686

id: 2103.10621
title: Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement
Low-light image enhancement aims to improve an image's visibility while keeping its visual naturalness. Unlike existing methods, which tend to accomplish the relighting task directly while ignoring fidelity and naturalness recovery, we investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps. Inspired by the color image formulation (diffuse illumination color plus environment illumination color), we first estimate the degradation from low-light inputs to simulate the distortion of the environment illumination color, and then refine the content to recover the loss of the diffuse illumination color. To this end, we propose a novel Degradation-to-Refinement Generation Network (DRGN). Its distinctive features can be summarized as: 1) a novel two-step generation network for degradation learning and content refinement, which is not only superior to one-step methods but also capable of synthesizing sufficient paired samples to benefit model training; 2) a multi-resolution fusion network that represents the target information (degradation or contents) in a multi-scale cooperative manner, which is more effective for addressing the complex unmixing problems. Extensive experiments on both the enhancement task and the joint detection task have verified the effectiveness and efficiency of our proposed method, surpassing the SOTA by \textit{0.70dB on average and 3.18\% in mAP}, respectively. The code is available at \url{https://github.com/kuijiang0802/DRGN}.
labels: cs.CV
__index_level_0__: 225,515

id: 2401.05667
title: EsaCL: Efficient Continual Learning of Sparse Models
A key challenge in the continual learning setting is to efficiently learn a sequence of tasks without forgetting how to perform previously learned tasks. Many existing approaches to this problem work by either retraining the model on previous tasks or by expanding the model to accommodate new tasks. However, these approaches typically suffer from increased storage and computational requirements, a problem that is worsened in the case of sparse models due to the need for expensive re-training after sparsification. To address this challenge, we propose a new method for efficient continual learning of sparse models (EsaCL) that can automatically prune redundant parameters without adversely impacting the model's predictive power, and circumvent the need for retraining. We conduct a theoretical analysis of loss landscapes under parameter pruning, and design a directional pruning (SDP) strategy that is informed by the sharpness of the loss function with respect to the model parameters. SDP ensures minimal loss of predictive accuracy, accelerating the learning of sparse models at each stage. To accelerate the model update, we introduce an intelligent data selection (IDS) strategy that can identify critical instances for estimating the loss landscape, yielding substantially improved data efficiency. The results of our experiments show that EsaCL achieves performance that is competitive with state-of-the-art methods on three continual learning benchmarks, while using substantially less memory and computation.
labels: cs.AI, cs.LG
__index_level_0__: 420,868

id: 1912.12828
title: ICSTrace: A Malicious IP Traceback Model for Attacking Data of Industrial Control System
Considering that attacks against industrial control systems are mostly organized, premeditated actions, IP traceback is significant for the security of industrial control systems. Based on the infrastructure of the Internet, we have developed a novel malicious IP traceback model, ICSTrace, which does not require deploying any new services. The model extracts the function codes and their parameters from the attack data according to the format of the industrial control protocol, and employs a short-sequence probability method to transform the function codes and their parameters into a vector that characterizes the attack pattern of malicious IP addresses. Furthermore, a Partial Seeded K-Means algorithm is proposed for clustering these patterns, which helps in tracing the attacks back to an organization. ICSTrace is evaluated on attack data captured by large-scale deployed honeypots for industrial control systems, and the results demonstrate that ICSTrace is effective for malicious IP traceback in industrial control systems.
labels: cs.LG, cs.RO, cs.CR, Other
__index_level_0__: 158,941

id: 2412.04261
title: Aya Expanse: Combining Research Breakthroughs for a New Multilingual Frontier
We introduce the Aya Expanse model family, a new generation of 8B and 32B parameter multilingual language models, aiming to address the critical challenge of developing highly performant multilingual models that match or surpass the capabilities of monolingual models. By leveraging several years of research at Cohere For AI and Cohere, including advancements in data arbitrage, multilingual preference training, and model merging, Aya Expanse sets a new state-of-the-art in multilingual performance. Our evaluations on the Arena-Hard-Auto dataset, translated into 23 languages, demonstrate that Aya Expanse 8B and 32B outperform leading open-weight models in their respective parameter classes, including Gemma 2, Qwen 2.5, and Llama 3.1, achieving up to a 76.6% win-rate. Notably, Aya Expanse 32B outperforms Llama 3.1 70B, a model with twice as many parameters, achieving a 54.0% win-rate. In this short technical report, we present extended evaluation results for the Aya Expanse model family and release their open-weights, together with a new multilingual evaluation dataset m-ArenaHard.
labels: cs.CL
__index_level_0__: 514,330

id: 2012.04573
title: Estimation of the Mean Function of Functional Data via Deep Neural Networks
In this work, we propose a deep neural network method to perform nonparametric regression for functional data. The proposed estimators are based on sparsely connected deep neural networks with ReLU activation function. By properly choosing network architecture, our estimator achieves the optimal nonparametric convergence rate in empirical norm. Under certain circumstances such as trigonometric polynomial kernel and a sufficiently large sampling frequency, the convergence rate is even faster than root-$n$ rate. Through Monte Carlo simulation studies we examine the finite-sample performance of the proposed method. Finally, the proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease obtained from the Alzheimer Disease Neuroimaging Initiative database.
labels: cs.LG
__index_level_0__: 210,496

id: 2104.05508
title: Noether: The More Things Change, the More Stay the Same
Symmetries have proven to be important ingredients in the analysis of neural networks. So far their use has mostly been implicit or seemingly coincidental. We undertake a systematic study of the role that symmetry plays. In particular, we clarify how symmetry interacts with the learning algorithm. The key ingredient in our study is played by Noether's celebrated theorem which, informally speaking, states that symmetry leads to conserved quantities (e.g., conservation of energy or conservation of momentum). In the realm of neural networks under gradient descent, model symmetries imply restrictions on the gradient path. E.g., we show that symmetry of activation functions leads to boundedness of weight matrices, for the specific case of linear activations it leads to balance equations of consecutive layers, data augmentation leads to gradient paths that have "momentum"-type restrictions, and time symmetry leads to a version of the Neural Tangent Kernel. Symmetry alone does not specify the optimization path, but the more symmetries are contained in the model the more restrictions are imposed on the path. Since symmetry also implies over-parametrization, this in effect implies that some part of this over-parametrization is cancelled out by the existence of the conserved quantities. Symmetry can therefore be thought of as one further important tool in understanding the performance of neural networks under gradient descent.
labels: cs.LG
__index_level_0__: 229,755

id: 2001.02524
title: LTP: A New Active Learning Strategy for CRF-Based Named Entity Recognition
In recent years, deep learning has achieved great success in many natural language processing tasks, including named entity recognition. The shortcoming is that a large amount of manually annotated data is usually required. Previous studies have demonstrated that active learning can effectively reduce the cost of data annotation, but there is still plenty of room for improvement. In real applications, we found that existing uncertainty-based active learning strategies have two shortcomings. Firstly, these strategies prefer to choose long sequences, explicitly or implicitly, which increases the annotation burden on annotators. Secondly, some strategies need to invade the model and modify it to generate additional information for sample selection, which increases the workload of the developer and the training/prediction time of the model. In this paper, we first examine traditional active learning strategies in the specific case of BiLSTM-CRF, which has been widely used for named entity recognition, on several typical datasets. We then propose an uncertainty-based active learning strategy called Lowest Token Probability (LTP), which combines the input and output of the CRF to select informative instances. LTP is a simple and powerful strategy that does not favor long sequences and does not need to invade the model. We test LTP on multiple datasets, and the experiments show that LTP performs slightly better than traditional strategies, with markedly fewer annotated tokens, on both sentence-level accuracy and entity-level F1-score. The related code has been released at https://github.com/HIT-ICES/AL-NER.
labels: cs.CL
__index_level_0__: 159,758

id: 2405.10216
title: Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting
Low-Rank Adaptation (LoRA) is a widely used technique for fine-tuning large pre-trained or foundational models across different modalities and tasks. However, its application to time series data, particularly within foundational models, remains underexplored. This paper examines the impact of LoRA on contemporary time series foundational models: Lag-Llama, MOIRAI, and Chronos. We demonstrate LoRA's fine-tuning potential for forecasting the vital signs of sepsis patients in intensive care units (ICUs), emphasizing the models' adaptability to previously unseen, out-of-domain modalities. Integrating LoRA aims to enhance forecasting performance while reducing inefficiencies associated with fine-tuning large models on limited domain-specific data. Our experiments show that LoRA fine-tuning of time series foundational models significantly improves forecasting, achieving results comparable to state-of-the-art models trained from scratch on similar modalities. We conduct comprehensive ablation studies to demonstrate the trade-offs between the number of tunable parameters and forecasting performance and assess the impact of varying LoRA matrix ranks on model performance.
labels: cs.AI, cs.LG
__index_level_0__: 454,679

id: 2205.04538
title: A Realistic Cyclist Model for SUMO Based on the SimRa Dataset
Increasing the modal share of bicycle traffic to reduce carbon emissions, reduce urban car traffic, and to improve the health of citizens, requires a shift away from car-centric city planning. For this, traffic planners often rely on simulation tools such as SUMO which allow them to study the effects of construction changes before implementing them. Similarly, studies of vulnerable road users, here cyclists, also use such models to assess the performance of communication-based road traffic safety systems. The cyclist model in SUMO, however, is very imprecise as SUMO cyclists behave either like slow cars or fast pedestrians, thus, casting doubt on simulation results for bicycle traffic. In this paper, we analyze acceleration, velocity, and intersection left-turn behavior of cyclists in a large dataset of real world cycle tracks. We use the results to derive an improved cyclist model and implement it in SUMO.
labels: cs.SY, cs.MA
__index_level_0__: 295,666

id: 2105.03743
title: Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier against adversarial synonym substitutions. However, all existing certified defense methods assume that the defenders are informed of how the adversaries generate synonyms, which is not a realistic scenario. In this paper, we propose a certifiably robust defense method that randomly masks a certain proportion of the words in an input text, for which the above unrealistic assumption is no longer necessary. The proposed method can defend against not only word substitution-based attacks, but also character-level perturbations. We can certify the classifications of over 50% of texts as robust to any perturbation of 5 words on AGNEWS, and of 2 words on the SST2 dataset. The experimental results show that our randomized smoothing method significantly outperforms recently proposed defense methods across multiple datasets.
labels: cs.CL
__index_level_0__: 234,250

id: 2311.02421
title: Digital Twins for Human-Robot Collaboration: A Future Perspective
As collaborative robot (cobot) adoption grows across many sectors, so does the interest in integrating digital twins into human-robot collaboration (HRC). Digital twins (DT), virtual representations of physical systems and assets, can revolutionize human-robot collaboration by enabling real-time simulation, monitoring, and control. In this article, we present a review of the state of the art and our perspective on the future of DT in human-robot collaboration. We argue that DT will be crucial in increasing the efficiency and effectiveness of these systems, presenting compelling evidence and a concise vision of the future of DT in human-robot collaboration, as well as insights into the possible advantages and challenges associated with their integration.
labels: cs.RO
__index_level_0__: 405,437

id: 2402.12713
title: Are LLMs Rational Investors? A Study on Detecting and Reducing the Financial Bias in LLMs
Large Language Models (LLMs) are increasingly adopted in financial analysis for interpreting complex market data and trends. However, their use is challenged by intrinsic biases (e.g., risk-preference bias) and a superficial understanding of market intricacies, necessitating a thorough assessment of their financial insight. To address these issues, we introduce Financial Bias Indicators (FBI), a framework with components like Bias Unveiler, Bias Detective, Bias Tracker, and Bias Antidote to identify, detect, analyze, and eliminate irrational biases in LLMs. By combining behavioral finance principles with bias examination, we evaluate 23 leading LLMs and propose a de-biasing method based on financial causal knowledge. Results show varying degrees of financial irrationality among models, influenced by their design and training. Models trained specifically on financial datasets may exhibit more irrationality, and even larger financial language models (FinLLMs) can show more bias than smaller, general models. We utilize four prompt-based methods incorporating causal debiasing, effectively reducing financial biases in these models. This work enhances the understanding of LLMs' bias in financial applications, laying the foundation for developing more reliable and rational financial analysis tools.
labels: cs.CL
__index_level_0__: 430,960

id: 2406.06910
title: Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models
Simultaneous Machine Translation (SiMT) generates target translations while reading the source sentence. It relies on a policy to determine the optimal timing for reading sentences and generating translations. Existing SiMT methods generally adopt the traditional Transformer architecture, which concurrently determines the policy and generates translations. While they excel at determining policies, their translation performance is suboptimal. Conversely, Large Language Models (LLMs), trained on extensive corpora, possess superior generation capabilities, but it is difficult for them to acquire translation policy through the training methods of SiMT. Therefore, we introduce Agent-SiMT, a framework combining the strengths of LLMs and traditional SiMT methods. Agent-SiMT contains the policy-decision agent and the translation agent. The policy-decision agent is managed by a SiMT model, which determines the translation policy using partial source sentence and translation. The translation agent, leveraging an LLM, generates translation based on the partial source sentence. The two agents collaborate to accomplish SiMT. Experiments demonstrate that Agent-SiMT attains state-of-the-art performance.
labels: cs.CL
__index_level_0__: 462,819

id: 1902.08588
title: Towards Neural Mixture Recommender for Long Range Dependent User Sequences
Understanding temporal dynamics has proved to be highly valuable for accurate recommendation. Sequential recommenders have been successful in modeling the dynamics of users and items over time. However, while different model architectures excel at capturing various temporal ranges or dynamics, distinct application contexts require adapting to diverse behaviors. In this paper we examine how to build a model that can make use of different temporal ranges and dynamics depending on the request context. We begin with the analysis of an anonymized YouTube dataset comprising millions of user sequences. We quantify the degree of long-range dependence in these sequences and demonstrate that both short-term and long-term dependent behavioral patterns co-exist. We then propose a neural Multi-temporal-range Mixture Model (M3) as a tailored solution to deal with both short-term and long-term dependencies. Our approach employs a mixture of models, each with a different temporal range. These models are combined by a learned gating mechanism capable of exerting different model combinations given different contextual information. In empirical evaluations on a public dataset and our own anonymized YouTube dataset, M3 consistently outperforms state-of-the-art sequential recommendation methods.
labels: cs.IR, cs.LG
__index_level_0__: 122,224

id: 2008.10580
title: Classification of Noncoding RNA Elements Using Deep Convolutional Neural Networks
The paper proposes to employ deep convolutional neural networks (CNNs) to classify noncoding RNA (ncRNA) sequences. To this end, we first propose an efficient approach to convert the RNA sequences into images characterizing their base-pairing probability. As a result, classifying RNA sequences is converted to an image classification problem that can be efficiently solved by available CNN-based classification models. The paper also considers the folding potential of the ncRNAs in addition to their primary sequence. Based on the proposed approach, a benchmark image classification dataset is generated from the RFAM database of ncRNA sequences. In addition, three classical CNN models have been implemented and compared to demonstrate the superior performance and efficiency of the proposed approach. Extensive experimental results show the great potential of using deep learning approaches for RNA classification.
labels: cs.CV
__index_level_0__: 193,043

id: 1803.05815
title: OFDM-Autoencoder for End-to-End Learning of Communications Systems
We extend the idea of end-to-end learning of communications systems through deep neural network (NN)-based autoencoders to orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP). Our implementation has the same benefits as a conventional OFDM system, namely single-tap equalization and robustness against sampling synchronization errors, which turned out to be one of the major challenges in previous single-carrier implementations. This enables reliable communication over multipath channels and makes the communication scheme suitable for commodity hardware with imprecise oscillators. We show that the proposed scheme can be realized with state-of-the-art deep learning software libraries, as the transmitter and receiver consist solely of differentiable layers required for gradient-based training. We compare the performance of the autoencoder-based system against that of a state-of-the-art OFDM baseline over frequency-selective fading channels. Finally, the impact of a non-linear amplifier is investigated, and we show that the autoencoder inherently learns how to deal with such hardware impairments.
labels: cs.IT
__index_level_0__: 92,708

id: 2406.02047
title: Kinematic analysis of a parallel robot for minimally invasive surgery
The paper presents the kinematic modelling for the coupled motion of a 6-DOF surgical parallel robot, PARA-SILSROB, which guides a mobile platform carrying the surgical instruments, and of the actuators of the sub-modules which hold these tools. To increase the safety of the surgical procedure, a closed-form solution for the kinematic model is derived, and then the forward and inverse kinematic models for the mobile orientation platform are obtained. The kinematic models are used in numerical simulations for the reorientation of the endoscopic camera, which imposes an automated compensatory motion of the active instrument modules.
labels: cs.RO
__index_level_0__: 460,589

id: 2309.06677
title: SHARM: Segmented Head Anatomical Reference Models
Reliable segmentation of the anatomical tissues of the human head is a major step in several clinical applications such as brain mapping, surgery planning, and associated computational simulation studies. Segmentation is based on identifying different anatomical structures by labeling different tissues through medical imaging modalities. Segmentation of brain structures is commonly feasible, with several remarkable contributions made mainly from a medical perspective; however, non-brain tissues have received less interest due to their anatomical complexity and the difficulty of observing them using standard medical imaging protocols. The lack of whole-head segmentation methods and the unavailability of large segmented human head datasets limit variability studies, especially in the computational evaluation of electrical brain stimulation (neuromodulation), human protection from electromagnetic fields, and electroencephalography, where non-brain tissues are of great importance. To fill this gap, this study provides open-access Segmented Head Anatomical Reference Models (SHARM) consisting of 196 subjects. These models are segmented into 15 different tissues: skin, fat, muscle, skull cancellous bone, skull cortical bone, brain white matter, brain gray matter, cerebellum white matter, cerebellum gray matter, cerebrospinal fluid, dura, vitreous humor, lens, mucous tissue, and blood vessels. The segmented head models are generated from the open-access IXI MRI dataset using a convolutional neural network structure named ForkNet+. Results indicate a high consistency between the statistical characteristics of the different tissue distributions across age and real measurements. SHARM is expected to be a useful benchmark not only for electromagnetic dosimetry studies but also for different human head segmentation applications.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
391,501
cs/0406047
Self-organizing neural networks in classification and image recognition
Self-organizing neural networks are used for brick finding in the OPERA experiment. Self-organizing neural networks and wavelet analysis are also used for the recognition and extraction of car numbers from images.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
538,244
2205.05600
RLOP: RL Methods in Option Pricing from a Mathematical Perspective
In this work, we build two environments, namely the modified QLBS and RLOP models, from a mathematical perspective, which enables RL methods in option pricing through portfolio replication. We implement the environment specifications (the source code can be found at https://github.com/owen8877/RLOP), the learning algorithm, and agent parametrization by a neural network. The learned optimal hedging strategy is compared against the BS prediction. The effect of various factors is considered and studied based on how they affect the optimal price and position.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
295,980
2101.12369
Information Theoretic Limits of Exact Recovery in Sub-hypergraph Models for Community Detection
In this paper, we study the information theoretic bounds for exact recovery in sub-hypergraph models for community detection. We define a general model called the $m-$uniform sub-hypergraph stochastic block model ($m-$ShSBM). Under the $m-$ShSBM, we use Fano's inequality to identify the region of model parameters where any algorithm fails to exactly recover the planted communities with a large probability. We also identify the region where a Maximum Likelihood Estimation (MLE) algorithm succeeds to exactly recover the communities with high probability. Our bounds are tight and pertain to the community detection problems in various models such as the planted hypergraph stochastic block model, the planted densest sub-hypergraph model, and the planted multipartite hypergraph model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
217,543
1805.07784
Adaptive Recovery of Dictionary-sparse Signals using Binary Measurements
One-bit compressive sensing (CS) is an advanced version of sparse recovery in which the sparse signal of interest can be recovered from extremely quantized measurements. Namely, only the sign of each measurement is available to us. In many applications, the ground-truth signal is not sparse itself, but can be represented in a redundant dictionary. A strong line of research has addressed conventional CS in this signal model including its extension to one-bit measurements. However, one-bit CS suffers from the extremely large number of required measurements to achieve a predefined reconstruction error level. A common alternative to resolve this issue is to exploit adaptive schemes. Adaptive sampling acts on the acquired samples to trace the signal in an efficient way. In this work, we utilize an adaptive sampling strategy to recover dictionary-sparse signals from binary measurements. For this task, a multi-dimensional threshold is proposed to incorporate the previous signal estimates into the current sampling procedure. This strategy substantially reduces the required number of measurements for exact recovery. Our proof approach is based on the recent tools in high dimensional geometry in particular random hyperplane tessellation and Gaussian width. We show through rigorous and numerical analysis that the proposed algorithm considerably outperforms state of the art approaches. Further, our algorithm reaches an exponential error decay in terms of the number of quantized measurements.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
97,935
1908.00981
Situation-Aware Left-Turning Connected and Automated Vehicle Operation at Signalized Intersections
One challenging aspect of Connected and Automated Vehicle (CAV) operation in mixed traffic is the development of a situation-awareness module for CAVs. While operating on public roads, CAVs need to assess their surroundings, especially the intentions of non-CAVs. Generally, CAVs demonstrate a defensive driving behavior, and CAVs expect other non-autonomous entities on the road to follow the traffic rules or common driving behavior. However, the presence of aggressive human drivers in the surrounding environment, who may not follow traffic rules and behave abruptly, can lead to serious safety consequences. In this paper, we have addressed the CAV and non-CAV interaction by evaluating a situation-awareness module for left-turning CAV operations in an urban area. Existing literature does not consider the intent of the following vehicle for a CAV's left-turning movement, and existing CAV controllers do not assess the following non-CAVs' intents. Based on our simulation study, the situation-aware CAV controller module reduces up to 27% of the abrupt braking of the following non-CAVs for scenarios with different opposing through movements compared to the base scenario with the autonomous vehicle, which does not consider the following vehicle's intent. The analysis shows that the average travel time reductions for opposing through traffic volumes of 600, 800, and 1000 vehicles/hour/lane are 58%, 52%, and 62%, respectively, for the aggressive human driver following the CAV if the following vehicle's intent is considered by the CAV in making a left turn at an intersection.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
140,644
2405.20104
Object-centric Reconstruction and Tracking of Dynamic Unknown Objects using 3D Gaussian Splatting
Generalizable perception is one of the pillars of high-level autonomy in space robotics. Estimating the structure and motion of unknown objects in dynamic environments is fundamental for such autonomous systems. Traditionally, the solutions have relied on prior knowledge of target objects, multiple disparate representations, or low-fidelity outputs unsuitable for robotic operations. This work proposes a novel approach to incrementally reconstruct and track a dynamic unknown object using a unified representation -- a set of 3D Gaussian blobs that describe its geometry and appearance. The differentiable 3D Gaussian Splatting framework is adapted to a dynamic object-centric setting. The input to the pipeline is a sequential set of RGB-D images. 3D reconstruction and 6-DoF pose tracking tasks are tackled using first-order gradient-based optimization. The formulation is simple, requires no pre-training, assumes no prior knowledge of the object or its motion, and is suitable for online applications. The proposed approach is validated on a dataset of 10 unknown spacecraft of diverse geometry and texture under arbitrary relative motion. The experiments demonstrate successful 3D reconstruction and accurate 6-DoF tracking of the target object in proximity operations over a short to medium duration. The causes of tracking drift are discussed and potential solutions are outlined.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
459,199
2410.16240
Nonlinear Magnetics Model for Permanent Magnet Synchronous Machines Capturing Saturation and Temperature Effects
This paper proposes a nonlinear magnetics model for Permanent Magnet Synchronous Machines (PMSMs) that accurately captures the effects of magnetic saturation in the machine iron and variations in rotor temperature on the permanent magnet excitation. The proposed model considers the permanent magnet as a current source rather than the more commonly used flux-linkage source. A comparison of the two modelling approaches is conducted using Finite Element Analysis (FEA) for different machine designs as well as experimental validation, where it is shown that the proposed model has substantially better accuracy. The proposed model decouples magnetic saturation and rotor temperature effects in the current/flux-linkage relationship, allowing for adaptive estimation of the PM excitation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
500,941
2004.02767
Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio
Automatically designing computationally efficient neural networks has received much attention in recent years. Existing approaches either utilize network pruning or leverage network architecture search methods. This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs, so that under each network configuration, one can estimate the FLOPs utilization ratio (FUR) for each layer and use it to determine whether to increase or decrease the number of channels on the layer. Note that FUR, like the gradient of a non-linear function, is accurate only in a small neighborhood of the current network. Hence, we design an iterative mechanism so that the initial network undergoes a number of steps, each of which has a small `adjusting rate' to control the changes to the network. The computational overhead of the entire search process is reasonable, i.e., comparable to that of re-training the final model from scratch. Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach, which consistently outperforms the pruning counterpart. The code is available at https://github.com/danczs/NetworkAdjustment.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
171,336
2308.14525
Semi-Supervised Learning for Visual Bird's Eye View Semantic Segmentation
Visual bird's eye view (BEV) semantic segmentation helps autonomous vehicles understand the surrounding environment only from images, including static elements (e.g., roads) and dynamic elements (e.g., vehicles, pedestrians). However, the high cost of annotation procedures of fully-supervised methods limits the capability of visual BEV semantic segmentation, which usually needs HD maps, 3D object bounding boxes, and camera extrinsic matrices. In this paper, we present a novel semi-supervised framework for visual BEV semantic segmentation to boost performance by exploiting unlabeled images during the training. A consistency loss that makes full use of unlabeled data is then proposed to constrain the model on not only semantic prediction but also the BEV feature. Furthermore, we propose a novel and effective data augmentation method named conjoint rotation which reasonably augments the dataset while maintaining the geometric relationship between the front-view images and the BEV semantic segmentation. Extensive experiments on the nuScenes and Argoverse datasets show that our semi-supervised framework can effectively improve prediction accuracy. To the best of our knowledge, this is the first work that explores improving visual BEV semantic segmentation performance using unlabeled data. The code is available at https://github.com/Junyu-Z/Semi-BEVseg
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
388,363
2205.11460
Graph-theoretical approach to robust 3D normal extraction of LiDAR data
Low-dimensional primitive feature extraction from LiDAR point clouds (such as planes) forms the basis of the majority of LiDAR data processing tasks. A major challenge in LiDAR data analysis arises from the irregular nature of LiDAR data, which forces practitioners to either regularize the data using some form of gridding or utilize a triangular mesh such as a triangulated irregular network (TIN). While there have been a handful of applications using LiDAR data as a connected graph, a principled treatment utilizing a graph-theoretical approach for LiDAR data modelling is still lacking. In this paper, we try to bridge this gap by utilizing a graph-theoretical approach for normal estimation from LiDAR point clouds. We formulate the normal estimation problem in an optimization framework, where we find the corresponding normal vector for each LiDAR point by utilizing its nearest neighbors and simultaneously enforcing a graph smoothness assumption based on point samples. This is a non-linear constrained convex optimization problem which can then be solved using projected conjugate gradient descent to yield a unique solution. As an enhancement to our optimization problem, we also provide different weighted solutions based on the dot product of the normals and the Euclidean distance between the points. In order to assess the performance of our proposed normal extraction method and weighting strategies, we first provide a detailed analysis on repeated randomly generated datasets with four different noise levels and four different tuning parameters. Finally, we benchmark our proposed method against existing state-of-the-art approaches on a large-scale synthetic plane extraction dataset. The code for the proposed approach along with the simulations and benchmarking is available at https://github.com/arpan-kusari/graph-plane-extraction-simulation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
298,150
2405.20681
No Free Lunch Theorem for Privacy-Preserving LLM Inference
Individuals and businesses have benefited significantly from Large Language Models (LLMs) including PaLM, Gemini and ChatGPT in various ways. For example, LLMs enhance productivity, reduce costs, and enable us to focus on more valuable tasks. Furthermore, LLMs possess the capacity to sift through extensive datasets, uncover underlying patterns, and furnish critical insights that propel the frontiers of technology and science. However, LLMs also pose privacy concerns. Users' interactions with LLMs may expose their sensitive personal or company information. A lack of robust privacy safeguards and legal frameworks could permit the unwarranted intrusion or improper handling of individual data, thereby risking infringements of privacy and the theft of personal identities. To ensure privacy, it is essential to minimize the dependency between shared prompts and private information. Various randomization approaches have been proposed to protect prompts' privacy, but they may incur utility loss compared to unprotected LLM prompting. Therefore, it is essential to evaluate the balance between the risk of privacy leakage and loss of utility when conducting effective protection mechanisms. The current study develops a framework for inferring privacy-protected Large Language Models (LLMs) and lays down a solid theoretical basis for examining the interplay between privacy preservation and utility. The core insight is encapsulated within a theorem called the NFL (No-Free-Lunch) Theorem.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
459,469
2003.12944
Mutual Learning Network for Multi-Source Domain Adaptation
Early Unsupervised Domain Adaptation (UDA) methods have mostly assumed the setting of a single source domain, where all the labeled source data come from the same distribution. However, in practice the labeled data can come from multiple source domains with different distributions. In such scenarios, the single source domain adaptation methods can fail due to the existence of domain shifts across different source domains and multi-source domain adaptation methods need to be designed. In this paper, we propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA). Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network, while taking the pair of the combined multi-source domain and target domain to train a conditional adversarial adaptive network as the guidance network. The multiple branch networks are aligned with the guidance network to achieve mutual learning by enforcing JS-divergence regularization over their prediction probability distributions on the corresponding target data. We conduct extensive experiments on multiple multi-source domain adaptation benchmark datasets. The results show the proposed ML-MSDA method outperforms the comparison methods and achieves the state-of-the-art performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
170,051
2110.06339
Natural Computational Architectures for Cognitive Info-Communication
A recent comprehensive overview of 40 years of research in cognitive architectures (Kotseruba and Tsotsos 2020) evaluates modelling of the core cognitive abilities in humans, but only marginally addresses biologically plausible approaches based on natural computation. This mini review presents a set of perspectives and approaches which have shaped the development of biologically inspired computational models in the recent past that can lead to the development of biologically more realistic cognitive architectures. For describing the continuum of natural cognitive architectures, from basal cellular to human-level cognition, we use an evolutionary info-computational framework, where natural/ physical/ morphological computation leads to the evolution of increasingly complex cognitive systems. Forty years ago, when the first cognitive architectures were proposed, the understanding of cognition, embodiment and evolution was different. So was the state of the art of information physics, bioinformatics, information chemistry, computational neuroscience, complexity theory, self-organization, theory of evolution, information and computation. Novel developments support a constructive interdisciplinary framework for cognitive architectures in the context of computing nature, where interactions between constituents at different levels of organization lead to complexification of agency and increased cognitive capacities. We identify several important research questions for further investigation that can increase understanding of cognition in nature and inspire new developments of cognitive technologies. Recently, basal cell cognition has attracted a lot of interest for its possible applications in medicine, new computing technologies, as well as micro- and nanorobotics.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
260,571
2303.08896
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
351,813
2001.09249
TiFL: A Tier-based Federated Learning System
Federated Learning (FL) enables learning a shared model across many clients without violating the privacy requirements. One of the key attributes in FL is the heterogeneity that exists in both resource and data due to the differences in computation and communication capacity, as well as the quantity and content of data among different clients. We conduct a case study to show that heterogeneity in resource and data has a significant impact on training time and model accuracy in conventional FL systems. To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity. To further tame the heterogeneity caused by non-IID (Independent and Identical Distribution) data and resources, TiFL employs an adaptive tier selection approach to update the tiering on-the-fly based on the observed training performance and accuracy over time. We prototype TiFL in an FL testbed following Google's FL architecture and evaluate it using popular benchmarks and the state-of-the-art FL benchmark LEAF. Experimental evaluation shows that TiFL outperforms the conventional FL in various heterogeneous conditions. With the proposed adaptive tier selection policy, we demonstrate that TiFL achieves much faster training performance while keeping the same (and in some cases - better) test accuracy across the board.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
161,515
1501.07422
Pairwise Rotation Hashing for High-dimensional Features
Binary Hashing is widely used for effective approximate nearest neighbors search. Even though various binary hashing methods have been proposed, very few methods are feasible for the extremely high-dimensional features often used in visual tasks today. We propose a novel highly sparse linear hashing method based on pairwise rotations. The encoding cost of the proposed algorithm is $\mathrm{O}(n \log n)$ for n-dimensional features, whereas that of the existing state-of-the-art method is typically $\mathrm{O}(n^2)$. The proposed method is also remarkably faster in the learning phase. Along with the efficiency, the retrieval accuracy is comparable to or slightly outperforming the state-of-the-art. Pairwise rotations used in our method are formulated from an analytical study of the trade-off relationship between quantization error and entropy of binary codes. Although these hashing criteria are widely used in previous research, their analytical behavior is rarely studied. All building blocks of our algorithm are based on the analytical solution, and it thus provides a fairly simple and efficient procedure.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
39,707
2409.02841
Historical German Text Normalization Using Type- and Token-Based Language Modeling
Historic variations in spelling pose a challenge for full-text search or natural language processing on historical digitized texts. To minimize the gap between the historic orthography and contemporary spelling, an automatic orthographic normalization of the historical source material is usually pursued. This report proposes a normalization system for German literary texts from c. 1700-1900, trained on a parallel corpus. The proposed system makes use of a machine learning approach using Transformer language models, combining an encoder-decoder model to normalize individual word types, and a pre-trained causal language model to adjust these normalizations within their context. An extensive evaluation shows that the proposed system provides state-of-the-art accuracy, comparable with a much larger fully end-to-end sentence-based normalization system, fine-tuning a pre-trained Transformer large language model. However, the normalization of historical text remains a challenge due to difficulties for models to generalize, and the lack of extensive high-quality parallel data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
485,847
2308.05751
Artificial-Intelligence-Based Design for Circuit Parameters of Power Converters
Parameter design is significant in ensuring a satisfactory holistic performance of power converters. Generally, circuit parameter design for power converters consists of two processes: an analysis and deduction process and an optimization process. The existing approaches for parameter design consist of two types: the traditional approach and the computer-aided optimization (CAO) approach. In the traditional approaches, heavy human dependence is required. Even though the emerging CAO approaches automate the optimization process, they still require a manual analysis and deduction process. To mitigate human dependence for the sake of high accuracy and easy implementation, an artificial-intelligence-based design (AI-D) approach is proposed in this article for the parameter design of power converters. In the proposed AI-D approach, to achieve automation in the analysis and deduction process, simulation tools and a batch-normalization neural network (BN-NN) are adopted to build data-driven models for the optimization objectives and design constraints. Besides, to achieve automation in the optimization process, a genetic algorithm is used to search for optimal design results. The proposed AI-D approach is validated in the circuit parameter design of the synchronous buck converter in the 48 to 12 V accessory-load power supply system in electric vehicles. The design case of an efficiency-optimal synchronous buck converter with constraints in volume, voltage ripple, and current ripple is provided. At the end of this article, the feasibility and accuracy of the proposed AI-D approach are validated by hardware experiments.
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
384,899
1906.06765
Defending Against Adversarial Attacks Using Random Forests
As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world. Unfortunately, some recent studies show that adversarial examples, which are hard to distinguish from real examples, can easily fool DNNs and manipulate their predictions. Upon observing that adversarial examples are mostly generated by gradient-based methods, in this paper, we first propose to use a simple yet very effective non-differentiable hybrid model that combines DNNs and random forests, rather than hide gradients from attackers, to defend against the attacks. Our experiments show that our model can successfully and completely defend against white-box attacks, has lower transferability, and is quite resistant to three representative types of black-box attacks; while at the same time, our model achieves similar classification accuracy as the original DNNs. Finally, we investigate and suggest a criterion to define where to grow random forests in DNNs.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
135,403
2307.04571
Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
Offline reinforcement learning (RL), a technology that offline learns a policy from logged data without the need to interact with online environments, has become a favorable choice in decision-making processes like interactive recommendation. Offline RL faces the value overestimation problem. To address it, existing methods employ conservatism, e.g., by constraining the learned policy to be close to behavior policies or punishing the rarely visited state-action pairs. However, when applying such offline RL to recommendation, it will cause a severe Matthew effect, i.e., the rich get richer and the poor get poorer, by promoting popular items or categories while suppressing the less popular ones. It is a notorious issue that needs to be addressed in practical recommender systems. In this paper, we aim to alleviate the Matthew effect in offline RL-based recommendation. Through theoretical analyses, we find that the conservatism of existing methods fails in pursuing users' long-term satisfaction. It inspires us to add a penalty term to relax the pessimism on states with high entropy of the logging policy and indirectly penalizes actions leading to less diverse states. This leads to the main technical contribution of the work: Debiased model-based Offline RL (DORL) method. Experiments show that DORL not only captures user interests well but also alleviates the Matthew effect. The implementation is available via https://github.com/chongminggao/DORL-codes.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
378,451
2410.22007
Survey of Load-Altering Attacks Against Power Grids: Attack Impact, Detection and Mitigation
The growing penetration of IoT devices in power grids, despite its benefits, raises cyber security concerns. In particular, load-altering attacks (LAAs) targeting high-wattage IoT-controllable load devices pose serious risks to grid stability and disrupt electricity markets. This paper provides a comprehensive review of LAAs, highlighting the threat model, analyzing its impact on transmission and distribution networks, and the electricity market dynamics. We also review the detection and localization schemes for LAAs that employ either model-based or data-driven approaches, with some hybrid methods combining the strengths of both. Additionally, mitigation techniques are examined, focusing on both preventive measures, designed to thwart attack execution, and reactive methods, which aim to optimize responses to ongoing attacks. We look into the application of each study and highlight potential streams for future research in this field.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
503,465
2204.08261
Visio-Linguistic Brain Encoding
Enabling effective brain-computer interfaces requires understanding how the human brain encodes stimuli across modalities such as visual, language (or text), etc. Brain encoding aims at constructing fMRI brain activity given a stimulus. There exists a plethora of neural encoding models which study brain encoding for single mode stimuli: visual (pretrained CNNs) or text (pretrained language models). Few recent papers have also obtained separate visual and text representation models and performed late-fusion using simple heuristics. However, previous work has failed to explore: (a) the effectiveness of image Transformer models for encoding visual stimuli, and (b) co-attentive multi-modal modeling for visual and text reasoning. In this paper, we systematically explore the efficacy of image Transformers (ViT, DEiT, and BEiT) and multi-modal Transformers (VisualBERT, LXMERT, and CLIP) for brain encoding. Extensive experiments on two popular datasets, BOLD5000 and Pereira, provide the following insights. (1) To the best of our knowledge, we are the first to investigate the effectiveness of image and multi-modal Transformers for brain encoding. (2) We find that VisualBERT, a multi-modal Transformer, significantly outperforms previously proposed single-mode CNNs, image Transformers as well as other previously proposed multi-modal models, thereby establishing new state-of-the-art. The supremacy of visio-linguistic models raises the question of whether the responses elicited in the visual regions are affected implicitly by linguistic processing even when passively viewing images. Future fMRI tasks can verify this computational insight in an appropriate experimental setting.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
292,021
2208.05399
Towards Autonomous Atlas-based Ultrasound Acquisitions in Presence of Articulated Motion
Robotic ultrasound (US) imaging aims at overcoming some of the limitations of free-hand US examinations, e.g. difficulty in guaranteeing intra- and inter-operator repeatability. However, due to anatomical and physiological variations between patients and relative movement of anatomical substructures, it is challenging to robustly generate optimal trajectories to examine the anatomies of interest, in particular, when they comprise articulated joints. To address this challenge, this paper proposes a vision-based approach allowing autonomous robotic US limb scanning. To this end, an atlas MRI template of a human arm with annotated vascular structures is used to generate trajectories and register and project them onto patients' skin surfaces for robotic US acquisition. To effectively segment and accurately reconstruct the targeted 3D vessel, we make use of spatial continuity in consecutive US frames by incorporating channel attention modules into a U-Net-type neural network. The automatic trajectory generation method is evaluated on six volunteers with various articulated joint angles. In all cases, the system can successfully acquire the planned vascular structure on volunteers' limbs. For one volunteer the MRI scan was also available, which allows the evaluation of the average radius of the scanned artery from US images, resulting in a radius estimation ($1.2\pm0.05~mm$) comparable to the MRI ground truth ($1.2\pm0.04~mm$).
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
312,397
2008.02069
Data Cleansing with Contrastive Learning for Vocal Note Event Annotations
Data cleansing is a well studied strategy for cleaning erroneous labels in datasets, which has not yet been widely adopted in Music Information Retrieval. Previously proposed data cleansing models do not consider structured (e.g. time varying) labels, such as those common to music data. We propose a novel data cleansing model for time-varying, structured labels which exploits the local structure of the labels, and demonstrate its usefulness for vocal note event annotations in music. Our model is trained in a contrastive learning manner by automatically contrasting likely correct label pairs against local deformations of them. We demonstrate that the accuracy of a transcription model improves greatly when trained using our proposed strategy compared with the accuracy when trained using the original dataset. Additionally we use our model to estimate the annotation error rates in the DALI dataset, and highlight other potential uses for this type of model.
false
false
true
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
190,518
2305.15118
Fairness in Streaming Submodular Maximization over a Matroid Constraint
Streaming submodular maximization is a natural model for the task of selecting a representative subset from a large-scale dataset. If datapoints have sensitive attributes such as gender or race, it becomes important to enforce fairness to avoid bias and discrimination. This has spurred significant interest in developing fair machine learning algorithms. Recently, such algorithms have been developed for monotone submodular maximization under a cardinality constraint. In this paper, we study the natural generalization of this problem to a matroid constraint. We give streaming algorithms as well as impossibility results that provide trade-offs between efficiency, quality and fairness. We validate our findings empirically on a range of well-known real-world applications: exemplar-based clustering, movie recommendation, and maximum coverage in social networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
true
367,464
2010.05013
An Encoder-Decoder CNN for Hair Removal in Dermoscopic Images
The process of removing occluding hair has a relevant role in the early and accurate diagnosis of skin cancer. It consists of detecting hairs and restoring the texture below them, which is sporadically occluded. In this work, we present a model based on convolutional neural networks for hair removal in dermoscopic images. During the network's training, we use a combined loss function to improve the restoration ability of the proposed model. In order to train the CNN and to quantitatively validate its performance, we simulate the presence of skin hair in hairless images extracted from publicly known datasets such as the PH2, dermquest, dermis, EDRA2002, and the ISIC Data Archive. As far as we know, there is no other hair removal method based on deep learning. Thus, we compare our results with six state-of-the-art algorithms based on traditional computer vision techniques by means of similarity measures that compare the reference hairless image and the one with hair simulated. Finally, a statistical test is used to compare the methods. Both qualitative and quantitative results demonstrate the effectiveness of our network.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
199,962
1301.6675
A Temporal Bayesian Network for Diagnosis and Prediction
Diagnosis and prediction in some domains, like medical and industrial diagnosis, require a representation that combines uncertainty management and temporal reasoning. Based on the fact that in many cases there are few state changes in the temporal range of interest, we propose a novel representation called Temporal Nodes Bayesian Networks (TNBN). In a TNBN each node represents an event or state change of a variable, and an arc corresponds to a causal-temporal relationship. The temporal intervals can differ in number and size for each temporal node, so this allows multiple granularity. Our approach is contrasted with a dynamic Bayesian network for a simple medical example. An empirical evaluation is presented for a more complex problem, a subsystem of a fossil power plant, in which this approach is used for fault diagnosis and prediction with good results.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
21,469
2403.06332
Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups
This paper explores the intricate relationship between capitalism, racial injustice, and artificial intelligence (AI), arguing that AI acts as a contemporary vehicle for age-old forms of exploitation. By linking historical patterns of racial and economic oppression with current AI practices, this study illustrates how modern technology perpetuates and deepens societal inequalities. It specifically examines how AI is implicated in the exploitation of marginalized communities through underpaid labor in the gig economy, the perpetuation of biases in algorithmic decision-making, and the reinforcement of systemic barriers that prevent these groups from benefiting equitably from technological advances. Furthermore, the paper discusses the role of AI in extending and intensifying the social, economic, and psychological burdens faced by these communities, highlighting the problematic use of AI in surveillance, law enforcement, and mental health contexts. The analysis concludes with a call for transformative changes in how AI is developed and deployed. Advocating for a reevaluation of the values driving AI innovation, the paper promotes an approach that integrates social justice and equity into the core of technological design and policy. This shift is crucial for ensuring that AI serves as a tool for societal improvement, fostering empowerment and healing rather than deepening existing divides.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
436,398
1904.04917
Novel Uncertainty Framework for Deep Learning Ensembles
Deep neural networks have become the default choice for many machine learning tasks such as classification and regression. Dropout, a method commonly used to improve the convergence of deep neural networks, generates an ensemble of thinned networks with extensive weight sharing. Recent studies show that dropout can be viewed as an approximate variational inference in Gaussian processes, and used as a practical tool to obtain uncertainty estimates of the network. We propose a novel statistical mechanics based framework to dropout and use this framework to propose a new generic algorithm that focuses on estimates of the variance of the loss as measured by the ensemble of thinned networks. Our approach can be applied to a wide range of deep neural network architectures and machine learning tasks. In classification, this algorithm allows a don't-know answer to be generated, which can increase the reliability of the classifier. Empirically we demonstrate state-of-the-art AUC results on publicly available benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
127,154
2010.04762
Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data
A growing body of work shows that models exploit annotation artifacts to achieve state-of-the-art performance on standard crowdsourced benchmarks---datasets collected from crowdworkers to create an evaluation task---while still failing on out-of-domain examples for the same task. Recent work has explored the use of counterfactually-augmented data---data built by minimally editing a set of seed examples to yield counterfactual labels---to augment training data associated with these benchmarks and build more robust classifiers that generalize better. However, Khashabi et al. (2020) find that this type of augmentation yields little benefit on reading comprehension tasks when controlling for dataset size and cost of collection. We build upon this work by using English natural language inference data to test model generalization and robustness and find that models trained on a counterfactually-augmented SNLI dataset do not generalize better than unaugmented datasets of similar size and that counterfactual augmentation can hurt performance, yielding models that are less robust to challenge examples. Counterfactual augmentation of natural language understanding data through standard crowdsourcing techniques does not appear to be an effective way of collecting training data and further innovation is required to make this general line of work viable.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
199,853
2403.00372
HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry for Enhanced 3D Text2Shape Generation
3D shape generation from text is a fundamental task in 3D representation learning. The text-shape pairs exhibit a hierarchical structure, where a general text like ``chair" covers all 3D shapes of the chair, while more detailed prompts refer to more specific shapes. Furthermore, both text and 3D shapes are inherently hierarchical structures. However, existing Text2Shape methods, such as SDFusion, do not exploit this structure. In this work, we propose HyperSDFusion, a dual-branch diffusion model that generates 3D shapes from a given text. Since hyperbolic space is suitable for handling hierarchical data, we propose to learn the hierarchical representations of text and 3D shapes in hyperbolic space. First, we introduce a hyperbolic text-image encoder to learn the sequential and multi-modal hierarchical features of text in hyperbolic space. In addition, we design a hyperbolic text-graph convolution module to learn the hierarchical features of text in hyperbolic space. In order to fully utilize these text features, we introduce a dual-branch structure to embed text features in 3D feature space. Finally, to endow the generated 3D shapes with a hierarchical structure, we devise a hyperbolic hierarchical loss. Our method is the first to explore the hyperbolic hierarchical representation for text-to-shape generation. Experimental results on the existing text-to-shape paired dataset, Text2Shape, achieved state-of-the-art results. We release our implementation under HyperSDFusion.github.io.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
433,958
2210.03123
On the Effectiveness of Hybrid Pooling in Mixup-Based Graph Learning for Language Processing
Graph neural network (GNN)-based graph learning has been popular in natural language and programming language processing, particularly in text and source code classification. Typically, GNNs are constructed by incorporating alternating layers which learn transformations of graph node features, along with graph pooling layers that use graph pooling operators (e.g., Max-pooling) to effectively reduce the number of nodes while preserving the semantic information of the graph. Recently, to enhance GNNs in graph learning tasks, Manifold-Mixup, a data augmentation technique that produces synthetic graph data by linearly mixing a pair of graph data and their labels, has been widely adopted. However, the performance of Manifold-Mixup can be highly affected by graph pooling operators, and few studies have been dedicated to uncovering such effects. To bridge this gap, we take an early step to explore how graph pooling operators affect the performance of Mixup-based graph learning. To that end, we conduct a comprehensive empirical study by applying Manifold-Mixup to a formal characterization of graph pooling based on 11 graph pooling operations (9 hybrid pooling operators, 2 non-hybrid pooling operators). The experimental results on both natural language datasets (Gossipcop, Politifact) and programming language datasets (JAVA250, Python800) demonstrate that hybrid pooling operators are more effective for Manifold-Mixup than the standard Max-pooling and the state-of-the-art graph multiset transformer (GMT) pooling, in terms of producing more accurate and robust GNN models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
321,911
2312.09424
Open Domain Knowledge Extraction for Knowledge Graphs
The quality of a knowledge graph directly impacts the quality of downstream applications (e.g. the number of answerable questions using the graph). One ongoing challenge when building a knowledge graph is to ensure completeness and freshness of the graph's entities and facts. In this paper, we introduce ODKE, a scalable and extensible framework that sources high-quality entities and facts from open web at scale. ODKE utilizes a wide range of extraction models and supports both streaming and batch processing at different latency. We reflect on the challenges and design decisions made and share lessons learned when building and deploying ODKE to grow an industry-scale open domain knowledge graph.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
415,710
2105.01998
Instance segmentation of fallen trees in aerial color infrared imagery using active multi-contour evolution with fully convolutional network-based intensity priors
In this paper, we introduce a framework for segmenting instances of a common object class by multiple active contour evolution over semantic segmentation maps of images obtained through fully convolutional networks. The contour evolution is cast as an energy minimization problem, where the aggregate energy functional incorporates a data fit term, an explicit shape model, and accounts for object overlap. Efficient solution neighborhood operators are proposed, enabling optimization through metaheuristics such as simulated annealing. We instantiate the proposed framework in the context of segmenting individual fallen stems from high-resolution aerial multispectral imagery. We validated our approach on 3 real-world scenes of varying complexity. The test plots were situated in regions of the Bavarian Forest National Park, Germany, which sustained a heavy bark beetle infestation. Evaluations were performed on both the polygon and line segment level, showing that the multi-contour segmentation can achieve up to 0.93 precision and 0.82 recall. An improvement of up to 7 percentage points (pp) in recall and 6 in precision compared to an iterative sample consensus line segment detection was achieved. Despite the simplicity of the applied shape parametrization, an explicit shape model incorporated into the energy function improved the results by up to 4 pp of recall. Finally, we show the importance of using a deep learning based semantic segmentation method as the basis for individual stem detection. Our method is a step towards increased accessibility of automatic fallen tree mapping, due to higher cost efficiency of aerial imagery acquisition compared to laser scanning. The precise fallen tree maps could be further used as a basis for plant and animal habitat modeling, studies on carbon sequestration as well as soil quality in forest ecosystems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
233,700
2005.00872
How deep the machine learning can be
Today we live in the age of artificial intelligence and machine learning; from small startups to HW or SW giants, everyone wants to build machine intelligence chips and applications. The task, however, is hard: not only because of the size of the problem, but because the technology one can utilize (and the paradigm it is based upon) strongly degrades the chances to succeed efficiently. Today single-processor performance has practically reached the limits the laws of nature enable. The only feasible way to achieve the needed high computing performance seems to be parallelizing many sequentially working units. The laws of (massively) parallelized computing, however, are different from those experienced in connection with assembling and utilizing systems comprising just a few single processors. As machine learning is mostly based on conventional computing (processors), we scrutinize the (known, but somewhat faded) laws of parallel computing, concerning AI. This paper attempts to review some of the caveats, especially concerning scaling the computing performance of AI solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
175,414
2406.04299
NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise
Graph Neural Networks (GNNs) exhibit strong potential in node classification task through a message-passing mechanism. However, their performance often hinges on high-quality node labels, which are challenging to obtain in real-world scenarios due to unreliable sources or adversarial attacks. Consequently, label noise is common in real-world graph data, negatively impacting GNNs by propagating incorrect information during training. To address this issue, the study of Graph Neural Networks under Label Noise (GLN) has recently gained traction. However, due to variations in dataset selection, data splitting, and preprocessing techniques, the community currently lacks a comprehensive benchmark, which impedes deeper understanding and further development of GLN. To fill this gap, we introduce NoisyGL in this paper, the first comprehensive benchmark for graph neural networks under label noise. NoisyGL enables fair comparisons and detailed analyses of GLN methods on noisy labeled graph data across various datasets, with unified experimental settings and interface. Our benchmark has uncovered several important insights that were missed in previous research, and we believe these findings will be highly beneficial for future studies. We hope our open-source benchmark library will foster further advancements in this field. The code of the benchmark can be found in https://github.com/eaglelab-zju/NoisyGL.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
461,610
2407.05155
Wi-Fi Beyond Communications: Experimental Evaluation of Respiration Monitoring and Motion Detection Using COTS Devices
Wi-Fi sensing has become an attractive option for non-invasive monitoring of human activities and vital signs. This paper explores the feasibility of using state-of-the-art commercial off-the-shelf (COTS) devices for Wi-Fi sensing applications, particularly respiration monitoring and motion detection. We utilize the Intel AX210 network interface card (NIC) to transmit Wi-Fi signals in both 2.4 GHz and 6 GHz frequency bands. Our experiments rely on channel frequency response (CFR) and received signal strength indicator (RSSI) data, which are processed using a moving average algorithm to extract human behavior patterns. The experimental results demonstrate the effectiveness of our approach in capturing and representing human respiration and motion patterns. Furthermore, we compare the performance of Wi-Fi sensing across different frequency bands, highlighting the advantages of using higher frequencies for improved sensitivity and clarity. Our findings showcase the practicality of using COTS devices for Wi-Fi sensing and lay the groundwork for the development of non-invasive, contactless sensing systems. These systems have potential applications in various fields, including healthcare, smart homes, and Metaverse.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
470,846
1701.08513
Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images
Predictive coding is attractive for compression of hyperspectral images onboard spacecraft in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter we propose a rate control algorithm and integrate it in a lossy extension to the CCSDS-123 lossless compression recommendation. The proposed rate control algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
67,482
2202.08187
Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey
This paper surveys recent work in the intersection of differential privacy (DP) and fairness. It reviews the conditions under which privacy and fairness may have aligned or contrasting goals, analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks, and describes available mitigation measures for the fairness issues arising in DP systems. The survey provides a unified understanding of the main challenges and potential risks arising when deploying privacy-preserving machine-learning or decisions-making tasks under a fairness lens.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
280,795
1903.03515
Learning $\textit{Ex Nihilo}$
This paper introduces, philosophically and to a degree formally, the novel concept of learning $\textit{ex nihilo}$, intended (obviously) to be analogous to the concept of creation $\textit{ex nihilo}$. Learning $\textit{ex nihilo}$ is an agent's learning "from nothing," by the suitable employment of schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a $\textit{cognitive calculus}$ (i.e., roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Learning $\textit{ex nihilo}$ is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning-$\textit{ex nihilo}$ frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners, with statistical/connectionist ML technology.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
123,755
2305.16338
Think Before You Act: Decision Transformers with Working Memory
Decision Transformer-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and computation. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model's performance on previous tasks. In contrast to LLMs' implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Inspired by this, we propose a working memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in Atari games and Meta-World object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
368,032
1812.01388
Bad practices in evaluation methodology relevant to class-imbalanced problems
For research to go in the right direction, it is essential to be able to compare and quantify the performance of different algorithms focused on the same problem. Choosing a suitable evaluation metric requires deep understanding of the pursued task along with all of its characteristics. We argue that in the case of applied machine learning, a proper evaluation metric is the basic building block that should be in the spotlight and put under thorough examination. Here, we address tasks with class imbalance, in which the class of interest is the one with a much lower number of samples. We encountered a non-negligible number of recent papers in which improper evaluation methods are used, borrowed mainly from the field of balanced problems. Such bad practices may heavily bias the results in favour of inappropriate algorithms and give false expectations of the state of the field.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
115,508
1910.03892
Fast Panoptic Segmentation Network
In this work, we present an end-to-end network for fast panoptic segmentation. This network, called Fast Panoptic Segmentation Network (FPSNet), does not require computationally costly instance mask predictions or merging heuristics. This is achieved by casting the panoptic task into a custom dense pixel-wise classification task, which assigns a class label or an instance id to each pixel. We evaluate FPSNet on the Cityscapes and Pascal VOC datasets, and find that FPSNet is faster than existing panoptic segmentation methods, while achieving better or similar panoptic segmentation performance. On the Cityscapes validation set, we achieve a Panoptic Quality score of 55.1%, at prediction times of 114 milliseconds for images with a resolution of 1024x2048 pixels. For lower resolutions of the Cityscapes dataset and for the Pascal VOC dataset, FPSNet runs at 22 and 35 frames per second, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
148,622
2402.11887
Generative Semi-supervised Graph Anomaly Detection
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes. Code will be made available at https://github.com/mala-lab/GGAD.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
430,623
1902.06843
Fusing Visual, Textual and Connectivity Clues for Studying Mental Health
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
false
false
false
true
false
false
false
false
true
false
false
false
false
true
false
false
false
false
121,856
2311.08265
On The Relationship Between Universal Adversarial Attacks And Sparse Representations
The prominent success of neural networks, mainly in computer vision tasks, is increasingly shadowed by their sensitivity to small, barely perceivable adversarial perturbations in image input. In this work, we aim at explaining this vulnerability through the framework of sparsity. We show the connection between adversarial attacks and sparse representations, with a focus on explaining the universality and transferability of adversarial examples in neural networks. To this end, we show that sparse coding algorithms, and the neural network-based learned iterative shrinkage thresholding algorithm (LISTA) among them, suffer from this sensitivity, and that common attacks on neural networks can be expressed as attacks on the sparse representation of the input image. The phenomenon that we observe holds true also when the network is agnostic to the sparse representation and dictionary, and thus can provide a possible explanation for the universality and transferability of adversarial attacks. The code is available at https://github.com/danawr/adversarial_attacks_and_sparse_representations.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
407,652
2412.18516
Generating Explanations for Autonomous Robots: a Systematic Review
Building trust between humans and robots has long interested the robotics community. Various studies have aimed to clarify the factors that influence the development of user trust. In Human-Robot Interaction (HRI) environments, a critical aspect of trust development is the robot's ability to make its behavior understandable. The concept of an eXplainable Autonomous Robot (XAR) addresses this requirement. However, giving a robot self-explanatory abilities is a complex task. Robot behavior includes multiple skills and diverse subsystems. This complexity led to research into a wide range of methods for generating explanations about robot behavior. This paper presents a systematic literature review that analyzes existing strategies for generating explanations in robots and studies the current XAR trends. Results indicate promising advancements in explainability systems. However, these systems are still unable to fully cover the complex behavior of autonomous robots. Furthermore, we identify a lack of consensus on the theoretical concept of explainability and the need for a robust methodology to assess explainability methods and tools.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
520,450
1610.04789
Bsmooth: Learning from user feedback to disambiguate query terms in interactive data retrieval
There is great interest in supporting imprecise queries (e.g., keyword search or natural language queries) over databases today. To support such queries, the database system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, data value distributions, natural language models, etc.). Often, systems will also have a user-interaction log available, which can serve as an extrinsic resource to supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such a learning problem in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
62,432
1801.01265
Improved Bounds on Lossless Source Coding and Guessing Moments via R\'enyi Measures
This paper provides upper and lower bounds on the optimal guessing moments of a random variable taking values on a finite set when side information may be available. These moments quantify the number of guesses required for correctly identifying the unknown object and, similarly to Arikan's bounds, they are expressed in terms of the Arimoto-R\'enyi conditional entropy. Although Arikan's bounds are asymptotically tight, the improvement of the bounds in this paper is significant in the non-asymptotic regime. Relationships between moments of the optimal guessing function and the MAP error probability are also established, characterizing the exact locus of their attainable values. The bounds on optimal guessing moments serve to improve non-asymptotic bounds on the cumulant generating function of the codeword lengths for fixed-to-variable optimal lossless source coding without prefix constraints. Non-asymptotic bounds on the reliability function of discrete memoryless sources are derived as well. Relying on these techniques, lower bounds on the cumulant generating function of the codeword lengths are derived, by means of the smooth R\'enyi entropy, for source codes that allow decoding errors.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
87,702
2111.07031
Improving the Otsu Thresholding Method of Global Binarization Using Ring Theory for Ultrasonographies of Congestive Heart Failure
Ring Theory states that a ring is an algebraic structure in which two binary operations, addition and multiplication, can be performed among the elements. Binarization is a method of image processing where values within pixels are reduced to a scale from zero to one, with zero representing the most absence of light and one representing the most presence of light. Currently, sonograms are implemented in scanning for congestive heart failure. However, the renowned Playboy Bunny symbol representing the ailment becomes increasingly difficult to isolate due to surrounding organs and lower quality image productions. This paper examines the Otsu thresholding method and incorporates new elements to account for different image features meant to better isolate congestive heart failure indicators in ultrasound images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
266,252
1802.05014
Distributional Term Set Expansion
This paper is a short empirical study of the performance of centrality and classification based iterative term set expansion methods for distributional semantic models. Iterative term set expansion is an interactive process using distributional semantics models where a user labels terms as belonging to some sought after term set, and a system uses this labeling to supply the user with new candidate terms to label, trying to maximize the number of positive examples found. While centrality based methods have a long history in term set expansion, we compare them to classification methods based on the Simple Margin method, an Active Learning approach to classification using Support Vector Machines. Examining the performance of various centrality and classification based methods for a variety of distributional models over five different term sets, we can show that active learning based methods consistently outperform centrality based methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
90,360
2106.02514
The Image Local Autoregressive Transformer
Recently, AutoRegressive (AR) models for whole-image generation empowered by transformers have achieved comparable or even better performance than Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model -- image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both the quantitative and qualitative results show the efficacy of our model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
238,899
1907.03736
LocationSpark: In-memory Distributed Spatial Query Processing and Optimization
Due to the ubiquity of spatial data applications and the large amounts of spatial data that these applications generate and process, there is a pressing need for scalable spatial query processing. In this paper, we present new techniques for spatial query processing and optimization in an in-memory and distributed setup to address scalability. More specifically, we introduce new techniques for handling query skew, which is common in practice, and optimize communication costs accordingly. We propose a distributed query scheduler that uses a new cost model to optimize the cost of spatial query processing. The scheduler generates query execution plans that minimize the effect of query skew. The query scheduler employs new spatial indexing techniques based on bitmap filters to forward queries to the appropriate local nodes. Each local computation node is responsible for optimizing and selecting its best local query execution plan based on the indexes and the nature of the spatial queries in that node. All the proposed spatial query processing and optimization techniques are prototyped inside Spark, a distributed memory-based computation system. The experimental study is based on real datasets and demonstrates that distributed spatial query processing can be enhanced by up to an order of magnitude over existing in-memory and distributed spatial systems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
137,920
1808.03604
Disease Progression Timeline Estimation for Alzheimer's Disease using Discriminative Event Based Modeling
Alzheimer's Disease (AD) is characterized by a cascade of biomarkers becoming abnormal, the pathophysiology of which is very complex and largely unknown. Event-based modeling (EBM) is a data-driven technique to estimate the sequence in which biomarkers for a disease become abnormal based on cross-sectional data. It can help in understanding the dynamics of disease progression and facilitate early diagnosis and prognosis. In this work we propose a novel discriminative approach to EBM, which is shown to be more accurate than existing state-of-the-art EBM methods. The method first estimates for each subject an approximate ordering of events. Subsequently, the central ordering over all subjects is estimated by fitting a generalized Mallows model to these approximate subject-specific orderings. We also introduce the concept of relative distance between events which helps in creating a disease progression timeline. Subsequently, we propose a method to stage subjects by placing them on the estimated disease progression timeline. We evaluated the proposed method on Alzheimer's Disease Neuroimaging Initiative (ADNI) data and compared the results with existing state-of-the-art EBM methods. We also performed extensive experiments on synthetic data simulating the progression of Alzheimer's disease. The event orderings obtained on ADNI data seem plausible and are in agreement with the current understanding of progression of AD. The proposed patient staging algorithm performed consistently better than that of state-of-the-art EBM methods. Event orderings obtained in simulation experiments were more accurate than those of other EBM methods and the estimated disease progression timeline was observed to correlate with the timeline of actual disease progression. The results of these experiments are encouraging and suggest that discriminative EBM is a promising approach to disease progression modeling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
104,963
2104.05353
Sparse Coding Frontend for Robust Neural Networks
Deep Neural Networks are known to be vulnerable to small, adversarially crafted, perturbations. The current most effective defense methods against these adversarial attacks are variants of adversarial training. In this paper, we introduce a radically different defense trained only on clean images: a sparse coding based frontend which significantly attenuates adversarial attacks before they reach the classifier. We evaluate our defense on the CIFAR-10 dataset under a wide range of attack types (including Linf, L2, and L1 bounded attacks), demonstrating its promise as a general-purpose approach for defense.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
229,694
2310.06457
Small-Signal Stability and SCR Enhancement of Offshore WPPs with Synchronous Condensers
Synchronous condensers (SCs) have been reported to improve the overall stability and short-circuit power of a power system. SCs are also being integrated into offshore wind power plants (WPPs) for the same reason. This paper investigates the effect of synchronous condensers on an offshore wind power plant with grid-following (GFL) and grid-forming (GFM) converter controls. Primarily, the effect of synchronous condensers can be two-fold: (1) overall stability enhancement of the WPP by providing reactive power support, (2) contribution to the effective short circuit ratio (SCR) of the WPP by fault current support. Therefore, this paper focuses on studies concerning these effects on an aggregated model of a WPP connected to the grid. To that end, a state-space model of the test system is developed for small-signal stability assessment and the synchronous condenser's effect on its stability. In addition, a mathematical explanation of SCR enhancement with synchronous condenser is provided and is verified with time-domain electromagnetic transient simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
398,590
1810.09003
Robust Receiver Design for Non-orthogonal Multiple Access
Non-orthogonal multiple access (NOMA) has been proposed for massive connectivity in future generations of wireless communications. A dominant NOMA scheme is based on power optimization, in which decoding of the target user is assumed to be perfect. In this work, rather than optimizing in the power domain, we aim to propose a robust receiver that can detect and decode the data streams of the target user via FEC code diversity. Compared with existing NOMA receivers, our novel receiver substantially improves the bit-error-rate (BER) performance, and BER flooring can even be eliminated by assigning proper interleaving patterns to different users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
110,958
2201.10620
Structural importance and evolution: an application to financial transaction networks
A fundamental problem in the study of networks is the identification of important nodes. This is typically achieved using centrality metrics, which rank nodes in terms of their position in the network. This approach works well for static networks, that do not change over time, but does not consider the dynamics of the network. Here we propose instead to measure the importance of a node based on how much a change to its strength will impact the global structure of the network, which we measure in terms of the spectrum of its adjacency matrix. We apply our method to the identification of important nodes in equity transaction networks, and we show that, while it can still be computed from a static network, our measure is a good predictor of nodes subsequently transacting. This implies that static representations of temporal networks can contain information about their dynamics.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
277,045
2408.01269
A General Framework to Boost 3D GS Initialization for Text-to-3D Generation by Lexical Richness
Text-to-3D content creation has recently received much attention, especially with the prevalence of 3D Gaussian Splatting. In general, GS-based methods comprise two key stages: initialization and rendering optimization. To achieve initialization, existing works directly apply random sphere initialization or 3D diffusion models, e.g., Point-E, to derive the initial shapes. However, such strategies suffer from two critical yet challenging problems: 1) the final shapes are still similar to the initial ones even after training; 2) shapes can be produced only from simple texts, e.g., "a dog", not for lexically richer texts, e.g., "a dog is sitting on the top of the airplane". To address these problems, this paper proposes a novel general framework to boost the 3D GS initialization for text-to-3D generation upon the lexical richness. Our key idea is to aggregate 3D Gaussians into spatially uniform voxels to represent complex shapes while enabling the spatial interaction among the 3D Gaussians and semantic interaction between Gaussians and texts. Specifically, we first construct a voxelized representation, where each voxel holds a 3D Gaussian with its position, scale, and rotation fixed while setting opacity as the sole factor to determine a position's occupancy. We then design an initialization network mainly consisting of two novel components: 1) Global Information Perception (GIP) block and 2) Gaussians-Text Fusion (GTF) block. Such a design enables each 3D Gaussian to assimilate the spatial information from other areas and semantic information from texts. Extensive experiments show the superiority of our framework of high-quality 3D GS initialization against the existing methods, e.g., Shap-E, by taking lexically simple, medium, and hard texts. Also, our framework can be seamlessly plugged into SoTA training frameworks, e.g., LucidDreamer, for semantically consistent text-to-3D generation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
478,164
2112.03154
VAE based Text Style Transfer with Pivot Words Enhancement Learning
Text Style Transfer (TST) aims to alter the underlying style of the source text to another specific style while keeping the same content. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method which utilizes Variational AutoEncoder (VAE) and external style embeddings to learn semantics and style distribution jointly. Additionally, we introduce pivot words learning, which is applied to learn decisive words for a specific style and thereby further improve the overall performance of the style transfer. The proposed VT-STOWER can be scaled to different TST scenarios given very limited and non-parallel training data with a novel and flexible style strength control mechanism. Experiments demonstrate that the VT-STOWER outperforms the state-of-the-art on sentiment, formality, and code-switching TST tasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
270,102
2207.07993
Relative Position Estimation in Multi-Agent Systems Using Attitude-Coupled Range Measurements
The ability to accurately estimate the position of robotic agents relative to one another, in possibly GPS-denied environments, is crucial to execute collaborative tasks. Inter-agent range measurements are available at a low cost, due to technologies such as ultra-wideband radio. However, the task of three-dimensional relative position estimation using range measurements in multi-agent systems suffers from unobservabilities. This letter presents a sufficient condition for the observability of the relative positions, and satisfies the condition using a simple framework with only range measurements, an accelerometer, a rate gyro, and a magnetometer. The framework has been tested in simulation and in experiments, where 40-50 cm positioning accuracy is achieved using inexpensive off-the-shelf hardware.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
308,408
cs/0606017
From semiotics of hypermedia to physics of semiosis: A view from system theory
Given that theoretical analysis and empirical validation is fundamental to any model, whether conceptual or formal, it is surprising that these two tools of scientific discovery are so often ignored in the contemporary studies of communication. In this paper, we pursued the ideas of a) correcting and expanding the modeling approaches of linguistics, which are otherwise inapplicable (more precisely, which should not but are widely applied), to the general case of hypermedia-based communication, and b) developing techniques for empirical validation of semiotic models, which are nowadays routinely used to explore (in fact, to conjecture about) internal mechanisms of complex systems, yet on a purely speculative basis. This study thus offers two experimentally tested substantive contributions: the formal representation of communication as the mutually-orienting behavior of coupled autonomous systems, and the mathematical interpretation of the semiosis of communication, which together offer a concrete and parsimonious understanding of diverse communication phenomena.
true
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
539,504
2007.01965
On the application of transfer learning in prognostics and health management
Advancements in sensing and computing technologies, the development of human and computer interaction frameworks, big data storage capabilities, and the emergence of cloud storage and cloud computing have resulted in an abundance of data in the modern industry. This data availability has encouraged researchers and industry practitioners to rely on data-based machine learning, especially deep learning, models for fault diagnostics and prognostics more than ever. These models provide unique advantages, however, their performance is heavily dependent on the training data and how well that data represents the test data. This issue mandates fine-tuning and even training the models from scratch when there is a slight change in operating conditions or equipment. Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application. In this paper, a unified definition for transfer learning and its different types is provided, Prognostics and Health Management (PHM) studies that have used transfer learning are reviewed in detail, and finally, a discussion on transfer learning application considerations and gaps is provided for improving the applicability of transfer learning in PHM.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
185,585
2308.04008
Coarse-to-Fine: Learning Compact Discriminative Representation for Single-Stage Image Retrieval
Image retrieval aims to find images from a database that are visually similar to the query image. Two-stage methods following the retrieve-and-rerank paradigm have achieved excellent performance, but their separate local and global modules are inefficient for real-world applications. To better trade off retrieval efficiency and accuracy, some approaches fuse global and local features into a joint representation to perform single-stage image retrieval. However, they are still challenging due to various situations to tackle, $e.g.$, background, occlusion and viewpoint. In this work, we design a Coarse-to-Fine framework to learn Compact Discriminative representation (CFCD) for end-to-end single-stage image retrieval, requiring only image-level labels. Specifically, we first design a novel adaptive softmax-based loss which dynamically tunes its scale and margin within each mini-batch and increases them progressively to strengthen supervision during training and intra-class compactness. Furthermore, we propose a mechanism which attentively selects prominent local descriptors and infuses fine-grained semantic relations into the global representation by a hard negative sampling strategy to optimize inter-class distinctiveness at a global scale. Extensive experimental results have demonstrated the effectiveness of our method, which achieves state-of-the-art single-stage image retrieval performance on benchmarks such as Revisited Oxford and Revisited Paris. Code is available at https://github.com/bassyess/CFCD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
384,249
2409.18313
Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation
There is no limit to how much a robot might explore and learn, but all of that knowledge needs to be searchable and actionable. Within language research, retrieval augmented generation (RAG) has become the workhorse of large-scale non-parametric knowledge; however, existing techniques do not directly transfer to the embodied domain, which is multimodal, where data is highly correlated, and perception requires abstraction. To address these challenges, we introduce Embodied-RAG, a framework that enhances the foundational model of an embodied agent with a non-parametric memory system capable of autonomously constructing hierarchical knowledge for both navigation and language generation. Embodied-RAG handles a full range of spatial and semantic resolutions across diverse environments and query types, whether for a specific object or a holistic description of ambiance. At its core, Embodied-RAG's memory is structured as a semantic forest, storing language descriptions at varying levels of detail. This hierarchical organization allows the system to efficiently generate context-sensitive outputs across different robotic platforms. We demonstrate that Embodied-RAG effectively bridges RAG to the robotics domain, successfully handling over 250 explanation and navigation queries across kilometer-level environments, highlighting its promise as a general-purpose non-parametric system for embodied agents.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
492,186
2409.15127
Boosting Healthcare LLMs Through Retrieved Context
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing, and yet, their factual inaccuracies and hallucinations limit their application, particularly in critical domains like healthcare. Context retrieval methods, by introducing relevant information as input, have emerged as a crucial approach for enhancing LLM factuality and reliability. This study explores the boundaries of context retrieval methods within the healthcare domain, optimizing their components and benchmarking their performance against open and closed alternatives. Our findings reveal how open LLMs, when augmented with an optimized retrieval system, can achieve performance comparable to the biggest private solutions on established healthcare benchmarks (multiple-choice question answering). Recognizing the lack of realism of including the possible answers within the question (a setup only found in medical exams), and after assessing a strong LLM performance degradation in the absence of those options, we extend the context retrieval system in that direction. In particular, we propose OpenMedPrompt, a pipeline that improves the generation of more reliable open-ended answers, moving this technology closer to practical application.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
490,766
1312.5486
Molecular communication networks with general molecular circuit receivers
In a molecular communication network, transmitters may encode information in the concentration or frequency of signalling molecules. When the signalling molecules reach the receivers, they react, via a set of chemical reactions or a molecular circuit, to produce output molecules. The count of output molecules over time is the output signal of the receiver. The aim of this paper is to investigate the impact of different reaction types on the information transmission capacity of molecular communication networks. We realise this aim by using a general molecular circuit model. We derive general expressions of mean receiver output, and signal and noise spectra. We use these expressions to investigate the information transmission capacities of a number of molecular circuits.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
29,239
2005.05874
Fair Resource Allocation in Optical Networks under Tidal Traffic
We propose an alpha-fair routing and spectrum allocation (RSA) framework for reconfigurable elastic optical networks under modeled tidal traffic, that is based on the maximization of the social welfare function parameterized by a scalar alpha (the inequality aversion parameter). The objective is to approximate an egalitarian spectrum allocation (SA) that maximizes the minimum possible SA over all connections contending for the network resources, shifting from the widely used utilitarian SA that merely maximizes the network efficiency. A set of existing metrics are examined (i.e., connection blocking, resource utilization, coefficient of variation (CV) of utilities), and a set of new measures are also introduced (i.e., improvement on connection over- (COP) and under-provisioning (CUP), CV of unserved traffic), allowing a network operator to derive and evaluate in advance a set of alpha-fair RSA solutions and select the one that best fits the performance requirements of both the individual connections and the overall network. We show that an egalitarian SA better utilizes the network resources by significantly improving both COP (up to 20%) and CUP (up to 80%), compared to the utilitarian allocation, while attaining zero blocking. Importantly, the CVs of utilities and unserved traffic indicate that a SA that is fairest with respect to the amount of utilities allocated to the connections does not imply that the SA is also fairest with respect to the achievable QoS of the connections, while an egalitarian SA better approximates a fairest QoS-based SA.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
176,855
2411.08892
Auto-assessment of assessment: A conceptual framework towards fulfilling the policy gaps in academic assessment practices
Education is being transformed by rapid advances in Artificial Intelligence (AI), including emerging Generative Artificial Intelligence (GAI). Such technology can significantly support academics and students by automating monotonous tasks and making personalised suggestions. However, despite the potential of the technology, there are significant concerns regarding AI misuse, particularly by students in assessments. There are two schools of thought: one advocates for a complete ban on it, while the other views it as a valuable educational tool, provided it is governed by a robust usage policy. This contradiction clearly indicates a major policy gap in academic practices, and new policies are required to uphold academic standards while enabling staff and students to benefit from technological advancements. We surveyed 117 academics from three countries (UK, UAE, and Iraq), and identified that most academics retain positive opinions regarding AI in education. For example, the majority of experienced academics do not favour complete bans, and they see the potential benefits of AI for students, teaching staff, and academic institutions. Importantly, academics specifically identified the particular benefits of AI for autonomous assessment (71.79% of respondents agreed). Therefore, for the first time, we propose a novel AI framework for autonomously evaluating students' work (e.g., reports, coursework, etc.) and automatically assigning grades based on their knowledge and in-depth understanding of the submitted content. The survey results further highlight a significant lack of awareness of modern AI-based tools (e.g., ChatGPT) among experienced academics, a gap that must be addressed to uphold educational standards.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
508,055
2403.15393
Detection of Opioid Users from Reddit Posts via an Attention-based Bidirectional Recurrent Neural Network
The opioid epidemic, referring to the growing hospitalizations and deaths because of overdose of opioid usage and addiction, has become a severe health problem in the United States. Many strategies have been developed by the federal and local governments and health communities to combat this crisis. Among them, improving our understanding of the epidemic through better health surveillance is one of the top priorities. In addition to direct testing, machine learning approaches may also allow us to detect opioid users by analyzing data from social media because many opioid users may choose not to do the tests but may share their experiences on social media anonymously. In this paper, we take advantage of recent advances in machine learning, collect and analyze user posts from the popular social network Reddit with the goal of identifying opioid users. Posts from more than 1,000 users who have posted on three sub-reddits over a period of one month have been collected. In addition to the ones that contain keywords such as opioid, opiate, or heroin, we have also collected posts that contain slang words for opioids such as black or chocolate. We apply an attention-based bidirectional long short-term memory (LSTM) model to identify opioid users. Experimental results show that the approaches significantly outperform competitive algorithms in terms of F1-score. Furthermore, the model allows us to extract the most informative words, such as opiate, opioid, and black, from posts via the attention layer, which provides more insight into how the machine learning algorithm works in distinguishing drug users from non-drug users.
false
false
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
440,529
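The attention layer mentioned in the abstract above — weighting each word's hidden state and reading off the largest weights as the "most informative words" — can be sketched in pure Python. This is a generic dot-product attention over toy vectors, not the paper's trained BiLSTM; the vectors and query are made-up assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(hidden_states, query):
    """Dot-product attention: score each hidden state against `query`,
    normalize to weights, and return the weighted context vector.
    The weights are what lets a model surface its most informative words."""
    scores = [sum(h_d * q_d for h_d, q_d in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(len(query))]
    return context, weights

# Toy example: 3 word representations and one query; the word whose vector
# is most aligned with the query receives the largest attention weight.
H = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.9]]
q = [1.0, 1.0]
ctx, w = attend(H, q)
```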
2007.14329
On the use of GNSS for Automatic Detection of Attenuating Environments
When different radio applications share the same spectrum, separation by attenuating material is one way to mitigate potential interference. The indoor restriction for WLAN devices in 5150-5350 MHz is an example of a regulatory measure that aims at having WLAN devices operate in an environment that provides sufficient attenuation to enable sharing with other services. In this paper we investigate whether an attenuating environment can be automatically detected without user interaction. Instead of detecting an indoor location, we look directly for an attenuating environment. The basic idea is that signals from global navigation satellite services (GNSS) can be received practically everywhere on earth where there is a view to the sky. Where these signals are attenuated, the receiving device is assumed to be in an attenuating environment. In order to characterize such an environment, we evaluate the detectable GNSS satellites and their carrier-to-noise density. Example measurements show that GNSS raw data can help to distinguish between low-attenuating locations and higher-attenuating locations. These measurements were conducted with the GNSS receiver of an off-the-shelf Android tablet in order to show the feasibility of the approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
189,374
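The detection idea in the abstract above — few detectable satellites or low carrier-to-noise density implies an attenuating environment — reduces to a simple heuristic. The thresholds and sample C/N0 values below are illustrative assumptions, not figures from the paper.

```python
def classify_environment(cn0_dbhz, min_sats=5, cn0_threshold=35.0):
    """Heuristic: few detectable satellites or a low mean carrier-to-noise
    density (C/N0, in dB-Hz) suggests an attenuating (e.g. indoor)
    environment. Both thresholds are illustrative, not from the paper."""
    if len(cn0_dbhz) < min_sats:
        return "attenuating"
    mean_cn0 = sum(cn0_dbhz) / len(cn0_dbhz)
    return "attenuating" if mean_cn0 < cn0_threshold else "open"

# Made-up readings: strong open-sky signals vs. weak through-the-roof signals.
outdoor = classify_environment([44.0, 41.5, 39.0, 45.2, 42.1, 38.7])
indoor = classify_environment([28.0, 25.5, 30.1, 22.4, 27.9])
```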
2110.07572
LAGr: Labeling Aligned Graphs for Improving Systematic Generalization in Semantic Parsing
Semantic parsing is the task of producing a structured meaning representation for natural language utterances or questions. Recent research has pointed out that the commonly-used sequence-to-sequence (seq2seq) semantic parsers struggle to generalize systematically, i.e. to handle examples that require recombining known knowledge in novel settings. In this work, we show that better systematic generalization can be achieved by producing the meaning representation (MR) directly as a graph and not as a sequence. To this end we propose LAGr, the Labeling Aligned Graphs algorithm that produces semantic parses by predicting node and edge labels for a complete multi-layer input-aligned graph. The strongly-supervised LAGr algorithm requires aligned graphs as inputs, whereas weakly-supervised LAGr infers alignments for originally unaligned target graphs using an approximate MAP inference procedure. On the COGS and CFQ compositional generalization benchmarks the strongly- and weakly-supervised LAGr algorithms achieve significant improvements upon the baseline seq2seq parsers.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
261,038
1904.05426
A Grounded Unsupervised Universal Part-of-Speech Tagger for Low-Resource Languages
Unsupervised part of speech (POS) tagging is often framed as a clustering problem, but practical taggers need to \textit{ground} their clusters as well. Grounding generally requires reference labeled data, a luxury a low-resource language might not have. In this work, we describe an approach for low-resource unsupervised POS tagging that yields fully grounded output and requires no labeled training data. We find the classic method of Brown et al. (1992) clusters well in our use case and employ a decipherment-based approach to grounding. This approach presumes a sequence of cluster IDs is a `ciphertext' and seeks a POS tag-to-cluster ID mapping that will reveal the POS sequence. We show intrinsically that, despite the difficulty of the task, we obtain reasonable performance across a variety of languages. We also show extrinsically that incorporating our POS tagger into a name tagger leads to state-of-the-art tagging performance in Sinhalese and Kinyarwanda, two languages with nearly no labeled POS data available. We further demonstrate our tagger's utility by incorporating it into a true `zero-resource' variant of the Malopa (Ammar et al., 2016) dependency parser model that removes the current reliance on multilingual resources and gold POS tags for new languages. Experiments show that including our tagger makes up much of the accuracy lost when gold POS tags are unavailable.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
127,313
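The decipherment view in the abstract above — a sequence of cluster IDs as "ciphertext", with a cluster-to-tag mapping to recover — can be illustrated with a far simpler stand-in: align clusters to tags by matching relative frequencies. This frequency-matching heuristic, the toy sequence, and the tag distribution are all assumptions for illustration, not the paper's MAP inference procedure.

```python
from collections import Counter

def frequency_decipher(cluster_seq, tag_freqs):
    """Toy decipherment: map each cluster ID to a POS tag by aligning the
    clusters and tags in order of decreasing frequency, then decode the
    sequence. A crude stand-in for a proper tag-to-cluster MAP search."""
    cluster_counts = Counter(cluster_seq)
    clusters_by_freq = [c for c, _ in cluster_counts.most_common()]
    tags_by_freq = sorted(tag_freqs, key=tag_freqs.get, reverse=True)
    mapping = dict(zip(clusters_by_freq, tags_by_freq))
    return [mapping[c] for c in cluster_seq]

seq = [0, 1, 0, 2, 0, 1]                         # 'ciphertext' of cluster IDs
freqs = {"NOUN": 0.5, "VERB": 0.3, "DET": 0.2}   # assumed target tag distribution
tags = frequency_decipher(seq, freqs)
```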
1911.12082
Topological Machine Learning for Multivariate Time Series
We develop a framework for analyzing multivariate time series using topological data analysis (TDA) methods. The proposed methodology involves converting the multivariate time series to point cloud data, calculating Wasserstein distances between the persistence diagrams and using the $k$-nearest neighbors algorithm ($k$-NN) for supervised machine learning. Two methods (symmetry-breaking and anchor points) are also introduced to enable TDA to better analyze data with heterogeneous features that are sensitive to translation, rotation, or choice of coordinates. We apply our methods to room occupancy detection based on 5 time-dependent variables (temperature, humidity, light, CO2 and humidity ratio). Experimental results show that topological methods are effective in predicting room occupancy during a time window. We also apply our methods to an Activity Recognition dataset and obtain good results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
155,310
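The last stage of the pipeline in the abstract above — k-NN classification over pairwise Wasserstein distances between persistence diagrams — only needs the precomputed distances. The sketch below assumes those distances are given; the labels and distance values are made up for illustration.

```python
def knn_predict(dist_to_train, train_labels, k=3):
    """Classify one sample from its (precomputed) distances to the training
    set - e.g. Wasserstein distances between persistence diagrams - by
    majority vote among the k nearest training samples."""
    nearest = sorted(range(len(dist_to_train)),
                     key=lambda i: dist_to_train[i])[:k]
    votes = {}
    for i in nearest:
        votes[train_labels[i]] = votes.get(train_labels[i], 0) + 1
    return max(votes, key=votes.get)

# Hypothetical distances from one test window to five training windows.
labels = ["occupied", "occupied", "empty", "empty", "occupied"]
d = [0.2, 0.5, 1.4, 1.1, 0.3]
pred = knn_predict(d, labels, k=3)
```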
2307.05888
Efficient Task Offloading Algorithm for Digital Twin in Edge/Cloud Computing Environment
In the era of Internet of Things (IoT), Digital Twin (DT) is envisioned to empower various areas as a bridge between physical objects and the digital world. Through virtualization and simulation techniques, multiple functions can be achieved by leveraging computing resources. In this process, Mobile Cloud Computing (MCC) and Mobile Edge Computing (MEC) have become two of the key factors to achieve real-time feedback. However, existing works consider only edge servers or cloud servers in their DT system models. Moreover, these models ignore DTs that rely on more than one data source. In this paper, we propose a new DT system model considering a heterogeneous MEC/MCC environment. Each DT in the model is maintained in one of the servers via multiple data collection devices. The offloading decision-making problem is also considered and a new offloading scheme is proposed based on Distributed Deep Learning (DDL). Simulation results demonstrate that our proposed algorithm can effectively and efficiently decrease the system's average latency and energy consumption. Significant improvement is achieved compared with the baselines under the dynamic environment of DTs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
378,885
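The offloading decision in the abstract above — picking a server to minimize latency and energy — can be sketched with a greedy baseline. The cost model (transmission plus computation latency, transmission energy), the weights, and the server parameters are all illustrative assumptions, not the paper's DDL-based scheme.

```python
def choose_server(task_cycles, data_bits, servers, w_latency=0.5, w_energy=0.5):
    """Greedy baseline: pick the server minimizing a weighted sum of latency
    (upload time + compute time) and transmission energy. Not the paper's
    distributed-deep-learning scheme - just the decision it replaces."""
    def cost(s):
        latency = data_bits / s["rate_bps"] + task_cycles / s["cpu_hz"]
        energy = s["tx_power_w"] * data_bits / s["rate_bps"]
        return w_latency * latency + w_energy * energy
    return min(servers, key=cost)

# Hypothetical parameters: a nearby edge server (fast link, modest CPU) vs.
# a remote cloud server (slow link, fast CPU).
servers = [
    {"name": "edge",  "rate_bps": 5e6, "cpu_hz": 2e9,  "tx_power_w": 0.1},
    {"name": "cloud", "rate_bps": 1e6, "cpu_hz": 10e9, "tx_power_w": 0.5},
]
best = choose_server(task_cycles=1e9, data_bits=8e6, servers=servers)
```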
cs/0608015
Towards "Propagation = Logic + Control"
Constraint propagation algorithms implement logical inference. For efficiency, it is essential to control whether and in what order basic inference steps are taken. We provide a high-level framework that clearly differentiates between information needed for controlling propagation versus that needed for the logical semantics of complex constraints composed from primitive ones. We argue for the appropriateness of our controlled propagation framework by showing that it captures the underlying principles of manually designed propagation algorithms, such as literal watching for unit clause propagation and the lexicographic ordering constraint. We provide an implementation and benchmark results that demonstrate the practicality and efficiency of our framework.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
539,627
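The "propagation = logic + control" split in the abstract above is easiest to see in unit clause propagation: the logic (a falsified-but-for-one-literal clause forces that literal) is fixed, while control decides which clauses to re-inspect — the part that literal watching optimizes. The sketch below is a naive scan-everything propagator in pure Python, an assumption for illustration rather than the paper's framework.

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly find unit clauses and assign their remaining literal.
    Clauses are lists of nonzero ints; literal n means variable |n| is True
    if n > 0. Returns the extended assignment, or None on conflict.
    Control here is naive (rescan all clauses); watched-literal schemes
    keep the same logic but visit far fewer clauses."""
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None            # clause falsified: conflict
            if len(unassigned) == 1:   # unit clause -> forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1) and (-x1 or x2) and (-x2 or x3): propagation forces x1 = x2 = x3 = True.
result = unit_propagate([[1], [-1, 2], [-2, 3]])
```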