Dataset schema (ranges are min/max observed over the dataset):

  id                 string, 9 to 16 chars
  title              string, 4 to 278 chars
  abstract           string, 3 to 4.08k chars
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT,
  cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                     bool, 2 classes each (one-hot category labels)
  __index_level_0__  int64, 0 to 541k
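The 18 boolean columns one-hot-encode each record's category labels. A minimal sketch of collapsing them into a label list, assuming rows are exposed as plain dicts (the helper name `active_labels` and the example record are illustrative, not part of the dataset):

```python
# Category columns in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the category columns that are True for one record."""
    return [col for col in CATEGORY_COLUMNS if row.get(col)]

# Illustrative record shaped like the first row below (text fields omitted).
record = {c: False for c in CATEGORY_COLUMNS}
record.update({"id": "1903.02188", "cs.CL": True})
```

For the record above, `active_labels(record)` yields `["cs.CL"]`, matching its single true column.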
1903.02188
Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases
When answering natural language questions over knowledge bases (KBs), different question components and KB aspects play different roles. However, most existing embedding-based methods for knowledge base question answering (KBQA) ignore the subtle inter-relationships between the question and the KB (e.g., entity types, relation paths and context). In this work, we propose to directly model the two-way flow of interactions between the questions and the KB via a novel Bidirectional Attentive Memory Network, called BAMnet. Requiring no external resources and only very few hand-crafted features, on the WebQuestions benchmark, our method significantly outperforms existing information-retrieval based methods, and remains competitive with (hand-crafted) semantic parsing based methods. Also, since we use attention mechanisms, our method offers better interpretability compared to other baselines.
Labels: cs.CL | __index_level_0__: 123,449
cs/0609066
Building and displaying name relations using automatic unsupervised analysis of newspaper articles
We present a tool that, from automatically recognised names, tries to infer inter-person relations in order to present associated people on maps. Based on an in-house Named Entity Recognition tool, applied on clusters of an average of 15,000 news articles per day, in 15 different languages, we build a knowledge base that allows extracting statistical co-occurrences of persons and visualising them on a per-person page or in various graphs.
Labels: cs.IR, cs.CL | __index_level_0__: 539,696
2208.04491
Improving Vaccine Stance Detection by Combining Online and Offline Data
Differing opinions about COVID-19 have led to various online discourses regarding vaccines. Due to the detrimental effects and the scale of the COVID-19 pandemic, detecting vaccine stance has become especially important and is attracting increasing attention. Communication during the pandemic is typically done via online and offline sources, which provide two complementary avenues for detecting vaccine stance. Therefore, this paper aims to (1) study the importance of integrating online and offline data to vaccine stance detection; and (2) identify the critical online and offline attributes that influence an individual's vaccine stance. We model vaccine hesitancy as a surrogate for identifying the importance of online and offline factors. With the aid of explainable AI and combinatorial analysis, we conclude that both online and offline factors help predict vaccine stance.
Labels: cs.SI | __index_level_0__: 312,126
1912.09216
Semantic Segmentation from Remote Sensor Data and the Exploitation of Latent Learning for Classification of Auxiliary Tasks
In this paper we address three different aspects of semantic segmentation from remote sensor data using deep neural networks. Firstly, we focus on the semantic segmentation of buildings from remote sensor data and propose ICT-Net. The proposed network has been tested on the INRIA and AIRS benchmark datasets and is shown to outperform all other state-of-the-art methods by more than 1.5% and 1.8% on the Jaccard index, respectively. Secondly, as building classification is typically the first step of the reconstruction process, we investigate the relationship between classification accuracy and reconstruction accuracy. Finally, we present the simple yet compelling concept of latent learning and the implications it carries within the context of deep learning. We posit that a network trained on a primary task (i.e. building classification) is unintentionally learning about auxiliary tasks (e.g. the classification of roads, trees, etc.) which are complementary to the primary task. We extensively tested the proposed technique on the ISPRS benchmark dataset which contains multi-label ground truth, and report an average classification accuracy (F1 score) of 54.29% (SD=17.03) for roads, 10.15% (SD=2.54) for cars, 24.11% (SD=5.25) for trees, 42.74% (SD=6.62) for low vegetation, and 18.30% (SD=16.08) for clutter. The source code and supplemental material are publicly available at http://www.theICTlab.org/lp/2019ICT-Net/.
Labels: cs.CV | __index_level_0__: 158,023
2402.13587
A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation
In this paper, we propose a new setting for generating product descriptions from images, augmented by marketing keywords. It leverages the combined power of visual and textual information to create descriptions that are more tailored to the unique features of products. For this setting, previous methods utilize visual and textual encoders to encode the image and keywords and employ a language model-based decoder to generate the product description. However, the generated description is often inaccurate and generic since same-category products have similar copy-writings, and optimizing the overall framework on large-scale samples makes models concentrate on common words yet ignore the product features. To alleviate the issue, we present a simple and effective Multimodal In-Context Tuning approach, named ModICT, which introduces a similar product sample as the reference and utilizes the in-context learning capability of language models to produce the description. During training, we keep the visual encoder and language model frozen, focusing on optimizing the modules responsible for creating multimodal in-context references and dynamic prompts. This approach preserves the language generation prowess of large language models (LLMs), facilitating a substantial increase in description diversity. To assess the effectiveness of ModICT across various language model scales and types, we collect data from three distinct product categories within the E-commerce domain. Extensive experiments demonstrate that ModICT significantly improves the accuracy (by up to 3.3% on Rouge-L) and diversity (by up to 9.4% on D-5) of generated results compared to conventional methods. Our findings underscore the potential of ModICT as a valuable tool for enhancing automatic generation of product descriptions in a wide range of applications. Code is at: https://github.com/HITsz-TMG/Multimodal-In-Context-Tuning
Labels: cs.CL, cs.CV | __index_level_0__: 431,324
2408.00768
Comparing Optical Flow and Deep Learning to Enable Computationally Efficient Traffic Event Detection with Space-Filling Curves
Gathering data and identifying events in various traffic situations remains an essential challenge for the systematic evaluation of a perception system's performance. Analyzing large-scale, typically unstructured, multi-modal, time series data obtained from video, radar, and LiDAR is computationally demanding, particularly when meta-information or annotations are missing. We compare Optical Flow (OF) and Deep Learning (DL) to feed computationally efficient event detection via space-filling curves on video data from a forward-facing, in-vehicle camera. Our first approach leverages unexpected disturbances in the OF field from vehicle surroundings; the second approach is a DL model trained on human visual attention to predict a driver's gaze to spot potential event locations. We feed these results to a space-filling curve to reduce dimensionality and achieve computationally efficient event retrieval. We systematically evaluate our concept by obtaining characteristic patterns for both approaches from a large-scale virtual dataset (SMIRK) and apply our findings to the Zenseact Open Dataset (ZOD), a large multi-modal, real-world dataset, collected over two years in 14 different European countries. Our results show that the OF approach excels in specificity and reduces false positives, while the DL approach demonstrates superior sensitivity. Both approaches offer comparable processing speed, making them suitable for real-time applications.
Labels: cs.AI, cs.CV | __index_level_0__: 477,969
1004.2870
Nurse Rostering with Genetic Algorithms
In recent years genetic algorithms have emerged as a useful tool for the heuristic solution of complex discrete optimisation problems. In particular there has been considerable interest in their use in tackling problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle constraints and successful implementations usually require some sort of modification to enable the search to exploit problem specific knowledge in order to overcome this shortcoming. This paper is concerned with the development of a family of genetic algorithms for the solution of a nurse rostering problem at a major UK hospital. The hospital is made up of wards of up to 30 nurses. Each ward has its own group of nurses whose shifts have to be scheduled on a weekly basis. In addition to fulfilling the minimum demand for staff over three daily shifts, nurses' wishes and qualifications have to be taken into account. The schedules must also be seen to be fair, in that unpopular shifts have to be spread evenly amongst all nurses, and other restrictions, such as team nursing and special conditions for senior staff, have to be satisfied. The basis of the family of genetic algorithms is a classical genetic algorithm consisting of n-point crossover, single-bit mutation and a rank-based selection. The solution space consists of all schedules in which each nurse works the required number of shifts, but the remaining constraints, both hard and soft, are relaxed and penalised in the fitness function. The talk will start with a detailed description of the problem and the initial implementation and will go on to highlight the shortcomings of such an approach, in terms of the key element of balancing feasibility, i.e. covering the demand and work regulations, and quality, as measured by the nurses' preferences. 
A series of experiments involving parameter adaptation, niching, intelligent weights, delta coding, local hill climbing, migration and special selection rules will then be outlined and it will be shown how a series of these enhancements were able to eradicate these difficulties. Results based on several months' real data will be used to measure the impact of each modification, and to show that the final algorithm is able to compete with a tabu search approach currently employed at the hospital. The talk will conclude with some observations as to the overall quality of this approach to this and similar problems.
Labels: cs.AI, cs.NE | __index_level_0__: 6,189
1910.05986
An Efficient Tensor Completion Method via New Latent Nuclear Norm
In tensor completion, the latent nuclear norm is commonly used to induce low-rank structure, while substantially failing to capture the global information due to the utilization of unbalanced unfolding scheme. To overcome this drawback, a new latent nuclear norm equipped with a more balanced unfolding scheme is defined for low-rank regularizer. Moreover, the new latent nuclear norm together with the Frank-Wolfe (FW) algorithm is developed as an efficient completion method by utilizing the sparsity structure of observed tensor. Specifically, both FW linear subproblem and line search only need to access the observed entries, by which we can instead maintain the sparse tensors and a set of small basis matrices during iteration. Most operations are based on sparse tensors, and the closed-form solution of FW linear subproblem can be obtained from rank-one SVD. We theoretically analyze the space-complexity and time-complexity of the proposed method, and show that it is much more efficient over other norm-based completion methods for higher-order tensors. Extensive experimental results of visual-data inpainting demonstrate that the proposed method is able to achieve state-of-the-art performance at smaller costs of time and space, which is very meaningful for the memory-limited equipment in practical applications.
Labels: cs.CV | __index_level_0__: 149,228
1809.01887
Travel Speed Prediction with a Hierarchical Convolutional Neural Network and Long Short-Term Memory Model Framework
Advanced travel information and warning, if provided accurately, can help road users avoid traffic congestion through dynamic route planning and behavior change. It also enables traffic control centres to mitigate the impact of congestion by activating Intelligent Transport System (ITS) measures proactively. Deep learning has become increasingly popular in recent years, following a surge of innovative GPU technology, high-resolution big datasets and thriving machine learning algorithms. However, there are few examples exploiting this emerging technology to develop applications for traffic prediction. This is largely due to the difficulty in capturing the random, seasonal, non-linear, and spatio-temporally correlated nature of traffic data. In this paper, we propose a data-driven modelling approach with a novel hierarchical D-CLSTM-t deep learning model for short-term traffic speed prediction, a framework combining convolutional neural network (CNN) and long short-term memory (LSTM) models. A deep CNN model is employed to learn the spatio-temporal traffic patterns of the input graphs, which are then fed into a deep LSTM model for sequence learning. To capture traffic seasonal variations, time of the day and day of the week indicators are fused with trained features. The model is trained end-to-end to predict travel speed 15 to 90 minutes into the future. We compare the model performance against other baseline models including CNN, LGBM, LSTM, and traditional speed-flow curves. Experiment results show that the D-CLSTM-t outperforms other models considerably. Model tests show that speed upstream also responds sensibly to a sudden accident occurring downstream. Our D-CLSTM-t model framework is also highly scalable for future extensions such as network-wide traffic prediction, and can be improved by including additional features such as weather, long-term seasonality and accident information.
Labels: cs.LG | __index_level_0__: 106,914
1509.03000
Full-Duplex Transceiver for Future Cellular Network: A Smart Antenna Approach
In this paper, we propose a transceiver architecture for full-duplex (FD) eNodeB (eNB) and FD user equipment (UE). For FD communication, i.e., simultaneous in-band uplink and downlink operation, the same subcarriers can be allocated to a UE in both uplink and downlink. Hence, contrary to traditional LTE, we propose using single-carrier frequency division multiple access (SC-FDMA) for the downlink along with the conventional method of using it for the uplink. The use of multiple antennas at the eNB and singular value decomposition (SVD) in the downlink allows multiple users (MU) to operate on the same set of subcarriers. In the uplink, the successive interference cancellation with optimal ordering (SSIC-OO) algorithm is used to decouple signals of UEs operating on the same set of subcarriers. A smart antenna approach is adopted which prevents interference, in the downlink of a UE, from uplink signals of other UEs sharing the same subcarriers. The approach includes using multiple antennas at UEs to form directed beams towards the eNB and nulls towards other UEs. The proposed architecture results in a significant improvement of the overall spectrum efficiency per cell of the cellular network.
Labels: cs.IT, Other | __index_level_0__: 46,787
2308.00284
CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering
Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.
Labels: cs.HC, cs.LG | __index_level_0__: 382,887
1209.3824
Interference Mitigation via Interference-Aware Successive Decoding
In modern wireless networks, interference is no longer negligible since cells become smaller to support high throughput. The reduced cell size forces operators to install many cells, which consequently increases inter-cell interference at many cell-edge areas. This paper considers a practical way of mitigating interference at a receiver equipped with multiple antennas in interference channels. Recently, it was shown that the capacity region of interference channels over point-to-point codes can be established with a combination of two schemes: treating interference as noise and jointly decoding both desired and interference signals. In practice, the first scheme is straightforwardly implementable, but the second scheme imposes an impractically huge computational burden at the receiver. Within a practical range of complexity, this paper proposes the interference-aware successive decoding (IASD) algorithm, which successively decodes desired and interference signals while updating a priori information of both signals. When multiple decoders are allowed to be used, the proposed IASD can be extended to interference-aware parallel decoding (IAPD). The proposed algorithm is analyzed with the extrinsic information transfer (EXIT) chart to show that interference decoding is advantageous for improving performance. Simulation results demonstrate that the proposed algorithm significantly outperforms interference non-decoding algorithms.
Labels: cs.IT | __index_level_0__: 18,604
1705.07183
Large System Analysis of Power Normalization Techniques in Massive MIMO
Linear precoding has been widely studied in the context of Massive multiple-input-multiple-output (MIMO) together with two common power normalization techniques, namely, matrix normalization (MN) and vector normalization (VN). Despite this, their effect on the performance of Massive MIMO systems has not been thoroughly studied yet. The aim of this paper is to fill this gap by using large system analysis. Considering a system model that accounts for channel estimation, pilot contamination, arbitrary pathloss, and per-user channel correlation, we compute tight approximations for the signal-to-interference-plus-noise ratio and the rate of each user equipment in the system while employing maximum ratio transmission (MRT), zero forcing (ZF), and regularized ZF precoding under both MN and VN techniques. Such approximations are used to analytically reveal how the choice of power normalization affects the performance of MRT and ZF under uncorrelated fading channels. It turns out that ZF with VN resembles a sum rate maximizer while it provides a notion of fairness under MN. Numerical results are used to validate the accuracy of the asymptotic analysis and to show that in Massive MIMO, non-coherent interference and noise, rather than pilot contamination, are often the major limiting factors of the considered precoding schemes.
Labels: cs.IT | __index_level_0__: 73,769
1808.02289
Predicting Visual Context for Unsupervised Event Segmentation in Continuous Photo-streams
Segmenting video content into events provides semantic structures for indexing, retrieval, and summarization. Since motion cues are not available in continuous photo-streams, and annotations in lifelogging are scarce and costly, the frames are usually clustered into events by comparing the visual features between them in an unsupervised way. However, such methodologies are ineffective to deal with heterogeneous events, e.g. taking a walk, and temporary changes in the sight direction, e.g. at a meeting. To address these limitations, we propose Contextual Event Segmentation (CES), a novel segmentation paradigm that uses an LSTM-based generative network to model the photo-stream sequences, predict their visual context, and track their evolution. CES decides whether a frame is an event boundary by comparing the visual context generated from the frames in the past, to the visual context predicted from the future. We implemented CES on a new and massive lifelogging dataset consisting of more than 1.5 million images spanning over 1,723 days. Experiments on the popular EDUB-Seg dataset show that our model outperforms the state-of-the-art by over 16% in f-measure. Furthermore, CES' performance is only 3 points below that of human annotators.
Labels: cs.CV | __index_level_0__: 104,745
2112.13408
Perlin Noise Improve Adversarial Robustness
Adversarial examples are specially crafted inputs that perturb the output of a deep neural network in order to produce intentional errors in learning algorithms deployed in production environments. Most present methods for generating adversarial examples require gradient information. Even universal perturbations that are not tied to a particular model rely to some extent on gradient information. Procedural noise adversarial examples are a new way of generating adversarial examples, using computer graphics noise to generate universal adversarial perturbations quickly while not relying on gradient information. Combined with the defensive idea of adversarial training, we use Perlin noise to train the neural network to obtain a model that can defend against procedural noise adversarial examples. In combination with model fine-tuning methods based on pre-trained models, we obtain faster training as well as higher accuracy. Our study shows that procedural noise adversarial examples are defensible, but why procedural noise can generate adversarial examples and how to defend against other kinds of procedural noise adversarial examples that may emerge in the future remain to be investigated.
Labels: cs.AI, cs.LG | __index_level_0__: 273,233
2302.10805
Repeated Bilateral Trade Against a Smoothed Adversary
We study repeated bilateral trade where an adaptive $\sigma$-smooth adversary generates the valuations of sellers and buyers. We provide a complete characterization of the regret regimes for fixed-price mechanisms under different feedback models in the two cases where the learner can post either the same or different prices to buyers and sellers. We begin by showing that the minimax regret after $T$ rounds is of order $\sqrt{T}$ in the full-feedback scenario. Under partial feedback, any algorithm that has to post the same price to buyers and sellers suffers worst-case linear regret. However, when the learner can post two different prices at each round, we design an algorithm enjoying regret of order $T^{3/4}$ ignoring log factors. We prove that this rate is optimal by presenting a surprising $T^{3/4}$ lower bound, which is the main technical contribution of the paper.
Labels: cs.LG, Other | __index_level_0__: 346,956
1509.02604
Asynchronous Distributed ADMM for Large-Scale Optimization- Part II: Linear Convergence Analysis and Numerical Performance
The alternating direction method of multipliers (ADMM) has been recognized as a versatile approach for solving modern large-scale machine learning and signal processing problems efficiently. When the data size and/or the problem dimension is large, a distributed version of ADMM can be used, which is capable of distributing the computation load and the data set to a network of computing nodes. Unfortunately, a direct synchronous implementation of such algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes. To address this issue, in a companion paper, we have proposed an asynchronous distributed ADMM (AD-ADMM) and studied its worst-case convergence conditions. In this paper, we further the study by characterizing the conditions under which the AD-ADMM achieves linear convergence. Our conditions as well as the resulting linear rates reveal the impact that various algorithm parameters, network delay and network size have on the algorithm performance. To demonstrate the superior time efficiency of the proposed AD-ADMM, we test the AD-ADMM on a high-performance computer cluster by solving a large-scale logistic regression problem.
Labels: cs.LG, cs.SY, Other | __index_level_0__: 46,745
2404.07122
Driver Attention Tracking and Analysis
We propose a novel method to estimate a driver's points-of-gaze using a pair of ordinary cameras mounted on the windshield and dashboard of a car. This is a challenging problem due to the dynamics of traffic environments with 3D scenes of unknown depths. This problem is further complicated by the volatile distance between the driver and the camera system. To tackle these challenges, we develop a novel convolutional network that simultaneously analyzes the image of the scene and the image of the driver's face. This network has a camera calibration module that can compute an embedding vector that represents the spatial configuration between the driver and the camera system. This calibration module improves the overall network's performance, which can be jointly trained end to end. We also address the lack of annotated data for training and evaluation by introducing a large-scale driving dataset with point-of-gaze annotations. This is an in situ dataset of real driving sessions in an urban city, containing synchronized images of the driving scene as well as the face and gaze of the driver. Experiments on this dataset show that the proposed method outperforms various baseline methods, having the mean prediction error of 29.69 pixels, which is relatively small compared to the $1280{\times}720$ resolution of the scene camera.
Labels: cs.CV | __index_level_0__: 445,722
2008.08523
Scene Text Detection with Selected Anchor
Object proposal techniques with dense anchoring schemes have frequently been applied to scene text detection to achieve high recall. This results in a significant improvement in accuracy but wastes computation on searching, regression and classification. In this paper, we propose an anchor selection-based region proposal network (AS-RPN) that uses effective selected anchors instead of dense anchors to extract text proposals. The center, scales, aspect ratios and orientations of anchors are learnable instead of fixed, which leads to high recall and a greatly reduced number of anchors. By replacing the anchor-based RPN in Faster RCNN, the AS-RPN-based Faster RCNN can achieve performance comparable to previous state-of-the-art text detection approaches on standard benchmarks, including COCO-Text, ICDAR2013, ICDAR2015 and MSRA-TD500, when using single-scale and single-model (ResNet50) testing only.
Labels: cs.LG, cs.CV | __index_level_0__: 192,445
1912.09528
Randomized Reactive Redundancy for Byzantine Fault-Tolerance in Parallelized Learning
This report considers the problem of Byzantine fault-tolerance in synchronous parallelized learning that is founded on the parallelized stochastic gradient descent (parallelized-SGD) algorithm. The system comprises a master, and $n$ workers, where up to $f$ of the workers are Byzantine faulty. Byzantine workers need not follow the master's instructions correctly, and might send malicious incorrect (or faulty) information. The identity of the Byzantine workers remains fixed throughout the learning process, and is unknown a priori to the master. We propose two coding schemes, a deterministic scheme and a randomized scheme, for guaranteeing exact fault-tolerance if $2f < n$. The coding schemes use the concept of reactive redundancy for isolating Byzantine workers that eventually send faulty information. We note that the computation efficiencies of the schemes compare favorably with other (deterministic or randomized) coding schemes, for exact fault-tolerance.
Labels: cs.LG, Other | __index_level_0__: 158,088
2304.03894
A multifidelity approach to continual learning for physical systems
We introduce a novel continual learning method based on multifidelity deep neural networks. This method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, limiting catastrophic forgetting. On its own the multifidelity continual learning method shows robust results that limit forgetting across several datasets. Additionally, we show that the multifidelity method can be combined with existing continual learning methods, including replay and memory aware synapses, to further limit catastrophic forgetting. The proposed continual learning method is especially suited for physical problems where the data satisfy the same physical laws on each domain, or for physics-informed neural networks, because in these cases we expect there to be a strong correlation between the output of the previous model and the model on the current training domain.
Labels: cs.LG, Other | __index_level_0__: 356,981
0905.4605
Techniques for Securing Data Exchange between a Database Server and a Client Program
The goal of the presented work is to illustrate a method by which the data exchange between a standalone computer program and a shared database server can be protected from unauthorized interception of traffic on the Internet, the transport network for the data managed by these two systems; such interception could give an attacker illegitimate access to the database, threatening data integrity and compromising the database.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
3,786
2211.02574
Pushing AI to Wireless Network Edge: An Overview on Integrated Sensing, Communication, and Computation towards 6G
Pushing artificial intelligence (AI) from the central cloud to the network edge has reached broad consensus in both industry and academia for materializing the vision of artificial intelligence of things (AIoT) in the sixth-generation (6G) era. This gives rise to an emerging research area known as edge intelligence, which concerns the distillation of human-like intelligence from the huge amount of data scattered at the wireless network edge. In general, realizing edge intelligence corresponds to the process of sensing, communication, and computation, which are coupled ingredients for data generation, exchange, and processing, respectively. However, conventional wireless networks design the sensing, communication, and computation separately in a task-agnostic manner, which encounters difficulties in accommodating the stringent demands of ultra-low latency, ultra-high reliability, and high capacity in emerging AI applications such as auto-driving. This thus prompts a new design paradigm of seamless integrated sensing, communication, and computation (ISCC) in a task-oriented manner, which comprehensively accounts for the use of the data in downstream AI applications. In view of the growing interest, this article provides a timely overview of ISCC for edge intelligence by introducing its basic concept, design challenges, and enabling techniques, surveying the state-of-the-art development, and shedding light on the road ahead.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
328,618
2404.06659
Leveraging Interesting Facts to Enhance User Engagement with Conversational Interfaces
Conversational Task Assistants (CTAs) guide users in performing a multitude of activities, such as making recipes. However, ensuring that interactions remain engaging, interesting, and enjoyable for CTA users is not trivial, especially for time-consuming or challenging tasks. Grounded in psychological theories of human interest, we propose to engage users with contextual and interesting statements or facts during interactions with a multi-modal CTA, to reduce fatigue and task abandonment before a task is complete. To operationalize this idea, we train a high-performing classifier (82% F1-score) to automatically identify relevant and interesting facts for users. We use it to create an annotated dataset of task-specific interesting facts for the domain of cooking. Finally, we design and validate a dialogue policy to incorporate the identified relevant and interesting facts into a conversation, to improve user engagement and task completion. Live testing on a leading multi-modal voice assistant shows that 66% of the presented facts were received positively, leading to a 40% gain in the user satisfaction rating, and a 37% increase in conversation length. These findings emphasize that strategically incorporating interesting facts into the CTA experience can promote real-world user participation for guided task interactions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
445,540
1403.3100
Engaging with Massive Online Courses
The Web has enabled one of the most visible recent developments in education---the deployment of massive open online courses. With their global reach and often staggering enrollments, MOOCs have the potential to become a major new mechanism for learning. Despite this early promise, however, MOOCs are still relatively unexplored and poorly understood. In a MOOC, each student's complete interaction with the course materials takes place on the Web, thus providing a record of learner activity of unprecedented scale and resolution. In this work, we use such trace data to develop a conceptual framework for understanding how users currently engage with MOOCs. We develop a taxonomy of individual behavior, examine the different behavioral patterns of high- and low-achieving students, and investigate how forum participation relates to other parts of the course. We also report on a large-scale deployment of badges as incentives for engagement in a MOOC, including randomized experiments in which the presentation of badges was varied across sub-populations. We find that making badges more salient produced increases in forum engagement.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,540
2310.18998
A 0.21-ps FOM Capacitor-Less Analog LDO with Dual-Range Load Current for Biomedical Applications
This paper presents an output capacitor-less low-dropout regulator (LDO) with a bias switching scheme for biomedical applications with dual-range load currents. Power optimization is crucial for systems with multiple activation modes such as neural interfaces, IoT and edge devices with varying load currents. To enable rapid switching between low and high current states, a flipped voltage follower (FVF) configuration is utilized, along with a super source follower buffer to drive the power transistor. Two feedback loops and an on-chip compensation capacitor Cc maintain the stability of the regulator under various load conditions. The LDO was implemented in a 65nm CMOS process with 1.5V input and 1.2V output voltages. The measured quiescent current is as low as 3uA and 50uA for the 0-500uA and 5-15mA load current ranges, respectively. An undershoot voltage of 100mV is observed when the load current switches from 0 to 15mA within 80ns, with a maximum current efficiency of 99.98%. Our design achieved a low Figure-of-Merit of 0.21ps, outperforming state-of-the-art analog LDOs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
403,802
2411.15008
Evolutionary Automata and Deep Evolutionary Computation
Evolution by natural selection, which is one of the most compelling themes of modern science, brought forth evolutionary algorithms and evolutionary computation, applying mechanisms of evolution in nature to various problems solved by computers. In this paper we concentrate on evolutionary automata, which constitute an analogous model of evolutionary computation compared to well-known evolutionary algorithms. Evolutionary automata provide a more complete dual model of evolutionary computation, much as abstract automata (e.g., Turing machines) form a more formal and precise model compared to recursive algorithms and their subset, evolutionary algorithms. An evolutionary automaton is an automaton that evolves while performing evolutionary computation, perhaps using an infinite number of generations. This model allows for directly modeling the evolution of evolution, and leads to tremendous expressiveness of evolutionary automata and evolutionary computation. This also hints at the power of natural evolution, which is self-evolving through interactive feedback with the environment.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
510,405
2403.10252
Region-aware Distribution Contrast: A Novel Approach to Multi-Task Partially Supervised Learning
In this study, we address the intricate challenge of multi-task dense prediction, encompassing tasks such as semantic segmentation, depth estimation, and surface normal estimation, particularly when dealing with partially annotated data (MTPSL). The complexity arises from the absence of complete task labels for each training image. Given the inter-related nature of these pixel-wise dense tasks, our focus is on mining and capturing cross-task relationships. Existing solutions typically rely on learning global image representations for global cross-task image matching, imposing constraints that, unfortunately, sacrifice the finer structures within the images. Attempting local matching as a remedy faces hurdles due to the lack of precise region supervision, making local alignment a challenging endeavor. The introduction of Segment Anything Model (SAM) sheds light on addressing local alignment challenges by providing free and high-quality solutions for region detection. Leveraging SAM-detected regions, the subsequent challenge lies in aligning the representations within these regions. Diverging from conventional methods that directly learn a monolithic image representation, our proposal involves modeling region-wise representations using Gaussian Distributions. Aligning these distributions between corresponding regions from different tasks imparts higher flexibility and capacity to capture intra-region structures, accommodating a broader range of tasks. This innovative approach significantly enhances our ability to effectively capture cross-task relationships, resulting in improved overall performance in partially supervised multi-task dense prediction scenarios. Extensive experiments conducted on two widely used benchmarks underscore the superior effectiveness of our proposed method, showcasing state-of-the-art performance even when compared to fully supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
438,118
2406.04093
Scaling and evaluating sparse autoencoders
Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
461,515
2502.04045
Comparing privacy notions for protection against reconstruction attacks in machine learning
Within the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL), which was designed with privacy preservation in mind. In response to these threats, the privacy community recommends the use of differential privacy (DP) in the stochastic gradient descent algorithm, termed DP-SGD. However, the proliferation of variants of DP in recent years\textemdash such as metric privacy\textemdash has made it challenging to conduct a fair comparison between different mechanisms due to the different meanings of the privacy parameters $\epsilon$ and $\delta$ across different variants. Thus, interpreting the practical implications of $\epsilon$ and $\delta$ in the FL context and amongst variants of DP remains ambiguous. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees, namely $(\epsilon,\delta)$-DP and metric privacy. We provide two foundational means of comparison: firstly, via the well-established $(\epsilon,\delta)$-DP guarantees, made possible through the R\'enyi differential privacy framework; and secondly, via Bayes' capacity, which we identify as an appropriate measure for reconstruction threats.
false
false
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
530,963
2404.16047
From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust?
This paper investigates the influence of anthropomorphized descriptions of so-called "AI" (artificial intelligence) systems on people's self-assessment of trust in the system. Building on prior work, we define four categories of anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological metaphors, and 4. Properties of a communicator). We use a survey-based approach (n=954) to investigate whether participants are likely to trust one of two (fictitious) "AI" systems by randomly assigning people to see either an anthropomorphized or a de-anthropomorphized description of the systems. We find that participants are no more likely to trust anthropomorphized over de-anthropomorphized product descriptions overall. The type of product or system in combination with different anthropomorphic categories appears to exert greater influence on trust than anthropomorphizing language alone, and age is the only demographic factor that significantly correlates with people's preference for anthropomorphized or de-anthropomorphized descriptions. When elaborating on their choices, participants highlight factors such as lesser of two evils, lower or higher stakes contexts, and human favoritism as driving motivations when choosing between product A and B, irrespective of whether they saw an anthropomorphized or a de-anthropomorphized description of the product. Our results suggest that "anthropomorphism" in "AI" descriptions is an aggregate concept that may influence different groups differently, and provide nuance to the discussion of whether anthropomorphization leads to higher trust and over-reliance by the general public in systems sold as "AI".
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
449,333
2307.15245
A Practical Recipe for Federated Learning Under Statistical Heterogeneity Experimental Design
Federated Learning (FL) has been an area of active research in recent years. There have been numerous studies in FL to make it more successful in the presence of data heterogeneity. However, despite the existence of many publications, the state of progress in the field is unknown. Many of the works use inconsistent experimental settings and there are no comprehensive studies on the effect of FL-specific experimental variables on the results and practical insights for a more comparable and consistent FL experimental setup. Furthermore, the existence of several benchmarks and confounding variables has further complicated the issue of inconsistency and ambiguity. In this work, we present the first comprehensive study on the effect of FL-specific experimental variables in relation to each other and performance results, bringing several insights and recommendations for designing a meaningful and well-incentivized FL experimental setup. We further aid the community by releasing FedZoo-Bench, an open-source library based on PyTorch with pre-implementation of 22 state-of-the-art methods, and a broad set of standardized and customizable features available at https://github.com/MMorafah/FedZoo-Bench. We also provide a comprehensive comparison of several state-of-the-art (SOTA) methods to better understand the current state of the field and existing limitations.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
382,198
1710.04782
Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer's Disease using structural MR and FDG-PET images
Alzheimer's Disease (AD) is a progressive neurodegenerative disease. Amnestic mild cognitive impairment (MCI) is a common first symptom before the conversion to clinical impairment, where the individual becomes unable to perform activities of daily living independently. Although there is currently no treatment available, the earlier a conclusive diagnosis is made, the earlier the potential for interventions to delay or perhaps even prevent progression to full-blown AD. Neuroimaging scans acquired from MRI and metabolism images obtained by FDG-PET provide an in-vivo view into the structure and function (glucose metabolism) of the living brain. It is hypothesized that combining different image modalities could better characterize the change of the human brain and result in a more accurate early diagnosis of AD. In this paper, we propose a novel framework to discriminate normal control (NC) subjects from subjects with AD pathology (AD, and MCI subjects who convert to AD in the future). Our novel approach utilizing a multimodal and multiscale deep neural network was found to deliver an 85.68\% accuracy in the prediction of subjects within 3 years to conversion. Cross-validation experiments proved that it has better discrimination ability compared with results in the existing published literature.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
82,530
2407.21656
Comgra: A Tool for Analyzing and Debugging Neural Networks
Neural Networks are notoriously difficult to inspect. We introduce comgra, an open source python library for use with PyTorch. Comgra extracts data about the internal activations of a model and organizes it in a GUI (graphical user interface). It can show both summary statistics and individual data points, compare early and late stages of training, focus on individual samples of interest, and visualize the flow of the gradient through the network. This makes it possible to inspect the model's behavior from many different angles and save time by rapidly testing different hypotheses without having to rerun it. Comgra has applications for debugging, neural architecture design, and mechanistic interpretability. We publish our library through Python Package Index (PyPI) and provide code, documentation, and tutorials at https://github.com/FlorianDietz/comgra.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
477,624
2407.15070
GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction
Creating high-fidelity 3D human head avatars is crucial for applications in VR/AR, digital human, and film production. Recent advances have leveraged morphable face models to generate animated head avatars from easily accessible data, representing varying identities and expressions within a low-dimensional parametric space. However, existing methods often struggle with modeling complex appearance details, e.g., hairstyles, and suffer from low rendering quality and efficiency. In this paper we introduce a novel approach, 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head, allowing precise control over both identity and expression. The Gaussian model can handle intricate details, enabling realistic representations of varying appearances and complex expressions. Furthermore, we present a well-designed training framework to ensure smooth convergence, providing a robust guarantee for learning the rich content. Our method achieves high-quality, photo-realistic rendering with real-time efficiency, making it a valuable contribution to the field of parametric head models. Finally, we apply the 3D Gaussian Parametric Head Model to monocular video or few-shot head avatar reconstruction tasks, which enables instant reconstruction of high-quality 3D head avatars even when input data is extremely limited, surpassing previous methods in terms of reconstruction quality and training speed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
475,020
2411.19870
DeMo: Decoupled Momentum Optimization
Training large neural networks typically requires sharing gradients between accelerators through specialized high-speed interconnects. Drawing from the signal processing principles of frequency decomposition and energy compaction, we demonstrate that synchronizing full optimizer states and model parameters during training is unnecessary. By decoupling momentum updates and allowing controlled divergence in optimizer states across accelerators, we achieve improved convergence compared to state-of-the-art optimizers. We introduce {\textbf{De}}coupled {\textbf{Mo}}mentum (DeMo), a fused optimizer and data parallel algorithm that reduces inter-accelerator communication requirements by several orders of magnitude. This enables training of large neural networks even with limited network bandwidth and heterogeneous hardware. Our method is topology-agnostic and architecture-independent and supports scalable clock-synchronous distributed training with negligible compute and memory overhead. Empirical results show that models trained with DeMo match or exceed the performance of equivalent models trained with AdamW, while eliminating the need for high-speed interconnects when pre-training large scale foundation models. An open source reference PyTorch implementation is published on GitHub at https://github.com/bloc97/DeMo
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
512,414
2105.05847
Learning to Generate Novel Scene Compositions from Single Images and Videos
Training GANs in low-data regimes remains a challenge, as overfitting often leads to memorization or training divergence. In this work, we introduce One-Shot GAN, which can learn to generate samples from a training set as small as one image or one video. We propose a two-branch discriminator, with content and layout branches designed to judge the internal content separately from the scene layout realism. This allows synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single-image GAN models, One-Shot GAN achieves higher diversity and quality of synthesis. It is also not restricted to the single-image setting, successfully learning in the introduced setting of a single video.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
234,939
2410.14423
Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life. This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events. We used data from 977 patients who experienced CVD within a 5-year interval post-image acquisition, alongside 1,877 control participants without CVD, totaling 2,854 subjects. We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not. Our model, trained on both imaging modalities, achieved promising results (AUROC 0.78 +/- 0.02, accuracy 0.68 +/- 0.002, precision 0.74 +/- 0.02, sensitivity 0.73 +/- 0.02, and specificity 0.68 +/- 0.01), demonstrating its efficacy in identifying patients at risk of future CVD events based on their retinal images. This study highlights the potential of retinal OCT imaging and fundus photographs as cost-effective, non-invasive alternatives for predicting cardiovascular disease risk. The widespread availability of these imaging techniques in optometry practices and hospitals further enhances their potential for large-scale CVD risk screening. Our findings contribute to the development of standardized, accessible methods for early CVD risk identification, potentially improving preventive care strategies and patient outcomes.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
500,023
2001.08861
Encoding Physical Constraints in Differentiable Newton-Euler Algorithm
The recursive Newton-Euler Algorithm (RNEA) is a popular technique for computing the dynamics of robots. RNEA can be framed as a differentiable computational graph, enabling the dynamics parameters of the robot to be learned from data via modern auto-differentiation toolboxes. However, the dynamics parameters learned in this manner can be physically implausible. In this work, we incorporate physical constraints in the learning by adding structure to the learned parameters. This results in a framework that can learn physically plausible dynamics via gradient descent, improving the training speed as well as generalization of the learned dynamics models. We evaluate our method on real-time inverse dynamics control tasks on a 7 degree of freedom robot arm, both in simulation and on the real robot. Our experiments study a spectrum of structure added to the parameters of the differentiable RNEA algorithm, and compare their performance and generalization.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
161,410
2408.08918
Supervised and Unsupervised Alignments for Spoofing Behavioral Biometrics
Biometric recognition systems are security systems based on intrinsic properties of their users, usually encoded in high-dimension representations called embeddings, whose potential theft would represent a greater threat than a temporary password or a replaceable key. To study the threat of embedding theft, we perform spoofing attacks on two behavioral biometric systems (an automatic speaker verification system and a handwritten digit analysis system) using a set of alignment techniques. Biometric recognition systems based on embeddings work in two phases: enrollment, where embeddings are collected and stored, then authentication, when new embeddings are compared to the stored ones. The threat of stolen enrollment embeddings has been explored by the template reconstruction attack literature: reconstructing the original data to spoof an authentication system is doable with black-box access to their encoder. In this document, we explore the options available to perform template reconstruction attacks without any access to the encoder. To perform those attacks, we suppose general rules over the distribution of embeddings across encoders and use supervised and unsupervised algorithms to align an unlabeled set of embeddings with a set from a known encoder. The use of an alignment algorithm from the unsupervised translation literature gives promising results on spoofing two behavioral biometric systems.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
481,210
1912.12171
So2Sat LCZ42: A Benchmark Dataset for Global Local Climate Zones Classification
Access to labeled reference data is one of the grand challenges in supervised machine learning endeavors. This is especially true for an automated analysis of remote sensing images on a global scale, which enables us to address global challenges such as urbanization and climate change using state-of-the-art machine learning techniques. To meet these pressing needs, especially in urban research, we provide open access to a valuable benchmark dataset named "So2Sat LCZ42," which consists of local climate zone (LCZ) labels of about half a million Sentinel-1 and Sentinel-2 image patches in 42 urban agglomerations (plus 10 additional smaller areas) across the globe. This dataset was labeled by 15 domain experts following a carefully designed labeling workflow and evaluation process over a period of six months. As is rarely done for other labeled remote sensing datasets, we conducted rigorous quality assessment by domain experts. The dataset achieved an overall confidence of 85%. We believe this LCZ dataset is a first step towards an unbiased, globally distributed dataset for urban growth monitoring using machine learning methods, because LCZs provide a rather objective measure, unlike many other semantic land use and land cover classifications. It provides measures of the morphology, compactness, and height of urban areas, which are less dependent on humans and culture. This dataset can be accessed from http://doi.org/10.14459/2018mp1483140.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
158,786
1703.06914
Applying Deep Machine Learning for psycho-demographic profiling of Internet users using O.C.E.A.N. model of personality
In the modern era, each Internet user leaves enormous amounts of auxiliary digital residuals (footprints) by using a variety of on-line services. All this data is already collected and stored for many years. In recent works, it was demonstrated that it is possible to apply simple machine learning methods to analyze collected digital footprints and to create psycho-demographic profiles of individuals. However, while these works clearly demonstrated the applicability of machine learning methods for such an analysis, the simple prediction models created still lack the accuracy necessary to be successfully applied for practical needs. We have assumed that using advanced deep machine learning methods may considerably increase the accuracy of predictions. We started with simple machine learning methods to estimate basic prediction performance and moved further by applying advanced methods based on shallow and deep neural networks. Then we compared the prediction power of the studied models and drew conclusions about their performance. Finally, we hypothesized how prediction accuracy can be further improved. As a result of this work, we provide the full source code used in the experiments for all interested researchers and practitioners in the corresponding GitHub repository. We believe that applying deep machine learning for psycho-demographic profiling may have an enormous impact on society (for good or worse) and provides means for Artificial Intelligence (AI) systems to better understand humans by creating their psychological profiles. Thus AI agents may achieve the human-like ability to participate in conversation (communication) flow by anticipating human opponents' reactions, expectations, and behavior.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
70,298
2311.01195
Batch Bayesian Optimization for Replicable Experimental Design
Many real-world experimental design problems (a) evaluate multiple experimental conditions in parallel and (b) replicate each condition multiple times due to large and heteroscedastic observation noise. Given a fixed total budget, this naturally induces a trade-off between evaluating more unique conditions while replicating each of them fewer times vs. evaluating fewer unique conditions and replicating each more times. Moreover, in these problems, practitioners may be risk-averse and hence prefer an input with both good average performance and small variability. To tackle both challenges, we propose the Batch Thompson Sampling for Replicable Experimental Design (BTS-RED) framework, which encompasses three algorithms. Our BTS-RED-Known and BTS-RED-Unknown algorithms, for, respectively, known and unknown noise variance, choose the number of replications adaptively rather than deterministically such that an input with a larger noise variance is replicated more times. As a result, despite the noise heteroscedasticity, both algorithms enjoy a theoretical guarantee and are asymptotically no-regret. Our Mean-Var-BTS-RED algorithm aims at risk-averse optimization and is also asymptotically no-regret. We also show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
404,947
2309.08744
Personalized Food Image Classification: Benchmark Datasets and New Baseline
Food image classification is a fundamental step of image-based dietary assessment, enabling automated nutrient analysis from food images. Many current methods employ deep neural networks to train on generic food image datasets that do not reflect the dynamism of real-life food consumption patterns, in which food images appear sequentially over time, reflecting the progression of what an individual consumes. Personalized food classification aims to address this problem by training a deep neural network using food images that reflect the consumption pattern of each individual. However, this problem is under-explored and there is a lack of benchmark datasets with individualized food consumption patterns due to the difficulty in data collection. In this work, we first introduce two benchmark personalized datasets including the Food101-Personal, which is created based on surveys of daily dietary patterns from participants in the real world, and the VFNPersonal, which is developed based on a dietary study. In addition, we propose a new framework for personalized food image classification by leveraging self-supervised learning and temporal image feature information. Our method is evaluated on both benchmark datasets and shows improved performance compared to existing works. The dataset has been made available at: https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,303
2308.11649
Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming
In recent years, the fields of artificial intelligence and web-based programming have seen tremendous advancements, enabling developers to create dynamic and interactive websites and applications. At the forefront of these advancements, creative AI tools and game-based methodologies have emerged as potent instruments, promising enhanced user experiences and increased engagement in educational environments. This chapter explores the potential of these tools and methodologies for interactive web-based programming, examining their benefits, limitations, and real-world applications. We examine the challenges and ethical considerations that arise when integrating these technologies into web development, such as privacy concerns and the potential for bias in AI-generated content. Through this exploration, we aim to provide insights into the exciting possibilities that creative AI tools and game-based methodologies offer for the future of web-based programming.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
387,225
2201.07902
Evaluating Machine Common Sense via Cloze Testing
Language models (LMs) show state-of-the-art performance for common sense (CS) question answering, but whether this ability implies a human-level mastery of CS remains an open question. Understanding the limitations and strengths of LMs can help researchers improve these models, potentially by developing novel ways of integrating external CS knowledge. We devise a series of tests and measurements to systematically quantify their performance on different aspects of CS. We propose the use of cloze testing combined with word embeddings to measure the LM's robustness and confidence. Our results show that although language models tend to achieve human-like accuracy, their confidence is subpar. Future work can leverage this information to build more complex systems, such as an ensemble of symbolic and distributed knowledge.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
276,162
2501.02180
Phase Retrieval by Quaternionic Reweighted Amplitude Flow on Image Reconstruction
Quaternionic signal processing provides powerful tools for efficiently managing color signals by preserving the intrinsic correlations among signal dimensions through quaternion algebra. In this paper, we address the quaternionic phase retrieval problem by systematically developing novel algorithms based on an amplitude-based model. Specifically, we propose the Quaternionic Reweighted Amplitude Flow (QRAF) algorithm, which is further enhanced by three of its variants: incremental, accelerated, and adapted QRAF algorithms. In addition, we introduce the Quaternionic Perturbed Amplitude Flow (QPAF) algorithm, which has linear convergence. Extensive numerical experiments on both synthetic data and real images demonstrate that our proposed methods significantly improve recovery performance and computational efficiency compared to state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
522,381
2312.12541
Blood Glucose Level Prediction: A Graph-based Explainable Method with Federated Learning
In the UK, approximately 400,000 people with type 1 diabetes (T1D) rely on insulin delivery due to insufficient pancreatic insulin production. Managing blood glucose (BG) levels is crucial, with continuous glucose monitoring (CGM) playing a key role. CGM, tracking BG every 5 minutes, enables effective blood glucose level prediction (BGLP) by considering factors like carbohydrate intake and insulin delivery. Recent research has focused on developing sequential models for BGLP using historical BG data, incorporating additional attributes such as carbohydrate intake, insulin delivery, and time. These methods have shown notable success in BGLP, with some providing temporal explanations. However, they often lack clear correlations between attributes and their impact on BGLP. Additionally, some methods raise privacy concerns by aggregating participant data to learn population patterns. Addressing these limitations, we introduced a graph attentive memory (GAM) model, combining a graph attention network (GAT) with a gated recurrent unit (GRU). GAT applies graph attention to model attribute correlations, offering transparent, dynamic attribute relationships. Attention weights dynamically gauge attribute significance over time. To ensure privacy, we employed federated learning (FL), facilitating secure population pattern analysis. Our method was validated using the OhioT1DM'18 and OhioT1DM'20 datasets from 12 participants, focusing on 6 key attributes. We demonstrated our model's stability and effectiveness through hyperparameter impact analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
416,998
1902.02829
Low-cost Measurement of Industrial Shock Signals via Deep Learning Calibration
Special high-end sensors with expensive hardware are usually needed to measure shock signals with high accuracy. In this paper, we show that cheap low-end sensors calibrated by deep neural networks are also capable of measuring high-g shocks accurately. First, we perform drop shock tests to collect a dataset of shock signals measured by sensors of different fidelity. Second, we propose a novel network to effectively learn both the signal peak and overall shape. The results show that the proposed network is capable of mapping low-end shock signals to their high-end counterparts with satisfactory accuracy. To the best of our knowledge, this is the first work to apply deep learning techniques to calibrate shock sensors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,955
1810.01018
Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation
In recent years, deep convolutional neural networks have achieved great success in many artificial intelligence applications. However, their enormous model size and massive computation cost have become the main obstacles to deploying such powerful algorithms in low-power and resource-limited mobile systems. As a countermeasure to this problem, deep neural networks with ternarized weights (i.e., -1, 0, +1) have been widely explored to greatly reduce the model size and computational cost, with limited accuracy degradation. In this work, we propose a novel ternarized neural network training method which simultaneously optimizes both weights and quantizer during training, differentiating it from prior works. Instead of fixed and uniform weight ternarization, we are the first to incorporate the thresholds of weight ternarization into a closed-form representation using the truncated Gaussian approximation, enabling simultaneous optimization of weights and quantizer through back-propagation training. With both the first and last layers ternarized, experiments on the ImageNet classification task show that our ternarized ResNet-18/34/50 has only 3.9/2.52/2.16% accuracy degradation in comparison to the full-precision counterparts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
109,311
2006.11890
Graph Backdoor
One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise. Despite the plethora of prior work on DNNs for continuous data (e.g., images), the vulnerability of graph neural networks (GNNs) for discrete-structured data (e.g., graphs) is largely unexplored, which is highly concerning given their increasing use in security-sensitive domains. To bridge this gap, we present GTA, the first backdoor attack on GNNs. Compared with prior work, GTA departs in significant ways: graph-oriented -- it defines triggers as specific subgraphs, including both topological structures and descriptive features, entailing a large design spectrum for the adversary; input-tailored -- it dynamically adapts triggers to individual graphs, thereby optimizing both attack effectiveness and evasiveness; downstream model-agnostic -- it can be readily launched without knowledge regarding downstream models or fine-tuning strategies; and attack-extensible -- it can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks, constituting severe threats for a range of security-critical applications. Through extensive evaluation using benchmark datasets and state-of-the-art models, we demonstrate the effectiveness of GTA. We further provide analytical justification for its effectiveness and discuss potential countermeasures, pointing to several promising research directions.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
183,403
2409.11828
Model-Free Generic Robust Control for Servo-Driven Actuation Mechanisms with Layered Insight into Energy Conversions
To advance theoretical solutions and address limitations in modeling complex servo-driven actuation systems experiencing high non-linearity and load disturbances, this paper aims to design a practical model-free generic robust control (GRC) framework for these mechanisms. This framework is intended to be applicable across all actuator systems encompassing electrical, hydraulic, or pneumatic servomechanisms, while also functioning within complex interactions among dynamic components and adhering to control input constraints. In this respect, the state-space model of actuator systems is decomposed into smaller subsystems that incorporate the first-principle equation of actuator motion dynamics and interactive energy conversion equations. This decomposition operates under the assumption that the comprehensive model of the servo-driven actuator system and energy conversion, uncertainties, load disturbances, and their bounds are unknown. Then, the GRC employs subsystem-based adaptive control strategies for each state-variant subsystem separately. Despite control input constraints and the unknown interactive system model, the GRC-applied actuator mechanism ensures uniform exponential stability and robustness in tracking desired motions. The framework features straightforward implementation and is experimentally evaluated by applying it to two industrial applications.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
489,334
2112.06721
PM-MMUT: Boosted Phone-Mask Data Augmentation using Multi-Modeling Unit Training for Phonetic-Reduction-Robust E2E Speech Recognition
Consonant and vowel reduction are often encountered in speech, which might cause performance degradation in automatic speech recognition (ASR). Our recently proposed learning strategy based on masking, Phone Masking Training (PMT), alleviates the impact of such phenomena in Uyghur ASR. Although PMT achieves remarkable improvements, there still exists room for further gains due to the granularity mismatch between the masking unit of PMT (phoneme) and the modeling unit (word-piece). To boost the performance of PMT, we propose a multi-modeling unit training (MMUT) architecture fused with PMT (PM-MMUT). The idea of the MMUT framework is to split the Encoder into two parts: acoustic feature sequences to phoneme-level representation (AF-to-PLR) and phoneme-level representation to word-piece-level representation (PLR-to-WPLR). It allows AF-to-PLR to be optimized by an intermediate phoneme-based CTC loss to learn the rich phoneme-level context information brought by PMT. Experimental results on Uyghur ASR show that the proposed approaches clearly outperform pure PMT. We also conduct experiments on the 960-hour Librispeech benchmark using ESPnet1, achieving about 10% relative WER reduction on all test sets without LM fusion compared with the latest official ESPnet1 pre-trained model.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
271,273
2410.08642
More than Memes: A Multimodal Topic Modeling Approach to Conspiracy Theories on Telegram
Research on conspiracy theories and related content online has traditionally focused on textual data. To address the increasing prevalence of (audio-)visual data on social media, and to capture the evolving and dynamic nature of this communication, researchers have begun to explore the potential of unsupervised approaches for analyzing multimodal online content. Our research contributes to this field by exploring the potential of multimodal topic modeling for analyzing conspiracy theories in German-language Telegram channels. Our work uses the BERTopic topic modeling approach in combination with CLIP for the analysis of textual and visual data. We analyze a corpus of ~40,000 Telegram messages posted in October 2023 in 571 German-language Telegram channels known for disseminating conspiracy theories and other deceptive content. We explore the potentials and challenges of this approach for studying a medium-sized corpus of user-generated, text-image online content. We offer insights into the dominant topics across modalities, different text and image genres discovered during the analysis, quantitative inter-modal topic analyses, and a qualitative case study of textual, visual, and multimodal narrative strategies in the communication of conspiracy theories.
false
false
false
true
false
false
false
false
true
false
false
true
false
false
false
false
false
true
497,210
2111.12880
Active Learning at the ImageNet Scale
Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNN) can achieve better performance when trained on this labeled subset. AL is especially impactful in industrial scale settings where data labeling costs are high and practitioners use every tool at their disposal to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study a combination of AL and SSP on ImageNet. We find that performance on small toy datasets -- the typical benchmark setting in the literature -- is not representative of performance on ImageNet due to the class imbalanced samples selected by an active learner. Among the existing baselines we test, popular AL algorithms across a variety of small and large scale settings fail to outperform random sampling. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that outperforms random sampling consistently by selecting more balanced samples for annotation than existing methods. Our code is available at: https://github.com/zeyademam/active_learning .
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
268,109
2306.08422
X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail
Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender's decision-making process, and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) and the COCO dataset and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results showed that X-Detect outperforms the state-of-the-art methods in distinguishing between benign and adversarial scenes for all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
373,409
2410.21119
A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning
One-shot federated learning (FL) limits the communication between the server and clients to a single round, which largely decreases the privacy leakage risks in traditional FLs requiring multiple communications. However, we find existing one-shot FL frameworks are vulnerable to distributional heterogeneity due to their insufficient focus on data heterogeneity while concentrating predominantly on model heterogeneity. Filling this gap, we propose a unified, data-free, one-shot federated learning framework (FedHydra) that can effectively address both model and data heterogeneity. Rather than applying existing value-only learning mechanisms, a structure-value learning mechanism is proposed in FedHydra. Specifically, a new stratified learning structure is proposed to cover data heterogeneity, and the value of each item during computation reflects model heterogeneity. By this design, the data and model heterogeneity issues are simultaneously monitored from different aspects during learning. Consequently, FedHydra can effectively mitigate both issues by minimizing their inherent conflicts. We compared FedHydra with three SOTA baselines on four benchmark datasets. Experimental results show that our method outperforms the previous one-shot FL methods in both homogeneous and heterogeneous settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
503,095
2211.03250
Uplink Sensing Using CSI Ratio in Perceptive Mobile Networks
Uplink sensing in perceptive mobile networks (PMNs), which uses uplink communication signals for sensing the environment around a base station, faces challenging issues of clock asynchronism and the requirement of a line-of-sight (LOS) path between transmitters and receivers. The channel state information (CSI) ratio has been applied to resolve these issues, however, current research on the CSI ratio is limited to Doppler estimation in a single dynamic path. This paper proposes an advanced parameter estimation scheme that can extract multiple dynamic parameters, including Doppler frequency, angle-of-arrival (AoA), and delay, in a communication uplink channel and completes the localization of multiple moving targets. Our scheme is based on the multi-element Taylor series of the CSI ratio that converts a nonlinear function of sensing parameters to linear forms and enables the applications of traditional sensing algorithms. Using the truncated Taylor series, we develop novel multiple-signal-classification grid searching algorithms for estimating Doppler frequencies and AoAs and use the least-square method to obtain delays. Both experimental and simulation results are provided, demonstrating that our proposed scheme can achieve good performances for sensing both single and multiple dynamic paths, without requiring the presence of a LOS path.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
328,879
2011.13527
TaylorGAN: Neighbor-Augmented Policy Update for Sample-Efficient Natural Language Generation
Score function-based natural language generation (NLG) approaches such as REINFORCE, in general, suffer from low sample efficiency and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a black box and ignore the gradient information. To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation by off-policy update and the first-order Taylor expansion. This approach enables us to train NLG models from scratch with smaller batch size -- without maximum likelihood pre-training, and outperforms existing GAN-based methods on multiple metrics of quality and diversity. The source code and data are available at https://github.com/MiuLab/TaylorGAN
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
208,503
1608.00191
New MDS codes with small sub-packetization and near-optimal repair bandwidth
An $(n, M)$ vector code $\mathcal{C} \subseteq \mathbb{F}^n$ is a collection of $M$ codewords where $n$ elements (from the field $\mathbb{F}$) in each of the codewords are referred to as code blocks. Assuming that $\mathbb{F} \cong \mathbb{B}^{\ell}$, the code blocks are treated as $\ell$-length vectors over the base field $\mathbb{B}$. Equivalently, the code is said to have the sub-packetization level $\ell$. This paper addresses the problem of constructing MDS vector codes which enable exact reconstruction of each code block by downloading small amount of information from the remaining code blocks. The repair bandwidth of a code measures the information flow from the remaining code blocks during the reconstruction of a single code block. This problem naturally arises in the context of distributed storage systems as the node repair problem [4]. Assuming that $M = |\mathbb{B}|^{k\ell}$, the repair bandwidth of an MDS vector code is lower bounded by $\big(\frac{n - 1}{n - k}\big)\cdot \ell$ symbols (over the base field $\mathbb{B}$) which is also referred to as the cut-set bound [4]. For all values of $n$ and $k$, the MDS vector codes that attain the cut-set bound with the sub-packetization level $\ell = (n-k)^{\lceil{{n}/{(n-k)}}\rceil}$ are known in the literature [23, 35]. This paper presents a construction for MDS vector codes which simultaneously ensures both small repair bandwidth and small sub-packetization level. The obtained codes have the smallest possible sub-packetization level $\ell = O(n - k)$ for an MDS vector code and the repair bandwidth which is at most twice the cut-set bound. The paper then generalizes this code construction so that the repair bandwidth of the obtained codes approach the cut-set bound at the cost of increased sub-packetization level. The constructions presented in this paper give MDS vector codes which are linear over the base field $\mathbb{B}$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
59,240
2205.13697
FedFormer: Contextual Federation with Attention in Reinforcement Learning
A core issue in multi-agent federated reinforcement learning is defining how to aggregate insights from multiple agents. This is commonly done by taking the average of each participating agent's model weights into one common model (FedAvg). We instead propose FedFormer, a novel federation strategy that utilizes Transformer Attention to contextually aggregate embeddings from models originating from different learner agents. In so doing, we attentively weigh the contributions of other agents with respect to the current agent's environment and learned relationships, thus providing a more effective and efficient federation. We evaluate our methods on the Meta-World environment and find that our approach yields significant improvements over FedAvg and non-federated Soft Actor-Critic single-agent methods. Our results compared to Soft Actor-Critic show that FedFormer achieves higher episodic return while still abiding by the privacy constraints of federated learning. Finally, we also demonstrate improvements in effectiveness with increased agent pools across all methods in certain tasks. This is contrasted by FedAvg, which fails to make noticeable improvements when scaled.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
299,042
2311.10054
When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models
Prompting serves as the major way humans interact with Large Language Models (LLM). Commercial AI systems commonly define the role of the LLM in system prompts. For example, ChatGPT uses ``You are a helpful assistant'' as part of its default system prompt. Despite current practices of adding personas to system prompts, it remains unclear how different personas affect a model's performance on objective tasks. In this study, we present a systematic evaluation of personas in system prompts. We curate a list of 162 roles covering 6 types of interpersonal relationships and 8 domains of expertise. Through extensive analysis of 4 popular families of LLMs and 2,410 factual questions, we demonstrate that adding personas in system prompts does not improve model performance across a range of questions compared to the control setting where no persona is added. Nevertheless, further analysis suggests that the gender, type, and domain of the persona can all influence the resulting prediction accuracies. We further experimented with a list of persona search strategies and found that, while aggregating results from the best persona for each question significantly improves prediction accuracy, automatically identifying the best persona is challenging, with predictions often performing no better than random selection. Overall, our findings suggest that while adding a persona may lead to performance gains in certain settings, the effect of each persona can be largely random. Code and data are available at https://github.com/Jiaxin-Pei/Prompting-with-Social-Roles.
true
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
408,400
2104.04485
A Data-Driven Approach to Full-Field Damage and Failure Pattern Prediction in Microstructure-Dependent Composites using Deep Learning
An image-based deep learning framework is developed in this paper to predict damage and failure in microstructure-dependent composite materials. The work is motivated by the complexity and computational cost of high-fidelity simulations of such materials. The proposed deep learning framework predicts the post-failure full-field stress distribution and crack pattern in two-dimensional representations of the composites based on the geometry of microstructures. The material of interest is selected to be a high-performance unidirectional carbon fiber-reinforced polymer composite. The deep learning framework contains two stacked fully-convolutional networks, namely, Generator 1 and Generator 2, trained sequentially. First, Generator 1 learns to translate the microstructural geometry to the full-field post-failure stress distribution. Then, Generator 2 learns to translate the output of Generator 1 to the failure pattern. A physics-informed loss function is also designed and incorporated to further improve the performance of the proposed framework and facilitate the validation process. In order to provide a sufficiently large data set for training and validating the deep learning framework, 4500 microstructural representations are synthetically generated and simulated in an efficient finite element framework. It is shown that the proposed deep learning approach can effectively predict the composites' post-failure full-field stress distribution and failure pattern, two of the most complex phenomena to simulate in computational solid mechanics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
229,403
2103.02396
$S^3$: Learnable Sparse Signal Superdensity for Guided Depth Estimation
Dense depth estimation plays a key role in multiple applications such as robotics, 3D reconstruction, and augmented reality. While sparse signals, e.g., from LiDAR and Radar, have been leveraged as guidance for enhancing dense depth estimation, the improvement is limited due to their low density and imbalanced distribution. To maximize the utility from the sparse source, we propose the $S^3$ technique, which expands the depth value from sparse cues while estimating the confidence of the expanded region. The proposed $S^3$ can be applied to various guided depth estimation approaches and trained end-to-end at different stages, including input, cost volume, and output. Extensive experiments demonstrate the effectiveness, robustness, and flexibility of the $S^3$ technique on LiDAR and Radar signals.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
222,952
2211.00147
A Machine Learning Tutorial for Operational Meteorology, Part II: Neural Networks and Deep Learning
Over the past decade the use of machine learning in meteorology has grown rapidly. Specifically, neural networks and deep learning have been used at an unprecedented rate. In order to fill the dearth of resources covering neural networks with a meteorological lens, this paper discusses machine learning methods in a plain language format that is targeted for the operational meteorological community. This is the second paper in a pair that aim to serve as a machine learning resource for meteorologists. While the first paper focused on traditional machine learning methods (e.g., random forest), here a broad spectrum of neural networks and deep learning methods are discussed. Specifically, this paper covers perceptrons, artificial neural networks, convolutional neural networks and U-networks. Like the part 1 paper, this manuscript discusses the terms associated with neural networks and their training. Then the manuscript provides some intuition behind every method and concludes by showing each method used in a meteorological example of diagnosing thunderstorms from satellite images (e.g., lightning flashes). This paper is accompanied by an open-source code repository to allow readers to explore neural networks using either the dataset provided (which is used in the paper) or as a template for alternate datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
327,766
1102.1261
Symmetry in behavior of complex social systems - discussion of models of crowd evacuation organized in agreement with symmetry conditions
Football stadium evacuation scenarios are discussed as models realizing ordered states, described as movements of individuals according to fields of displacements calculated for a given scenario. The symmetry of the evacuation space is taken into account in the calculation of the displacement field: the displacements related to every point of this space are presented in the coordinate frame best adapted to the given symmetry space group, namely the set of basis vectors of an irreducible representation of that group. In the presented model, the speeds of individuals at every point have the same magnitude. As results, the evacuation times and the average forces acting on individuals during the evacuation are given. Both parameters are compared with the same parameters obtained without symmetry considerations. They are calculated in a simulation procedure; a new program (using a modified Helbing model) has been elaborated and is presented in this work for carrying out the simulation tasks.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
9,058
2406.11353
$\texttt{MoE-RBench}$: Towards Building Reliable Language Models with Sparse Mixture-of-Experts
Mixture-of-Experts (MoE) has gained increasing popularity as a promising framework for scaling up large language models (LLMs). However, the reliability assessment of MoE lags behind its surging applications. Moreover, when transferred to new domains, such as in fine-tuning, MoE models sometimes underperform their dense counterparts. Motivated by this research gap and counter-intuitive phenomenon, we propose $\texttt{MoE-RBench}$, the first comprehensive assessment of SMoE reliability from three aspects: $\textit{(i)}$ safety and hallucination, $\textit{(ii)}$ resilience to adversarial attacks, and $\textit{(iii)}$ out-of-distribution robustness. Extensive models and datasets are tested to compare MoE to dense networks along these reliability dimensions. Our empirical observations suggest that with appropriate hyperparameters, training recipes, and inference techniques, we can build MoE models more reliably than dense LLMs. In particular, we find that the robustness of SMoE is sensitive to the basic training settings. We hope that this study can provide deeper insights into how to adapt a pre-trained MoE model to other tasks with higher generation security, quality, and stability. Codes are available at https://github.com/UNITES-Lab/MoE-RBench
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
464,858
1801.09946
"23andMe confirms: I'm super white" -- Analyzing Twitter Discourse On Genetic Testing
Recent progress in genomics is bringing genetic testing to the masses. Participatory public initiatives are underway to sequence the genome of millions of volunteers, and a new market is booming with a number of companies like 23andMe and AncestryDNA offering affordable tests directly to consumers. Consequently, news, experiences, and views on genetic testing are increasingly shared and discussed online and on social networks like Twitter. In this paper, we present a large-scale analysis of Twitter discourse on genetic testing. We collect 302K tweets from 113K users, posted over 2.5 years, by using thirteen keywords related to genetic testing companies and public initiatives as search keywords. We study both the tweets and the users posting them along several axes, aiming to understand who tweets about genetic testing, what they talk about, and how they use Twitter for that. Among other things, we find that tweets about genetic testing originate from accounts that overall appear to be interested in digital health and technology. Also, marketing efforts as well as announcements, such as the FDA's suspension of 23andMe's health reports, influence the type and the nature of user engagement. Finally, we report on users who share screenshots of their results, and raise a few ethical and societal questions as we find evidence of groups associating genetic testing with racist ideologies.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
89,201
1909.02027
An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction
Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
144,069
2003.00639
Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effectiveness of neural dialogue generation models. Moreover, there are so far no unified dialogue complexity measurements, and dialogue complexity embodies multiple attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity from multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula during the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
166,360
2211.04031
Hilbert Distillation for Cross-Dimensionality Networks
3D convolutional neural networks have revealed superior performance in processing volumetric data such as video and medical imaging. However, the competitive performance by leveraging 3D networks results in huge computational costs, which are far beyond that of 2D networks. In this paper, we propose a novel Hilbert curve-based cross-dimensionality distillation approach that facilitates the knowledge of 3D networks to improve the performance of 2D networks. The proposed Hilbert Distillation (HD) method preserves the structural information via the Hilbert curve, which maps high-dimensional (>=2) representations to one-dimensional continuous space-filling curves. Since the distilled 2D networks are supervised by the curves converted from dimensionally heterogeneous 3D features, the 2D networks are given an informative view in terms of learning structural information embedded in well-trained high-dimensional representations. We further propose a Variable-length Hilbert Distillation (VHD) method to dynamically shorten the walking stride of the Hilbert curve in activation feature areas and lengthen the stride in context feature areas, forcing the 2D networks to pay more attention to learning from activation features. The proposed algorithm outperforms the current state-of-the-art distillation techniques adapted to cross-dimensionality distillation on two classification tasks. Moreover, the distilled 2D networks by the proposed method achieve competitive performance with the original 3D networks, indicating the lightweight distilled 2D networks could potentially be the substitution of cumbersome 3D networks in the real-world scenario.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
329,113
2501.04766
Decoding rank metric Reed-Muller codes
In this article, we investigate the decoding of the rank metric Reed--Muller codes introduced by Augot, Couvreur, Lavauzelle and Neri in 2021. We propose a polynomial time algorithm that rests on the structure of Dickson matrices, works on any such code and corrects up to half the minimum distance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
523,340
2108.06281
Modal-Adaptive Gated Recoding Network for RGB-D Salient Object Detection
Multi-modal salient object detection models based on RGB-D information have better robustness in the real world. However, it remains nontrivial to adaptively balance effective multi-modal information in the feature fusion phase. In this letter, we propose a novel gated recoding network (GRNet) to evaluate the information validity of the two modalities and balance their influence. Our framework is divided into three phases: a perception phase, a recoding mixing phase, and a feature integration phase. First, a perception encoder is adopted to extract multi-level single-modal features, which lays the foundation for multi-modal semantic comparative analysis. Then, a modal-adaptive gate unit (MGU) is proposed to suppress invalid information and transfer the effective modal features to the recoding mixer and the hybrid branch decoder. The recoding mixer is responsible for recoding and mixing the balanced multi-modal information. Finally, the hybrid branch decoder completes the multi-level feature integration under the guidance of an optional edge guidance stream (OEGS). Experiments and analysis on eight popular benchmarks verify that our framework performs favorably against 9 state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
250,560
2302.13498
Pretraining De-Biased Language Model with Large-scale Click Logs for Document Ranking
Pre-trained language models have achieved great success in various large-scale information retrieval tasks. However, most pretraining tasks are based on counterfeit retrieval data, where the query produced by a tailored rule is assumed to be the user's issued query on the given document or passage. Therefore, we explore using large-scale click logs to pretrain a language model instead of relying on simulated queries. Specifically, we propose to use user behavior features to pretrain a debiased language model for document ranking. Extensive experiments on Baidu desensitized click logs validate the effectiveness of our method. Our team on WSDM Cup 2023 Pre-training for Web Search won 1st place with a Discounted Cumulative Gain @ 10 (DCG@10) score of 12.16525 on the final leaderboard.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
347,971
2004.08439
Scaling the training of particle classification on simulated MicroBooNE events to multiple GPUs
Measurements in Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors, such as the MicroBooNE detector at Fermilab, feature large, high fidelity event images. Deep learning techniques have been extremely successful in classification tasks of photographs, but their application to LArTPC event images is challenging, due to the large size of the events. Events in these detectors are typically two orders of magnitude larger than images found in classical challenges, like recognition of handwritten digits contained in the MNIST database or object recognition in the ImageNet database. Ideally, training would occur on many instances of the entire event data, instead of many instances of cropped regions of interest from the event data. However, such efforts lead to extremely long training cycles, which slow down the exploration of new network architectures and hyperparameter scans to improve the classification performance. We present studies of scaling a LArTPC classification problem on multiple architectures, spanning multiple nodes. The studies are carried out on simulated events in the MicroBooNE detector. We emphasize that it is beyond the scope of this study to optimize networks or extract the physics from any results here. Institutional computing at Pacific Northwest National Laboratory and the SummitDev machine at Oak Ridge National Laboratory's Leadership Computing Facility have been used. To our knowledge, this is the first use of state-of-the-art Convolutional Neural Networks for particle physics and their attendant compute techniques onto the DOE Leadership Class Facilities. We expect benefits to accrue particularly to the Deep Underground Neutrino Experiment (DUNE) LArTPC program, the flagship US High Energy Physics (HEP) program for the coming decades.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
173,062
2407.20876
Automatic Die Studies for Ancient Numismatics
Die studies are fundamental to quantifying ancient monetary production, providing insights into the relationship between coinage, politics, and history. The process requires tedious manual work, which limits the size of the corpora that can be studied. Few works have attempted to automate this task, and none have been properly released and evaluated from a computer vision perspective. We propose a fully automatic approach that introduces several innovations compared to previous methods. First, we rely on fast and robust local descriptor matching that is configured automatically. Second, the core of our proposal is a clustering-based approach that uses an intrinsic metric (one that does not need the ground-truth labels) to determine its critical hyper-parameters. We validate the approach on two corpora of Greek coins, propose an automatic implementation and evaluation of previous baselines, and show that our approach significantly outperforms them.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
477,315
2502.05679
Federated Learning with Reservoir State Analysis for Time Series Anomaly Detection
With a growing data privacy concern, federated learning has emerged as a promising framework to train machine learning models without sharing locally distributed data. In federated learning, local model training by multiple clients and model integration by a server are repeated only through model parameter sharing. Most existing federated learning methods assume training deep learning models, which are often computationally demanding. To deal with this issue, we propose federated learning methods with reservoir state analysis to seek computational efficiency and data privacy protection simultaneously. Specifically, our method relies on Mahalanobis Distance of Reservoir States (MD-RS) method targeting time series anomaly detection, which learns a distribution of reservoir states for normal inputs and detects anomalies based on a deviation from the learned distribution. Iterative updating of statistical parameters in the MD-RS enables incremental federated learning (IncFed MD-RS). We evaluate the performance of IncFed MD-RS using benchmark datasets for time series anomaly detection. The results show that IncFed MD-RS outperforms other federated learning methods with deep learning and reservoir computing models particularly when clients' data are relatively short and heterogeneous. We demonstrate that IncFed MD-RS is robust against reduced sample data compared to other methods. We also show that the computational cost of IncFed MD-RS can be reduced by subsampling from the reservoir states without performance degradation. The proposed method is beneficial especially in anomaly detection applications where computational efficiency, algorithm simplicity, and low communication cost are required.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
531,721
1302.0324
A New Constructive Method to Optimize Neural Network Architecture and Generalization
In this paper, after analyzing the reasons for poor generalization and overfitting in neural networks, we treat some noisy data as singular values of a continuous function, i.e., jump discontinuity points. The continuous part can be approximated with the simplest neural networks, which have good generalization performance and optimal network architecture, by traditional algorithms such as constructive algorithms for feed-forward neural networks with incremental training, the BP algorithm, the ELM algorithm, various other constructive algorithms, RBF approximation, and SVM. At the same time, we construct RBF neural networks to fit the singular values to within any given error, and we prove that a function with jump discontinuity points can be approximated by the simplest neural networks combined with decay RBF neural networks to within any given error, and can be constructively approximated by decay RBF neural networks to within any given error; the constructive part has no generalization influence on the whole machine learning system. This optimizes the neural network architecture and generalization performance and reduces overfitting by avoiding fitting the noisy data.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
21,708
1909.12321
Variational point-obstacle avoidance on Riemannian manifolds
In this letter we study variational obstacle avoidance problems on complete Riemannian manifolds. The problem consists of minimizing an energy functional depending on the velocity, covariant acceleration and a repulsive potential function used to avoid a static obstacle on the manifold, among a set of admissible curves. We derive the dynamical equations for extrema of the variational problem, in particular on compact connected Lie groups and Riemannian symmetric spaces. Numerical examples are presented to illustrate the proposed method.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
147,090
2306.12795
Learning Unseen Modality Interaction
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences. In this paper, we challenge this modality-complete assumption for multimodal learning and instead strive for generalization to unseen modality combinations during inference. We pose the problem of unseen modality interaction and introduce a first solution. It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved. This allows the information to be accumulated with a simple summation operation across available modalities. To reduce overfitting to less discriminative modality combinations during training, we further improve the model learning with pseudo-supervision indicating the reliability of a modality's prediction. We demonstrate that our approach is effective for diverse tasks and modalities by evaluating it for multimodal video classification, robot state regression, and multimedia retrieval. Project website: https://xiaobai1217.github.io/Unseen-Modality-Interaction/.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
375,068
2207.03692
Mining Discriminative Food Regions for Accurate Food Recognition
Automatic food recognition is the very first step towards passive dietary monitoring. In this paper, we address the problem of food recognition by mining discriminative food regions. Taking inspiration from Adversarial Erasing, a strategy that progressively discovers discriminative object regions for weakly supervised semantic segmentation, we propose a novel network architecture in which a primary network maintains the base accuracy of classifying an input image, an auxiliary network adversarially mines discriminative food regions, and a region network classifies the resulting mined regions. The global (the original input image) and the local (the mined regions) representations are then integrated for the final prediction. The proposed architecture denoted as PAR-Net is end-to-end trainable, and highlights discriminative regions in an online fashion. In addition, we introduce a new fine-grained food dataset named as Sushi-50, which consists of 50 different sushi categories. Extensive experiments have been conducted to evaluate the proposed approach. On three food datasets chosen (Food-101, Vireo-172, and Sushi-50), our approach performs consistently and achieves state-of-the-art results (top-1 testing accuracy of $90.4\%$, $90.2\%$, $92.0\%$, respectively) compared with other existing approaches. Dataset and code are available at https://github.com/Jianing-Qiu/PARNet
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
306,948
1601.07678
Extremal Relations Between Shannon Entropy and $\ell_{\alpha}$-Norm
The paper examines relationships between the Shannon entropy and the $\ell_{\alpha}$-norm for $n$-ary probability vectors, $n \ge 2$. More precisely, we investigate the tight bounds of the $\ell_{\alpha}$-norm with a fixed Shannon entropy, and vice versa. As applications of the results, we derive the tight bounds between the Shannon entropy and several information measures which are determined by the $\ell_{\alpha}$-norm, e.g., R\'{e}nyi entropy, Tsallis entropy, the $R$-norm information, and some diversity indices. Moreover, we apply these results to uniformly focusing channels. Then, we show the tight bounds of Gallager's $E_{0}$ functions with a fixed mutual information under a uniform input distribution.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,455
0805.4560
Rock mechanics modeling based on soft granulation theory
This paper describes the application of information granulation theory to the design of rock engineering flowcharts. Firstly, an overall flowchart based on information granulation theory is highlighted. Information granulation theory, in crisp (non-fuzzy) or fuzzy format, can take into account engineering experience (especially fuzzy, incomplete, or superfluous information) or engineering judgment at each step of the design procedure, while suitable modeling instruments are employed. In this manner, and to extend soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of Self-Organizing Map (SOM), Neuro-Fuzzy Inference System (NFIS), and Rough Set Theory (RST). The core of our algorithms is the balancing of crisp (rough, non-fuzzy) granules and sub-fuzzy granules within non-fuzzy information (initial granulation) over open-close iterations. Using different criteria for balancing, the best granules (information pockets) are obtained. Validation of our proposed methods on the data set of in-situ permeability in rock masses at the Shivashan dam, Iran, is highlighted.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
1,847
1809.11158
Universal and Dynamic Locally Repairable Codes with Maximal Recoverability via Sum-Rank Codes
Locally repairable codes (LRCs) are considered with equal or unequal localities, local distances and local field sizes. An explicit two-layer architecture with a sum-rank outer code is obtained, having disjoint local groups and achieving maximal recoverability (MR) for all families of local linear codes (MDS or not) simultaneously, up to a specified maximum locality $ r $. Furthermore, the local linear codes (thus the localities, local distances and local fields) can be efficiently and dynamically modified without global recoding or changes in architecture or outer code, while preserving the MR property, easily adapting to new configurations in storage or new hot and cold data. In addition, local groups and file components can be added, removed or updated without global recoding. The construction requires global fields of size roughly $ g^r $, for $ g $ local groups and maximum or specified locality $ r $. For equal localities, these global fields are smaller than those of previous MR-LRCs when $ r \leq h $ (global parities). For unequal localities, they provide an exponential field size reduction on all previous best known MR-LRCs. For bounded localities and a large number of local groups, the global erasure-correction complexity of the given construction is comparable to that of Tamo-Barg codes or Reed-Solomon codes with local replication, while local repair is as efficient as for the Cartesian product of the local codes. Reed-Solomon codes with local replication and Cartesian products are recovered from the given construction when $ r=1 $ and $ h = 0 $, respectively. The given construction can also be adapted to provide hierarchical MR-LRCs for all types of hierarchies and parameters. Finally, subextension subcodes and sum-rank alternant codes are introduced to obtain further exponential field size reductions, at the expense of lower information rates.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
109,067
2209.14245
Framework for Highway Traffic Profiling using Connected Vehicle Data
Connected vehicle (CV) data could potentially revolutionize the traffic monitoring landscape, as a new source of CV data collected exclusively from original equipment manufacturers (OEMs) has emerged in the commercial market in recent years. Compared to existing CV data used by agencies, the new generation of CV data has certain advantages, including nearly ubiquitous coverage, high temporal resolution, high spatial accuracy, and enriched vehicle telematics data (e.g., hard braking events). This paper proposes a traffic profiling framework that targets vehicle-level performance indexes across mobility, safety, riding comfort, traffic flow stability, and fuel consumption. A proof-of-concept study of a major interstate highway (I-280 in NJ) using the CV data illustrates the feasibility of going beyond traditional aggregated traffic metrics. Lastly, potential applications for both historical analysis and near real-time monitoring are discussed. The proposed framework can be easily scaled and is particularly valuable for agencies that wish to systematically monitor regional or statewide roadways without substantial investment in infrastructure-based sensing (and the associated ongoing maintenance costs).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
320,185
cs/9904001
A Proposal for the Establishment of Review Boards - a flexible approach to the selection of academic knowledge
Paper journals use a small number of trusted academics to select information on behalf of all their readers. This inflexibility in selection was justified by the expense of publishing. The advent of cheap distribution via the internet allows a new trade-off between time and expense and the flexibility of the selection process. This paper explores one such possible process, one where the role of mark-up and archiving is separated from that of review. The idea is that authors publish their papers on their own web pages or in a public paper archive, and a board of reviewers judges each paper on a number of different criteria. The detailed results of the reviews are stored in such a way as to enable readers to use these judgements to find the papers they want using search engines on the web. Thus, instead of journals using generic selection criteria, readers can set their own to suit their needs. The resulting system might be even cheaper to implement than web journals.
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
true
540,490
2111.00873
Probabilistic prediction of the heave motions of a semi-submersible by a deep learning problem model
The real-time motion prediction of a floating offshore platform refers to forecasting its motions in the following one- or two-wave cycles, which helps improve the performance of a motion compensation system and provides useful early warning information. In this study, we extend a deep learning (DL) model, which could predict the heave and surge motions of a floating semi-submersible 20 to 50 seconds ahead with good accuracy, to quantify its uncertainty of the predictive time series with the help of the dropout technique. By repeating the inference several times, it is found that the collection of the predictive time series is a Gaussian process (GP). The DL model with dropout learned a kernel inside, and the learning procedure was similar to GP regression. Adding noise into training data could help the model to learn more robust features from the training data, thereby leading to a better performance on test data with a wide noise level range. This study extends the understanding of the DL model to predict the wave excited motions of an offshore platform.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
264,369
2404.18972
Impact of whole-body vibrations on electrovibration perception varies with target stimulus duration
This study explores the impact of whole-body vibrations induced by external vehicle perturbations, such as aircraft turbulence, on the perception of electrovibration displayed on touchscreens. Electrovibration holds promise as a technology for providing tactile feedback on future touchscreens, addressing usability challenges in vehicle cockpits. However, its performance under dynamic conditions, such as during whole-body vibrations induced by turbulence, still needs to be explored. We measured the absolute detection thresholds of 15 human participants for short- and long-duration electrovibration stimuli displayed on a touchscreen, both in the absence and presence of two types of turbulence motion generated by a motion simulator. Concurrently, we measured participants' applied contact force and finger scan speeds. Significantly higher (38%) absolute detection thresholds were observed for short electrovibration stimuli than for long stimuli. Finger scan speeds in the direction of turbulence, applied forces, and force fluctuation rates increased during whole-body vibrations due to biodynamic feedthrough. As a result, turbulence also significantly increased the perception thresholds, but only for short-duration electrovibration stimuli. The results reveal that whole-body vibrations can impede the perception of short-duration electrovibration stimuli, due to involuntary finger movements and increased normal force fluctuations. Our findings offer valuable insights for the future design of touchscreens with tactile feedback in vehicle cockpits.
true
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
450,461
1508.01585
Applying Deep Learning to Answer Selection: A Study and An Open Task
We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3% on a test set, which indicates a great potential for practical use.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
45,800
1104.1745
Multi-User Diversity with Random Number of Users
Multi-user diversity is considered when the number of users in the system is random. The complete monotonicity of the error rate as a function of the (deterministic) number of users is established and it is proved that randomization of the number of users always leads to deterioration of average system performance at any average SNR. Further, using stochastic ordering theory, a framework for comparison of system performance for different user distributions is provided. For Poisson distributed users, the difference in error rate of the random and deterministic number of users cases is shown to asymptotically approach zero as the average number of users goes to infinity for any fixed average SNR. In contrast, for a finite average number of users and high SNR, it is found that randomization of the number of users deteriorates performance significantly, and the diversity order under fading is dominated by the smallest possible number of users. For Poisson distributed users communicating over Rayleigh faded channels, further closed-form results are provided for average error rate, and the asymptotic scaling law for ergodic capacity is also provided. Simulation results are provided to corroborate our analytical findings.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,929
2306.01264
Convex and Non-convex Optimization Under Generalized Smoothness
Classical analysis of convex and non-convex optimization methods often requires the Lipschitzness of the gradient, which limits the analysis to functions bounded by quadratics. Recent work relaxed this requirement to a non-uniform smoothness condition with the Hessian norm bounded by an affine function of the gradient norm, and proved convergence in the non-convex setting via gradient clipping, assuming bounded noise. In this paper, we further generalize this non-uniform smoothness condition and develop a simple, yet powerful analysis technique that bounds the gradients along the trajectory, thereby leading to stronger results for both convex and non-convex optimization problems. In particular, we obtain the classical convergence rates for (stochastic) gradient descent and Nesterov's accelerated gradient method in the convex and/or non-convex setting under this general smoothness condition. The new analysis approach does not require gradient clipping and allows heavy-tailed noise with bounded variance in the stochastic setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
370,375
2308.02339
Improving Scene Graph Generation with Superpixel-Based Interaction Learning
Recent advances in Scene Graph Generation (SGG) typically model the relationships among entities utilizing box-level features from pre-defined detectors. We argue that an overlooked problem in SGG is the coarse-grained interactions between boxes, which inadequately capture contextual semantics for relationship modeling, practically limiting the development of the field. In this paper, we take the initiative to explore and propose a generic paradigm termed Superpixel-based Interaction Learning (SIL) to remedy coarse-grained interactions at the box level. It allows us to model fine-grained interactions at the superpixel level in SGG. Specifically, (i) we treat a scene as a set of points and cluster them into superpixels representing sub-regions of the scene. (ii) We explore intra-entity and cross-entity interactions among the superpixels to enrich fine-grained interactions between entities at an earlier stage. Extensive experiments on two challenging benchmarks (Visual Genome and Open Image V6) prove that our SIL enables fine-grained interaction at the superpixel level above previous box-level methods, and significantly outperforms previous state-of-the-art methods across all metrics. More encouragingly, the proposed method can be applied to boost the performance of existing box-level approaches in a plug-and-play fashion. In particular, SIL brings an average improvement of 2.0% mR (even up to 3.4%) of baselines for the PredCls task on Visual Genome, which facilitates its integration into any existing box-level method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
383,580
1707.07605
Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking
Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks. Training these models requires large, representative datasets and for most IR tasks, such data contains sensitive information from users. Privacy and confidentiality concerns prevent many data owners from sharing the data, thus today the research community can only benefit from research on large-scale datasets in a limited manner. In this paper, we discuss privacy preserving mimic learning, i.e., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal. We present the results of preliminary experiments in which we apply the idea of mimic learning and privacy preserving mimic learning for the task of document re-ranking as one of the core IR tasks. This research is a step toward laying the ground for enabling researchers from data-rich environments to share knowledge learned from actual users' data, which should facilitate research collaborations.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
77,663
2104.06548
Solving weakly supervised regression problem using low-rank manifold regularization
We solve a weakly supervised regression problem. By "weakly" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to the presence of random noise or other reasons such as lack of resources. The solution process requires optimizing a certain objective function (the loss function), which combines manifold regularization and low-rank matrix decomposition techniques. These low-rank approximations allow us to speed up all matrix calculations and reduce storage requirements. This is especially crucial for large datasets. Ensemble clustering is used for obtaining the co-association matrix, which we consider as the similarity matrix. The utilization of these techniques allows us to increase the quality and stability of the solution. In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
230,108
1601.07446
A First Attempt to Cloud-Based User Verification in Distributed System
In this paper, the idea of client verification in distributed systems is presented. The proposed solution presents a sample system where client verification through cloud resources using an input signature is discussed. The proposed method has been examined for different signatures. Research results are presented and discussed to show potential advantages.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
true
false
true
51,430
2303.03786
Stability of the personal relationship networks in a longitudinal study of middle school students
The personal network of relationships is structured in circles of friendships, that go from the most intense relationships to the least intense ones. While this is a well established result, little is known about the stability of those circles and their evolution in time. To shed light on this issue, we study the temporal evolution of friendships among teenagers during two consecutive academic years by means of a survey administered on five occasions. We show that the first two circles, best friends and friends, can be clearly observed in the survey but also that being in one or the other leads to more or less stable relationships. We find that being in the same class is one of the key drivers of friendship evolution. We also observe an almost constant degree of reciprocity in the relationships, around 60%, a percentage influenced both by being in the same class and by gender homophily. Not only do our results confirm the mounting evidence supporting the circle structure of human social networks, but they also show that these structures persist in time despite the turnover of individual relationships -- a fact that may prove particularly useful for understanding the social environment in middle schools.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
349,852
1904.04084
ContextDesc: Local Descriptor Augmentation with Cross-Modality Context
Most existing studies on learning local features focus on the patch-based descriptions of individual keypoints, while neglecting the spatial relations established from their keypoint locations. In this paper, we go beyond the local detail representation by introducing context awareness to augment off-the-shelf local feature descriptors. Specifically, we propose a unified learning framework that leverages and aggregates the cross-modality contextual information, including (i) visual context from high-level image representation, and (ii) geometric context from 2D keypoint distribution. Moreover, we propose an effective N-pair loss that eschews the empirical hyper-parameter search and improves the convergence. The proposed augmentation scheme is lightweight compared with the raw local feature description, while improving remarkably on several large-scale benchmarks with diversified scenes, which demonstrates both strong practicality and generalization ability in geometric matching applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,936
2411.17863
LongKey: Keyphrase Extraction for Long Documents
In an era of information overload, manually annotating the vast and growing corpus of documents and scholarly papers is increasingly impractical. Automated keyphrase extraction addresses this challenge by identifying representative terms within texts. However, most existing methods focus on short documents (up to 512 tokens), leaving a gap in processing long-context documents. In this paper, we introduce LongKey, a novel framework for extracting keyphrases from lengthy documents, which uses an encoder-based language model to capture extended text intricacies. LongKey uses a max-pooling embedder to enhance keyphrase candidate representation. Validated on the comprehensive LDKP datasets and six diverse, unseen datasets, LongKey consistently outperforms existing unsupervised and language model-based keyphrase extraction methods. Our findings demonstrate LongKey's versatility and superior performance, marking an advancement in keyphrase extraction for varied text lengths and domains.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
511,637
1905.10720
Gated Group Self-Attention for Answer Selection
Answer selection (answer ranking) is one of the key steps in many kinds of question answering (QA) applications, where deep models have achieved state-of-the-art performance. Among these deep models, recurrent neural network (RNN) based models are most popular, typically with better performance than convolutional neural network (CNN) based models. Nevertheless, it is difficult for RNN based models to capture the information about long-range dependency among words in the sentences of questions and answers. In this paper, we propose a new deep model, called gated group self-attention (GGSA), for answer selection. GGSA is inspired by global self-attention which is originally proposed for machine translation and has not been explored in answer selection. GGSA tackles the problem of global self-attention that local and global information cannot be well distinguished. Furthermore, an interaction mechanism between questions and answers is also proposed to enhance GGSA by a residual structure. Experimental results on two popular QA datasets show that GGSA can outperform existing answer selection models to achieve state-of-the-art performance. Furthermore, GGSA can also achieve higher accuracy than global self-attention for the answer selection task, with a lower computation cost.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
132,171
2406.20044
Electrostatics-based particle sampling and approximate inference
A new particle-based sampling and approximate inference method, based on electrostatics and Newtonian mechanics principles, is introduced with theoretical grounding, algorithm design and experimental validation. This method simulates an interacting particle system (IPS) where particles, i.e. the freely-moving negative charges and spatially-fixed positive charges with magnitudes proportional to the target distribution, interact with each other via attraction and repulsion induced by the resulting electric fields described by Poisson's equation. The IPS evolves towards a steady state where the distribution of negative charges conforms to the target distribution. This physics-inspired method offers deterministic, gradient-free sampling and inference, achieving performance comparable to other particle-based and MCMC methods in benchmark tasks of inferring complex densities, Bayesian logistic regression and dynamical system identification. A discrete-time, discrete-space algorithmic design, readily extendable to continuous time and space, is provided for usage in more general inference problems occurring in probabilistic machine learning scenarios such as Bayesian inference, generative modelling, and beyond.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
468,646