id: string (lengths 9 to 16)
title: string (lengths 4 to 278)
abstract: string (lengths 3 to 4.08k)
cs.HC: bool (2 classes)
cs.CE: bool (2 classes)
cs.SD: bool (2 classes)
cs.SI: bool (2 classes)
cs.AI: bool (2 classes)
cs.IR: bool (2 classes)
cs.LG: bool (2 classes)
cs.RO: bool (2 classes)
cs.CL: bool (2 classes)
cs.IT: bool (2 classes)
cs.SY: bool (2 classes)
cs.CV: bool (2 classes)
cs.CR: bool (2 classes)
cs.CY: bool (2 classes)
cs.MA: bool (2 classes)
cs.NE: bool (2 classes)
cs.DB: bool (2 classes)
Other: bool (2 classes)
__index_level_0__: int64 (range 0 to 541k)
2308.08167
A Quantum Approximation Scheme for k-Means
We give a quantum approximation scheme (i.e., $(1 + \varepsilon)$-approximation for every $\varepsilon > 0$) for the classical $k$-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset $V$ with $N$ points in $\mathbb{R}^d$ stored in QRAM data structure, our quantum algorithm runs in time $\tilde{O} \left( 2^{\tilde{O}(\frac{k}{\varepsilon})} \eta^2 d\right)$ and with high probability outputs a set $C$ of $k$ centers such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here $C_{OPT}$ denotes the optimal $k$-centers, $cost(.)$ denotes the standard $k$-means cost function (i.e., the sum of the squared distance of points to the closest center), and $\eta$ is the aspect ratio (i.e., the ratio of maximum distance to minimum distance). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of $(1+\varepsilon)$ for the $k$-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
385,795
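The k-means cost function used in the abstract above (the sum of squared distances of points to their closest center) is simple to state concretely. The sketch below is illustrative only; the function name and array layout are assumptions, not code from the paper.

```python
import numpy as np

def kmeans_cost(V, C):
    """Standard k-means cost: cost(V, C) = sum over points v of
    min over centers c of ||v - c||^2."""
    V = np.asarray(V, dtype=float)   # (N, d) data points
    C = np.asarray(C, dtype=float)   # (k, d) candidate centers
    # Pairwise squared distances via broadcasting, shape (N, k)
    d2 = ((V[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    # Each point is charged to its closest center
    return d2.min(axis=1).sum()

# A (1 + eps)-approximation guarantee, as in the abstract, means the
# returned centers C satisfy
#   kmeans_cost(V, C) <= (1 + eps) * kmeans_cost(V, C_opt).
```

For example, `kmeans_cost([[0, 0], [2, 0]], [[0, 0]])` charges the second point its squared distance 4 to the single center.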
2407.04061
Detect Closer Surfaces that can be Seen: New Modeling and Evaluation in Cross-domain 3D Object Detection
The performance of domain adaptation technologies has not yet reached an ideal level in the current 3D object detection field for autonomous driving, mainly due to significant differences in the sizes of vehicles, as well as in the environments they operate in, when applied across domains. These factors together hinder the effective transfer and application of knowledge learned from specific datasets. Since the existing evaluation metrics were initially designed for evaluation on a single domain by calculating the 2D or 3D overlap between the predicted and ground-truth bounding boxes, they often suffer from an overfitting problem caused by the size differences among datasets. This raises a fundamental question about evaluating the cross-domain performance of 3D object detection models: do we really need models to maintain excellent performance in their original 3D bounding boxes after being applied across domains? From a practical application perspective, one of our main focuses is actually on preventing collisions between vehicles and other obstacles, especially in cross-domain scenarios where correctly predicting the size of vehicles is much more difficult. In other words, as long as a model can accurately identify the closest surfaces to the ego vehicle, it is sufficient to effectively avoid obstacles. In this paper, we propose two metrics to measure a 3D object detection model's ability to detect the surfaces closer to the sensor on the ego vehicle, which can be used to evaluate cross-domain performance more comprehensively and reasonably. Furthermore, we propose a refinement head, named EdgeHead, to guide models to focus more on the learnable closer surfaces, which greatly improves the cross-domain performance of existing models not only under our new metrics, but also under the original BEV/3D metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
470,409
2402.09650
Foul prediction with estimated poses from soccer broadcast video
Recent advances in computer vision have made significant progress in tracking and pose estimation of sports players. However, there have been fewer studies on behavior prediction with pose estimation in sports. In particular, predicting soccer fouls is challenging because of the small image size of each player and the difficulty of using information such as the ball position and player poses. In our research, we introduce an innovative deep learning approach for anticipating soccer fouls. This method integrates video data, bounding box positions, image details, and pose information by curating a novel soccer foul dataset. Our model uses a combination of convolutional and recurrent neural networks (CNNs and RNNs) to effectively merge information from these four modalities. The experimental results show that our full model outperformed the ablated models, and that the RNN modules, the bounding box position and image, and the estimated pose were all useful for foul prediction. Our findings have important implications for a deeper understanding of foul play in soccer and provide a valuable reference for future research and practice in this area.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
429,613
1809.09747
Analyzing CDR/IPDR data to find People Network from Encrypted Messaging Services
Criminals are increasingly using mobile communication applications, like WhatsApp, that have end-to-end encryption to connect with each other. This makes traditional analysis of call graphs, or traffic analysis, virtually impossible, and so is a hindrance for law enforcement personnel. Old methods of traffic analysis have been rendered useless, and criminals, including arms dealers and terrorists, are able to engage in criminal activity undetected by police. At present, law enforcement must use extensive manual effort to parse the data provided by cell companies to extract information. We have built a system that analyses cellular GPRS metadata, builds profiles, and makes explicit the potential call connections that are implicit in the dataset. This paper describes our approach and system in detail and includes the results of our evaluation using an anonymized dataset from the Delhi Police. Our system permits call graph analysis to be done and significantly reduces the time needed for the data analysis process.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
108,763
2410.24200
Length-Induced Embedding Collapse in Transformer-based Models
Text embeddings enable various applications, but their performance deteriorates on longer texts. In this paper, we find that the performance degradation is due to a phenomenon called Length Collapse, where longer text embeddings collapse into a narrow space. This collapse results in a distributional inconsistency between embeddings of different text lengths, ultimately hurting the performance of downstream tasks. Theoretically, by considering that the self-attention mechanism inherently functions as a low-pass filter, we prove that long sequences increase the attenuation rate of this low-pass filtering effect. As the layers go deeper, excessive low-pass filtering causes the token signals to retain only their direct-current (DC) component, which means the input token feature maps collapse into a narrow space, especially for long texts. Based on the above analysis, we propose to mitigate the undesirable length collapse limitation by introducing a temperature into the softmax, which achieves a higher attenuation rate in the low-pass filter. The tuning-free method, called TempScale, can be plugged into multiple transformer-based embedding models. Empirically, we demonstrate that TempScale improves existing embedding models, especially on long text inputs, bringing up to 0.53% performance gains on 40 datasets from the Massive Text Embedding Benchmark (MTEB) and 0.82% performance gains on 4 datasets from LongEmbed, which specifically focuses on long-context retrieval.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
504,373
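TempScale's core operation, introducing a temperature into the softmax over attention scores, can be illustrated mechanically. This is a generic sketch of temperature scaling, not the authors' implementation; the function names and the example temperature values are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(scores, temperature=1.0):
    """Scale attention logits by 1/temperature before the softmax.
    A temperature below 1 sharpens the resulting distribution;
    a temperature above 1 flattens it."""
    return softmax(scores / temperature, axis=-1)

scores = np.array([1.0, 2.0, 3.0])
sharp = attention_weights(scores, temperature=0.5)
smooth = attention_weights(scores, temperature=2.0)
# The sharper distribution concentrates more mass on the largest logit
assert sharp.max() > smooth.max()
```

Because the change is a single scalar rescaling of the logits, it can be plugged into any softmax-attention model without retraining, which matches the "tuning-free" claim in the abstract.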
2308.09285
RFDforFin: Robust Deep Forgery Detection for GAN-generated Fingerprint Images
With the rapid development of image generation technologies, the malicious abuse of GAN-generated fingerprint images poses a significant threat to public safety in certain circumstances. Although existing universal deep forgery detection approaches can be applied to detect fake fingerprint images, they are easily attacked and have poor robustness. Meanwhile, there is no specifically designed deep forgery detection method for fingerprint images. In this paper, we propose, to the best of our knowledge, the first deep forgery detection approach for fingerprint images, which combines unique ridge features of fingerprints with generation artifacts of GAN-generated images. Specifically, we first construct a ridge stream, which exploits the grayscale variations along the ridges to extract unique fingerprint-specific features. Then, we construct a generation artifact stream, in which the FFT-based spectra of the input fingerprint images are exploited to extract more robust generation artifact features. Finally, the unique ridge features and generation artifact features are fused for binary classification (i.e., real or fake). Comprehensive experiments demonstrate that our proposed approach is effective and robust with low complexity.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
386,221
1507.03729
Optimization of Code Rates in SISOME Wiretap Channels
We propose a new framework for determining the wiretap code rates of single-input single-output multi-antenna eavesdropper (SISOME) wiretap channels when the capacity of the eavesdropper's channel is not available at the transmitter. In our framework we introduce the effective secrecy throughput (EST) as a new performance metric that explicitly captures the two key features of wiretap channels, namely, reliability and secrecy. Notably, the EST measures the average rate of the confidential information transmitted from the transmitter to the intended receiver without being eavesdropped on. We provide easy-to-implement methods to determine the wiretap code rates for two transmission schemes: 1) adaptive transmission scheme in which the capacity of the main channel is available at the transmitter and 2) fixed-rate transmission scheme in which the capacity of the main channel is not available at the transmitter. Such determinations are further extended into an absolute-passive eavesdropping scenario where even the average signal-to-noise ratio of the eavesdropper's channel is not available at the transmitter. Notably, our solutions for the wiretap code rates do not require us to set reliability or secrecy constraints for the transmission within wiretap channels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
45,099
2405.15778
Investigation of Energy-efficient AI Model Architectures and Compression Techniques for "Green" Fetal Brain Segmentation
Artificial intelligence has contributed to advancements across various industries. However, the rapid growth of artificial intelligence technologies also raises concerns about their environmental impact, due to the carbon footprint associated with training computational models. Fetal brain segmentation in medical imaging is challenging due to the small size of the fetal brain and the limited image quality of fast 2D sequences. Deep neural networks are a promising method to overcome this challenge. In this context, the construction of larger models requires extensive data and computing power, leading to high energy consumption. Our study aims to explore model architectures and compression techniques that promote energy efficiency by optimizing the trade-off between accuracy and energy consumption through various strategies such as lightweight network design, architecture search, and optimized distributed training tools. We have identified several effective strategies, including optimized data loading, modern optimizers, distributed training, and reduced floating-point precision, used with lightweight model architectures whose parameters are tuned according to the available computing resources. Our findings demonstrate that these methods lead to satisfactory model performance with low energy consumption during deep neural network training for medical image segmentation.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
457,094
2301.01087
Neural Point Catacaustics for Novel-View Synthesis of Reflections
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
339,126
1403.2871
Shape-Based Plagiarism Detection for Flowchart Figures in Texts
Plagiarism detection is a well-known problem in the academic arena. Copying from other people is considered a serious offence that needs to be checked. Many plagiarism detection systems, such as Turnitin, have been developed to provide these checks. Most, if not all, discard the figures and charts before checking for plagiarism. Discarding the figures and charts leaves loopholes that people can take advantage of: figures and charts can be plagiarized easily without the current plagiarism systems detecting it. There are very few papers that discuss flowchart plagiarism detection. Therefore, there is a need to develop a system that will detect plagiarism in figures and charts. This paper presents a method for detecting flowchart figure plagiarism based on shape-based image processing and multimedia retrieval. The method managed to retrieve flowcharts with ranked similarity according to different matching sets.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
31,516
1805.08349
A Solvable High-Dimensional Model of GAN
We present a theoretical analysis of the training process for a single-layer GAN fed by high-dimensional input data. The training dynamics of the proposed model at both microscopic and macroscopic scales can be exactly analyzed in the high-dimensional limit. In particular, we prove that the macroscopic quantities measuring the quality of the training process converge to a deterministic process characterized by an ordinary differential equation (ODE), whereas the microscopic states containing all the detailed weights remain stochastic, whose dynamics can be described by a stochastic differential equation (SDE). This analysis provides a new perspective different from recent analyses in the limit of small learning rate, where the microscopic state is always considered deterministic, and the contribution of noise is ignored. From our analysis, we show that the level of the background noise is essential to the convergence of the training process: setting the noise level too strong leads to failure of feature recovery, whereas setting the noise too weak causes oscillation. Although this work focuses on a simple copy model of GAN, we believe the analysis methods and insights developed here would prove useful in the theoretical understanding of other variants of GANs with more advanced training algorithms.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
98,115
2111.10302
Instance-Adaptive Video Compression: Improving Neural Codecs by Training on the Test Set
We introduce a video compression algorithm based on instance-adaptive learning. On each video sequence to be transmitted, we finetune a pretrained compression model. The optimal parameters are transmitted to the receiver along with the latent code. By entropy-coding the parameter updates under a suitable mixture model prior, we ensure that the network parameters can be encoded efficiently. This instance-adaptive compression algorithm is agnostic about the choice of base model and has the potential to improve any neural video codec. On UVG, HEVC, and Xiph datasets, our codec improves the performance of a scale-space flow model by between 21% and 27% BD-rate savings, and that of a state-of-the-art B-frame model by 17 to 20% BD-rate savings. We also demonstrate that instance-adaptive finetuning improves the robustness to domain shift. Finally, our approach reduces the capacity requirements of compression models. We show that it enables a competitive performance even after reducing the network size by 70%.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
267,280
1805.03110
Secret Key Generation for Minimally Connected Hypergraphical Sources
This paper investigates the secret key generation in the multiterminal source model, where users observing correlated sources discuss interactively under limited rates to agree on a secret key. We focus on a class of sources representable by minimally connected hypergraphs. For such sources, we give a single-letter explicit characterization of the region of achievable secret key rate and public discussion rate tuple. This is the first result that completely characterizes the achievable rate region for a multiterminal source model, which is beyond the PIN model on a tree. We also obtain an explicit formula for the maximum achievable secret key rate, called the constrained secrecy capacity, as a function of the total discussion rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
96,988
2012.12265
Generative Interventions for Causal Learning
We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts. Discriminative models often learn naturally occurring spurious correlations, which cause them to fail on images outside of the training distribution. In this paper, we show that we can steer generative models to manufacture interventions on features caused by confounding factors. Experiments, visualizations, and theoretical results show this method learns robust representations more consistent with the underlying causal relationships. Our approach improves performance on multiple datasets demanding out-of-distribution generalization, and we demonstrate state-of-the-art performance generalizing from ImageNet to ObjectNet dataset.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
212,875
2304.10984
IBBT: Informed Batch Belief Trees for Motion Planning Under Uncertainty
In this work, we propose the Informed Batch Belief Trees (IBBT) algorithm for motion planning under motion and sensing uncertainties. The original stochastic motion planning problem is divided into a deterministic motion planning problem and a graph search problem. We solve the deterministic planning problem using sampling-based methods such as PRM or RRG to construct a graph of nominal trajectories. Then, an informed cost-to-go heuristic for the original problem is computed based on the nominal trajectory graph. Finally, we grow a belief tree by searching over the graph using the proposed heuristic. IBBT interleaves batch state sampling, nominal trajectory graph construction, heuristic computation, and search over the graph to find belief-space motion plans. IBBT is an anytime, incremental algorithm. As more batches of samples are added to the graph, the algorithm finds motion plans that converge to the optimal one. IBBT is efficient because it reuses results between sequential iterations. The belief tree search is an ordered search guided by an informed heuristic. We test IBBT in different planning environments. Our numerical investigation confirms that IBBT finds non-trivial motion plans and is faster than previous similar methods.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
359,625
2210.05839
SEAL : Interactive Tool for Systematic Error Analysis and Labeling
With the advent of Transformers, large language models (LLMs) have saturated well-known NLP benchmarks and leaderboards with high aggregate performance. However, these models often fail systematically on tail data or rare groups that are not obvious in aggregate evaluation. Identifying such problematic data groups is even more challenging when there are no explicit labels (e.g., ethnicity, gender, etc.), and it is further compounded for NLP datasets by the lack of visual features to characterize failure modes (e.g., Asian males, animals indoors, waterbirds on land, etc.). This paper introduces an interactive Systematic Error Analysis and Labeling (SEAL) tool that uses a two-step approach to first identify high-error slices of data and then, in the second step, introduce methods to give human-understandable semantics to those underperforming slices. We explore a variety of methods for deriving coherent semantics for the error groups, using language models for semantic labeling and a text-to-image model for generating visual features. The SEAL toolkit and a demo screencast are available at https://huggingface.co/spaces/nazneen/seal.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
323,022
2206.12464
Motion Estimation for Large Displacements and Deformations
Large displacement optical flow is an integral part of many computer vision tasks. Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, making them sensitive to noise in the sparse matches, deformations, and arbitrarily large displacements. This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations. A multi-scale hybrid matching approach is performed on the image pairs. Coarse-scale clusters formed by classifying pixels according to their feature descriptors are matched using the clusters' context descriptors. We apply a multi-scale graph matching on the finer-scale superpixels contained within each matched pair of coarse-scale clusters. Small clusters that cannot be further subdivided are matched using localized feature matching. Together, these initial matches form the flow, which is propagated by an edge-preserving interpolation and variational refinement. Our approach does not require training and is robust to substantial displacements and rigid and non-rigid transformations due to motion in the scene, making it ideal for large-scale imagery such as Wide-Area Motion Imagery (WAMI). More notably, HybridFlow works on directed graphs of arbitrary topology representing perceptual groups, which improves motion estimation in the presence of significant deformations. We demonstrate HybridFlow's superior performance to state-of-the-art variational techniques on two benchmark datasets and report comparable results with state-of-the-art deep-learning-based techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
304,608
2104.07878
Histopathology WSI Encoding based on GCNs for Scalable and Efficient Retrieval of Diagnostically Relevant Regions
Content-based histopathological image retrieval (CBHIR) has become popular in recent years in the domain of histopathological image analysis. CBHIR systems provide auxiliary diagnostic information for pathologists by searching for and returning regions that are similar in content to the region of interest (ROI) from a pre-established database. However, it is challenging yet significant for clinical applications to retrieve diagnostically relevant regions for a query ROI from a database that consists of histopathological whole slide images (WSIs). In this paper, we propose a novel framework for region retrieval from a WSI database based on hierarchical graph convolutional networks (GCNs) and hashing. Compared to existing CBHIR frameworks, the structural information of the WSI is preserved through the graph embedding of the GCNs, which makes the retrieval framework more sensitive to regions that are similar in tissue distribution. Moreover, benefiting from the hierarchical GCN structure, the proposed framework scales well with both the size and shape variation of ROIs. It allows the pathologist to define query regions using free curves according to the appearance of the tissue. Thirdly, retrieval is achieved using hashing, which ensures the framework is efficient and thereby adequate for practical large-scale WSI databases. The proposed method was validated on two public datasets for histopathological WSI analysis and compared to state-of-the-art methods. The proposed method achieved a mean average precision above 0.857 on the ACDC-LungHP dataset and above 0.864 on the Camelyon16 dataset in the irregular region retrieval tasks, which is superior to the state-of-the-art methods. The average retrieval time from a database with 120 WSIs is 0.802 ms.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
230,582
2501.02410
JammingSnake: A follow-the-leader continuum robot with variable stiffness based on fiber jamming
Follow-the-leader (FTL) motion is essential for continuum robots operating in fragile and confined environments. It allows the robot to exert minimal force on its surroundings, reducing the risk of damage. This paper presents a novel design of a snake-like robot capable of achieving FTL motion by integrating fiber jamming modules (FJMs). The proposed robot can dynamically adjust its stiffness during propagation and interaction with the environment. An algorithm is developed to independently control the tendon and FJM insertion movements, allowing the robot to maintain its shape while minimizing the forces exerted on surrounding structures. To validate the proposed design, comparative tests were conducted between a traditional tendon-driven robot and the novel design under different configurations. The results demonstrate that our design relies significantly less on contact with the surroundings to maintain its shape. This highlights its potential for safer and more effective operations in delicate environments, such as minimally invasive surgery (MIS) or industrial in-situ inspection.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
522,471
2102.00461
Multilingual Email Zoning
The segmentation of emails into functional zones (also dubbed email zoning) is a relevant preprocessing step for most NLP tasks that deal with emails. However, despite the multilingual character of emails and their applications, previous literature on email zoning corpora and systems was developed essentially for English. In this paper, we analyse the existing email zoning corpora and propose a new multilingual benchmark composed of 625 emails in Portuguese, Spanish, and French. Moreover, we introduce OKAPI, the first multilingual email segmentation model, based on a language-agnostic sentence encoder. Besides generalizing well to unseen languages, our model is competitive with current English benchmarks and reaches new state-of-the-art performance on domain adaptation tasks in English.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
217,790
2202.00817
Do Differentiable Simulators Give Better Policy Gradients?
Differentiable simulators promise faster computation time for reinforcement learning by replacing zeroth-order gradient estimates of a stochastic objective with an estimate based on first-order gradients. However, it is yet unclear what factors decide the performance of the two estimators on complex landscapes that involve long-horizon planning and control on physical systems, despite the crucial relevance of this question for the utility of differentiable simulators. We show that characteristics of certain physical systems, such as stiffness or discontinuities, may compromise the efficacy of the first-order estimator, and analyze this phenomenon through the lens of bias and variance. We additionally propose an $\alpha$-order gradient estimator, with $\alpha \in [0,1]$, which correctly utilizes exact gradients to combine the efficiency of first-order estimates with the robustness of zero-order methods. We demonstrate the pitfalls of traditional estimators and the advantages of the $\alpha$-order estimator on some numerical examples.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
278,271
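The α-order estimator described in the abstract above interpolates between a first-order (differentiable-simulator) gradient estimate and a zeroth-order one. The sketch below is only an illustration of that interpolation on a 1-D toy objective; the function names, the Gaussian-smoothing zeroth-order estimator, and all parameter values are assumptions, not the paper's implementation.

```python
import random

random.seed(0)  # make the stochastic estimate reproducible

def zeroth_order_grad(f, x, sigma=0.1, n_samples=20000):
    """Gaussian-smoothing zeroth-order gradient estimate of a 1-D
    objective: E[f(x + sigma*u) * u / sigma] with u ~ N(0, 1)."""
    total = 0.0
    for _ in range(n_samples):
        u = random.gauss(0.0, 1.0)
        total += f(x + sigma * u) * u / sigma
    return total / n_samples

def alpha_order_grad(first_order, zeroth_order, alpha):
    """Interpolate the two estimates: alpha = 1 recovers the
    first-order gradient, alpha = 0 the zeroth-order one."""
    return alpha * first_order + (1.0 - alpha) * zeroth_order

# Toy objective f(x) = x^2, whose exact gradient at x = 1 is 2
f = lambda x: x * x
g1 = 2.0                                   # exact first-order gradient
g0 = zeroth_order_grad(f, 1.0)             # noisy zeroth-order estimate
g = alpha_order_grad(g1, g0, alpha=0.5)    # blended alpha-order estimate
```

On smooth objectives like this one, both estimates agree in expectation; the paper's point is that on stiff or discontinuous systems the first-order term can become unreliable, which motivates mixing in the zeroth-order term.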
2301.03167
Machining feature recognition using descriptors with range constraints for mechanical 3D models
In machining feature recognition, geometric elements generated in a three-dimensional computer-aided design model are identified. This technique is used in manufacturability evaluation, process planning, and tool path generation. Here, we propose a method of recognizing 16 types of machining features using descriptors, often used in shape-based part retrieval studies. The base face is selected for each feature type, and descriptors express the base face's minimum, maximum, and equal conditions. Furthermore, the similarity in the three conditions between the descriptors extracted from the target face and those from the base face is calculated. If the similarity is greater than or equal to the threshold, the target face is determined as the base face of the feature. Machining feature recognition tests were conducted for two test cases using the proposed method, and all machining features included in the test cases were successfully recognized. Also, it was confirmed through an additional test that the proposed method in this study showed better feature recognition performance than the latest artificial neural network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
339,720
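The threshold test in the machining-feature abstract above (a target face is accepted as a feature's base face when its descriptor similarity meets a threshold) can be sketched generically. The dictionary-based descriptor encoding, the similarity measure, and the threshold value here are illustrative assumptions, not the paper's actual descriptors.

```python
def condition_similarity(target, base):
    """Fraction of matching entries between two descriptors, each a
    dict mapping a condition name (e.g. minimum / maximum / equal
    conditions of the base face) to its value."""
    keys = set(target) | set(base)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if target.get(k) == base.get(k))
    return matches / len(keys)

def is_base_face(target_desc, base_desc, threshold=0.9):
    """Accept the target face as the feature's base face when the
    similarity meets or exceeds the threshold."""
    return condition_similarity(target_desc, base_desc) >= threshold

base = {"min": 1, "max": 4, "equal": 2}
assert is_base_face({"min": 1, "max": 4, "equal": 2}, base)      # identical
assert not is_base_face({"min": 0, "max": 4, "equal": 1}, base)  # only 1/3 match
```

The threshold trades precision against recall: a lower value recognizes more candidate faces as features at the cost of false positives.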
2207.03435
Sociable and Ergonomic Human-Robot Collaboration through Action Recognition and Augmented Hierarchical Quadratic Programming
The recognition of actions performed by humans and the anticipation of their intentions are important enablers to yield sociable and successful collaboration in human-robot teams. Meanwhile, robots should have the capacity to deal with multiple objectives and constraints, arising from the collaborative task or the human. In this regard, we propose vision techniques to perform human action recognition and image classification, which are integrated into an Augmented Hierarchical Quadratic Programming (AHQP) scheme to hierarchically optimize the robot's reactive behavior and human ergonomics. The proposed framework allows one to intuitively command the robot in space while a task is being executed. The experiments confirm increased human ergonomics and usability, which are fundamental parameters for reducing musculoskeletal diseases and increasing trust in automation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
306,844
1809.05890
Semantic Interoperability Middleware Architecture for Heterogeneous Environmental Data Sources
Data heterogeneity hampers the effort to integrate and infer knowledge from vast heterogeneous data sources. An application case study is described, in which the objective was to semantically represent and integrate structured data from sensor devices with unstructured data in the form of local indigenous knowledge. However, the semantic representation of these heterogeneous data sources for environmental monitoring systems is not yet well supported. To combat the incompatibility issues, a dedicated semantic middleware solution is required. In this paper, we describe and evaluate a cross-domain middleware architecture that semantically integrates and generates inferences from heterogeneous data sources. This use of semantic technology for predicting and forecasting complex environmental phenomena will increase the accuracy of environmental monitoring systems.
false
false
false
false
true
true
false
false
false
false
false
false
false
true
false
false
false
false
107,904
2012.03386
SoK: Training Machine Learning Models over Multiple Sources with Privacy Preservation
Nowadays, gathering high-quality training data from multiple data sources with privacy preservation is a crucial challenge to training high-performance machine learning models. The potential solutions could break the barriers among isolated data corpora, and consequently enlarge the range of data available for processing. To this end, both academic researchers and industrial vendors have recently been strongly motivated to propose two mainstream families of solutions mainly based on software constructions: 1) Secure Multi-party Learning (MPL for short); and 2) Federated Learning (FL for short). These two technical families have their advantages and limitations when we evaluate them according to the following five criteria: security, efficiency, data distribution, the accuracy of trained models, and application scenarios. Motivated by the need to demonstrate research progress and discuss insights on future directions, we thoroughly investigate the protocols and frameworks of both MPL and FL. At first, we define the problem of Training machine learning Models over Multiple data sources with Privacy Preservation (TMMPP for short). Then, we compare the recent studies of TMMPP from the aspects of the technical routes, the number of parties supported, data partitioning, threat model, and machine learning models supported, to show their advantages and limitations. Next, we investigate and evaluate five popular FL platforms. Finally, we discuss the potential directions to resolve the problem of TMMPP in the future.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
210,087
1706.01406
NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps
Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though Graphical Processing Units (GPUs) are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq FPGA platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor Modelsim in a 28nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3TOp/s/W in a core area of 6.3mm$^2$. As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real time interactive demonstrations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
74,797
2401.15739
SegmentAnyTree: A sensor and platform agnostic deep learning model for tree segmentation using laser scanning data
This research advances individual tree crown (ITC) segmentation in lidar data, using a deep learning model applicable to various laser scanning types: airborne (ULS), terrestrial (TLS), and mobile (MLS). It addresses the challenge of transferability across different data characteristics in 3D forest scene analysis. The study evaluates the model's performance based on platform (ULS, MLS) and data density, testing five scenarios with varying input data, including sparse versions, to gauge adaptability and canopy layer efficacy. The model, based on PointGroup architecture, is a 3D CNN with separate heads for semantic and instance segmentation, validated on diverse point cloud datasets. Results show point cloud sparsification enhances performance, aiding sparse data handling and improving detection in dense forests. The model performs well with >50 points per sq. m densities but less so at 10 points per sq. m due to higher omission rates. It outperforms existing methods (e.g., Point2Tree, TLS2trees) in detection, omission, commission rates, and F1 score, setting new benchmarks on LAUTx, Wytham Woods, and TreeLearn datasets. In conclusion, this study shows the feasibility of a sensor-agnostic model for diverse lidar data, surpassing sensor-specific approaches and setting new standards in tree segmentation, particularly in complex forests. This contributes to future ecological modeling and forest management advancements.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
424,567
2008.06255
WAN: Watermarking Attack Network
Multi-bit watermarking (MW) has been developed to improve robustness against signal processing operations and geometric distortions. To this end, benchmark tools that test robustness by applying simulated attacks on watermarked images are available. However, limitations in these general attacks exist since they cannot exploit specific characteristics of the targeted MW. In addition, these attacks are usually devised without consideration of visual quality, which rarely occurs in the real world. To address these limitations, we propose a watermarking attack network (WAN), a fully trainable watermarking benchmark tool that utilizes the weak points of the target MW and induces an inversion of the watermark bit, thereby considerably reducing the watermark extractability. To hinder the extraction of hidden information while ensuring high visual quality, we utilize a residual dense blocks-based architecture specialized in local and global feature learning. A novel watermarking attack loss is introduced to break the MW systems. We empirically demonstrate that the WAN can successfully fool various block-based MW systems. Moreover, we show that existing MW methods can be improved with the help of the WAN as an add-on module.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
true
191,750
1709.02435
An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software
Machine learning (ML) plays an ever-increasing role in advanced automotive functionality for driver assistance and autonomous operation; however, its adequacy from the perspective of safety certification remains controversial. In this paper, we analyze the impacts that the use of ML as an implementation approach has on the ISO 26262 safety lifecycle and ask what could be done to address them. We then provide a set of recommendations on how to adapt the standard to accommodate ML.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
true
80,260
2207.04384
Sparse and Safe Frequency Regulation for Inverter Intensive Microgrids
This paper develops a novel control approach for sparse and safe frequency regulation in inverter-intensive microgrids (MGs). In this scenario, the inverters and external grids are expected to reach a synchronized desired frequency under regulation. To this end, the active power set-point, acting as a control input from a high-level controller, is designed while considering two important performance metrics, "sparsity" and "safety", which reduce the information exchange between controllers and ensure that the frequency remains within safe regions during the whole operation process. Our proposed control design framework allows the sparse linear feedback controller (SLFC) to be unified with a family of conditions for safe control using control barrier functions. A quadratic programming (QP) problem is then constructed, and the real-time control policy is obtained by solving the QP problem. Importantly, we also found that the real-time control for each inverter depends on the cross-layer communication network topology, which is the union of the topology between controllers from the SLFC and the topology determined by the power flow network. Furthermore, our approach has been validated through extensive numerical simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
307,183
2208.00031
Paddy Leaf diseases identification on Infrared Images based on Convolutional Neural Networks
Agriculture is the mainstay of human society because it is an essential need for every organism. Paddy cultivation is very significant so far as humans are concerned, largely in the Asian continent, and it is one of the staple foods. However, plant diseases in agriculture lead to depletion in productivity. Plant diseases are generally caused by pests, insects, and pathogens that decrease productivity to a large scale if not controlled within a particular time. Eventually, one cannot see an increase in paddy yield. Accurate and timely identification of plant diseases can help farmers mitigate losses due to pests and diseases. Recently, deep learning techniques have been used to identify paddy diseases and overcome these problems. This paper implements a convolutional neural network (CNN)-based model and tests it on a public dataset consisting of 636 infrared image samples with five paddy disease classes and one healthy class. The proposed model proficiently identified and classified paddy diseases of five different types and achieved an accuracy of 88.28%.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
310,719
2304.10770
DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards
Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards. Recent studies have shown the effectiveness of encouraging exploration with intrinsic rewards estimated from novelties in observations. However, there is a gap between the novelty of an observation and an exploration, as both the stochasticity in the environment and the agent's behavior may affect the observation. To evaluate exploratory behaviors accurately, we propose DEIR, a novel method in which we theoretically derive an intrinsic reward with a conditional mutual information term that principally scales with the novelty contributed by agent explorations, and then implement the reward with a discriminative forward model. Extensive experiments on both standard and advanced exploration tasks in MiniGrid show that DEIR quickly learns a better policy than the baselines. Our evaluations on ProcGen demonstrate both the generalization capability and the general applicability of our intrinsic reward. Our source code is available at https://github.com/swan-utokyo/deir.
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
359,553
2408.08379
Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions
The emergence of synthetic data represents a pivotal shift in modern machine learning, offering a solution to satisfy the need for large volumes of data in domains where real data is scarce, highly private, or difficult to obtain. We investigate the feasibility of creating realistic, large-scale synthetic datasets of user-generated content, noting that such content is increasingly prevalent and a source of frequently sought information. Large language models (LLMs) offer a starting point for generating synthetic social media discussion threads, due to their ability to produce diverse responses that typify online interactions. However, as we demonstrate, straightforward application of LLMs yields limited success in capturing the complex structure of online discussions, and standard prompting mechanisms lack sufficient control. We therefore propose a multi-step generation process, predicated on the idea of creating compact representations of discussion threads, referred to as scaffolds. Our framework is generic yet adaptable to the unique characteristics of specific social media platforms. We demonstrate its feasibility using data from two distinct online discussion platforms. To address the fundamental challenge of ensuring the representativeness and realism of synthetic data, we propose a portfolio of evaluation measures to compare various instantiations of our framework.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
480,970
2012.10053
Instance Space Analysis for the Car Sequencing Problem
We investigate an important research question for solving the car sequencing problem, that is, which characteristics make an instance hard to solve? To do so, we carry out an instance space analysis for the car sequencing problem, by extracting a vector of problem features to characterize an instance. In order to visualize the instance space, the feature vectors are projected onto a two-dimensional space using dimensionality reduction techniques. The resulting two-dimensional visualizations provide new insights into the characteristics of the instances used for testing and how these characteristics influence the behaviours of an optimization algorithm. This analysis guides us in constructing a new set of benchmark instances with a range of instance properties. We demonstrate that these new instances are more diverse than the previous benchmarks, including some instances that are significantly more difficult to solve. We introduce two new algorithms for solving the car sequencing problem and compare them with four existing methods from the literature. Our new algorithms are shown to perform competitively for this problem but no single algorithm can outperform all others over all instances. This observation motivates us to build an algorithm selection model based on machine learning, to identify the niche in the instance space that an algorithm is expected to perform well on. Our analysis helps to understand problem hardness and select an appropriate algorithm for solving a given car sequencing problem instance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
212,243
2408.11920
Modular Hypernetworks for Scalable and Adaptive Deep MIMO Receivers
Deep neural networks (DNNs) were shown to facilitate the operation of uplink multiple-input multiple-output (MIMO) receivers, with emerging architectures augmenting modules of classic receiver processing. Current designs consider static DNNs, whose architecture is fixed and weights are pre-trained. This induces a notable challenge, as the resulting MIMO receiver is suitable for a given configuration, i.e., channel distribution and number of users, while in practice these parameters change frequently with network variations and users leaving and joining the network. In this work, we tackle this core challenge of DNN-aided MIMO receivers. We build upon the concept of hypernetworks, augmenting the receiver with a pre-trained deep model whose purpose is to update the weights of the DNN-aided receiver upon instantaneous channel variations. We design our hypernetwork to augment modular deep receivers, leveraging their modularity to have the hypernetwork adapt not only the weights, but also the architecture. Our modular hypernetwork leads to a DNN-aided receiver whose architecture and resulting complexity adapts to the number of users, in addition to channel variations, without retraining. Our numerical studies demonstrate superior error-rate performance of modular hypernetworks in time-varying channels compared to static pre-trained receivers, while providing rapid adaptivity and scalability to network variations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
482,504
2306.11548
Graph-Based Conditions for Feedback Stabilization of Switched and LPV Systems
This paper presents novel stabilizability conditions for switched linear systems with arbitrary and uncontrollable underlying switching signals. We distinguish and study two particular settings: i) the \emph{robust} case, in which the active mode is completely unknown and unobservable, and ii) the \emph{mode-dependent} case, in which the controller depends on the current active switching mode. The technical developments are based on graph-theory tools, relying in particular on the path-complete Lyapunov functions framework. The main idea is to use directed and labeled graphs to encode Lyapunov inequalities to design robust and mode-dependent piecewise linear state-feedback controllers. This results in novel and flexible conditions, with the particular feature of being in the form of linear matrix inequalities (LMIs). Our technique thus provides a first controller-design strategy allowing piecewise linear feedback maps and piecewise quadratic (control) Lyapunov functions by means of semidefinite programming. Numerical examples illustrate the application of the proposed techniques, the relations between the graph order, the robustness, and the performance of the closed loop.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
374,635
2012.11582
HyperSeg: Patch-wise Hypernetwork for Real-time Semantic Segmentation
We present a novel, real-time, semantic segmentation network in which the encoder both encodes and generates the parameters (weights) of the decoder. Furthermore, to allow maximal adaptivity, the weights at each decoder block vary spatially. For this purpose, we design a new type of hypernetwork, composed of a nested U-Net for drawing higher level context features, a multi-headed weight generating module which generates the weights of each block in the decoder immediately before they are consumed, for efficient memory utilization, and a primary network that is composed of novel dynamic patch-wise convolutions. Despite the usage of less-conventional blocks, our architecture obtains real-time performance. In terms of the runtime vs. accuracy trade-off, we surpass state of the art (SotA) results on popular semantic segmentation benchmarks: PASCAL VOC 2012 (val. set) and real-time semantic segmentation on Cityscapes, and CamVid. The code is available: https://nirkin.com/hyperseg.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,678
2211.11656
SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization
Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming at removing the contribution of a given data point from a training procedure. Federated Unlearning (FU) consists in extending MU to unlearn a given client's contribution from a federated training routine. While several FU methods have been proposed, we currently lack a general approach providing formal unlearning guarantees to the FedAvg routine, while ensuring scalability and generalization beyond the convex assumption on the clients' loss functions. We aim at filling this gap by proposing SIFU (Sequential Informed Federated Unlearning), a new FU method applying to both convex and non-convex optimization regimes. SIFU naturally applies to FedAvg without additional computational cost for the clients and provides formal guarantees on the quality of the unlearning task. We provide a theoretical analysis of the unlearning properties of SIFU, and practically demonstrate its effectiveness as compared to a panel of unlearning methods from the state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
331,822
2005.03776
Mapping Natural Language Instructions to Mobile UI Action Sequences
We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
176,251
2502.10417
Evolutionary Power-Aware Routing in VANETs using Monte-Carlo Simulation
This work addresses the reduction of power consumption of the AODV routing protocol in vehicular networks as an optimization problem. Nowadays, network designers focus on energy-aware communication protocols, especially to deploy wireless networks. Here, we introduce an automatic method to search for energy-efficient AODV configurations by using an evolutionary algorithm and parallel Monte-Carlo simulations to improve the accuracy of the evaluation of tentative solutions. The experimental results demonstrate that significant power consumption improvements over the standard configuration can be attained, with no noteworthy loss in the quality of service.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
533,862
2211.07096
Alternating Implicit Projected SGD and Its Efficient Variants for Equality-constrained Bilevel Optimization
Stochastic bilevel optimization, which captures the inherent nested structure of machine learning problems, is gaining popularity in many recent applications. Existing works on bilevel optimization mostly consider either unconstrained problems or constrained upper-level problems. This paper considers the stochastic bilevel optimization problems with equality constraints both in the upper and lower levels. By leveraging the special structure of the equality constraints problem, the paper first presents an alternating implicit projected SGD approach and establishes the $\tilde{\cal O}(\epsilon^{-2})$ sample complexity that matches the state-of-the-art complexity of ALSET \citep{chen2021closing} for unconstrained bilevel problems. To further save the cost of projection, the paper presents two alternating implicit projection-efficient SGD approaches, where one algorithm enjoys the $\tilde{\cal O}(\epsilon^{-2}/T)$ upper-level and $\tilde{\cal O}(\epsilon^{-1.5}/T^{\frac{3}{4}})$ lower-level projection complexity with ${\cal O}(T)$ lower-level batch size, and the other one enjoys $\tilde{\cal O}(\epsilon^{-1.5})$ upper-level and lower-level projection complexity with ${\cal O}(1)$ batch size. Application to federated bilevel optimization has been presented to showcase the empirical performance of our algorithms. Our results demonstrate that equality-constrained bilevel optimization with strongly-convex lower-level problems can be solved as efficiently as stochastic single-level optimization problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
330,126
2407.11894
Deep Learning without Global Optimization by Random Fourier Neural Networks
We introduce a new training algorithm for a variety of deep neural networks that utilize random complex exponential activation functions. Our approach employs a Markov Chain Monte Carlo sampling procedure to iteratively train network layers, avoiding global and gradient-based optimization while maintaining error control. It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions, determined by network complexity. Additionally, it enables efficient learning of multiscale and high-frequency features, producing interpretable parameter distributions. Despite using sinusoidal basis functions, we do not observe Gibbs phenomena in approximating discontinuous target functions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
473,662
1802.07623
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and analogously what should be minimally and necessarily \emph{absent} (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically \emph{absent} is an important part of an explanation, which to the best of our knowledge, has not been explicitly identified by current explanation methods that explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and a brain activity strength dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
90,932
2204.12786
Machines of finite depth: towards a formalization of neural networks
We provide a unifying framework where artificial neural networks and their architectures can be formally described as particular cases of a general mathematical construction--machines of finite depth. Unlike neural networks, machines have a precise definition, from which several properties follow naturally. Machines of finite depth are modular (they can be combined), efficiently computable and differentiable. The backward pass of a machine is again a machine and can be computed without overhead using the same procedure as the forward pass. We prove this statement theoretically and practically, via a unified implementation that generalizes several classical architectures--dense, convolutional, and recurrent neural networks with a rich shortcut structure--and their respective backpropagation rules.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
293,603
1703.03773
Evolutionary Image Composition Using Feature Covariance Matrices
Evolutionary algorithms have recently been used to create a wide range of artistic work. In this paper, we propose a new approach for the composition of new images from existing ones, that retain some salient features of the original images. We introduce evolutionary algorithms that create new images based on a fitness function that incorporates feature covariance matrices associated with different parts of the images. This approach is very flexible in that it can work with a wide range of features and enables targeting specific regions in the images. For the creation of the new images, we propose a population-based evolutionary algorithm with mutation and crossover operators based on random walks. Our experimental results reveal a spectrum of aesthetically pleasing images that can be obtained with the aid of our evolutionary process.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
69,776
2105.04301
ADASYN-Random Forest Based Intrusion Detection Model
Intrusion detection has been a key topic in the field of cyber security, and common network threats nowadays are characterized by their variety and variation. Considering that the serious imbalance of intrusion detection datasets results in low classification performance on attack behaviors with small sample sizes and makes it difficult to detect network attacks accurately and efficiently, this paper proposes using the Adaptive Synthetic Sampling (ADASYN) method to balance datasets. In addition, the Random Forest algorithm was used to train intrusion detection classifiers. Through a comparative intrusion detection experiment on the CICIDS 2017 dataset, it is found that ADASYN with Random Forest performs better. Based on the experimental results, the improvement of precision, recall, F1 scores, and AUC values after ADASYN is then analyzed. Experiments show that the proposed method can be applied to intrusion detection with large data and can effectively improve the classification accuracy of network attack behaviors. Compared with traditional machine learning models, it has better performance, generalization ability, and robustness.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
234,466
1606.09222
Penambahan emosi menggunakan metode manipulasi prosodi untuk sistem text to speech bahasa Indonesia
Adding emotions using a prosody manipulation method for an Indonesian text-to-speech system. Text To Speech (TTS) is a system that can convert text in one language into speech, in accordance with the reading of the text in the language used. The focus of this research is the natural-sounding concept: "humanizing" the pronunciation of the Text To Speech voice synthesis system. Humans have emotions/intonation that may affect the sound produced. The main Text To Speech system used in this research is eSpeak, with the MBROLA id1 database and a Human Speech Corpus database from a website that summarizes the words with the highest frequency (Most Common Words) used in a country. Three base types of emotion/intonation are designed: happy, angry, and sad. The method for developing the emotional filter is to manipulate the relevant prosody features (especially pitch and duration values) using a predetermined rate factor, established by analyzing the differences between the standard Text To Speech output and voice recordings with a particular emotional prosody/intonation. The perception test results on the Human Speech Corpus are 95% for the happy emotion, 96.25% for the angry emotion, and 98.75% for the sad emotion. The system perception test was carried out through intelligibility and naturalness tests. In the intelligibility test, the accuracy of the sound against the original sentence is 93.3%, and the clarity rate for each sentence is 62.8%. For naturalness, the accuracy of emotion selection is 75.6% for the happy emotion, 73.3% for the angry emotion, and 60% for the sad emotion.
false
false
true
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
57,965
2204.11488
Mining and searching association relation of scientific papers based on deep learning
There is a complex correlation among the data of scientific papers. This phenomenon reveals the data characteristics, laws, and correlations contained in the data of scientific and technological papers in specific fields, which enables the analysis of scientific and technological big data and helps to design applications that serve scientific researchers. Therefore, research on mining and searching the association relationships of scientific papers based on deep learning has far-reaching practical significance.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
293,172
1512.03501
ClusPath: A Temporal-driven Clustering to Infer Typical Evolution Paths
We propose ClusPath, a novel algorithm for detecting general evolution tendencies in a population of entities. We show how abstract notions, such as the Swedish socio-economical model (in a political dataset) or the companies fiscal optimization (in an economical dataset) can be inferred from low-level descriptive features. Such high-level regularities in the evolution of entities are detected by combining spatial and temporal features into a spatio-temporal dissimilarity measure and using semi-supervised clustering techniques. The relations between the evolution phases are modeled using a graph structure, inferred simultaneously with the partition, by using a "slow changing world" assumption. The idea is to ensure a smooth passage for entities along their evolution paths, which catches the long-term trends in the dataset. Additionally, we also provide a method, based on an evolutionary algorithm, to tune the parameters of ClusPath to new, unseen datasets. This method assesses the fitness of a solution using four opposed quality measures and proposes a balanced compromise.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
50,044
2108.01504
Ontology Modeling for Decentralized Household Energy Systems
In a decentralized household energy system consisting of various devices such as washing machines, heat pumps, and solar panels, understanding the electric energy consumption and production data at the granularity of the device helps end-users be closer to the system and further achieve the sustainability of energy use. However, many datasets in this area are isolated from other domains, with records of only energy-related data. This may cause a loss of information (e.g. weather) that is relevant to the energy use of each device. A noticeable disadvantage is that many of those datasets have to be used in computational modeling approaches such as machine learning models, which are vulnerable to the data feed, to advance the understanding of energy consumption and production. Although such computational methods have achieved a high benchmark merely through a local view of datasets, the reusability cannot be firmly guaranteed when the information omission is taken into account. This paper addresses the data isolation problem in the smart energy systems area by exploring Semantic Web techniques on top of a household energy system. We propose an ontology modeling solution for the management of decentralized data at the resolution of a device in the system. As a result, the scope of the data concerning each device can be easily extended across the web, and more information that may be of interest, such as weather, can be retrieved from the Web if the data are structured by the ontology.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
249,048
2112.07082
DeepDiffusion: Unsupervised Learning of Retrieval-adapted Representations via Diffusion-based Ranking on Latent Feature Manifold
Unsupervised learning of feature representations is a challenging yet important problem for analyzing a large collection of multimedia data that do not have semantic labels. Recently proposed neural network-based unsupervised learning approaches have succeeded in obtaining features appropriate for classification of multimedia data. However, unsupervised learning of feature representations adapted to content-based matching, comparison, or retrieval of multimedia data has not been explored well. To obtain such retrieval-adapted features, we introduce the idea of combining diffusion distance on a feature manifold with neural network-based unsupervised feature learning. This idea is realized as a novel algorithm called DeepDiffusion (DD). DD simultaneously optimizes two components, a feature embedding by a deep neural network and a distance metric that leverages diffusion on a latent feature manifold, together. DD relies on its loss function but not encoder architecture. It can thus be applied to diverse multimedia data types with their respective encoder architectures. Experimental evaluation using 3D shapes and 2D images demonstrates versatility as well as high accuracy of the DD algorithm. Code is available at https://github.com/takahikof/DeepDiffusion
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
271,367
2304.08024
Smart farming using IoT for efficient crop growth
In general, automated farming systems make decisions based on static models built from the properties of the plant. In contrast, irrigation decisions in our suggested method adapt to dynamically changing environmental conditions. The model's learning process reveals the mathematical links between the environmental factors employed in determining the irrigation habit, and it gradually improves its learning techniques as irrigation data accumulates in the model. To analyze overall system performance, we constructed a test environment for the sensor edge, the mobile client, and the decision service in the cloud.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
358,565
2305.17323
Some Primal-Dual Theory for Subgradient Methods for Strongly Convex Optimization
We consider (stochastic) subgradient methods for strongly convex but potentially nonsmooth non-Lipschitz optimization. We provide new equivalent dual descriptions (in the style of dual averaging) for the classic subgradient method, the proximal subgradient method, and the switching subgradient method. These equivalences enable $O(1/T)$ convergence guarantees in terms of both their classic primal gap and a not previously analyzed dual gap for strongly convex optimization. Consequently, our theory provides these classic methods with simple, optimal stopping criteria and optimality certificates at no added computational cost. Our results apply to a wide range of stepsize selections and of non-Lipschitz ill-conditioned problems where the early iterations of the subgradient method may diverge exponentially quickly (a phenomenon which, to the best of our knowledge, no prior works address). Even in the presence of such undesirable behaviors, our theory still ensures and bounds eventual convergence.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
368,522
2105.08534
PNLSS Toolbox 1.0
This is a demonstration of the PNLSS Toolbox 1.0. The toolbox is designed to identify polynomial nonlinear state-space models from data. Nonlinear state-space models can describe a wide range of nonlinear systems. An illustration is provided on experimental data of an electrical system mimicking the forced Duffing oscillator, and on numerical data of a nonlinear fluid dynamics problem.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
235,787
2311.11193
Assessing AI Impact Assessments: A Classroom Study
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems. Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards. However, to date there has been limited evaluation of existing AIIA instruments. We conduct a classroom study (N = 38) at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI. We assign students to different organizational roles (for example, an ML scientist or product manager) and ask participant teams to complete one of three existing AI impact assessments for one of two imagined generative AI systems. In our thematic analysis of participants' responses to pre- and post-activity questionnaires, we find preliminary evidence that impact assessments can influence participants' perceptions of the potential risks of generative AI systems, and the level of responsibility held by AI experts in addressing potential harm. We also discover a consistent set of limitations shared by several existing AIIA instruments, which we group into concerns about their format and content, as well as the feasibility and effectiveness of the activity in foreseeing and mitigating potential harms. Drawing on the findings of this study, we provide recommendations for future work on developing and validating AIIAs.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
408,840
2012.11195
Personalized fall detection monitoring system based on learning from the user movements
A personalized fall detection system is shown to provide more benefits compared to current fall detection systems. The personalized model can also be applied to any setting where one class of data is hard to gather. The results show that adapting to the user's needs improves the overall accuracy of the system. Future work includes detecting the smartphone's placement on the user, so that the user can place the system anywhere on the body and it will still detect falls. Even though the accuracy is not 100%, the proof of concept of personalization can be used to achieve greater accuracy. The concept of personalization used in this paper can also be extended to other research in the medical field, or wherever data is hard to come by for a particular class. More research into the feature extraction and feature selection modules should be conducted; for the feature selection module, in particular, into selecting features based on one-class data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,568
1804.00863
Deep Appearance Maps
We propose a deep representation of appearance, i.e., the relation of color, surface orientation, viewer position, material and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance model parameters (e.g., Phong) or illumination (e.g., HDR environment maps). We suggest to directly represent appearance itself as a network we call a Deep Appearance Map (DAM). This is a 4D generalization over 2D reflectance maps, which held the view direction fixed. First, we show how a DAM can be learned from images or video frames and later be used to synthesize appearance, given new surface orientations and viewer positions. Second, we demonstrate how another network can be used to map from an image or video frames to a DAM network to reproduce this appearance, without using a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we show the example of an appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
94,135
2307.06189
Cooperative Localization for Autonomous Underwater Vehicles -- a comprehensive review
Cooperative localization is an important technique in environments devoid of GPS-based localization, more so in underwater scenarios, where none of the terrestrial localization techniques based on radio frequency or optics are suitable due to severe attenuation. Given the large swaths of oceans and seas where autonomous underwater vehicles (AUVs) operate, traditional acoustic positioning systems fall short on many counts. Cooperative localization (CL), which involves sharing mutual information amongst the vehicles, has thus emerged as a viable option in the past decade. This paper assimilates the research carried out in AUV cooperative localization and presents a qualitative overview. The cooperative localization approaches are categorized by their cooperation and localization strategies, while the algorithms employed are reviewed on the various challenges posed by the underwater acoustic channel and environment. Furthermore, existing problems and future scope in the domain of underwater cooperative localization are discussed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
378,997
2312.11476
The geometry of flow: Advancing predictions of river geometry with multi-model machine learning
Hydraulic geometry parameters describing river hydrogeomorphology are important for flood forecasting. Although well-established power-law hydraulic geometry curves have been widely used to understand riverine systems and map flood inundation worldwide for the past 70 years, we have become increasingly aware of the limitations of these approaches. In the present study, we have moved beyond these traditional power-law relationships for river geometry, testing the ability of machine-learning models to provide improved predictions of river width and depth. For this work, we have used an unprecedentedly large river measurement dataset (HYDRoSWOT) as well as a suite of watershed predictor data to develop novel data-driven approaches to better estimate river geometries over the contiguous United States (CONUS). Our Random Forest, XGBoost, and neural network models out-performed the traditional, regionalized power law-based hydraulic geometry equations for both width and depth, providing R-squared values of as high as 0.75 for width and as high as 0.67 for depth, compared with R-squared values of 0.57 for width and 0.18 for depth from the regional hydraulic geometry equations. Our results also show diverse performance outcomes across stream orders and geographical regions for the different machine-learning models, demonstrating the value of using multi-model approaches to maximize the predictability of river geometry. The developed models have been used to create the newly publicly available STREAM-geo dataset, which provides river width, depth, width/depth ratio, and river and stream surface area (%RSSA) for nearly 2.7 million NHDPlus stream reaches across the rivers and streams of the contiguous US.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
416,584
1309.1349
Ergodic Randomized Algorithms and Dynamics over Networks
Algorithms and dynamics over networks often involve randomization, and randomization may result in oscillating dynamics which fail to converge in a deterministic sense. In this paper, we observe this undesired feature in three applications, in which the dynamics is the randomized asynchronous counterpart of a well-behaved synchronous one. These three applications are network localization, PageRank computation, and opinion dynamics. Motivated by their formal similarity, we show the following general fact, under the assumptions of independence across time and linearities of the updates: if the expected dynamics is stable and converges to the same limit of the original synchronous dynamics, then the oscillations are ergodic and the desired limit can be locally recovered via time-averaging.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
26,855
2311.12052
MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion
In this work, we propose MagicPose, a diffusion-based model for 2D human pose and facial expression retargeting. Specifically, given a reference image, we aim to generate a person's new images by controlling the poses and facial expressions while keeping the identity unchanged. To this end, we propose a two-stage training strategy to disentangle human motions and appearance (e.g., facial expressions, skin tone and dressing), consisting of (1) the pre-training of an appearance-control block and (2) learning appearance-disentangled pose control. Our novel design enables robust appearance control over generated human images, including body, facial attributes, and even background. By leveraging the prior knowledge of image diffusion models, MagicPose generalizes well to unseen human identities and complex poses without the need for additional fine-tuning. Moreover, the proposed model is easy to use and can be considered as a plug-in module/extension to Stable Diffusion. The code is available at: https://github.com/Boese0601/MagicDance
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,180
2201.10739
Infrared and visible image fusion based on Multi-State Contextual Hidden Markov Model
The traditional two-state hidden Markov model divides the high frequency coefficients only into two states (large and small states). Such scheme is prone to produce an inaccurate statistical model for the high frequency subband and reduces the quality of fusion result. In this paper, a fine-grained multi-state contextual hidden Markov model (MCHMM) is proposed for infrared and visible image fusion in the non-subsampled Shearlet domain, which takes full consideration of the strong correlations and level of details of NSST coefficients. To this end, an accurate soft context variable is designed correspondingly from the perspective of context correlation. Then, the statistical features provided by MCHMM are utilized for the fusion of high frequency subbands. To ensure the visual quality, a fusion strategy based on the difference in regional energy is proposed as well for low frequency subbands. Experimental results demonstrate that the proposed method can achieve a superior performance compared with other fusion methods in both subjective and objective aspects.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
277,085
2407.13766
Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark
Large Multimodal Models (LMMs) have made significant strides in visual question-answering for single images. Recent advancements like long-context LMMs have allowed them to ingest larger, or even multiple, images. However, the ability to process a large number of visual tokens does not guarantee effective retrieval and reasoning for multi-image question answering (MIQA), especially in real-world applications like photo album searches or satellite imagery analysis. In this work, we first assess the limitations of current benchmarks for long-context LMMs. We address these limitations by introducing a new vision-centric, long-context benchmark, "Visual Haystacks (VHs)". We comprehensively evaluate both open-source and proprietary models on VHs, and demonstrate that these models struggle when reasoning across potentially unrelated images, perform poorly on cross-image reasoning, as well as exhibit biases based on the placement of key information within the context window. Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented Generation), an open-source, lightweight visual-RAG framework that processes up to 10k images on a single 40G A100 GPU -- far surpassing the 1k-image limit of contemporary models. MIRAGE demonstrates up to 13% performance improvement over existing open-source LMMs on VHs, sets a new state-of-the-art on the RetVQA multi-image QA benchmark, and achieves competitive performance on single-image QA with state-of-the-art LMMs. Our dataset, model, and code are available at: https://visual-haystacks.github.io.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,514
2209.15036
Large-Scale Spatial Cross-Calibration of Hinode/SOT-SP and SDO/HMI
We investigate the cross-calibration of the Hinode/SOT-SP and SDO/HMI instrument meta-data, specifically the correspondence of the scaling and pointing information. Accurate calibration of these datasets gives the correspondence needed by inter-instrument studies and learning-based magnetogram systems, and is required for physically-meaningful photospheric magnetic field vectors. We approach the problem by robustly fitting geometric models on correspondences between images from each instrument's pipeline. This technique is common in computer vision, but several critical details are required when using scanning slit spectrograph data like Hinode/SOT-SP. We apply this technique to data spanning a decade of the Hinode mission. Our results suggest corrections to the published Level 2 Hinode/SOT-SP data. First, an analysis on approximately 2,700 scans suggests that the reported pixel size in Hinode/SOT-SP Level 2 data is incorrect by around 1%. Second, analysis of over 12,000 scans show that the pointing information is often incorrect by dozens of arcseconds with a strong bias. Regression of these corrections indicates that thermal effects have caused secular and cyclic drift in Hinode/SOT-SP pointing data over its mission. We offer two solutions. First, direct co-alignment with SDO/HMI data via our procedure can improve alignments for many Hinode/SOT-SP scans. Second, since the pointing errors are predictable, simple post-hoc corrections can substantially improve the pointing. We conclude by illustrating the impact of this updated calibration on derived physical data products needed for research and interpretation. Among other things, our results suggest that the pointing errors induce a hemispheric bias in estimates of radial current density.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
320,437
2006.14109
Scalable Data Classification for Security and Privacy
Content based data classification is an open challenge. Traditional Data Loss Prevention (DLP)-like systems solve this problem by fingerprinting the data in question and monitoring endpoints for the fingerprinted data. With a large number of constantly changing data assets in Facebook, this approach is both not scalable and ineffective in discovering what data is where. This paper is about an end-to-end system built to detect sensitive semantic types within Facebook at scale and enforce data retention and access controls automatically. The approach described here is our first end-to-end privacy system that attempts to solve this problem by incorporating data signals, machine learning, and traditional fingerprinting techniques to map out and classify all data within Facebook. The described system is in production achieving a 0.9+ average F2 scores across various privacy classes while handling a large number of data assets across dozens of data stores.
false
false
false
false
true
false
false
false
false
false
false
false
true
true
false
false
false
false
184,126
1905.12698
Leveraging Latent Features for Local Explanations
As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self driving cars, the need for explaining the decisions of these models has become a hot research topic, both at the global and local level. Locally, most explanation methods have focused on identifying relevance of features, limiting the types of explanations possible. In this paper, we investigate a new direction by leveraging latent features to generate contrastive explanations; predictions are explained not only by highlighting aspects that are in themselves sufficient to justify the classification, but also by new aspects which if added will change the classification. The key contribution of this paper lies in how we add features to rich data in a formal yet humanly interpretable way that leads to meaningful results. Our new definition of "addition" uses latent features to move beyond the limitations of previous explanations and resolve an open question laid out in Dhurandhar, et. al. (2018), which creates local contrastive explanations but is limited to simple datasets such as grayscale images. The strength of our approach in creating intuitive explanations that are also quantitatively superior to other methods is demonstrated on three diverse image datasets (skin lesions, faces, and fashion apparel). A user study with 200 participants further exemplifies the benefits of contrastive information, which can be viewed as complementary to other state-of-the-art interpretability methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,844
cmp-lg/9606005
Part-of-Speech-Tagging using morphological information
This paper presents the results of an experiment to decide the question of the authenticity of the supposedly spurious Rhesus, an Attic tragedy sometimes credited to Euripides. The experiment involves the use of statistics in order to test whether significant deviations in the distribution of word categories between Rhesus and the other works of Euripides can or cannot be found. To count frequencies of word categories in the corpus, a part-of-speech tagger for Greek has been implemented. Some special techniques for reducing the problem of sparse data are used, resulting in an accuracy of ca. 96.6%.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,571
2306.12700
Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
We develop an approach to efficiently grow neural networks, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics. Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme which dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network. To address the optimization difficulty resulting from imbalanced training effort distributed to subnetworks fading in at different growth phases, we propose a learning rate adaption mechanism that rebalances the gradient contribution of these separate subcomponents. Experimental results show that our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original computation budget for training. We demonstrate that these gains translate into real wall-clock training speedups.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
375,038
2209.08982
How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?
Current language models have been criticised for learning language from text alone without connection between words and their meaning. Consequently, multimodal training has been proposed as a way for creating models with better language understanding by providing the lacking connection. We focus on pre-trained multimodal vision-and-language (VL) models for which there already are some results on their language understanding capabilities. An unresolved issue with evaluating the linguistic skills of these models, however, is that there is no established method for adapting them to text-only input without out-of-distribution uncertainty. To find the best approach, we investigate and compare seven possible methods for adapting three different pre-trained VL models to text-only input. Our evaluations on both GLUE and Visual Property Norms (VPN) show that care should be put into adapting VL models to zero-shot text-only tasks, while the models are less sensitive to how we adapt them to non-zero-shot tasks. We also find that the adaptation methods perform differently for different models and that unimodal model counterparts perform on par with the VL models regardless of adaptation, indicating that current VL models do not necessarily gain better language understanding from their multimodal training.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
318,332
2211.11191
Correlative Preference Transfer with Hierarchical Hypergraph Network for Multi-Domain Recommendation
Advanced recommender systems usually involve multiple domains (such as scenarios or categories) for various marketing strategies, and users interact with them to satisfy diverse demands. The goal of multi-domain recommendation (MDR) is to improve the recommendation performance of all domains simultaneously. Conventional graph neural network based methods usually deal with each domain separately, or train a shared model to serve all domains. The former fails to leverage users' cross-domain behaviors, making the behavior sparseness issue a great obstacle. The latter learns shared user representation with respect to all domains, which neglects users' domain-specific preferences. In this paper we propose $\mathsf{H^3Trans}$, a hierarchical hypergraph network based correlative preference transfer framework for MDR, which represents multi-domain user-item interactions into a unified graph to help preference transfer. $\mathsf{H^3Trans}$ incorporates two hyperedge-based modules, namely dynamic item transfer (Hyper-I) and adaptive user aggregation (Hyper-U). Hyper-I extracts correlative information from multi-domain user-item feedbacks for eliminating domain discrepancy of item representations. Hyper-U aggregates users' scattered preferences in multiple domains and further exploits the high-order (not only pair-wise) connections to improve user representations. Experiments on both public and production datasets verify the superiority of $\mathsf{H^3Trans}$ for MDR.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
331,628
2406.03746
Efficient Knowledge Infusion via KG-LLM Alignment
To tackle the problem of domain-specific knowledge scarcity within large language models (LLMs), the knowledge-graph retrieval-augmented method has been proven to be an effective and efficient technique for knowledge infusion. However, existing approaches face two primary challenges: knowledge mismatch between publicly available knowledge graphs and the specific domain of the task at hand, and poor information compliance of LLMs with knowledge graphs. In this paper, we leverage a small set of labeled samples and a large-scale corpus to efficiently construct domain-specific knowledge graphs by an LLM, addressing the issue of knowledge mismatch. Additionally, we propose a three-stage KG-LLM alignment strategy to enhance the LLM's capability to utilize information from knowledge graphs. We conduct experiments with a limited-sample setting on two biomedical question-answering datasets, and the results demonstrate that our approach outperforms existing baselines.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
461,373
1510.01116
Centrality metrics and localization in core-periphery networks
Two concepts of centrality have been defined in complex networks. The first considers the centrality of a node, and many different metrics for it have been defined (e.g. eigenvector centrality, PageRank, non-backtracking centrality, etc.). The second is related to a large-scale organization of the network, the core-periphery structure, composed of a dense core plus an outlying and loosely-connected periphery. In this paper we investigate the relation between these two concepts. We consider networks generated via the Stochastic Block Model, or its degree-corrected version, with a strong core-periphery structure, and we investigate the centrality properties of the core nodes and the ability of several centrality metrics to identify them. We find that the three measures with the best performance are marginals obtained with belief propagation, PageRank, and degree centrality, while non-backtracking and eigenvector centrality (or MINRES, shown to be equivalent to the latter in the large-network limit) perform worse in the investigated networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
47,582
1102.1963
On quantum limit of optical communications: concatenated codes and joint-detection receivers
When classical information is sent over a channel with quantum-state modulation alphabet, such as the free-space optical (FSO) channel, attaining the ultimate (Holevo) limit to channel capacity requires the receiver to make joint measurements over long codeword blocks. In recent work, we showed a receiver for a pure-state channel that can attain the ultimate capacity by applying a single-shot optical (unitary) transformation on the received codeword state followed by simultaneous (but separable) projective measurements on the single-modulation-symbol state spaces. In this paper, we study the ultimate tradeoff between photon efficiency and spectral efficiency for the FSO channel. Based on our general results for the pure-state quantum channel, we show some of the first concrete examples of codes and laboratory-realizable joint-detection optical receivers that can achieve fundamentally higher (superadditive) channel capacity than receivers that physically detect each modulation symbol one at a time, as is done by all conventional (coherent or direct-detection) optical receivers.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,096
2001.01506
Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
Deep neural networks (DNNs) have achieved impressive performance on handling computer vision problems; however, it has been found that DNNs are vulnerable to adversarial examples. For this reason, adversarial perturbations have recently been studied in several respects. However, most previous works have focused on image classification tasks, and adversarial perturbations have never been studied on image-to-image (Im2Im) translation tasks, which have shown great success in handling paired and/or unpaired mapping problems in the field of autonomous driving and robotics. This paper examines different types of adversarial perturbations that can fool Im2Im frameworks for autonomous driving purposes. We propose both quasi-physical and digital adversarial perturbations that can make Im2Im models yield unexpected results. We then empirically analyze these perturbations and show that they generalize well under both paired settings for image synthesis and unpaired settings for style transfer. We also validate that there exist some perturbation thresholds over which the Im2Im mapping is disrupted or impossible. The existence of these perturbations reveals that there exist crucial weaknesses in Im2Im models. Lastly, we show how our perturbations affect the quality of outputs, pioneering the improvement of the robustness of current SOTA networks for autonomous driving.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
159,500
2102.05960
Comparative Analysis of Machine Learning Approaches to Analyze and Predict the Covid-19 Outbreak
Background. Forecasting the time of a forthcoming pandemic reduces the impact of diseases by taking precautionary steps such as public health messaging and raising the consciousness of doctors. With the continuous and rapid increase in the cumulative incidence of COVID-19, statistical and outbreak prediction models including various machine learning (ML) models are being used by the research community to track and predict the trend of the epidemic, and also in developing appropriate strategies to combat and manage its spread. Methods. In this paper, we present a comparative analysis of various ML approaches including Support Vector Machine, Random Forest, K-Nearest Neighbor and Artificial Neural Network in predicting the COVID-19 outbreak in the epidemiological domain. We first apply the autoregressive distributed lag (ARDL) method to identify and model the short and long-run relationships of the time-series COVID-19 datasets. That is, we determine the lags between a response variable and its respective explanatory time series variables as independent variables. Then, the resulting significant variables concerning their lags are used in the regression model selected by the ARDL for predicting and forecasting the trend of the epidemic. Results. Statistical measures, i.e., Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE), are used for model accuracy. The values of MAPE for the best selected models for confirmed, recovered and death cases are 0.407, 0.094 and 0.124 respectively, which fall under the category of highly accurate forecasts. In addition, we computed a fifteen-day-ahead forecast for the daily death, recovered, and confirmed cases, and the cases fluctuated across time in all aspects. Besides, the results reveal the advantages of ML algorithms for supporting the decision making of evolving short-term policies.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
219,592
2007.08095
Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis
The use of deep learning techniques has achieved significant progress for program synthesis from input-output examples. However, when the program semantics become more complex, it still remains a challenge to synthesize programs that are consistent with the specification. In this work, we propose SED, a neural program generation framework that incorporates synthesis, execution, and debugging stages. Instead of purely relying on the neural program synthesizer to generate the final program, SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs. The integration of the debugger component enables SED to modify the programs based on the execution results and specification, which resembles the coding process of human programmers. On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
187,522
2407.17216
Reduced-Space Iteratively Reweighted Second-Order Methods for Nonconvex Sparse Regularization
This paper explores a specific type of nonconvex sparsity-promoting regularization problems, namely those involving $\ell_p$-norm regularization, in conjunction with a twice continuously differentiable loss function. We propose a novel second-order algorithm designed to effectively address this class of challenging nonconvex and nonsmooth problems, showcasing several innovative features: (i) The use of an alternating strategy to solve a reweighted $\ell_1$ regularized subproblem and the subspace approximate Newton step. (ii) The reweighted $\ell_1$ regularized subproblem relies on a convex approximation to the nonconvex regularization term, enabling a closed-form solution characterized by the soft-thresholding operator. This feature allows our method to be applied to various nonconvex regularization problems. (iii) Our algorithm ensures that the iterates maintain their sign values and that nonzero components are kept away from 0 for a sufficient number of iterations, eventually transitioning to a perturbed Newton method. (iv) We provide theoretical guarantees of global convergence, local superlinear convergence in the presence of the Kurdyka-\L ojasiewicz (KL) property, and local quadratic convergence when employing the exact Newton step in our algorithm. We also showcase the effectiveness of our approach through experiments on a diverse set of model prediction problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
475,890
2307.00181
Influence maximization on temporal networks: a review
Influence maximization (IM) is an important topic in network science where a small seed set is chosen to maximize the spread of influence on a network. Recently, this problem has attracted attention on temporal networks where the network structure changes with time. IM on such dynamically varying networks is the topic of this review. We first categorize methods into two main paradigms: single and multiple seeding. In single seeding, nodes activate at the beginning of the diffusion process, and most methods either efficiently estimate the influence spread and select nodes with a greedy algorithm, or use a node-ranking heuristic. Nodes activate at different time points in the multiple seeding problem, via either sequential seeding, maintenance seeding or node probing paradigms. Throughout this review, we give special attention to deploying these algorithms in practice while also discussing existing solutions for real-world applications. We conclude by sharing important future research directions and challenges.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
376,903
1304.0018
Statistical inference framework for source detection of contagion processes on arbitrary network structures
In this paper we introduce a statistical inference framework for estimating the contagion source from a partially observed contagion spreading process on an arbitrary network structure. The framework is based on a maximum likelihood estimation of a partial epidemic realization and involves large scale simulation of contagion spreading processes from the set of potential source locations. We present a number of different likelihood estimators that are used to determine the conditional probabilities associated to observing partial epidemic realization with particular source location candidates. This statistical inference framework is also applicable for arbitrary compartment contagion spreading processes on networks. We compare estimation accuracy of these approaches in a number of computational experiments performed with the SIR (susceptible-infected-recovered), SI (susceptible-infected) and ISS (ignorant-spreading-stifler) contagion spreading models on synthetic and real-world complex networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
23,358
1910.03875
How Well Do WGANs Estimate the Wasserstein Metric?
Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for the similarity measure has been the Wasserstein metric, which can be expressed in the Kantorovich duality formulation as the optimum difference of the expected values of a potential function under the real data distribution and the model hypothesis. In practice, the potential is approximated with a neural network and is called the discriminator. Duality constraints on the function class of the discriminator are enforced approximately, and the expectations are estimated from samples. This gives at least three sources of errors: the approximated discriminator and constraints, the estimation of the expectation value, and the optimization required to find the optimal potential. In this work, we study how well the methods, that are used in generative adversarial networks to approximate the Wasserstein metric, perform. We consider, in particular, the $c$-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the $c$-transform allows for a more accurate estimation of the true Wasserstein metric from samples, but surprisingly, does not perform the best in the generative setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,616
2311.01993
Active Exploration in Iterative Gaussian Process Regression for Uncertainty Modeling in Autonomous Racing
Autonomous racing creates challenging control problems, but Model Predictive Control (MPC) has made promising steps toward solving both the minimum lap-time problem and head-to-head racing. Yet, accurate models of the system are necessary for model-based control, including models of vehicle dynamics and opponent behavior. Both dynamics model error and opponent behavior can be modeled with Gaussian Process (GP) regression. GP models can be updated iteratively from data collected using the controller, but the strength of the GP model depends on the diversity of the training data. We propose a novel active exploration mechanism for iterative GP regression that purposefully collects additional data at regions of higher uncertainty in the GP model. In the exploration, a MPC collects diverse data by balancing the racing objectives and the exploration criterion; then the GP is re-trained. The process is repeated iteratively; in later iterations, the exploration is deactivated, and only the racing objectives are optimized. Thus, the MPC can achieve better performance by leveraging the improved GP model. We validate our approach in the highly realistic racing simulation platform Gran Turismo Sport of Sony Interactive Entertainment Inc for a minimum lap time challenge, and in numerical simulation of head-to-head. Our active exploration mechanism yields a significant improvement in the GP prediction accuracy compared to previous approaches and, thus, an improved racing performance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
405,255
2211.03267
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following
Embodied Instruction Following (EIF) studies how autonomous mobile manipulation robots should be controlled to accomplish long-horizon tasks described by natural language instructions. While much research on EIF is conducted in simulators, the ultimate goal of the field is to deploy the agents in real life. This is one of the reasons why recent methods have moved away from training models end-to-end and take modular approaches, which do not need the costly expert operation data. However, as it is still in the early days of importing modular ideas to EIF, a search for modules effective in the EIF task is still far from a conclusion. In this paper, we propose to extend the modular design using knowledge obtained from two external sources. First, we show that embedding the physical constraints of the deployed robots into the module design is highly effective. Our design also allows the same modular system to work across robots of different configurations with minimal modifications. Second, we show that the landmark-based object search, previously implemented by a trained model requiring a dedicated set of data, can be replaced by an implementation that prompts pretrained large language models for landmark-object relationships, eliminating the need for collecting dedicated training data. Our proposed Prompter achieves 41.53\% and 45.32\% on the ALFRED benchmark with high-level instructions only and step-by-step instructions, respectively, significantly outperforming the previous state of the art by 5.46\% and 9.91\%.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
328,885
2102.08314
Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients
We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix. We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many of the sparse variational approach benefits while reducing the bias introduced into parameter learning. The basis of our bound is a more careful analysis of the log-determinant term appearing in the log marginal likelihood, as well as using the method of conjugate gradients to derive tight lower bounds on the term involving a quadratic form. Our approach is a step forward in unifying methods relying on lower bound maximisation (e.g. variational methods) and iterative approaches based on conjugate gradients for training Gaussian processes. In experiments, we show improved predictive performance with our model for a comparable amount of training time compared to other conjugate gradient based approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
220,408
2211.01484
Data Level Lottery Ticket Hypothesis for Vision Transformers
The conventional lottery ticket hypothesis (LTH) claims that there exists a sparse subnetwork within a dense neural network and a proper random initialization method called the winning ticket, such that it can be trained from scratch to almost as good as the dense counterpart. Meanwhile, the research of LTH in vision transformers (ViTs) is scarcely evaluated. In this paper, we first show that the conventional winning ticket is hard to find at the weight level of ViTs by existing methods. Then, we generalize the LTH for ViTs to input data consisting of image patches inspired by the input dependence of ViTs. That is, there exists a subset of input image patches such that a ViT can be trained from scratch by using only this subset of patches and achieve similar accuracy to the ViTs trained by using all image patches. We call this subset of input patches the winning tickets, which represent a significant amount of information in the input data. We use a ticket selector to generate the winning tickets based on the informativeness of patches for various types of ViT, including DeiT, LV-ViT, and Swin Transformers. The experiments show that there is a clear difference between the performance of models trained with winning tickets and randomly selected subsets, which verifies our proposed theory. We elaborate on the analogical similarity between our proposed Data-LTH-ViTs and the conventional LTH to further verify the integrity of our theory. The source code is available at https://github.com/shawnricecake/vit-lottery-ticket-input.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
328,241
1903.11207
Information Maximizing Visual Question Generation
Though image-to-sequence generation models have become overwhelmingly popular in human-computer communications, they suffer from strongly favoring safe generic questions ("What is in this picture?"). Generating uninformative but relevant questions is not sufficient or useful. We argue that a good question is one that has a tightly focused purpose --- one that is aimed at expecting a specific type of response. We build a model that maximizes mutual information between the image, the expected answer and the generated question. To overcome the non-differentiability of discrete natural language tokens, we introduce a variational continuous latent space onto which the expected answers project. We regularize this latent space with a second latent space that ensures clustering of similar answers. Even when we don't know the expected answer, this second latent space can generate goal-driven questions specifically aimed at extracting objects ("what is the person throwing"), attributes, ("What kind of shirt is the person wearing?"), color ("what color is the frisbee?"), material ("What material is the frisbee?"), etc. We quantitatively show that our model is able to retain information about an expected answer category, resulting in more diverse, goal-driven questions. We launch our model on a set of real world images and extract previously unseen visual concepts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
125,451
1904.09609
TiK-means: $K$-means clustering for skewed groups
The $K$-means algorithm is extended to allow for partitioning of skewed groups. Our algorithm is called TiK-Means and contributes a $K$-means type algorithm that assigns observations to groups while estimating their skewness-transformation parameters. The resulting groups and transformation reveal general-structured clusters that can be explained by inverting the estimated transformation. Further, a modification of the jump statistic chooses the number of groups. Our algorithm is evaluated on simulated and real-life datasets and then applied to a long-standing astronomical dispute regarding the distinct kinds of gamma ray bursts.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
128,424
2002.07033
Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing
Knowledge tracing, the act of modeling a student's knowledge through learning activities, is an extensively studied problem in the field of computer-aided education. Although models with attention mechanism have outperformed traditional approaches such as Bayesian knowledge tracing and collaborative filtering, they share two limitations. Firstly, the models rely on shallow attention layers and fail to capture complex relations among exercises and responses over time. Secondly, different combinations of queries, keys and values for the self-attention layer for knowledge tracing were not extensively explored. Usual practice of using exercises and interactions (exercise-response pairs) as queries and keys/values respectively lacks empirical support. In this paper, we propose a novel Transformer based model for knowledge tracing, SAINT: Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder structure where exercise and response embedding sequence separately enter the encoder and the decoder respectively, which allows to stack attention layers multiple times. To the best of our knowledge, this is the first work to suggest an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately. The empirical evaluations on a large-scale knowledge tracing dataset show that SAINT achieves the state-of-the-art performance in knowledge tracing with the improvement of AUC by 1.8% compared to the current state-of-the-art models.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
164,364
1903.01864
Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection
In this work, we propose a novel method termed \emph{Frustum ConvNet (F-ConvNet)} for amodal 3D object detection from point clouds. Given 2D region proposals in an RGB image, our method first generates a sequence of frustums for each region proposal, and uses the obtained frustums to group local points. F-ConvNet aggregates point-wise features as frustum-level feature vectors, and arrays these feature vectors as a feature map for use of its subsequent component of fully convolutional network (FCN), which spatially fuses frustum-level features and supports an end-to-end and continuous estimation of oriented boxes in the 3D space. We also propose component variants of F-ConvNet, including an FCN variant that extracts multi-resolution frustum features, and a refined use of F-ConvNet over a reduced 3D space. Careful ablation studies verify the efficacy of these component variants. F-ConvNet assumes no prior knowledge of the working 3D environment and is thus dataset-agnostic. We present experiments on both the indoor SUN-RGBD and outdoor KITTI datasets. F-ConvNet outperforms all existing methods on SUN-RGBD, and at the time of submission it outperforms all published works on the KITTI benchmark. Code has been made available at: \url{https://github.com/zhixinwang/frustum-convnet}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
123,355
2410.01731
ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation
The practical use of text-to-image generation has evolved from simple, monolithic models to complex workflows that combine multiple specialized components. While workflow-based approaches can lead to improved image quality, crafting effective workflows requires significant expertise, owing to the large number of available components, their complex inter-dependence, and their dependence on the generation prompt. Here, we introduce the novel task of prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt. We propose two LLM-based approaches to tackle this task: a tuning-based method that learns from user-preference data, and a training-free method that uses the LLM to select existing flows. Both approaches lead to improved image quality when compared to monolithic models or generic, prompt-independent workflows. Our work shows that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation quality, complementing existing research directions in the field.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
true
493,915
2406.03141
Floating Anchor Diffusion Model for Multi-motif Scaffolding
Motif scaffolding seeks to design scaffold structures for constructing proteins with functions derived from the desired motif, which is crucial for the design of vaccines and enzymes. Previous works approach the problem by inpainting or conditional generation. Both of them can only scaffold motifs with fixed positions, and the conditional generation cannot guarantee the presence of motifs. However, prior knowledge of the relative motif positions in a protein is not readily available, and constructing a protein with multiple functions in one protein is more general and significant because of the synergies between functions. We propose a Floating Anchor Diffusion (FADiff) model. FADiff allows motifs to float rigidly and independently in the process of diffusion, which guarantees the presence of motifs and automates the motif position design. Our experiments demonstrate the efficacy of FADiff with high success rates and designable novel scaffolds. To the best of our knowledge, FADiff is the first work to tackle the challenge of scaffolding multiple motifs without relying on the expertise of relative motif positions in the protein. Code is available at https://github.com/aim-uofa/FADiff.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
461,107
2401.10773
Multilevel lattice codes from Hurwitz quaternion integers
This work presents an extension of the Construction $\pi_A$ lattices proposed in \cite{huang2017construction} to Hurwitz quaternion integers. This construction is provided by using an isomorphism from a version of the Chinese remainder theorem applied to maximal orders, in contrast to natural orders in prior works. Exploiting this map, we analyze the performance of the resulting multilevel lattice codes and highlight via computer simulations their notably reduced computational complexity provided by multistage decoding. Moreover, it is shown that this construction effectively attains the Poltyrev limit.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
422,768
1908.05567
Deep reinforcement learning in World-Earth system models to discover sustainable management strategies
Increasingly complex, non-linear World-Earth system models are used for describing the dynamics of the biophysical Earth system and the socio-economic and socio-cultural World of human societies and their interactions. Identifying pathways towards a sustainable future in these models for informing policy makers and the wider public, e.g. pathways leading to a robust mitigation of dangerous anthropogenic climate change, is a challenging and widely investigated task in the field of climate research and broader Earth system science. This problem is particularly difficult when constraints on avoiding transgressions of planetary boundaries and social foundations need to be taken into account. In this work, we propose to combine recently developed machine learning techniques, namely deep reinforcement learning (DRL), with classical analysis of trajectories in the World-Earth system. Based on the concept of the agent-environment interface, we develop an agent that is generally able to act and learn in variable manageable environment models of the Earth system. We demonstrate the potential of our framework by applying DRL algorithms to two stylized World-Earth system models. Conceptually, we explore thereby the feasibility of finding novel global governance policies leading into a safe and just operating space constrained by certain planetary and socio-economic boundaries. The artificially intelligent agent learns that the timing of a specific mix of taxing carbon emissions and subsidies on renewables is of crucial relevance for finding World-Earth system trajectories that are sustainable on the long term.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
141,749
2006.00814
Online Versus Offline NMT Quality: An In-depth Analysis on English-German and German-English
We conduct in this work an evaluation study comparing offline and online neural machine translation architectures. Two sequence-to-sequence models: convolutional Pervasive Attention (Elbayad et al. 2018) and attention-based Transformer (Vaswani et al. 2017) are considered. We investigate, for both architectures, the impact of online decoding constraints on the translation quality through a carefully designed human evaluation on English-German and German-English language pairs, the latter being particularly sensitive to latency constraints. The evaluation results allow us to identify the strengths and shortcomings of each model when we shift to the online setup.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
179,577
2103.05255
Improving Generalizability in Limited-Angle CT Reconstruction with Sinogram Extrapolation
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angle range is challenging, especially when the angle range is extremely small. Both analytical and iterative models need more projections for effective modeling. Deep learning methods have gained prevalence due to their excellent reconstruction performances, but such success is mainly limited within the same dataset and does not generalize across datasets with different distributions. Hereby we propose ExtraPolationNetwork for limited-angle CT reconstruction via the introduction of a sinogram extrapolation module, which is theoretically justified. The module complements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, similar to existing approaches. More importantly, we show that using such a sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., COVID-19 and LIDC datasets) when compared to existing approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
223,923
2411.00208
Using Large Language Models for a standard assessment mapping for sustainable communities
This paper presents a new approach to urban sustainability assessment through the use of Large Language Models (LLMs) to streamline the use of the ISO 37101 framework to automate and standardise the assessment of urban initiatives against the six "sustainability purposes" and twelve "issues" outlined in the standard. The methodology includes the development of a custom prompt based on the standard definitions and its application to two different datasets: 527 projects from the Paris Participatory Budget and 398 activities from the PROBONO Horizon 2020 project. The results show the effectiveness of LLMs in quickly and consistently categorising different urban initiatives according to sustainability criteria. The approach is particularly promising when it comes to breaking down silos in urban planning by providing a holistic view of the impact of projects. The paper discusses the advantages of this method over traditional human-led assessments, including significant time savings and improved consistency. However, it also points out the importance of human expertise in interpreting results and ethical considerations. This study hopefully can contribute to the growing body of work on AI applications in urban planning and provides a novel method for operationalising standardised sustainability frameworks in different urban contexts.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
504,484
1712.03073
DeepWear: Adaptive Local Offloading for On-Wearable Deep Learning
Due to their on-body and ubiquitous nature, wearables can generate a wide range of unique sensor data, creating countless opportunities for deep learning tasks. We propose DeepWear, a deep learning (DL) framework for wearable devices to improve the performance and reduce the energy footprint. DeepWear strategically offloads DL tasks from a wearable device to its paired handheld device through the local network. Compared to remote-cloud-based offloading, DeepWear requires no Internet connectivity, consumes less energy, and is robust to privacy breaches. DeepWear provides various novel techniques such as context-aware offloading, strategic model partition, and pipelining support to efficiently utilize the processing capacity from nearby paired handhelds. Deployed as a user-space library, DeepWear offers developer-friendly APIs that are as simple as those in traditional DL libraries such as TensorFlow. We have implemented DeepWear on the Android OS and evaluated it on COTS smartphones and smartwatches with real DL models. DeepWear brings up to 5.08X and 23.0X execution speedup, as well as 53.5% and 85.5% energy saving compared to wearable-only and handheld-only strategies, respectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
86,390
1706.05572
Information Structure Design in Team Decision Problems
We consider a problem of information structure design in team decision problems and team games. We propose simple, scalable greedy algorithms for adding a set of extra information links to optimize team performance and resilience to non-cooperative and adversarial agents. We show via a simple counterexample that the set function mapping additional information links to team performance is in general not supermodular. Although this implies that the greedy algorithm is not accompanied by worst-case performance guarantees, we illustrate through numerical experiments that it can produce effective and often optimal or near optimal information structure modifications.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
75,535
2311.09172
Enhancing AmBC Systems with Deep Learning for Joint Channel Estimation and Signal Detection
The era of ubiquitous, affordable wireless connectivity has opened doors to countless practical applications. In this context, ambient backscatter communication (AmBC) stands out, utilizing passive tags to establish connections with readers by harnessing reflected ambient radio frequency (RF) signals. However, conventional data detectors face limitations due to their inadequate knowledge of channel and RF-source parameters. To address this challenge, we propose an innovative approach using a deep neural network (DNN) for channel state estimation (CSI) and signal detection within AmBC systems. Unlike traditional methods that separate CSI estimation and data detection, our approach leverages a DNN to implicitly estimate CSI and simultaneously detect data. The DNN model, trained offline using simulated data derived from channel statistics, excels in online data recovery, ensuring robust performance in practical scenarios. Comprehensive evaluations validate the superiority of our proposed DNN method over traditional detectors, particularly in terms of bit error rate (BER). In high signal-to-noise ratio (SNR) conditions, our method exhibits an impressive approximately 20% improvement in BER performance compared to the maximum likelihood (ML) approach. These results underscore the effectiveness of our developed approach for AmBC channel estimation and signal detection. In summary, our method outperforms traditional detectors, bolstering the reliability and efficiency of AmBC systems, even in challenging channel conditions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
408,016
0906.0744
Ergodic Fading Interference Channels: Sum-Capacity and Separability
The sum-capacity for specific sub-classes of ergodic fading Gaussian two-user interference channels (IFCs) is developed under the assumption of perfect channel state information at all transmitters and receivers. For the sub-classes of uniformly strong (every fading state is strong) and ergodic very strong two-sided IFCs (a mix of strong and weak fading states satisfying specific fading averaged conditions) the optimality of completely decoding the interference, i.e., converting the IFC to a compound multiple access channel (C-MAC), is proved. It is also shown that this capacity-achieving scheme requires encoding and decoding jointly across all fading states. As an achievable scheme and also as a topic of independent interest, the capacity region and the corresponding optimal power policies for an ergodic fading C-MAC are developed. For the sub-class of uniformly weak IFCs (every fading state is weak), genie-aided outer bounds are developed. The bounds are shown to be achieved by treating interference as noise and by separable coding for one-sided fading IFCs. Finally, for the sub-class of one-sided hybrid IFCs (a mix of weak and strong states that do not satisfy ergodic very strong conditions), an achievable scheme involving rate splitting and joint coding across all fading states is developed and is shown to perform at least as well as a separable coding scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
3,825
2202.12575
Mining Naturally-occurring Corrections and Paraphrases from Wikipedia's Revision History
Naturally-occurring instances of linguistic phenomena are important both for training and for evaluating automatic processes on text. When available in large quantities, they also prove interesting material for linguistic studies. In this article, we present a new resource built from Wikipedia's revision history, called WiCoPaCo (Wikipedia Correction and Paraphrase Corpus), which contains numerous edits by human contributors, including various corrections and rewritings. We discuss the main motivations for building such a resource, describe how it was built, and present initial applications on French.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
282,294