| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1806.04611 | A Hierarchical Fuzzy System for an Advanced Driving Assistance System | In this study, we present a hierarchical fuzzy system that evaluates the risk state for a Driver Assistance System, with the aim of contributing to a reduction in the number of road accidents. A key component of this system is its ability to continually detect and assess inside and outside risks in real time: outside-car risks, by detecting various moving objects on the road using computer vision approaches; and inside risks, through an automatic system for drowsy-driving detection that evaluates the driver's EEG signals, combining computer vision techniques and biometric factors (electroencephalogram, EEG). The proposed system is composed of three main modules. The first module identifies the driver's drowsiness state through eye movements (physical drowsiness). The second detects and analyses the driver's physiological signals, also to identify the drowsiness state (mental drowsiness). The third evaluates road driving risks by detecting the different moving objects on the road in real time. The final decision is obtained by merging the outputs of the three detection systems through fuzzy decision rules. Finally, the proposed approach has been evaluated on ten samples from a proposed dataset. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 100,282 |
2203.15535 | Game-theoretical trajectory planning enhances social acceptability for humans | Since humans and robots are increasingly sharing portions of their operational spaces, experimental evidence is needed to ascertain the safety and social acceptability of robots in human-populated environments. Although several studies have aimed at devising strategies for robot trajectory planning to perform \emph{safe} motion in populated environments, few efforts have \emph{measured} to what extent a robot trajectory is \emph{accepted} by humans. Here, we present a navigation system for autonomous robotics that ensures safety and social acceptability of robotic trajectories. We overcome the typical reactive nature of state-of-the-art trajectory planners by leveraging non-cooperative game theory to design a planner that encapsulates human-like features of preservation of a vital space, recognition of groups, sequential and strategized decision making, and smooth obstacle avoidance. Social acceptability is measured through a variation of the Turing test administered in the form of a survey questionnaire to a pool of 691 participants. Comparison terms for our tests are a state-of-the-art navigation algorithm (Enhanced Vector Field Histogram, VFH) and purely human trajectories. While all participants easily recognized the non-human nature of VFH-generated trajectories, the distinction between game-theoretical trajectories and human ones was hardly revealed. These results mark a strong milestone toward the full integration of robots in social environments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 288,444 |
2208.10494 | Dataset Condensation with Latent Space Knowledge Factorization and Sharing | In this paper, we introduce a novel approach for systematically solving the dataset condensation problem in an efficient manner by exploiting the regularity in a given dataset. Instead of condensing the dataset directly in the original input space, we assume a generative process of the dataset with a set of learnable codes defined in a compact latent space followed by a set of tiny decoders which map them differently to the original input space. By combining different codes and decoders interchangeably, we can dramatically increase the number of synthetic examples with essentially the same parameter count, because the latent space is much lower dimensional and since we can assume as many decoders as necessary to capture different styles represented in the dataset with negligible cost. Such knowledge factorization allows efficient sharing of information between synthetic examples in a systematic way, providing a far better trade-off between compression ratio and quality of the generated examples. We experimentally show that our method achieves new state-of-the-art records by significant margins on various benchmark datasets such as SVHN, CIFAR10, CIFAR100, and TinyImageNet. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 314,084 |
1801.04651 | Deep Net Triage: Analyzing the Importance of Network Layers via Structural Compression | Despite their prevalence, deep networks are poorly understood. This is due, at least in part, to their highly parameterized nature. As such, while certain structures have been found to work better than others, the significance of a model's unique structure, or the importance of a given layer, and how these translate to overall accuracy, remains unclear. In this paper, we analyze these properties of deep neural networks via a process we term deep net triage. Like medical triage---the assessment of the importance of various wounds---we assess the importance of layers in a neural network, or as we call it, their criticality. We do this by applying structural compression, whereby we reduce a block of layers to a single layer. After compressing a set of layers, we apply a combination of initialization and training schemes, and look at network accuracy, convergence, and the layer's learned filters to assess the criticality of the layer. We apply this analysis across four data sets of varying complexity. We find that the accuracy of the model does not depend on which layer was compressed; that accuracy can be recovered or exceeded after compression by fine-tuning across the entire model; and, lastly, that Knowledge Distillation can be used to hasten convergence of a compressed network, but constrains the accuracy attainable to that of the base model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 88,321 |
1806.10333 | A Generalized Data Representation and Training-Performance Analysis for Deep Learning-Based Communications Systems | Deep learning (DL)-based autoencoder is a potential architecture to implement end-to-end communication systems. In this letter, we first give a brief introduction to the autoencoder-represented communication system. Then, we propose a novel generalized data representation (GDR) aiming to improve the data rate of DL-based communication systems. Finally, simulation results show that the proposed GDR scheme has lower training complexity, comparable block error rate performance and higher channel capacity than the conventional one-hot vector scheme. Furthermore, we investigate the effect of signal-to-noise ratio (SNR) in DL-based communication systems and prove that training at a high SNR could produce a good training performance for autoencoder. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 101,529 |
2209.02332 | Ecosystem for Demand-side Flexibility Revisited: The Danish Solution | Denmark has recently set a legislation called Market Model 3.0 to make the ecosystem for demand-side flexibility more attractive to stakeholders involved. The main change is to relax the previous mandate that required each aggregator to be associated with a retailer and a balance responsible party. We explain the rationale behind such a change and its implications, particularly on the pre-qualification of demand-side portfolios providing ancillary services. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 316,171 |
2210.14678 | Investigating the Role of Centering Theory in the Context of Neural Coreference Resolution Systems | Centering theory (CT; Grosz et al., 1995) provides a linguistic analysis of the structure of discourse. According to the theory, local coherence of discourse arises from the manner and extent to which successive utterances make reference to the same entities. In this paper, we investigate the connection between centering theory and modern coreference resolution systems. We provide an operationalization of centering and systematically investigate if neural coreference resolvers adhere to the rules of centering theory by defining various discourse metrics and developing a search-based methodology. Our information-theoretic analysis reveals a positive dependence between coreference and centering; but also shows that high-quality neural coreference resolvers may not benefit much from explicitly modeling centering ideas. Our analysis further shows that contextualized embeddings contain much of the coherence information, which helps explain why CT can only provide little gains to modern neural coreference resolvers which make use of pretrained representations. Finally, we discuss factors that contribute to coreference which are not modeled by CT such as world knowledge and recency bias. We formulate a version of CT that also models recency and show that it captures coreference information better compared to vanilla CT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 326,646 |
1212.6930 | Private Broadcasting over Independent Parallel Channels | We study private broadcasting of two messages to two groups of receivers over independent parallel channels. One group consists of an arbitrary number of receivers interested in a common message, whereas the other group has only one receiver. Each message must be kept confidential from the receiver(s) in the other group. Each of the sub-channels is degraded, but the order of receivers on each channel can be different. While corner points of the capacity region were characterized in earlier works, we establish the capacity region and show the optimality of a superposition strategy. For the case of parallel Gaussian channels, we show that a Gaussian input distribution is optimal. We also discuss an extension of our setup to broadcasting over a block-fading channel and demonstrate significant performance gains using the proposed scheme over a baseline time-sharing scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,678 |
2211.06212 | From Competition to Collaboration: Making Toy Datasets on Kaggle Clinically Useful for Chest X-Ray Diagnosis Using Federated Learning | Chest X-ray (CXR) datasets hosted on Kaggle, though useful from a data science competition standpoint, have limited utility in clinical use because of their narrow focus on diagnosing one specific disease. In real-world clinical use, multiple diseases need to be considered since they can co-exist in the same patient. In this work, we demonstrate how federated learning (FL) can be used to make these toy CXR datasets from Kaggle clinically useful. Specifically, we train a single FL classification model (`global`) using two separate CXR datasets -- one annotated for presence of pneumonia and the other for presence of pneumothorax (two common and life-threatening conditions) -- capable of diagnosing both. We compare the performance of the global FL model with models trained separately on both datasets (`baseline`) for two different model architectures. On a standard, naive 3-layer CNN architecture, the global FL model achieved AUROC of 0.84 and 0.81 for pneumonia and pneumothorax, respectively, compared to 0.85 and 0.82, respectively, for both baseline models (p>0.05). Similarly, on a pretrained DenseNet121 architecture, the global FL model achieved AUROC of 0.88 and 0.91 for pneumonia and pneumothorax, respectively, compared to 0.89 and 0.91, respectively, for both baseline models (p>0.05). Our results suggest that FL can be used to create global `meta` models to make toy datasets from Kaggle clinically useful, a step forward towards bridging the gap from bench to bedside. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 329,814 |
1911.13060 | Orthogonal Wasserstein GANs | Wasserstein-GANs have been introduced to address the deficiencies of generative adversarial networks (GANs) regarding the problems of vanishing gradients and mode collapse during the training, leading to improved convergence behaviour and improved image quality. However, Wasserstein-GANs require the discriminator to be Lipschitz continuous. In current state-of-the-art Wasserstein-GANs this constraint is enforced via gradient norm regularization. In this paper, we demonstrate that this regularization does not encourage a broad distribution of spectral-values in the discriminator weights, hence resulting in less fidelity in the learned distribution. We therefore investigate the possibility of substituting this Lipschitz constraint with an orthogonality constraint on the weight matrices. We compare three different weight orthogonalization techniques with regards to their convergence properties, their ability to ensure the Lipschitz condition and the achieved quality of the learned distribution. In addition, we provide a comparison to Wasserstein-GANs trained with current state-of-the-art methods, where we demonstrate the potential of solely using orthogonality-based regularization. In this context, we propose an improved training procedure for Wasserstein-GANs which utilizes orthogonalization to further increase its generalization capability. Finally, we provide a novel metric to evaluate the generalization capabilities of the discriminators of different Wasserstein-GANs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 155,574 |
2206.04525 | Large-Scale Crosstalk-Corrected Thermo-Optic Phase Shifter Arrays in Silicon Photonics | We introduce a thermo-optic phase shifter (TOPS) array architecture with independent phase control of each phase shifter for large-scale and high-density photonic integrated circuits with two different control schemes: pulse amplitude modulation (PAM) and pulse width modulation (PWM). We realize a compact spiral TOPS and a 288-element high-density row-column TOPS array with this architecture and drive TOPS with waveforms of both control schemes and of different array sizes. We present a thermal excitation model and a finite difference method-based simulation to simulate large-scale TOPS arrays and compare both schemes experimentally and theoretically. We also analyze the effects of thermal crosstalk in the realized TOPS array and implement a thermal crosstalk correction algorithm with the developed model. The high-density TOPS array architecture and the thermal crosstalk correction algorithm pave the way for high-density TOPS arrays with independent phase control in large-scale photonic integrated circuits interfaced with electronics limited in voltage swing and bandwidth. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 301,660 |
1604.06751 | evt_MNIST: A spike based version of traditional MNIST | Benchmarks and datasets have an important role in the evaluation of machine learning algorithms and neural network implementations. Traditional image datasets such as MNIST are used to evaluate the efficiency of different training algorithms in neural networks. This demand is different in Spiking Neural Networks (SNN), as they require spiking inputs. It is widely believed that in the biological cortex the timing of spikes is irregular. Poisson distributions provide adequate descriptions of this irregularity for generating appropriate spikes. Here, we introduce a spike-based version of MNIST (the handwritten digits dataset) using a Poisson distribution and show the Poissonian property of the generated streams. We introduce this new version as evt_MNIST, which can be used for neural network evaluation. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 54,988 |
1408.6493 | Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution | We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables for the information conveying and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using corresponding statistics which are provided by our sophisticated sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels, and this information is used further in the adaptive quadrature decoding process. We define the technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error-minimum in the presence of a Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection. We also extend the results for a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities, the signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme allows us to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable valuable information in diverse measurement and transmission conditions. The framework is particularly convenient for experimental CVQKD scenarios. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 35,632 |
2309.04508 | Spatial-Temporal Graph Attention Fuser for Calibration in IoT Air Pollution Monitoring Systems | The use of Internet of Things (IoT) sensors for air pollution monitoring has significantly increased, resulting in the deployment of low-cost sensors. Despite this advancement, accurately calibrating these sensors in uncontrolled environmental conditions remains a challenge. To address this, we propose a novel approach that leverages graph neural networks, specifically the graph attention network module, to enhance the calibration process by fusing data from sensor arrays. Through our experiments, we demonstrate the effectiveness of our approach in significantly improving the calibration accuracy of sensors in IoT air pollution monitoring platforms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 390,753 |
2305.19406 | PaintSeg: Training-free Segmentation via Painting | The paper introduces PaintSeg, a new unsupervised method for segmenting objects without any training. We propose an adversarial masked contrastive painting (AMCP) process, which creates a contrast between the original image and a painted image in which a masked area is painted using off-the-shelf generative models. During the painting process, inpainting and outpainting are alternated, with the former masking the foreground and filling in the background, and the latter masking the background while recovering the missing part of the foreground object. Inpainting and outpainting, also referred to as I-step and O-step, allow our method to gradually advance the target segmentation mask toward the ground truth without supervision or training. PaintSeg can be configured to work with a variety of prompts, e.g. coarse masks, boxes, scribbles, and points. Our experimental results demonstrate that PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and point-prompt segmentation tasks, providing a training-free solution suitable for unsupervised segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 369,509 |
2105.04616 | Automatic Classification of Human Translation and Machine Translation: A Study from the Perspective of Lexical Diversity | By using a trigram model and fine-tuning a pretrained BERT model for sequence classification, we show that machine translation and human translation can be classified with an accuracy above chance level, which suggests that machine translation and human translation are different in a systematic way. The classification accuracy of machine translation is much higher than that of human translation. We show that this may be explained by the difference in lexical diversity between machine translation and human translation. If machine translation has independent patterns from human translation, automatic metrics which measure the deviation of machine translation from human translation may conflate difference with quality. Our experiment with two different types of automatic metrics shows correlation with the result of the classification task. Therefore, we suggest the difference in lexical diversity between machine translation and human translation be given more attention in machine translation evaluation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 234,556 |
2303.00551 | Optimal Placement of Electric Vehicle Charging Stations in Populated Regions of Tehran for Various Demands Distribution | Redacted by arXiv. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 348,637 |
2107.14593 | Neural Variational Learning for Grounded Language Acquisition | We propose a learning system in which language is grounded in visual percepts without specific pre-defined categories of terms. We present a unified generative method to acquire a shared semantic/visual embedding that enables the learning of language about a wide range of real-world objects. We evaluate the efficacy of this learning by predicting the semantics of objects and comparing the performance with neural and non-neural inputs. We show that this generative approach exhibits promising results in language grounding without pre-specifying visual categories under low resource settings. Our experiments demonstrate that this approach is generalizable to multilingual, highly varied datasets. | false | false | false | false | true | false | true | true | true | false | false | false | false | false | false | false | false | false | 248,516 |
2412.01495 | Adversarial Attacks on Hyperbolic Networks | As hyperbolic deep learning grows in popularity, so does the need for adversarial robustness in the context of such a non-Euclidean geometry. To this end, this paper proposes hyperbolic alternatives to the commonly used FGM and PGD adversarial attacks. Through interpretable synthetic benchmarks and experiments on existing datasets, we show how the existing and newly proposed attacks differ. Moreover, we investigate the differences in adversarial robustness between Euclidean and fully hyperbolic networks. We find that these networks suffer from different types of vulnerabilities and that the newly proposed hyperbolic attacks cannot address these differences. Therefore, we conclude that the shifts in adversarial robustness are due to the models learning distinct patterns resulting from their different geometries. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 513,139 |
2407.16602 | Functional Acceleration for Policy Mirror Descent | We apply functional acceleration to the Policy Mirror Descent (PMD) general family of algorithms, which cover a wide range of novel and fundamental methods in Reinforcement Learning (RL). Leveraging duality, we propose a momentum-based PMD update. By taking the functional route, our approach is independent of the policy parametrization and applicable to large-scale optimization, covering previous applications of momentum at the level of policy parameters as a special case. We theoretically analyze several properties of this approach and complement with a numerical ablation study, which serves to illustrate the policy optimization dynamics on the value polytope, relative to different algorithmic design choices in this space. We further characterize numerically several features of the problem setting relevant for functional acceleration, and lastly, we investigate the impact of approximation on their learning mechanics. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 475,652 |
1603.08070 | A generalized flow for multi-class and binary classification tasks: An Azure ML approach | The constant growth in present day real-world databases poses computational challenges for a single computer. Cloud-based platforms, on the other hand, are capable of handling large volumes of information manipulation tasks, thereby necessitating their use for large real-world data set computations. This work focuses on creating a novel Generalized Flow within the cloud-based computing platform: Microsoft Azure Machine Learning Studio (MAMLS) that accepts multi-class and binary classification data sets alike and processes them to maximize the overall classification accuracy. First, each data set is split into training and testing data sets, respectively. Then, linear and nonlinear classification model parameters are estimated using the training data set. Data dimensionality reduction is then performed to maximize classification accuracy. For multi-class data sets, data centric information is used to further improve overall classification accuracy by reducing the multi-class classification to a series of hierarchical binary classification tasks. Finally, the performance of the optimized classification model thus achieved is evaluated and scored on the testing data set. The classification characteristics of the proposed flow are comparatively evaluated on 3 public data sets and a local data set with respect to existing state-of-the-art methods. On the 3 public data sets, the proposed flow achieves 78-97.5% classification accuracy. Also, the local data set, created using the information regarding presence of Diabetic Retinopathy lesions in fundus images, results in 85.3-95.7% average classification accuracy, which is higher than the existing methods. Thus, the proposed generalized flow can be useful for a wide range of application-oriented "big data sets". | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 53,711 |
2412.20032 | Online Low-Carbon Workload, Energy, and Temperature Management of Distributed Data Centers | Data centers have become one of the major energy consumers, making their low-carbon operations critical to achieving global carbon neutrality. Although distributed data centers have the potential to reduce costs and emissions through cooperation, they are facing challenges due to uncertainties. This paper proposes an online approach to co-optimize the workload, energy, and temperature strategies across distributed data centers, targeting minimal total cost, controlled carbon emissions, and adherence to operational constraints. Lyapunov optimization technique is adopted to derive a parametric real-time strategy that accommodates uncertainties in workload demands, ambient temperature, electricity prices, and carbon intensities, without requiring prior knowledge of their distributions. A theoretical upper bound for the optimality gap is derived, based on which a linear programming problem is proposed to optimize the strategy parameters, enhancing performance while ensuring operational constraints. Case studies and method comparison validate the proposed method's effectiveness in reducing costs and carbon emissions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 521,065 |
1908.06527 | The Runtime of the Compact Genetic Algorithm on Jump Functions | In the first and so far only mathematical runtime analysis of an estimation-of-distribution algorithm (EDA) on a multimodal problem, Hasen\"ohrl and Sutton (GECCO 2018) showed for any $k = o(n)$ that the compact genetic algorithm (cGA) with any hypothetical population size $\mu = \Omega(ne^{4k} + n^{3.5+\varepsilon})$ with high probability finds the optimum of the $n$-dimensional jump function with jump size $k$ in time $O(\mu n^{1.5} \log n)$. We significantly improve this result for small jump sizes $k \le \frac 1 {20} \ln n -1$. In this case, already for $\mu = \Omega(\sqrt n \log n) \cap \text{poly}(n)$ the runtime of the cGA with high probability is only $O(\mu \sqrt n)$. For the smallest admissible values of $\mu$, our result gives a runtime of $O(n \log n)$, whereas the previous one only shows $O(n^{5+\varepsilon})$. Since it is known that the cGA with high probability needs at least $\Omega(\mu \sqrt n)$ iterations to optimize the unimodal OneMax function, our result shows that the cGA in contrast to most classic evolutionary algorithms here is able to cross moderate-sized valleys of low fitness at no extra cost. For large $k$, we show that the exponential (in $k$) runtime guarantee of Hasen\"ohrl and Sutton is tight and cannot be improved, also not by using a smaller hypothetical population size. We prove that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size $k$. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 142,037 |
2012.01542 | Mutual Information Maximization on Disentangled Representations for Differential Morph Detection | In this paper, we present a novel differential morph detection framework, utilizing landmark and appearance disentanglement. In our framework, the face image is represented in the embedding domain using two disentangled but complementary representations. The network is trained by triplets of face images, in which the intermediate image inherits the landmarks from one image and the appearance from the other image. This initially trained network is further trained for each dataset using contrastive representations. We demonstrate that, by employing appearance and landmark disentanglement, the proposed framework can provide state-of-the-art differential morph detection performance. This functionality is achieved by using distances in landmark, appearance, and ID domains. The performance of the proposed framework is evaluated using three morph datasets generated with different methodologies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,444 |
2502.00396 | Dexterous Cable Manipulation: Taxonomy, Multi-Fingered Hand Design, and
Long-Horizon Manipulation | Existing research on cable manipulation has relied on two-fingered grippers, which makes it difficult to perform the cable manipulation tasks that humans perform. However, unlike dexterous manipulation of rigid objects, the development of dexterous cable manipulation skills in robotics remains underexplored due to the unique challenges posed by a cable's deformability and inherent uncertainty. In addition, using a dexterous hand introduces specific difficulties in tasks such as cable grasping, pulling, and in-hand bending, for which no dedicated task definitions, benchmarks, or evaluation metrics exist. Furthermore, we observed that most existing dexterous hands are designed with structures identical to humans', typically featuring only one thumb, which often limits their effectiveness during dexterous cable manipulation. Lastly, existing non-task-specific methods lack the generalization ability to solve these cable manipulation tasks or are unsuitable for the designed hardware. We make three contributions to real-world dexterous cable manipulation: (1) We defined and organized a set of dexterous cable manipulation tasks into a comprehensive taxonomy, covering most short-horizon action primitives and long-horizon tasks for one-handed cable manipulation. This taxonomy revealed that coordination between the thumb and the index finger is critical for cable manipulation, which decomposes long-horizon tasks into simpler primitives. (2) We designed a novel five-fingered hand with 25 degrees of freedom (DoF), featuring two symmetric thumb-index configurations and a rotatable joint on each fingertip, which enables dexterous cable manipulation. (3) We developed a demonstration collection pipeline for this non-anthropomorphic hand, which is difficult to operate with previous motion capture methods. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 529,361 |
2307.09023 | LA-Net: Landmark-Aware Learning for Reliable Facial Expression
Recognition under Label Noise | Facial expression recognition (FER) remains a challenging task due to the ambiguity of expressions. The derived noisy labels significantly harm the performance in real-world scenarios. To address this issue, we present a new FER model named Landmark-Aware Net (LA-Net), which leverages facial landmarks to mitigate the impact of label noise from two perspectives. Firstly, LA-Net uses landmark information to suppress the uncertainty in expression space and constructs the label distribution of each sample by neighborhood aggregation, which in turn improves the quality of training supervision. Secondly, the model incorporates landmark information into expression representations using the devised expression-landmark contrastive loss. The enhanced expression feature extractor can be less susceptible to label noise. Our method can be integrated with any deep neural network for better training supervision without introducing extra inference costs. We conduct extensive experiments on both in-the-wild datasets and synthetic noisy datasets and demonstrate that LA-Net achieves state-of-the-art performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 380,030 |
2112.11157 | Integral representations of shallow neural network with Rectified Power
Unit activation function | In this effort, we derive a formula for the integral representation of a shallow neural network with the Rectified Power Unit activation function. Our first result deals with the representation capability of shallow RePU networks in the univariate case. The multidimensional result in this paper characterizes the set of functions that can be represented with bounded norm and possibly unbounded width. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 272,619 |
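The objects in the abstract above can be written out explicitly. A shallow RePU network and the integral representation it discretizes take the generic form below — a sketch with power $p$ and a signed measure $\mu$; the paper's exact normalization and domain of integration may differ:

```latex
\sigma_p(t) = \max(0, t)^p, \qquad
f_N(x) = \sum_{i=1}^{N} a_i\, \sigma_p(\langle w_i, x\rangle + b_i), \qquad
f(x) = \int \sigma_p(\langle w, x\rangle + b)\, \mathrm{d}\mu(w, b),
```

with the finite network recovered when $\mu$ is a finite sum of weighted Dirac measures, $\mu = \sum_{i=1}^{N} a_i\, \delta_{(w_i, b_i)}$.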
2208.07528 | Integrating Satellites and Mobile Edge Computing for 6G Wide-Area Edge
Intelligence: Minimal Structures and Systematic Thinking | The sixth-generation (6G) network will shift its focus to supporting everything including various machine-type devices (MTDs) in an everyone-centric manner. To ubiquitously cover the MTDs working in rural and disastrous areas, satellite communications become indispensable, while mobile edge computing (MEC) also plays an increasingly crucial role. Their sophisticated integration enables wide-area edge intelligence which promises to facilitate globally-distributed customized services. In this article, we present typical use cases of integrated satellite-MEC networks and discuss the main challenges therein. Inspired by the protein structure and the systematic engineering methodology, we propose three minimal integrating structures, based on which a complex integrated satellite-MEC network can be treated as their extension and combination. We discuss the unique characteristics and key problems of each minimal structure. Accordingly, we establish an on-demand network orchestration framework to enrich the hierarchy of network management, which further leads to a process-oriented network optimization method. On that basis, a case study is utilized to showcase the benefits of on-demand network orchestration and process-oriented network optimization. Finally, we outline potential research issues to envision a more intelligent, more secure, and greener integrated network. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 313,071 |
2112.02399 | VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts | Contrastive Language-Image Pre-training (CLIP) has drawn increasing attention recently for its transferable visual representation learning. However, due to the semantic gap within datasets, CLIP's pre-trained image-text alignment becomes sub-optimal on downstream tasks, which severely harms its transferring performance. To better adapt the cross-modality embedding space, we propose to enhance CLIP via Visual-guided Texts, named VT-CLIP. Specifically, we guide textual features of different categories to adaptively explore informative regions on the image and aggregate visual features by attention mechanisms. In this way, the texts become visual-guided, namely, more semantically correlated with downstream images, which greatly benefits the category-wise matching process. In few-shot settings, we evaluate our VT-CLIP on 11 well-known classification datasets to demonstrate its effectiveness. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 269,833 |
2110.06049 | Improved Pillar with Fine-grained Feature for 3D Object Detection | 3D object detection with LiDAR point clouds plays an important role in the autonomous driving perception module, which requires high speed, stability and accuracy. However, the existing point-based methods struggle to reach the speed requirements because of too many raw points, and the voxel-based methods are unable to ensure stable speed because of the 3D sparse convolution. In contrast, the 2D grid-based methods, such as PointPillar, can easily achieve a stable and efficient speed based on simple 2D convolution, but it is hard for them to achieve competitive accuracy given the coarse-grained point cloud representation. We therefore propose an improved pillar with fine-grained features based on PointPillar that significantly improves detection accuracy. It consists of two modules, height-aware sub-pillar and sparsity-based tiny-pillar, which obtain fine-grained representations in the vertical and horizontal directions of 3D space, respectively. For the height-aware sub-pillar, we introduce a height position encoding to keep the height information of each sub-pillar during projection to a 2D pseudo image. For the sparsity-based tiny-pillar, we introduce a sparsity-based CNN backbone stacked with dense feature and sparse attention modules to extract features with a larger receptive field efficiently. Experimental results show that our proposed method significantly outperforms previous state-of-the-art 3D detection methods on the Waymo Open Dataset. The related code will be released to facilitate the academic and industrial study. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 260,485 |
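The pillar idea the abstract above refines can be illustrated with a minimal grouping sketch. The cell sizes are hypothetical, and the actual PointPillars-style pipeline additionally learns per-point features and scatters them into a 2D pseudo image; this only shows the grid grouping itself.

```python
def pillarize(points, cell=0.5, height_bin=None):
    """Group (x, y, z) LiDAR points into vertical pillars on a 2D
    ground-plane grid; if height_bin is given, split each pillar
    along z into height-aware sub-pillars."""
    pillars = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if height_bin is not None:
            key += (int(z // height_bin),)
        pillars.setdefault(key, []).append((x, y, z))
    return pillars
```

With `height_bin` set, points at different heights in the same 2D cell land in different sub-pillars, which is the kind of vertical fine-graining the height-aware sub-pillar module builds on.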
2412.09006 | Motor Imagery Classification for Asynchronous EEG-Based Brain-Computer
Interfaces | Motor imagery (MI) based brain-computer interfaces (BCIs) enable the direct control of external devices through the imagined movements of various body parts. Unlike previous systems that used fixed-length EEG trials for MI decoding, asynchronous BCIs aim to detect the user's MI without explicit triggers. They are challenging to implement, because the algorithm needs to first distinguish between resting-states and MI trials, and then classify the MI trials into the correct task, all without any triggers. This paper proposes a sliding window prescreening and classification (SWPC) approach for MI-based asynchronous BCIs, which consists of two modules: a prescreening module to screen MI trials out of the resting-state, and a classification module for MI classification. Both modules are trained with supervised learning followed by self-supervised learning, which refines the feature extractors. Within-subject and cross-subject asynchronous MI classifications on four different EEG datasets validated the effectiveness of SWPC, i.e., it always achieved the highest average classification accuracy, and outperformed the best state-of-the-art baseline on each dataset by about 2%. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,315 |
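The two-stage sliding-window idea described in the abstract above can be sketched as follows; the prescreener and classifier here are placeholder callables standing in for the paper's trained modules.

```python
def swpc(signal, win, step, prescreen, classify):
    """Slide a window over the EEG stream; windows the prescreener
    flags as motor imagery are passed on to the MI classifier."""
    events = []
    for start in range(0, len(signal) - win + 1, step):
        window = signal[start:start + win]
        if prescreen(window):  # resting state vs. MI trial
            events.append((start, classify(window)))
    return events
```

For instance, with a toy prescreener `lambda w: sum(w) > 0` on a stream that is zero except for one burst, only the burst windows reach the classifier — no explicit trigger is needed, which is the asynchronous-BCI point.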
2101.02409 | On the Management of Type 1 Diabetes Mellitus with IoT Devices and ML
Techniques | This contribution presents the main lines of projects founded on research already begun in previous years. In this sense, the manuscript presents the main lines of research on type 1 Diabetes Mellitus and machine learning techniques in an Internet of Things environment, and summarizes the future lines to be developed: data collection through biosensors, massive data processing in the cloud, interconnection of biodevices, local computing vs. cloud computing, and the possibilities of machine learning techniques for predicting blood glucose values, including both variable selection algorithms and predictive techniques. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 214,619 |
1810.03553 | ISS Property with Respect to Boundary Disturbances for a Class of
Riesz-Spectral Boundary Control Systems | This paper deals with the establishment of Input-to-State Stability (ISS) estimates for infinite dimensional systems with respect to both boundary and distributed disturbances. First, a new approach is developed for the establishment of ISS estimates for a class of Riesz-spectral boundary control systems satisfying certain eigenvalue constraints. Second, a concept of weak solutions is introduced in order to relax the disturbances regularity assumptions required to ensure the existence of classical solutions. The proposed concept of weak solutions, that applies to a large class of boundary control systems which is not limited to the Riesz-spectral ones, provides a natural extension of the concept of both classical and mild solutions. Assuming that an ISS estimate holds true for classical solutions, we show the existence, the uniqueness, and the ISS property of the weak solutions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 109,837 |
1301.1373 | Regularized Zero-Forcing Interference Alignment for the Two-Cell MIMO
Interfering Broadcast Channel | In this paper, we propose transceiver design strategies for the two-cell multiple-input multiple-output (MIMO) interfering broadcast channel where inter-cell interference (ICI) exists in addition to interuser interference (IUI). We first formulate the generalized zero-forcing interference alignment (ZF-IA) method based on the alignment of IUI and ICI in multi-dimensional subspace. We then devise a minimum weighted-mean-square-error (WMSE) method based on regularizing the precoders and decoders of the generalized ZF-IA scheme. In contrast to the existing weighted-sum-rate-maximizing transceiver, our method does not require an iterative calculation of the optimal weights. Because of this, the proposed scheme, while not designed specifically to maximize the sum rate, is computationally efficient and achieves a faster convergence compared to the known weighted-sum-rate maximizing scheme. Through analysis and simulation, we show the effectiveness of the proposed regularized ZF-IA scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,843 |
2207.12504 | Unsupervised Speaker Diarization that is Agnostic to Language,
Overlap-Aware, and Tuning Free | Podcasts are conversational in nature and speaker changes are frequent -- requiring speaker diarization for content understanding. We propose an unsupervised technique for speaker diarization without relying on language-specific components. The algorithm is overlap-aware and does not require information about the number of speakers. Our approach shows 79% improvement on purity scores (34% on F-score) against the Google Cloud Platform solution on podcast data. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 310,018 |
2011.05399 | Learning for Integer-Constrained Optimization through Neural Networks
with Limited Training | In this paper, we investigate a neural network-based learning approach towards solving an integer-constrained programming problem using very limited training. To be specific, we introduce a symmetric and decomposed neural network structure, which is fully interpretable in terms of the functionality of its constituent components. By taking advantage of the underlying pattern of the integer constraint, as well as of the affine nature of the objective function, the introduced neural network offers superior generalization performance with limited training, as compared to other generic neural network structures that do not exploit the inherent structure of the integer constraint. In addition, we show that the introduced decomposed approach can be further extended to semi-decomposed frameworks. The introduced learning approach is evaluated via the classification/symbol detection task in the context of wireless communication systems where available training sets are usually limited. Evaluation results demonstrate that the introduced learning strategy is able to effectively perform the classification/symbol detection task in a wide variety of wireless channel environments specified by the 3GPP community. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,900 |
2402.01239 | PRIME: Protect Your Videos From Malicious Editing | With the development of generative models, the quality of generated content keeps increasing. Recently, open-source models have made it surprisingly easy to manipulate and edit photos and videos, with just a few simple prompts. While these cutting-edge technologies have gained popularity, they have also given rise to concerns regarding the privacy and portrait rights of individuals. Malicious users can exploit these tools for deceptive or illegal purposes. Although some previous works focus on protecting photos against generative models, we find there are still gaps between protecting videos and images in the aspects of efficiency and effectiveness. Therefore, we introduce our protection method, PRIME, to significantly reduce the time cost and improve the protection performance. Moreover, to evaluate our proposed protection method, we consider both objective metrics and human subjective metrics. Our evaluation results indicate that PRIME only costs 8.3% GPU hours of the cost of the previous state-of-the-art method and achieves better protection results on both human evaluation and objective metrics. Code can be found in https://github.com/GuanlinLee/prime. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 425,927 |
2412.01511 | Many-User Multiple Access with Random User Activity: Achievability
Bounds and Efficient Schemes | We study the Gaussian multiple access channel with random user activity, in the regime where the number of users is proportional to the code length. The receiver may know some statistics about the number of active users, but does not know the exact number nor the identities of the active users. We derive two achievability bounds on the probabilities of misdetection, false alarm, and active user error, and propose an efficient CDMA-type scheme whose performance can be compared against these bounds. The first bound is a finite-length result based on Gaussian random codebooks and maximum-likelihood decoding. The second is an asymptotic bound, established using spatially coupled Gaussian codebooks and approximate message passing (AMP) decoding. These bounds can be used to compute an achievable trade-off between the active user density and energy-per-bit, for a fixed user payload and target error rate. The efficient CDMA scheme uses a spatially coupled signature matrix and AMP decoding, and we give rigorous asymptotic guarantees on its error performance. Our analysis provides the first state evolution result for spatially coupled AMP with matrix-valued iterates, which may be of independent interest. Numerical experiments demonstrate the promising error performance of the CDMA scheme for both small and large user payloads, when compared with the two achievability bounds. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 513,146 |
2103.10673 | Cost-effective Deployment of BERT Models in Serverless Environment | In this study we demonstrate the viability of deploying BERT-style models to serverless environments in a production setting. Since the freely available pre-trained models are too large to be deployed in this way, we utilize knowledge distillation and fine-tune the models on proprietary datasets for two real-world tasks: sentiment analysis and semantic textual similarity. As a result, we obtain models that are tuned for a specific domain and deployable in serverless environments. The subsequent performance analysis shows that this solution results in latency levels acceptable for production use and that it is also a cost-effective approach for small-to-medium size deployments of BERT models, all without any infrastructure overhead. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 225,530 |
2402.05972 | Gaussian-process-regression-based method for the localization of
exceptional points in complex resonance spectra | Resonances in open quantum systems depending on at least two controllable parameters can show the phenomenon of exceptional points (EPs), where not only the eigenvalues but also the eigenvectors of two or more resonances coalesce. Their exact localization in the parameter space is challenging, in particular in systems, where the computation of the quantum spectra and resonances is numerically very expensive. We introduce an efficient machine learning algorithm to find exceptional points based on Gaussian process regression (GPR). The GPR-model is trained with an initial set of eigenvalue pairs belonging to an EP and used for a first estimation of the EP position via a numerically cheap root search. The estimate is then improved iteratively by adding selected exact eigenvalue pairs as training points to the GPR-model. The GPR-based method is developed and tested on a simple low-dimensional matrix model and then applied to a challenging real physical system, viz., the localization of EPs in the resonance spectra of excitons in cuprous oxide in external electric and magnetic fields. The precise computation of EPs, by taking into account the complete valence band structure and central-cell corrections of the crystal, can be the basis for the experimental observation of EPs in this system. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 428,102 |
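The GPR-plus-root-search loop in the abstract above can be illustrated on a one-dimensional toy problem. Everything here is an illustrative stand-in: the RBF kernel, the small Gaussian-elimination solver, and the target function x^2 - 2 replace the EP condition and the paper's actual model; only the pattern (fit a GP to expensive evaluations, then run a cheap root search on the posterior mean) carries over.

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel."""
    return math.exp(-((a - b) ** 2) / (2.0 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gpr_mean(xs, ys, ls=1.0, noise=1e-6):
    """Posterior mean of a GP with jitter: m(x) = k(x)^T K^{-1} y."""
    K = [[rbf(a, b, ls) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi, ls) for a, xi in zip(alpha, xs))

def find_root(f, lo, hi, tol=1e-8):
    """Bisection on the GPR posterior mean: numerically cheap, since
    evaluating the mean needs no further exact solves."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hi - lo < tol:
            return mid
        if (f(mid) > 0.0) == (flo > 0.0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Fitting a handful of samples of x^2 - 2 and bisecting the posterior mean between 1 and 1.5 recovers sqrt(2) to within a few percent; in the paper's setting, exact eigenvalue pairs computed near the estimate would then be added as training points and the search repeated.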
1811.07471 | Exploring Small-World Network with an Elite-Clique: Bringing
Embeddedness Theory into the Dynamic Evolution of a Venture Capital Network | This paper uses a network dynamics model to explain the formation of a small-world network with an elite-clique. This network is a small-world network with an elite-clique at its center in which elites are also the centers of many small groups. These leaders also act as bridges between different small groups. Network dynamics are an important research topic due to their ability to explain the evolution of network structures. In this paper, a Chinese Venture Capital (VC) network was coded from joint investments between VC firms and then analyzed to uncover its network properties and factors that influence its evolution. We first built a random graph model to control for factors such as network scale, network growth, investment frequency and syndication tendency. Then we added a partner-selection mechanism and used two theories to analyze the formation of network structure: relational embeddedness and structural embeddedness. After that, we ran simulations and compared the three models with the actual Chinese VC network. To do this we computed the elite-clique's EI index, degree distribution, clustering coefficient distribution and motifs. Results show that adding embeddedness theories significantly improved the network dynamic model's predictive power, and help us uncover the mechanisms that affect the formation of a small-world industrial network with an elite-clique at its center. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 113,777 |
2304.03124 | FeFET-based MirrorBit cell for High-density NVM storage | HfO2-based ferroelectric field-effect transistors (FeFETs) have attracted great attention for non-volatile memory applications because of their low power, fast switching speed, high scalability, and CMOS compatibility. In this work, we show an n-channel FeFET-based Multibit memory, termed MirrorBit, which effectively doubles the chip density via programming gradient ferroelectric polarizations in the gate using an appropriate biasing scheme. We have experimentally demonstrated MirrorBit on GlobalFoundries HfO2-based FeFET devices fabricated at 28 nm bulk HKMG CMOS technology. Retention of MirrorBit states has been shown up to $10^5$ s at different temperatures. Also, the endurance is found to be more than $10^3$ cycles. A TCAD simulation is also presented to explain the origin and working of MirrorBit states based on the FeFET model calibrated using the GlobalFoundries FeFET device. We have also proposed the array-level implementation and sensing methodology of the MirrorBit memory. Thus, we have converted a 1-bit FeFET into a 2-bit FeFET using a particular programming scheme in existing FeFETs, without needing any notable fabrication process alteration, to double the chip density for high-density non-volatile memory storage. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 356,676 |
2405.19527 | Flexible Agent-based Modeling Framework to Evaluate Integrated
Microtransit and Fixed-route Transit Designs: Mode Choice, Supernetworks, and
Fleet Simulation | The integration of traditional fixed-route transit (FRT) and more flexible microtransit has been touted as a means of improving mobility and access to opportunity, increasing transit ridership, and promoting environmental sustainability. To help evaluate integrated FRT and microtransit public transit (PT) system (henceforth ``integrated fixed-flex PT system'') designs, we propose a high-fidelity modeling framework that provides reliable estimates for a wide range of (i) performance metrics and (ii) integrated fixed-flex PT system designs. We formulate the mode choice equilibrium problem as a fixed-point problem wherein microtransit demand is a function of microtransit performance, and microtransit performance depends on microtransit demand. We propose a detailed agent-based simulation modeling framework that includes (i) a binary logit mode choice model (private auto vs. transit), (ii) a supernetwork-based model and pathfinding algorithm for multi-modal transit path choice where the supernetwork includes pedestrian, FRT, and microtransit layers, (iii) a detailed mobility-on-demand fleet simulator called FleetPy to model the supply-demand dynamics of the microtransit service. In this paper, we illustrate the capabilities of the modeling framework by analyzing integrated fixed-flex PT system designs that vary the following design parameters: FRT frequencies and microtransit fleet size, service region structure, virtual stop coverage, and operating hours. We include case studies in downtown San Diego and Lemon Grove, California. The computational results show that the proposed modeling framework converges to a mode choice equilibrium. Moreover, the scenario results imply that introducing a new microtransit service decreases FRT ridership and requires additional subsidies, but it significantly increases job accessibility and slightly reduces total VMT. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 458,932 |
2011.07916 | Text Information Aggregation with Centrality Attention | A lot of natural language processing problems need to encode the text sequence as a fixed-length vector, which usually involves an aggregation process of combining the representations of all the words, such as pooling or self-attention. However, these widely used aggregation approaches do not take higher-order relationships among the words into consideration. Hence we propose a new way of obtaining aggregation weights, called eigen-centrality self-attention. More specifically, we build a fully-connected graph for all the words in a sentence, then compute the eigen-centrality as the attention score of each word. The explicit modeling of relationships as a graph is able to capture some higher-order dependency among words, which helps us achieve better results than baseline models such as pooling, self-attention and dynamic routing on 5 text classification tasks and one SNLI task. Besides, in order to compute the dominant eigenvector of the graph, we adopt the power method algorithm to get the eigen-centrality measure. Moreover, we also derive an iterative approach to get the gradient for the power method process to reduce both memory consumption and computation requirements. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 206,701 |
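The power-method step at the heart of the abstract above is only a few lines. One added assumption in this sketch: iterating with A + I instead of A, which guarantees convergence on bipartite graphs (where the power method on A alone can oscillate) and leaves the eigenvectors unchanged.

```python
def eigen_centrality(adj, iters=200, tol=1e-12):
    """Attention scores as the dominant eigenvector of the word graph:
    power method on (A + I) — multiply, normalize, repeat until the
    vector stops changing."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        s = sum(w) or 1.0
        w = [x / s for x in w]
        if max(abs(a - b) for a, b in zip(w, v)) < tol:
            break
        v = w
    return w
```

On a star graph the center node gets the largest score, matching the intuition that eigen-centrality rewards words connected to other well-connected words.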
2312.05852 | Real-time Estimation of DoS Duration and Frequency for Security Control | In this paper, we develop a new denial-of-service (DoS) estimator, enabling defenders to identify duration and frequency parameters of any DoS attacker, except for three edge cases, exclusively using real-time data. The key advantage of the estimator lies in its capability to facilitate security control in a wide range of practical scenarios, even when the attacker's information is previously unknown. We demonstrate the advantage and application of our new estimator in the context of two classical control scenarios, namely consensus of multi-agent systems and impulsive stabilization of nonlinear systems, for illustration. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 414,268 |
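The duration and frequency parameters in the abstract above are commonly formalized in the DoS literature by bounds of the form n(t1, t2) <= eta + (t2 - t1)/tau_D on the number of attacks and |Xi(t1, t2)| <= kappa + (t2 - t1)/T on the total attack time in a window. The sketch below is an illustrative offline stand-in, not the paper's real-time estimator: it assumes eta and kappa known and finds the tightest tau_D and T consistent with a batch of observed attack intervals.

```python
def dos_bounds(intervals, eta=1.0, kappa=1.0):
    """Largest tau_D and T such that every observed window — taken to
    start at an attack onset and end at an attack end — satisfies the
    standard DoS frequency and duration bounds. intervals is a sorted
    list of (start, end) attack intervals."""
    tau_d, big_t = float("inf"), float("inf")
    for i in range(len(intervals)):
        for j in range(i, len(intervals)):
            t1, t2 = intervals[i][0], intervals[j][1]
            count = j - i + 1
            duration = sum(e - s for s, e in intervals[i:j + 1])
            if count > eta:  # frequency bound binds on this window
                tau_d = min(tau_d, (t2 - t1) / (count - eta))
            if duration > kappa:  # duration bound binds here
                big_t = min(big_t, (t2 - t1) / (duration - kappa))
    return tau_d, big_t
```

For three one-second attacks spaced ten seconds apart with eta = 1 and kappa = 0.5, this yields tau_D = 10.5 and T = 2; a real-time scheme would refine such estimates as each new attack interval is observed.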
2407.21135 | Physical Modelling and Cancellation of External Passive Intermodulation
in FDD MIMO | In this paper, the physical approach to model external (air-induced) passive intermodulation (PIM) is presented in a frequency-division duplexing (FDD) multiple-input multiple-output (MIMO) system with an arbitrary number of transceiver chains. The external PIM is a special case of intermodulation distortion (IMD), mainly generated by metallic objects possessing nonlinear properties ("rusty bolt" effect). Typically, such sources are located in the near-field or transition region of the antenna array. PIM products may fall into the receiver band of the FDD system, negatively affecting the uplink signal. In contrast to other works, this one directly simulates the physical external PIM. The system includes models of a point-source external PIM, a finite-length dipole antenna, a MIMO antenna array, and a baseband multicarrier 5G NR OFDM signal. The Channel coefficients method for multi-PIM-source compensation is replicated to verify the proposed external PIM modelling approach. Simulation results of artificially generated PIM cancellation show similar performance as real-life experiments. Therefore, the proposed approach allows testing PIM compensation algorithms on large systems with many antennas and arbitrary array structures. This eliminates the need for experiments with real hardware at the development stage of the PIM cancellation algorithm. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 477,417 |
1411.6279 | Logics of Temporal-Epistemic Actions | We present Dynamic Epistemic Temporal Logic, a framework for reasoning about operations on multi-agent Kripke models that contain a designated temporal relation. These operations are natural extensions of the well-known "action models" from Dynamic Epistemic Logic. Our "temporal action models" may be used to define a number of informational actions that can modify the "objective" temporal structure of a model along with the agents' basic and higher-order knowledge and beliefs about this structure, including their beliefs about the time. In essence, this approach provides one way to extend the domain of action model-style operations from atemporal Kripke models to temporal Kripke models in a manner that allows actions to control the flow of time. We present a number of examples to illustrate the subtleties involved in interpreting the effects of our extended action models on temporal Kripke models. We also study preservation of important epistemic-temporal properties of temporal Kripke models under temporal action model-induced operations, provide complete axiomatizations for two theories of temporal action models, and connect our approach with previous work on time in Dynamic Epistemic Logic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | true | 37,823 |
1909.00105 | Generating Personalized Recipes from Historical User Preferences | Existing approaches to recipe generation are unable to create recipes for users with culinary preferences but incomplete knowledge of ingredients in specific dishes. We propose a new task of personalized recipe generation to help these users: expanding a name and incomplete ingredient details into complete natural-text instructions aligned with the user's historical preferences. We attend on technique- and recipe-level representations of a user's previously consumed recipes, fusing these 'user-aware' representations in an attention fusion layer to control recipe text generation. Experiments on a new dataset of 180K recipes and 700K interactions show our model's ability to generate plausible and personalized recipes compared to non-personalized baselines. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 143,519 |
2003.10339 | Diffusion-based Deep Active Learning | The remarkable performance of deep neural networks depends on the availability of massive labeled data. To alleviate the load of data annotation, active deep learning aims to select a minimal set of training points to be labelled which yields maximal model accuracy. Most existing approaches implement either an `exploration'-type selection criterion, which aims at exploring the joint distribution of data and labels, or a `refinement'-type criterion which aims at localizing the detected decision boundaries. We propose a versatile and efficient criterion that automatically switches from exploration to refinement when the distribution has been sufficiently mapped. Our criterion relies on a process of diffusing the existing label information over a graph constructed from the hidden representation of the data set as provided by the neural network. This graph representation captures the intrinsic geometry of the approximated labeling function. The diffusion-based criterion is shown to be advantageous as it outperforms existing criteria for deep active learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,306 |
1605.05566 | Naughton's Wisconsin Bibliography: A Brief Guide | Over nearly three decades at the University of Wisconsin, Jeff Naughton has left an indelible mark on computer science. He has been a global leader of the database research field, deepening its core and pushing its boundaries. Many of Naughton's ideas were translated directly into practice in commercial and open-source systems. But software comes and goes. In the end, it is the ideas themselves that have had impact, ideas written down in papers. Naughton has been a prolific scholar over the last thirty years, with over 175 publications in his bibliography, covering a wide range of topics. This document does not attempt to enumerate or even summarize the wealth of ideas that Naughton has published over the course of his academic career--the task is too daunting. Instead, the best this short note aims to do is to serve as a rough map of the territory: something to help other researchers navigate the wide spaces of Naughton's work. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 56,020 |
1504.07339 | Convolutional Channel Features | Deep learning methods are powerful tools but often suffer from expensive computation and limited flexibility. An alternative is to combine light-weight models with deep representations. While successful cases exist in several visual problems, a unified framework is still absent. In this paper, we revisit two widely used approaches in computer vision, namely filtered channel features and Convolutional Neural Networks (CNN), and absorb merits from both by proposing an integrated method called Convolutional Channel Features (CCF). CCF transfers low-level features from pre-trained CNN models to feed the boosting forest model. With the combination of CNN features and boosting forest, CCF benefits from the richer capacity in feature representation compared with channel features, as well as lower cost in computation and storage compared with end-to-end CNN methods. We show that CCF serves as a good way of tailoring pre-trained CNN models to diverse tasks without fine-tuning the whole network to each task by achieving state-of-the-art performances in pedestrian detection, face detection, edge detection and object proposal generation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 42,526
2104.02233 | TENT: Efficient Quantization of Neural Networks on the tiny Edge with
Tapered FixEd PoiNT | In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models. We introduce a tapered fixed-point quantization algorithm that matches the numerical format's dynamic range and distribution to that of the deep neural network model's parameter distribution at each layer. An accelerator architecture for the tapered fixed-point with TENT framework is proposed. Results show that the accuracy on classification tasks improves up to ~31 % with an energy overhead of ~17-30 % as compared to fixed-point, for ConvNet and ResNet-18 models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,640 |
1907.00618 | CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark | A long-term visual object tracking performance evaluation methodology and a benchmark are proposed. Performance measures are designed by following a long-term tracking definition to maximize the analysis probing strength. The new measures outperform existing ones in interpretation potential and in better distinguishing between different tracking behaviors. We show that these measures generalize the short-term performance measures, thus linking the two tracking problems. Furthermore, the new measures are highly robust to temporal annotation sparsity and allow annotation of sequences hundreds of times longer than in the current datasets without increasing manual annotation labor. A new challenging dataset of carefully selected sequences with many target disappearances is proposed. A new tracking taxonomy is proposed to position trackers on the short-term/long-term spectrum. The benchmark contains an extensive evaluation of the largest number of long-term trackers and comparison to state-of-the-art short-term trackers. We analyze the influence of tracking architecture implementations on long-term performance and explore various re-detection strategies as well as the influence of visual model update strategies on long-term tracking drift. The methodology is integrated in the VOT toolkit to automate experimental analysis and benchmarking and to facilitate future development of long-term trackers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 137,099
2208.14295 | PanorAMS: Automatic Annotation for Detecting Objects in Urban Context | Large collections of geo-referenced panoramic images are freely available for cities across the globe, as well as detailed maps with location and meta-data on a great variety of urban objects. They provide a potentially rich source of information on urban objects, but manual annotation for object detection is costly, laborious and difficult. Can we utilize such multimedia sources to automatically annotate street level images as an inexpensive alternative to manual labeling? With the PanorAMS framework we introduce a method to automatically generate bounding box annotations for panoramic images based on urban context information. Following this method, we acquire large-scale, albeit noisy, annotations for an urban dataset solely from open data sources in a fast and automatic manner. The dataset covers the City of Amsterdam and includes over 14 million noisy bounding box annotations of 22 object categories present in 771,299 panoramic images. For many objects further fine-grained information is available, obtained from geospatial meta-data, such as building value, function and average surface area. Such information would have been difficult, if not impossible, to acquire via manual labeling based on the image alone. For detailed evaluation, we introduce an efficient crowdsourcing protocol for bounding box annotations in panoramic images, which we deploy to acquire 147,075 ground-truth object annotations for a subset of 7,348 images, the PanorAMS-clean dataset. For our PanorAMS-noisy dataset, we provide an extensive analysis of the noise and how different types of noise affect image classification and object detection performance. We make both datasets, PanorAMS-noisy and PanorAMS-clean, benchmarks and tools presented in this paper openly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 315,280
2212.02421 | Score-based denoising for atomic structure identification | We propose an effective method for removing thermal vibrations that complicate the task of analyzing complex dynamics in atomistic simulation of condensed matter. Our method iteratively subtracts thermal noises or perturbations in atomic positions using a denoising score function trained on synthetically noised but otherwise perfect crystal lattices. The resulting denoised structures clearly reveal underlying crystal order while retaining disorder associated with crystal defects. Purely geometric, agnostic to interatomic potentials, and trained without inputs from explicit simulations, our denoiser can be applied to simulation data generated from vastly different interatomic interactions. The denoiser is shown to improve existing classification methods such as common neighbor analysis and polyhedral template matching, reaching perfect classification accuracy on a recent benchmark dataset of thermally perturbed structures up to the melting point. Demonstrated here in a wide variety of atomistic simulation contexts, the denoiser is general, robust, and readily extendable to delineate order from disorder in structurally and chemically complex materials. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,785 |
2210.02830 | KnowledgeShovel: An AI-in-the-Loop Document Annotation System for
Scientific Knowledge Base Construction | Constructing a comprehensive, accurate, and useful scientific knowledge base is crucial for human researchers synthesizing scientific knowledge and for enabling AI-driven scientific discovery. However, the current process is difficult, error-prone, and laborious due to (1) the enormous amount of scientific literature available; (2) the highly-specialized scientific domains; (3) the diverse modalities of information (text, figure, table); and, (4) the silos of scientific knowledge in different publications with inconsistent formats and structures. Informed by a formative study and iterated with participatory design workshops, we designed and developed KnowledgeShovel, an AI-in-the-Loop document annotation system for researchers to construct scientific knowledge bases. The design of KnowledgeShovel introduces a multi-step multi-modal human-AI collaboration pipeline that aligns with users' existing workflows to improve data accuracy while reducing the human burden. A follow-up user evaluation with 7 geoscience researchers shows that KnowledgeShovel can enable efficient construction of scientific knowledge bases with satisfactory accuracy. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 321,791
2301.12525 | Composer's Assistant: An Interactive Transformer for Multi-Track MIDI
Infilling | We introduce Composer's Assistant, a system for interactive human-computer composition in the REAPER digital audio workstation. We consider the task of multi-track MIDI infilling when arbitrary track-measures have been deleted from a contiguous slice of measures from a MIDI file, and we train a T5-like model to accomplish this task. Composer's Assistant consists of this model together with scripts that enable interaction with the model in REAPER. We conduct objective and subjective tests of our model. We release our complete system, consisting of source code, pretrained models, and REAPER scripts. Our models were trained only on permissively-licensed MIDI files. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 342,562 |
2009.10189 | Resilient In-Season Crop Type Classification in Multispectral Satellite
Observations using Growth Stage Normalization | Crop type classification using satellite observations is an important tool for providing insights about planted area and enabling estimates of crop condition and yield, especially within the growing season when uncertainties around these quantities are highest. As the climate changes and extreme weather events become more frequent, these methods must be resilient to changes in domain shifts that may occur, for example, due to shifts in planting timelines. In this work, we present an approach for within-season crop type classification using moderate spatial resolution (30 m) satellite data that addresses domain shift related to planting timelines by normalizing inputs by crop growth stage. We use a neural network leveraging both convolutional and recurrent layers to predict if a pixel contains corn, soybeans, or another crop or land cover type. We evaluated this method for the 2019 growing season in the midwestern US, during which planting was delayed by as much as 1-2 months due to extreme weather that caused record flooding. We show that our approach using growth stage-normalized time series outperforms fixed-date time series, and achieves overall classification accuracy of 85.4% prior to harvest (September-November) and 82.8% by mid-season (July-September). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 196,813 |
2309.15855 | Temporally-Evolving Generalised Networks and their Reproducing Kernels | This paper considers generalised networks, intended as networks where (a) the edges connecting the nodes are nonlinear, and (b) stochastic processes are continuously indexed over both vertices and edges. Such topological structures are normally represented through special classes of graphs, termed graphs with Euclidean edges. We build generalised networks in which the topology changes over time instants. That is, vertices and edges can disappear at subsequent time instants, and edges may change in shape and length. We consider both cases of linear and circular time. In the second case, the generalised network exhibits a periodic structure. Our findings allow us to illustrate the pros and cons of each setting. Generalised networks become semi-metric spaces whenever equipped with a proper semi-metric. Our approach allows us to build proper semi-metrics for the temporally-evolving topological structures of the networks. Our final effort is then devoted to guiding the reader through the appropriate choice of classes of functions that allow building proper reproducing kernels when composed with the temporally-evolving semi-metrics of the topological structures. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 395,143
2410.06425 | Embedded State Estimation for Optimization of Cislunar Space Domain
Awareness Constellation Design | The traffic in cislunar space is expected to increase over the coming years, leading to a higher likelihood of conjunction events among active satellites, orbital debris, and non-cooperative satellites. This increase necessitates enhanced space domain awareness (SDA) capabilities that include state estimation for targets of interest. Both Earth surface-based and space-based observation platforms in geosynchronous orbit or below face challenges such as range, exclusion, and occlusion that hinder observation. Motivated by the need to place space-based observers in the cislunar space regime to overcome these challenges, this paper proposes a cislunar SDA constellation design and analysis framework that integrates state estimation into an optimization problem for determining the placement of observers for optimal state estimation performance on a set of targets. The proposed multi-observer placement optimization problem samples from a range of possible target orbits. Upon convergence, the optimized constellation is validated against a broader set of targets to assess its effectiveness. Two comparative analyses are presented to evaluate the effects of changes in the sensor tasking procedure and sensor fidelity on the optimized constellation, comparing these to a single observer baseline case. The results demonstrate that the optimized constellations can provide accurate state estimation for various orbit families. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 496,196 |
2411.17141 | Learning Robust Anymodal Segmentor with Unimodal and Cross-modal
Distillation | Simultaneously using multimodal inputs from multiple sensors to train segmentors is intuitively advantageous but practically challenging. A key challenge is unimodal bias, where multimodal segmentors over-rely on certain modalities, causing performance drops when others are missing, which is common in real-world applications. To this end, we develop the first framework for learning a robust segmentor that can handle any combination of visual modalities. Specifically, we first introduce a parallel multimodal learning strategy for learning a strong teacher. Cross-modal and unimodal distillation is then achieved in the multi-scale representation space by transferring feature-level knowledge from multimodal to anymodal segmentors, aiming at addressing the unimodal bias and avoiding over-reliance on specific modalities. Moreover, a prediction-level modality-agnostic semantic distillation is proposed to achieve semantic knowledge transfer for segmentation. Extensive experiments on both synthetic and real-world multi-sensor benchmarks demonstrate that our method achieves superior performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 511,326
2005.00096 | An Early Study on Intelligent Analysis of Speech under COVID-19:
Severity, Sleep Quality, Fatigue, and Anxiety | The COVID-19 outbreak was announced as a global pandemic by the World Health Organisation in March 2020 and has affected a growing number of people in the past few weeks. In this context, advanced artificial intelligence techniques are brought to the fore to help fight against and reduce the impact of this global health crisis. In this study, we focus on developing some potential use-cases of intelligent speech analysis for COVID-19 diagnosed patients. In particular, by analysing speech recordings from these patients, we construct audio-only-based models to automatically categorise the health state of patients from four aspects, including the severity of illness, sleep quality, fatigue, and anxiety. For this purpose, two established acoustic feature sets and support vector machines are utilised. Our experiments show that an average accuracy of .69 is obtained in estimating the severity of illness, which is derived from the number of days in hospitalisation. We hope that this study can foster an extremely fast, low-cost, and convenient way to automatically detect the COVID-19 disease. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 175,123
1601.04530 | Domain based classification | The majority of traditional classification rules minimizing the expected probability of error (0-1 loss) are inappropriate if the class probability distributions are ill-defined or impossible to estimate. We argue that in such cases class domains should be used instead of class distributions or densities to construct a reliable decision function. Proposals are presented for some evaluation criteria and classifier learning schemes, illustrated by an example. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 51,028
1905.10482 | Multi-Model Investigative Exploration of Social Media Data with
boutique: A Case Study in Public Health | We present our experience with a data science problem in Public Health, where researchers use social media (Twitter) to determine whether the public shows awareness of HIV prevention measures offered by Public Health campaigns. To help the researcher, we develop an investigative exploration system called boutique that allows a user to perform a multi-step visualization and exploration of data through a dashboard interface. Unique features of boutique include its ability to handle heterogeneous types of data provided by a polystore, and its ability to use computation as part of the investigative exploration process. In this paper, we present the design of the boutique middleware and walk through an investigation process for a real-life problem. | true | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 132,071
2112.08949 | Slot-VPS: Object-centric Representation Learning for Video Panoptic
Segmentation | Video Panoptic Segmentation (VPS) aims at assigning a class label to each pixel, uniquely segmenting and identifying all object instances consistently across all frames. Classic solutions usually decompose the VPS task into several sub-tasks and utilize multiple surrogates (e.g. boxes and masks, centres and offsets) to represent objects. However, this divide-and-conquer strategy requires complex post-processing in both spatial and temporal domains and is vulnerable to failures from surrogate tasks. In this paper, inspired by object-centric learning which learns compact and robust object representations, we present Slot-VPS, the first end-to-end framework for this task. We encode all panoptic entities in a video, including both foreground instances and background semantics, with a unified representation called panoptic slots. The coherent spatio-temporal object information is retrieved and encoded into the panoptic slots by the proposed Video Panoptic Retriever, enabling it to localize, segment, differentiate, and associate objects in a unified manner. Finally, the output panoptic slots can be directly converted into the class, mask, and object ID of panoptic objects in the video. We conduct extensive ablation studies and demonstrate the effectiveness of our approach on two benchmark datasets, Cityscapes-VPS (\textit{val} and test sets) and VIPER (\textit{val} set), achieving new state-of-the-art performance of 63.7, 63.3 and 56.2 VPQ, respectively. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 271,987
2003.00304 | Voice trigger detection from LVCSR hypothesis lattices using
bidirectional lattice recurrent neural networks | We propose a method to reduce false voice triggers of a speech-enabled personal assistant by post-processing the hypothesis lattice of a server-side large-vocabulary continuous speech recognizer (LVCSR) via a neural network. We first discuss how an estimate of the posterior probability of the trigger phrase can be obtained from the hypothesis lattice using known techniques to perform detection, then investigate a statistical model that processes the lattice in a more explicitly data-driven, discriminative manner. We propose using a Bidirectional Lattice Recurrent Neural Network (LatticeRNN) for the task, and show that it can significantly improve detection accuracy over using the 1-best result or the posterior. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 166,255 |
2112.09519 | Correlated Product of Experts for Sparse Gaussian Process Regression | Gaussian processes (GPs) are an important tool in machine learning and statistics with applications ranging from social and natural science through engineering. They constitute a powerful kernelized non-parametric method with well-calibrated uncertainty estimates, however, off-the-shelf GP inference procedures are limited to datasets with several thousand data points because of their cubic computational complexity. For this reason, many sparse GP techniques have been developed over the past years. In this paper, we focus on GP regression tasks and propose a new approach based on aggregating predictions from several local and correlated experts. Thereby, the degree of correlation between the experts can vary between independent up to fully correlated experts. The individual predictions of the experts are aggregated taking into account their correlation resulting in consistent uncertainty estimates. Our method recovers independent Product of Experts, sparse GP and full GP in the limiting cases. The presented framework can deal with a general kernel function and multiple variables, and has a time and space complexity which is linear in the number of experts and data samples, which makes our approach highly scalable. We demonstrate superior performance, in a time vs. accuracy sense, of our proposed method against state-of-the-art GP approximation methods for synthetic as well as several real-world datasets with deterministic and stochastic optimization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 272,175
2410.10584 | STACKFEED: Structured Textual Actor-Critic Knowledge Base Editing with
FeedBack | Large Language Models (LLMs) often generate incorrect or outdated information, especially in low-resource settings or when dealing with private data. To address this, Retrieval-Augmented Generation (RAG) uses external knowledge bases (KBs), but these can also suffer from inaccuracies. We introduce STACKFEED, a novel Structured Textual Actor-Critic Knowledge base editing with FEEDback approach that iteratively refines the KB based on expert feedback using a multi-actor, centralized critic reinforcement learning framework. Each document is assigned to an actor, modeled as a ReACT agent, which performs structured edits based on document-specific targeted instructions from a centralized critic. Experimental results show that STACKFEED significantly improves KB quality and RAG system performance, enhancing accuracy by up to 8% over baselines. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 498,144 |
1811.10786 | Adaptive Wavelet Clustering for Highly Noisy Data | In this paper we make progress on the unsupervised task of mining arbitrarily shaped clusters in highly noisy datasets, which is a task present in many real-world applications. Based on the fundamental work that first applies a wavelet transform to data clustering, we propose an adaptive clustering algorithm, denoted as AdaWave, which exhibits favorable characteristics for clustering. By a self-adaptive thresholding technique, AdaWave is parameter free and can handle data in various situations. It is deterministic, fast in linear time, order-insensitive, shape-insensitive, robust to highly noisy data, and requires no pre-knowledge on data models. Moreover, AdaWave inherits the ability from the wavelet transform to cluster data in different resolutions. We adopt the "grid labeling" data structure to drastically reduce the memory consumption of the wavelet transform so that AdaWave can be used for relatively high dimensional data. Experiments on synthetic as well as natural datasets demonstrate the effectiveness and efficiency of our proposed method. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 114,584 |
2209.11869 | The Role of Symmetry in Constructing Geometric Flat Outputs for
Free-Flying Robotic Systems | Mechanical systems naturally evolve on principal bundles describing their inherent symmetries. The ensuing factorization of the configuration manifold into a symmetry group and an internal shape space has provided deep insights into the locomotion of many robotic and biological systems. On the other hand, the property of differential flatness has enabled efficient, effective planning and control algorithms for various robotic systems. Yet, a practical means of finding a flat output for an arbitrary robotic system remains an open question. In this work, we demonstrate surprising new connections between these two domains, for the first time employing symmetry directly to construct a flat output. We provide sufficient conditions for the existence of a trivialization of the bundle in which the group variables themselves are a flat output. We call this a geometric flat output, since it is equivariant (i.e. maintains the symmetry) and is often global or almost-global, properties not typically enjoyed by other flat outputs. In such a trivialization, the motion planning problem is easily solved, since a given trajectory for the group variables will fully determine the trajectory for the shape variables that exactly achieves this motion. We provide a partial catalog of robotic systems with geometric flat outputs and worked examples for the planar rocket, planar aerial manipulator, and quadrotor. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 319,323 |
2406.14401 | Fair Streaming Feature Selection | Streaming feature selection techniques have become essential in processing real-time data streams, as they facilitate the identification of the most relevant attributes from continuously updating information. Despite their performance, current algorithms for streaming feature selection frequently fall short in managing biases and avoiding discrimination that could be perpetuated by sensitive attributes, potentially leading to unfair outcomes in the resulting models. To address this issue, we propose FairSFS, a novel algorithm for Fair Streaming Feature Selection, to uphold fairness in the feature selection process without compromising the ability to handle data in an online manner. FairSFS adapts to incoming feature vectors by dynamically adjusting the feature set and discerns the correlations between classification attributes and sensitive attributes from this revised set, thereby forestalling the propagation of sensitive data. Empirical evaluations show that FairSFS not only maintains accuracy that is on par with leading streaming feature selection methods and existing fair feature techniques but also significantly improves fairness metrics. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 466,295
2007.07788 | CANet: Context Aware Network for 3D Brain Glioma Segmentation | Automated segmentation of brain glioma plays an active role in diagnosis decision, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising technologies for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information of tumor cells and their surrounding, which has been proven as a fundamental cue to deal with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context guided attentive conditional random fields which can selectively aggregate features. We evaluate our method using publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm has better or competitive performance against several State-of-The-Art approaches under different segmentation metrics on the training and validation sets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 187,439 |
0810.5551 | A Theory of Truncated Inverse Sampling | In this paper, we have established a new framework of truncated inverse sampling for estimating mean values of non-negative random variables such as binomial, Poisson, hypergeometric, and bounded variables. We have derived explicit formulas and computational methods for designing sampling schemes to ensure prescribed levels of precision and confidence for point estimators. Moreover, we have developed interval estimation methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 2,583 |
2211.13491 | Spatial Mixture-of-Experts | Many data have an underlying dependence on spatial location; it may be weather on the Earth, a simulation on a mesh, or a registered image. Yet this feature is rarely taken advantage of, and violates common assumptions made by many neural network layers, such as translation equivariance. Further, many works that do incorporate locality fail to capture fine-grained structure. To address this, we introduce the Spatial Mixture-of-Experts (SMoE) layer, a sparsely-gated layer that learns spatial structure in the input domain and routes experts at a fine-grained level to utilize it. We also develop new techniques to train SMoEs, including a self-supervised routing loss and damping expert errors. Finally, we show strong results for SMoEs on numerous tasks, and set new state-of-the-art results for medium-range weather prediction and post-processing ensemble weather forecasts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 332,494 |
2204.07018 | From Environmental Sound Representation to Robustness of 2D CNN Models
Against Adversarial Attacks | This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network, namely ResNet-18. Our main motivation for focusing on such a front-end classifier rather than other complex architectures is balancing recognition accuracy and the total number of training parameters. Herein, we measure the impact of different settings required for generating more informative Mel-frequency cepstral coefficient (MFCC), short-time Fourier transform (STFT), and discrete wavelet transform (DWT) representations on our front-end model. This measurement involves comparing the classification performance over the adversarial robustness. We demonstrate an inverse relationship between recognition accuracy and model robustness against six benchmarking attack algorithms on the balance of average budgets allocated by the adversary and the attack cost. Moreover, our experimental results have shown that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary than other 2D representations. We also report some results on different neural network architectures such as ResNet-34, ResNet-56, AlexNet, GoogLeNet, SB-CNN, and LSTM-based models. | false | false | true | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 291,526 |
2107.02264 | Empowering cyberphysical systems of systems with intelligence | Cyber-physical systems have been going through a transition phase from individual systems to collectives of systems that collaborate in order to achieve a highly complex cause, realizing a system-of-systems approach. The automotive domain has been making a transition to the system-of-systems approach, aiming to provide a series of emergent functionalities like traffic management, collaborative car fleet management, or large-scale automotive adaptation to the physical environment, thus providing significant environmental benefits (e.g., air pollution reduction) and achieving significant societal impact. Similarly, large infrastructure domains are evolving into global, highly integrated cyber-physical systems of systems covering all parts of the value chain. In practice, there are significant challenges in CPSoS applicability and usability to be addressed; even a small CPSoS such as a car consists of several subsystems. Decentralization of a CPSoS appoints tasks to individual CPSs within the system of systems. CPSoSs are heterogeneous systems: they comprise various autonomous CPSs, each of them having unique performance capabilities, criticality levels, priorities, and pursued goals. All CPSs must also harmonically pursue system-based achievements and collaborate in order to make system-of-systems-based decisions and implement the CPSoS functionality. This survey provides a comprehensive review of current best practices in connected cyber-physical systems. The basis of our investigation is a dual-layer architecture encompassing a perception layer and a behavioral layer. Perception algorithms for scene understanding (object detection and tracking, pose estimation), localization, mapping, and path planning are thoroughly investigated. The behavioral part focuses on decision making and human-in-the-loop control. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 244,742 |
2111.07634 | Pseudo-domains in imaging data improve prediction of future disease
status in multi-center studies | In multi-center randomized clinical trials imaging data can be diverse due to acquisition technology or scanning protocols. Models predicting future outcome of patients are impaired by this data heterogeneity. Here, we propose a prediction method that can cope with a high number of different scanning sites and a low number of samples per site. We cluster sites into pseudo-domains based on visual appearance of scans, and train pseudo-domain specific models. Results show that they improve the prediction accuracy for steatosis after 48 weeks from imaging data acquired at an initial visit and a 12-week follow-up in liver disease patients. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 266,439 |
2211.13203 | Inversion-Based Style Transfer with Diffusion Models | The artistic style within a painting is the means of expression, which includes not only the painting material, colors, and brushstrokes, but also the high-level attributes including semantic elements, object shapes, etc. Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements. The pre-trained text-to-image synthesis diffusion probabilistic models have achieved remarkable quality, but it often requires extensive textual descriptions to accurately portray attributes of a particular painting. We believe that the uniqueness of an artwork lies precisely in the fact that it cannot be adequately explained with normal language. Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions. Specifically, we assume style as a learnable textual description of a painting. We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image, thus capturing and transferring the artistic style of a painting. We demonstrate the quality and efficiency of our method on numerous paintings of various artists and styles. Code and models are available at https://github.com/zyxElsa/InST. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 332,383 |
2311.03067 | Forest aboveground biomass estimation using GEDI and earth observation
data through attention-based deep learning | Accurate quantification of forest aboveground biomass (AGB) is critical for understanding carbon accounting in the context of climate change. In this study, we presented a novel attention-based deep learning approach for forest AGB estimation, primarily utilizing openly accessible EO data, including: GEDI LiDAR data, C-band Sentinel-1 SAR data, ALOS-2 PALSAR-2 data, and Sentinel-2 multispectral data. The attention UNet (AU) model achieved markedly higher accuracy for biomass estimation compared to the conventional RF algorithm. Specifically, the AU model attained an R2 of 0.66, RMSE of 43.66 Mg ha-1, and bias of 0.14 Mg ha-1, while RF resulted in lower scores of R2 0.62, RMSE 45.87 Mg ha-1, and bias 1.09 Mg ha-1. However, the superiority of the deep learning approach was not uniformly observed across all tested models. ResNet101 only achieved an R2 of 0.50, an RMSE of 52.93 Mg ha-1, and a bias of 0.99 Mg ha-1, while the UNet reported an R2 of 0.65, an RMSE of 44.28 Mg ha-1, and a substantial bias of 1.84 Mg ha-1. Moreover, to explore the performance of AU in the absence of spatial information, fully connected (FC) layers were employed to eliminate spatial information from the remote sensing data. AU-FC achieved intermediate R2 of 0.64, RMSE of 44.92 Mg ha-1, and bias of -0.56 Mg ha-1, outperforming RF but underperforming AU model using spatial information. We also generated 10m forest AGB maps across Guangdong for the year 2019 using AU and compared it with that produced by RF. The AGB distributions from both models showed strong agreement with similar mean values; the mean forest AGB estimated by AU was 102.18 Mg ha-1 while that of RF was 104.84 Mg ha-1. Additionally, it was observed that the AGB map generated by AU provided superior spatial information. Overall, this research substantiates the feasibility of employing deep learning for biomass estimation based on satellite data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 405,701 |
2406.09805 | Two-Step Blackout Mitigation by Flexibility-Enabled Microgrid Islanding | Blackouts are disastrous events with a low probability of occurrence but a high impact on the system and its users. With the help of more distributed and controllable generation and sector-coupled flexibility, microgrids could be prepared to operate in islanded mode during a blackout. This paper discusses a two-step blackout mitigation approach for highly renewable microgrids that utilizes user flexibility and energy storage systems for power balance in islanded grid operation. The proposed method includes a proactive flexibility reservation step, which derives a minimal reservation schedule for microgrid resources under uncertainty considering related operational costs. As a second step, during a blackout, a fully distributed control is implemented to maximize the usage of available resources based on a sequence of max and min-consensus rounds. This paper focuses on the second step, for which the effectiveness of blackstart and long-term coordination is shown. Load shedding can be reduced by 40\% compared to the forecast value. A hardware-in-the-loop simulation of a grid-forming converter further showed a fast convergence toward the optimal operation point. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 464,091 |
2008.02150 | COVID-19 in CXR: from Detection and Severity Scoring to Patient Disease
Monitoring | In this work, we estimate the severity of pneumonia in COVID-19 patients and conduct a longitudinal study of disease progression. To achieve this goal, we developed a deep learning model for simultaneous detection and segmentation of pneumonia in chest X-ray (CXR) images and generalized it to COVID-19 pneumonia. The segmentations were utilized to calculate a "Pneumonia Ratio" which indicates the disease severity. The measurement of disease severity enables building a disease extent profile over time for hospitalized patients. To validate the model's relevance to the patient monitoring task, we developed a validation strategy which involves a synthesis of Digital Reconstructed Radiographs (DRRs - synthetic X-rays) from serial CT scans; we then compared the disease progression profiles that were generated from the DRRs to those that were generated from CT volumes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 190,537 |
1804.03124 | Leveraging Intra-User and Inter-User Representation Learning for
Automated Hate Speech Detection | Hate speech detection is a critical, yet challenging problem in Natural Language Processing (NLP). Despite the existence of numerous studies dedicated to the development of NLP hate speech detection approaches, the accuracy is still poor. The central problem is that social media posts are short and noisy, and most existing hate speech detection solutions take each post as an isolated input instance, which is likely to yield high false positive and negative rates. In this paper, we radically improve automated hate speech detection by presenting a novel model that leverages intra-user and inter-user representation learning for robust hate speech detection on Twitter. In addition to the target Tweet, we collect and analyze the user's historical posts to model intra-user Tweet representations. To suppress the noise in a single Tweet, we also model the similar Tweets posted by all other users with reinforced inter-user representation learning techniques. Experimentally, we show that leveraging these two representations can significantly improve the f-score of a strong bidirectional LSTM baseline model by 10.1%. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 94,555 |
2005.04644 | Radar-on-Lidar: metric radar localization on prior lidar maps | Radar and lidar, two different range sensors, each have pros and cons for various perception tasks on mobile robots or in autonomous driving. In this paper, a Monte Carlo system is used to localize a robot with a rotating radar sensor on 2D lidar maps. We first train a conditional generative adversarial network to transfer raw radar data to lidar data, and achieve reliable radar points from the generator. Then an efficient radar odometry is included in the Monte Carlo system. Combining the initial guess from odometry, a measurement model is proposed to match the radar data and prior lidar maps for final 2D positioning. We demonstrate the effectiveness of the proposed localization framework on a public multi-session dataset. The experimental results show that our system can achieve high accuracy for long-term localization in outdoor scenes. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 176,526 |
1606.08089 | This before That: Causal Precedence in the Biomedical Domain | Causal precedence between biochemical interactions is crucial in the biomedical domain, because it transforms collections of individual interactions, e.g., bindings and phosphorylations, into the causal mechanisms needed to inform meaningful search and inference. Here, we analyze causal precedence in the biomedical domain as distinct from open-domain, temporal precedence. First, we describe a novel, hand-annotated text corpus of causal precedence in the biomedical domain. Second, we use this corpus to investigate a battery of models of precedence, covering rule-based, feature-based, and latent representation models. The highest-performing individual model achieved a micro F1 of 43 points, approaching the best performers on the simpler temporal-only precedence tasks. Feature-based and latent representation models each outperform the rule-based models, but their performance is complementary to one another. We apply a sieve-based architecture to capitalize on this lack of overlap, achieving a micro F1 score of 46 points. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 57,825 |
2012.13707 | Are Guessing, Source Coding, and Tasks Partitioning Birds of a Feather? | This paper establishes a close relationship among the four information theoretic problems, namely Campbell source coding, Arikan guessing, Huleihel et al. memoryless guessing and Bunte and Lapidoth tasks partitioning problems. We first show that the aforementioned problems are mathematically related via a general moment minimization problem whose optimum solution is given in terms of Renyi entropy. We then propose a general framework for the mismatched version of these problems and establish all the asymptotic results using this framework. Further, we study an ordered tasks partitioning problem that turns out to be a generalisation of Arikan's guessing problem. Finally, with the help of this general framework, we establish an equivalence among all these problems, in the sense that, knowing an asymptotically optimal solution in one problem helps us find the same in all other problems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 213,309 |
2304.05991 | Maximum-likelihood Estimators in Physics-Informed Neural Networks for
High-dimensional Inverse Problems | Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial differential equations (PDE). Typical inverse PINNs are formulated as soft-constrained multi-objective optimization problems with several hyperparameters. In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE) to allow explicit error propagation from interpolation to the physical model space through Taylor expansion, without the need of hyperparameter tuning. We explore its application to high-dimensional coupled ODEs constrained by differential algebraic equations that are common in transient chemical and biological kinetics. Furthermore, we show that singular-value decomposition (SVD) of the ODE coupling matrices (reaction stoichiometry matrix) provides reduced uncorrelated subspaces in which PINNs solutions can be represented and over which residuals can be projected. Finally, SVD bases serve as preconditioners for the inversion of covariance matrices in this hyperparameter-free robust application of MLE to ``kinetics-informed neural networks''. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 357,808 |
2411.15295 | Frequency-Guided Posterior Sampling for Diffusion-Based Image
Restoration | Image restoration aims to recover high-quality images from degraded observations. When the degradation process is known, the recovery problem can be formulated as an inverse problem, and in a Bayesian context, the goal is to sample a clean reconstruction given the degraded observation. Recently, modern pretrained diffusion models have been used for image restoration by modifying their sampling procedure to account for the degradation process. However, these methods often rely on certain approximations that can lead to significant errors and compromised sample quality. In this paper, we provide the first rigorous analysis of this approximation error for linear inverse problems under distributional assumptions on the space of natural images, demonstrating cases where previous works can fail dramatically. Motivated by our theoretical insights, we propose a simple modification to existing diffusion-based restoration methods. Our approach introduces a time-varying low-pass filter in the frequency domain of the measurements, progressively incorporating higher frequencies during the restoration process. We develop an adaptive curriculum for this frequency schedule based on the underlying data distribution. Our method significantly improves performance on challenging image restoration tasks including motion deblurring and image dehazing. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 510,558 |
2107.02845 | Logit-based Uncertainty Measure in Classification | We introduce a new, reliable, and agnostic uncertainty measure for classification tasks called logit uncertainty. It is based on logit outputs of neural networks. We in particular show that this new uncertainty measure yields a superior performance compared to existing uncertainty measures on different tasks, including out-of-sample detection and finding erroneous predictions. We analyze theoretical foundations of the measure and explore a relationship with high density regions. We also demonstrate how to test uncertainty using intermediate outputs in training of generative adversarial networks. We propose two potential ways to utilize logit-based uncertainty in real-world applications, and show that the uncertainty measure outperforms existing alternatives. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,958 |
2502.01512 | Wrapped Gaussian on the manifold of Symmetric Positive Definite Matrices | Circular and non-flat data distributions are prevalent across diverse domains of data science, yet their specific geometric structures often remain underutilized in machine learning frameworks. A principled approach to accounting for the underlying geometry of such data is pivotal, particularly when extending statistical models, like the pervasive Gaussian distribution. In this work, we tackle this issue by focusing on the manifold of symmetric positive definite matrices, a key focus in information geometry. We introduce a non-isotropic wrapped Gaussian by leveraging the exponential map; we derive theoretical properties of this distribution and propose a maximum likelihood framework for parameter estimation. Furthermore, we reinterpret established classifiers on SPD matrices through a probabilistic lens and introduce new classifiers based on the wrapped Gaussian model. Experiments on synthetic and real-world datasets demonstrate the robustness and flexibility of this geometry-aware distribution, underscoring its potential to advance manifold-based data analysis. This work lays the groundwork for extending classical machine learning and statistical methods to more complex and structured data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 529,887 |
0911.2746 | Model Selection: Two Fundamental Measures of Coherence and Their
Algorithmic Significance | The problem of model selection arises in a number of contexts, such as compressed sensing, subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper generalizes the notion of \emph{incoherence} in the existing literature on model selection and introduces two fundamental measures of coherence---termed as the worst-case coherence and the average coherence---among the columns of a design matrix. In particular, it utilizes these two measures of coherence to provide an in-depth analysis of a simple one-step thresholding (OST) algorithm for model selection. One of the key insights offered by the ensuing analysis is that OST is feasible for model selection as long as the design matrix obeys an easily verifiable property. In addition, the paper also characterizes the model-selection performance of OST in terms of the worst-case coherence, \mu, and establishes that OST performs near-optimally in the low signal-to-noise ratio regime for N x C design matrices with \mu = O(N^{-1/2}). Finally, in contrast to some of the existing literature on model selection, the analysis in the paper is nonasymptotic in nature, it does not require knowledge of the true model order, it is applicable to generic (random or deterministic) design matrices, and it neither requires submatrices of the design matrix to have full rank, nor does it assume a statistical prior on the values of the nonzero entries of the data vector. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,935 |
2312.09398 | RNA: Relightable Neural Assets | High-fidelity 3D assets with materials composed of fibers (including hair), complex layered material shaders, or fine scattering geometry are ubiquitous in high-end realistic rendering applications. Rendering such models is computationally expensive due to heavy shaders and long scattering paths. Moreover, implementing the shading and scattering models is non-trivial and has to be done not only in the 3D content authoring software (which is necessarily complex), but also in all downstream rendering solutions. For example, web and mobile viewers for complex 3D assets are desirable, but frequently cannot support the full shading complexity allowed by the authoring application. Our goal is to design a neural representation for 3D assets with complex shading that supports full relightability and full integration into existing renderers. We provide an end-to-end shading solution at the first intersection of a ray with the underlying geometry. All shading and scattering is precomputed and included in the neural asset; no multiple scattering paths need to be traced, and no complex shading models need to be implemented to render our assets, beyond a single neural architecture. We combine an MLP decoder with a feature grid. Shading consists of querying a feature vector, followed by an MLP evaluation producing the final reflectance value. Our method provides high-fidelity shading, close to the ground-truth Monte Carlo estimate even at close-up views. We believe our neural assets could be used in practical renderers, providing significant speed-ups and simplifying renderer implementations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 415,697 |
1908.06881 | SDIT: Scalable and Diverse Cross-domain Image Translation | Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation. These properties are combined into a single generator. The diversity is determined by a latent variable which is randomly sampled from a normal distribution. The scalability is obtained by conditioning the network on the domain attributes. Additionally, we also exploit an attention mechanism that permits the generator to focus on the domain-specific attribute. We empirically demonstrate the performance of the proposed method on face mapping and other datasets beyond faces. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 142,139 |
1712.08917 | Building a Sentiment Corpus of Tweets in Brazilian Portuguese | The large amount of data available in social media, forums and websites motivates research in several areas of Natural Language Processing, such as sentiment analysis. The popularity of the area due to its subjective and semantic characteristics motivates research on novel methods and approaches for classification. Hence, there is a high demand for datasets on different domains and different languages. This paper introduces TweetSentBR, a sentiment corpus for Brazilian Portuguese manually annotated with 15,000 sentences on the TV show domain. The sentences were labeled in three classes (positive, neutral and negative) by seven annotators, following literature guidelines for ensuring reliability of the annotation. We also ran baseline experiments on polarity classification using three machine learning methods, reaching 80.99% on F-Measure and 82.06% on accuracy in binary classification, and 59.85% F-Measure and 64.62% accuracy in three-point classification. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 87,277 |
2212.06142 | Towards Better Long-range Time Series Forecasting using Generative
Forecasting | Long-range time series forecasting is usually based on one of two existing forecasting strategies: Direct Forecasting and Iterative Forecasting, where the former provides low bias, high variance forecasts and the latter leads to low variance, high bias forecasts. In this paper, we propose a new forecasting strategy called Generative Forecasting (GenF), which generates synthetic data for the next few time steps and then makes long-range forecasts based on generated and observed data. We theoretically prove that GenF is able to better balance the forecasting variance and bias, leading to a much smaller forecasting error. We implement GenF via three components: (i) a novel conditional Wasserstein Generative Adversarial Network (GAN) based generator for synthetic time series data generation, called CWGAN-TS. (ii) a transformer based predictor, which makes long-range predictions using both generated and observed data. (iii) an information theoretic clustering algorithm to improve the training of both the CWGAN-TS and the transformer based predictor. The experimental results on five public datasets demonstrate that GenF significantly outperforms a diverse range of state-of-the-art benchmarks and classical approaches. Specifically, we find a 5% - 11% improvement in predictive performance (mean absolute error) while having a 15% - 50% reduction in parameters compared to the benchmarks. Lastly, we conduct an ablation study to further explore and demonstrate the effectiveness of the components comprising GenF. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 336,021 |
2211.06332 | Autonomous Multirotor Landing on Landing Pads and Lava Flows | Landing is a challenging part of autonomous drone flight and a great research opportunity. This PhD project proposes to improve on fiducial-based autonomous landing algorithms by making them more flexible. Further, it leverages its location, Iceland, to develop a method for landing on lava flows in cooperation with analog Mars exploration missions taking place in Iceland now - and potentially for future Mars landings. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 329,854 |
2202.00531 | PRIMA: Planner-Reasoner Inside a Multi-task Reasoning Agent | We consider the problem of multi-task reasoning (MTR), where an agent can solve multiple tasks via (first-order) logic reasoning. This capability is essential for human-like intelligence due to its strong generalizability and simplicity for handling multiple tasks. However, a major challenge in developing effective MTR is the intrinsic conflict between reasoning capability and efficiency. An MTR-capable agent must master a large set of "skills" to tackle diverse tasks, but executing a particular task at the inference stage requires only a small subset of immediately relevant skills. How can we maintain broad reasoning capability and also efficient specific-task performance? To address this problem, we propose a Planner-Reasoner framework capable of state-of-the-art MTR capability and high efficiency. The Reasoner models shareable (first-order) logic deduction rules, from which the Planner selects a subset to compose into efficient reasoning paths. The entire model is trained in an end-to-end manner using deep reinforcement learning, and experimental studies over a variety of domains validate its effectiveness. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 278,177 |
2403.01964 | The Heterogeneous Productivity Effects of Generative AI | We analyse the individual productivity effects of Italy's ban on ChatGPT, a generative pretrained transformer chatbot. We compile data on the daily coding output quantity and quality of over 36,000 GitHub users in Italy and other European countries and combine these data with the sudden announcement of the ban in a difference-in-differences framework. Among the affected users in Italy, we find a short-term increase in output quantity and quality for less experienced users and a decrease in productivity on more routine tasks for experienced users. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 434,652 |
2112.08428 | An approach for the aggregation of power system controllers with different topologies | This paper proposes an approach to aggregate nonstructured power system controllers preserving the dynamical characteristics of the original devices. The method is based on linear operations that use the frequency response of the elements, resulting in an accurate input-output description of the equivalent controller when compared to the original ones. The developed method was applied to a model of the future interconnected Paraguayan-Argentinean power system to produce a dynamic equivalent used in a real-time simulator to test the special protection scheme needed for the safe operation of this future system. Transient and small-signal stability studies presented matching simulation results in the time domain with significantly reduced computational burden and processing time. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 271,787 |
2010.08806 | Modeling and Implementation of Quadcopter Autonomous Flight Based on Alternative Methods to Determine Propeller Parameters | To properly simulate and implement a quadcopter flight control for intended load and flight conditions, the quadcopter model must have parameters on various relationships including propeller thrust-torque, thrust-PWM, and thrust-angular speed to a certain level of accuracy. Thrust-torque modeling requires an expensive reaction torque measurement sensor. In the absence of sophisticated equipment, the study comes up with alternative methods to complete the quadcopter model. The study also presents a method of modeling the rotational aerodynamic drag on the quadcopter. Although the resulting model of the reaction torque generated by the quadcopter's propellers and the model of the drag torque acting on the quadcopter body that are derived using the methods in this study may not yield the true values of these quantities, the experimental modeling techniques presented in this work ensure that the derived dynamic model for the quadcopter will nevertheless behave identically with the true model for the quadcopter. The derived dynamic model is validated by basic flight controller simulation and actual flight implementation. The model is used as basis for a quadcopter design, which eventually is used for test purposes of basic flight control. This study serves as a baseline for fail-safe control of a quadcopter experiencing an unexpected motor failure. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 201,310 |
2406.10478 | From Words to Worlds: Transforming One-line Prompt into Immersive Multi-modal Digital Stories with Communicative LLM Agent | Digital storytelling, essential in entertainment, education, and marketing, faces challenges in production scalability and flexibility. The StoryAgent framework, introduced in this paper, utilizes Large Language Models and generative tools to automate and refine digital storytelling. Employing a top-down story drafting and bottom-up asset generation approach, StoryAgent tackles key issues such as manual intervention, interactive scene orchestration, and narrative consistency. This framework enables efficient production of interactive and consistent narratives across multiple modalities, democratizing content creation and enhancing engagement. Our results demonstrate the framework's capability to produce coherent digital stories without reference videos, marking a significant advancement in automated digital storytelling. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 464,418 |